
Showcase LineRenderer2D: GPU pixel-perfect 2D line renderer for Unity URP (2D Renderer)

Discussion in '2D' started by ThundThund, Dec 14, 2020.

  1. ThundThund

    ThundThund

    Joined:
    Feb 7, 2017
    Posts:
    244
    LineRenderer2D: GPU pixel-perfect 2D line renderer for Unity URP (2D Renderer)



    Code repository: https://github.com/QThund/LineRenderer2D
    More on Twitter: @SiliconHeartDev
    Other code I shared:
    Script for generating ShadowCaster2Ds for Tilemaps
    Delaunay Triangulation with constrained edges
    Target sorting layers as assets

    Hi everybody, I have been refactoring and improving an old piece of code I wrote years ago, adapting it to the new render pipeline. I think this is the kind of feature that should ship with Unity, as many people who develop 2D games need it at some point. So I decided to write an article to share my implementations with you in case you find them useful. I wrote it first in a document outside this forum and used background colors in the code snippets, which is why color names (YELLOW, BLUE) appear before some snippets; sorry for that.

    1. Introduction
    2. Vectorial solution
    3. Bresenham solution
    4. Line strips drawing
    5. Optimizations
    Introduction

    Unity provides developers with a great line rendering tool which basically generates a 3D mesh that faces the camera. This is enough for most games but, if you want to create 2D games based on pixel-art aesthetics, “perfect” lines do not fit with the rest of the sprites, especially if the size of the pixels in those sprites does not match the size of the pixels of the screen. You will need lines that fulfill one main rule: each pixel may have a neighbor either in the same column or in the same row, but not in both. Unity does not help in this case; you need to work on your own solution.
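    To make that rule concrete, here is a small standalone Python sketch (not part of the Unity project; the function and names are mine) that checks whether a set of pixels satisfies it:

    ```python
    def is_pixel_perfect(pixels):
        """True if no pixel has neighbors both horizontally and vertically,
        i.e. the line is exactly one pixel thick with no L-shaped corners."""
        occupied = set(pixels)
        for (x, y) in occupied:
            has_row_neighbor = (x - 1, y) in occupied or (x + 1, y) in occupied
            has_column_neighbor = (x, y - 1) in occupied or (x, y + 1) in occupied
            if has_row_neighbor and has_column_neighbor:
                return False
        return True

    # A clean staircase: each pixel has a neighbor in its row OR its column only
    assert is_pixel_perfect([(0, 0), (1, 0), (2, 1), (3, 1)])
    # An L-shaped corner: (1, 0) has a neighbor in its row AND in its column
    assert not is_pixel_perfect([(0, 0), (1, 0), (1, 1)])
    ```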

    There are several alternatives. You can just draw the line into a sprite, which will look awful if you rotate it. You can use a texture and change it dynamically, drawing the line on the CPU side in C# with the SetPixels method and the Bresenham algorithm, which can be slow and is limited by the size of the texture (although it allows resizing the sprite to achieve whatever line thickness you need). Or you can use a shader on the GPU with either vector algebra plus some “magic” or a modified version of the Bresenham algorithm, as I am going to explain here.

    Both shading methods have the following inputs in common:
    • Current screen pixel position.
    • The position of both line endpoints, in screen space.
    • The color of the line.
    • The line thickness.
    • The position of the origin (0, 0), in screen space (for screen adjustment purposes).
    In Unity, we need just 1 sprite in the scene with any texture (it can be a 1-pixel-wide repeating texture), a material with a shader (made in Shadergraph, in this case) and a C# script that fills the parameters of the shader in the OnWillRenderObject event. Since we are using a sprite and Shadergraph with the 2D Renderer, it works with both the 2D sorting system and the 2D lighting system. The C# script contains something like this:

    Code (CSharp):
    protected virtual void OnWillRenderObject()
    {
        Vector2 pointA = m_camera.WorldToScreenPoint(Points[0]);
        Vector2 pointB = m_camera.WorldToScreenPoint(Points[1]);
        pointA = new Vector2(Mathf.Round(pointA.x), Mathf.Round(pointA.y));
        pointB = new Vector2(Mathf.Round(pointB.x), Mathf.Round(pointB.y));

        Vector2 origin = m_camera.WorldToScreenPoint(Vector2.zero);
        origin = new Vector2(Mathf.Round(origin.x), Mathf.Round(origin.y));

        m_Renderer.material.SetVector("_Origin", origin);
        m_Renderer.material.SetVector("_PointA", pointA);
        m_Renderer.material.SetVector("_PointB", pointB);
    }
    Vectorial solution

    The vectorial solution is not perfect, but it is the fastest. The main idea is to calculate the distance from a point on the screen to the line defined by 2 other points; if that distance is less than or equal to half the thickness of the line, the screen point is colored.
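    The core test can be written as a standalone Python sketch (illustrative only, names are mine; the real shader version is in HLSL further below). Note that this sketch clamps the projection onto the segment, whereas the shader checks the 0..1 range with a separate boolean:

    ```python
    import math

    def distance_to_segment(p, a, b):
        """Distance from point p to segment a-b: project ap onto ab,
        clamp to the segment, and measure to the closest point."""
        ab = (b[0] - a[0], b[1] - a[1])
        ap = (p[0] - a[0], p[1] - a[1])
        dot_sqr_ab = ab[0] * ab[0] + ab[1] * ab[1]
        # Normalized projection length, clamped so the closest point stays on the segment
        t = max(0.0, min(1.0, (ap[0] * ab[0] + ap[1] * ab[1]) / dot_sqr_ab))
        closest = (a[0] + t * ab[0], a[1] + t * ab[1])
        return math.hypot(p[0] - closest[0], p[1] - closest[1])

    # A pixel one unit above the middle of a horizontal segment is at distance 1
    assert distance_to_segment((5.0, 1.0), (0.0, 0.0), (10.0, 0.0)) == 1.0
    # A pixel on the line itself is at distance 0, so it would be colored
    assert distance_to_segment((3.0, 3.0), (0.0, 0.0), (10.0, 10.0)) < 1e-9
    ```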

    The main problem with this approach is that the screen is not composed of infinite points; it is a grid whose rows and columns depend on the resolution and the physical screen. If we want to draw a line whose thickness is 1 pixel, we cannot simply compare the distance from the point to the line against 0.5, because that would color every pixel crossed by the imaginary line, making some parts of the line look wider.


    We need to find a way to compare distances that gives us the appropriate points to color. I have to be honest: I am not a mathematician and did not have enough time to analyze the values and find the best method to calculate the adjustment factor, so I found some constants by trial and error, based on an assumption: the slope of the line seems to be related to the distance to compare, and that distance is inversely proportional to how close the slope is to 45º. This relation is not exact, so erroneous results are unavoidable with this method. The constant values I discovered were:

    fBaseTolerance (minimum distance in any case): 0.3686
    fToleranceMultiplier (applied depending on the slope): 0.34935

    Code (HLSL):
    #define M_PI 3.1415926535897932384626433832795

    vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
    vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
    vEndpointA = round(vEndpointA);
    vEndpointB = round(vEndpointB);

    // The tolerance gets bigger as the slope of the line gets closer to either of the 2 axes
    float2 normalizedAbsNextToPrevious = normalize(abs(vEndpointA - vEndpointB));
    float maxValue = max(normalizedAbsNextToPrevious.x, normalizedAbsNextToPrevious.y);
    float minValue = min(normalizedAbsNextToPrevious.x, normalizedAbsNextToPrevious.y);
    float inverseLerp = 1.0f - minValue / maxValue;

    outDistanceCorrection = fBaseTolerance + fToleranceMultiplier * abs(inverseLerp);
    Once we have the distance correction factor, we calculate whether the current screen point is close enough to the imaginary line. There are 2 corner cases, when the line is either completely horizontal or completely vertical; in those cases an offset is added to avoid the round numbers that produce bad results (a bolder line).

    YELLOW
    Code (HLSL):
    // The amount of pixels the camera has moved with regard to a thickness-wide block of pixels
    vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
    vOrigin = round(vOrigin);

    // This moves the line N pixels; it is necessary because the camera moves 1 pixel at a time and the line may be wider than 1 pixel,
    // so it avoids the line jumping from one block (thickness-wide) to the next; instead it moves smoothly, pixel by pixel
    vPointP += float2(fThickness, fThickness) - vOrigin;
    vEndpointA += float2(fThickness, fThickness) - vOrigin;
    vEndpointB += float2(fThickness, fThickness) - vOrigin;
    BLUE
    Code (HLSL):
    vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
    vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
    vEndpointA = round(vEndpointA);
    vEndpointB = round(vEndpointB);
    vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
    vPointP = round(vPointP);
    Code (HLSL):
    const float OFFSET = 0.055f;

    // There are 2 corner cases: when the line is perfectly horizontal and when it is perfectly vertical
    // They cause a glitch that makes the line fatter
    if(vEndpointA.x == vEndpointB.x)
    {
        vEndpointA.x -= OFFSET;
    }

    if(vEndpointA.y == vEndpointB.y)
    {
        vEndpointA.y -= OFFSET;
    }

    float2 ab = vEndpointB - vEndpointA;
    float dotSqrAB = dot(ab, ab);

    float2 ap = vPointP - vEndpointA;
    float dotAP_AB = dot(ap, ab);
    float normProjectionLength = dotAP_AB / dotSqrAB;

    float projectionLength = dotAP_AB / length(ab);
    float2 projectedP = normalize(ab) * projectionLength;

    bool isBetweenAandB = (normProjectionLength >= 0.0f && normProjectionLength <= 1.0f);
    float distanceFromPToTheLine = length(ap - projectedP);

    outIsPixelInLine = isBetweenAandB && distanceFromPToTheLine < fThickness * fDistanceCorrection;
    In the blue part of the source code you can see how every input point is snapped to the bottom-left corner of the block it belongs to. For example, if the line has a thickness of 4 pixels, the screen is divided by an imaginary grid whose cells occupy 4x4 pixels; a point at [7.2, 3.4] is moved to [4, 0]. In the following image, dark squares represent the bottom-left corner of each 4x4 block and green squares are the pixels that are actually near the line and are treated as if they were at each corner.


    This subtract-modulo operation is what makes the line be drawn with the desired thickness. The round operation avoids a jittering effect produced by floating-point imprecision.
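    The snapping is easy to verify in isolation. A quick standalone Python sketch of the same fmod-and-round arithmetic (function name is mine, not from the repository):

    ```python
    def snap_to_block(p, thickness):
        """Snap a screen position to the bottom-left corner of the
        thickness-wide block it belongs to (subtract-modulo, then round)."""
        return (round(p[0] - p[0] % thickness), round(p[1] - p[1] % thickness))

    # The example from the text: [7.2, 3.4] with a 4-pixel thickness snaps to [4, 0]
    assert snap_to_block((7.2, 3.4), 4) == (4, 0)
    # With a thickness of 1, integer pixel positions are left where they are
    assert snap_to_block((5.0, 5.0), 1) == (5, 5)
    ```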

    Since the camera can move 1 pixel at a time and the thickness of the line may be greater than 1 pixel, an undesired visual effect occurs: the line does not follow the camera pixel by pixel; it abruptly jumps to the next block of pixels once the camera displacement becomes greater than the thickness of the line. To fix this, we have to subtract the displacement of the camera inside a block (from 0 to 3, if the thickness is 4 pixels) from the position of every evaluated point (vPointP). In the source code, the yellow part uses an input point (vOrigin), the world-space position [0, 0] transformed to screen space, to calculate the amount of pixels the camera has moved both vertically and horizontally. The modulo of that position with respect to the thickness is calculated and then subtracted from the thickness, giving us the camera offset inside a block of pixels.
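    A one-dimensional standalone Python sketch (illustrative, names are mine) shows the effect: with the camera offset applied, the first colored pixel of a thickness-4 line follows the camera pixel by pixel, while without it the line stays put and then jumps a whole block:

    ```python
    def first_colored_pixel(endpoint_screen_x, origin_screen_x, thickness, use_offset):
        """First screen pixel colored by a thickness-wide line starting at the
        given endpoint, with or without the camera-offset correction."""
        off = round(origin_screen_x % thickness) if use_offset else 0  # yellow part
        a = endpoint_screen_x + thickness - off
        a_block = a - a % thickness  # blue part: snap the endpoint to its block
        # Scan physical pixels for the first one whose shifted, snapped
        # position falls into the endpoint's block
        for p in range(200):
            shifted = p + thickness - off
            if shifted - shifted % thickness == a_block:
                return p

    # The camera moves right one pixel per step: the endpoint and the origin
    # both slide left in screen space
    with_offset = [first_colored_pixel(50 - d, 100 - d, 4, True) for d in range(4)]
    without = [first_colored_pixel(50 - d, 100 - d, 4, False) for d in range(4)]
    assert with_offset == [48, 47, 46, 45]  # follows the camera pixel by pixel
    assert without == [48, 48, 48, 44]      # stays, then jumps a whole block
    ```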

    Here we can see the results of this algorithm, setting the thickness to 4 pixels:





    Bresenham solution

    This solution uses the Bresenham algorithm, so the result is perfect, but the calculation is more expensive than the vectorial solution. For each pixel occupied by the sprite rectangle, the algorithm walks the line from beginning to end; if the current point of the line coincides with the screen position being evaluated, the pixel uses the line color and the loop stops; otherwise the entire line is checked and the time is wasted (the background color is used instead).


    The same adjustment is applied to the input points as in the vectorial solution (yellow and blue parts in the source code). The Bresenham implementations one can find out there use an increment of 1 to select the next pixel to be evaluated; in this version the increment equals the thickness of the line.

    YELLOW
    Code (HLSL):
    // The amount of pixels the camera has moved with regard to a thickness-wide block of pixels
    vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
    vOrigin = round(vOrigin);

    // This moves the line N pixels; it is necessary because the camera moves 1 pixel at a time and the line may be wider than 1 pixel,
    // so it avoids the line jumping from one block (thickness-wide) to the next; instead it moves smoothly, pixel by pixel
    vPointP += float2(fThickness, fThickness) - vOrigin;
    vEndpointA += float2(fThickness, fThickness) - vOrigin;
    vEndpointB += float2(fThickness, fThickness) - vOrigin;
    BLUE
    Code (HLSL):
    // This snaps every point to the bottom-left corner of the thickness-wide block it belongs to, so all pixels inside the block are considered the same
    // If the block has to be colored, then all the pixels inside it are colored
    vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
    vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
    vEndpointA = round(vEndpointA);
    vEndpointB = round(vEndpointB);
    vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
    vPointP = round(vPointP);
    Code (HLSL):
    // BRESENHAM ALGORITHM
    // Modified to allow different thicknesses and to tell the shader whether the current pixel belongs to the line or not

    int x = vEndpointA.x;
    int y = vEndpointA.y;
    int x2 = vEndpointB.x;
    int y2 = vEndpointB.y;
    int pX = vPointP.x;
    int pY = vPointP.y;
    int w = x2 - x;
    int h = y2 - y;
    int dx1 = 0, dy1 = 0, dx2 = 0, dy2 = 0;

    if (w < 0)
    {
        dx1 = -fThickness;
    }
    else if (w > 0)
    {
        dx1 = fThickness;
    }

    if (h < 0)
    {
        dy1 = -fThickness;
    }
    else if (h > 0)
    {
        dy1 = fThickness;
    }

    if (w < 0)
    {
        dx2 = -fThickness;
    }
    else if (w > 0)
    {
        dx2 = fThickness;
    }

    int longest = abs(w);
    int shortest = abs(h);

    if (longest <= shortest)
    {
        longest = abs(h);
        shortest = abs(w);

        if (h < 0)
        {
            dy2 = -fThickness;
        }
        else if (h > 0)
        {
            dy2 = fThickness;
        }

        dx2 = 0;
    }

    int numerator = longest >> 1;

    outIsPixelInLine = false;

    for (int i = 0; i <= longest; i += fThickness)
    {
        if(x == pX && y == pY)
        {
            outIsPixelInLine = true;
            break;
        }

        numerator += shortest;

        if (numerator >= longest)
        {
            numerator -= longest;
            x += dx1;
            y += dy1;
        }
        else
        {
            x += dx2;
            y += dy2;
        }
    }
    Here we can see the results of this algorithm, setting the thickness to 4 pixels:






    Line strips drawing

    If we want to draw multiple concatenated lines we could create multiple instances of the line renderer and bind their endpoints somehow, but there are cheaper ways to render line strips and represent, for example, a rope.

    If we were using ordinary shaders we could send a vector array with all the points of the line to be processed but, unfortunately, Shadergraph does not allow arrays as input parameters for now. A workaround is sending a 1D texture, which is not supported either, so we have to use a 2D texture whose height is 1 texel and whose width equals the amount of points. Every time the position of the points changes, the texture has to be updated. This is not the “main texture”; we are talking about an additional texture. Regarding the format of the points texture, it is necessary to use a non-normalized one, for example TextureFormat.RGBAFloat (R32G32B32A32F); otherwise a loss of resolution occurs and the points jitter on the screen. We also need to know the amount of points and the way the texture is to be sampled, so do not forget to pass in both parameters: the float and the sampler state.

    Once we have the data available in our shader, we have to iterate through the array, which means enclosing the Bresenham implementation explained previously in a for loop, sampling the points texture and picking an endpoint A and an endpoint B for each line segment. When all the point pairs have been used, the loop ends. This way we use only one texture, one sprite and one material.

    Code (HLSL):
    void IsPixelInLine_float(float fThickness, float2 vPointP, Texture2D tPackedPoints, SamplerState ssArraySampler, float fPackedPointsCount, float fPointsCount, out bool outIsPixelInLine)
    {
        // Origin in screen space
        float4 projectionSpaceOrigin = mul(UNITY_MATRIX_VP, float4(0.0f, 0.0f, 0.0f, 1.0f));
        float2 vOrigin = ComputeScreenPos(projectionSpaceOrigin, -1.0f).xy * _ScreenParams.xy;

        // The amount of pixels the camera has moved with regard to a thickness-wide block of pixels
        vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
        vOrigin = round(vOrigin);

        // This moves the line N pixels; it is necessary because the camera moves 1 pixel at a time and the line may be wider than 1 pixel,
        // so it avoids the line jumping from one block (thickness-wide) to the next; instead it moves smoothly, pixel by pixel
        vPointP += float2(fThickness, fThickness) - vOrigin;

        vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
        vPointP = round(vPointP);

        int pointsCount = round(fPointsCount);

        outIsPixelInLine = false;

        for(int t = 0; t < pointsCount - 1; ++t)
        {
            float4 packedPoints = tPackedPoints.Sample(ssArraySampler, float2(float(t / 2) / fPackedPointsCount, 0.0f));
            float4 packedPoints2 = tPackedPoints.Sample(ssArraySampler, float2(float(t / 2 + 1) / fPackedPointsCount, 0.0f));

            float2 worldSpaceEndpointA = fmod(t, 2) == 0 ? packedPoints.rg : packedPoints.ba;
            float2 worldSpaceEndpointB = fmod(t, 2) == 0 ? packedPoints.ba : packedPoints2.rg;
            float4 projectionSpaceEndpointA = mul(UNITY_MATRIX_VP, float4(worldSpaceEndpointA.x, worldSpaceEndpointA.y, 0.0f, 1.0f));
            float4 projectionSpaceEndpointB = mul(UNITY_MATRIX_VP, float4(worldSpaceEndpointB.x, worldSpaceEndpointB.y, 0.0f, 1.0f));

            // Endpoints in screen space
            float2 vEndpointA = ComputeScreenPos(projectionSpaceEndpointA, -1.0f).xy * _ScreenParams.xy;
            float2 vEndpointB = ComputeScreenPos(projectionSpaceEndpointB, -1.0f).xy * _ScreenParams.xy;

            vEndpointA = round(vEndpointA);
            vEndpointB = round(vEndpointB);

            vEndpointA += float2(fThickness, fThickness) - vOrigin;
            vEndpointB += float2(fThickness, fThickness) - vOrigin;

            vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
            vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
            vEndpointA = round(vEndpointA);
            vEndpointB = round(vEndpointB);

            int x = vEndpointA.x;
            int y = vEndpointA.y;
            int x2 = vEndpointB.x;
            int y2 = vEndpointB.y;
            int pX = vPointP.x;
            int pY = vPointP.y;
            int w = x2 - x;
            int h = y2 - y;
            int dx1 = 0, dy1 = 0, dx2 = 0, dy2 = 0;

            if (w < 0) dx1 = -fThickness; else if (w > 0) dx1 = fThickness;
            if (h < 0) dy1 = -fThickness; else if (h > 0) dy1 = fThickness;
            if (w < 0) dx2 = -fThickness; else if (w > 0) dx2 = fThickness;

            int longest = abs(w);
            int shortest = abs(h);

            if (longest <= shortest)
            {
                longest = abs(h);
                shortest = abs(w);

                if (h < 0)
                    dy2 = -fThickness;
                else if (h > 0)
                    dy2 = fThickness;

                dx2 = 0;
            }

            int numerator = longest >> 1;

            for (int i = 0; i <= longest; i += fThickness)
            {
                if(x == pX && y == pY)
                {
                    outIsPixelInLine = true;
                    break;
                }

                numerator += shortest;

                if (numerator >= longest)
                {
                    numerator -= longest;
                    x += dx1;
                    y += dy1;
                }
                else
                {
                    x += dx2;
                    y += dy2;
                }
            }
        }
    }
    Note: In this version, some additional optimizations have been implemented, see next section.

    Optimizations

    Sprite size fitting

    In order to avoid shading unnecessary pixels, the drawing area should be as small as possible. This area is defined by the sprite in the scene. If a 1x1-pixel texture is used (with its pivot at the top-left corner), the width and height of the sprite match its scale and the calculations are simpler.

    Every time the position of the points changes, the position and scale of the sprite change too. We only need to calculate the bounding box that contains the points of the line and expand it by as many pixels as the thickness of the line, so pixel blocks greater than 1 pixel are not cut off.
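    The bounding-box calculation can be sketched in a few lines of standalone Python (illustrative only; the actual project does this in C# on the sprite's transform):

    ```python
    def line_bounds(points, thickness):
        """Axis-aligned bounding box of the strip, expanded by the thickness
        on every side so blocks wider than one pixel are not cut off."""
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        min_corner = (min(xs) - thickness, min(ys) - thickness)
        size = (max(xs) - min(xs) + 2 * thickness, max(ys) - min(ys) + 2 * thickness)
        return min_corner, size

    corner, size = line_bounds([(10, 2), (4, 8), (7, 20)], 4)
    assert corner == (0, -2)
    assert size == (14, 26)
    ```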

    Points texture packing

    The size of the 2D texture used for sending the point array to the GPU can be halved. Since we are working with 2D points, every texel (a Color, in C#) can store 2 points.
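    The packing scheme the multi-line shader expects (point 2i in the rg channels of texel i, point 2i+1 in the ba channels) can be sketched in standalone Python (function names are mine; the real packing happens in C# into the texture's Color array):

    ```python
    def pack_points(points):
        """Pack 2D points two-per-texel, as in an RGBA float texture."""
        texels = []
        for i in range(0, len(points), 2):
            r, g = points[i]
            b, a = points[i + 1] if i + 1 < len(points) else (0.0, 0.0)
            texels.append((r, g, b, a))
        return texels

    def unpack_point(texels, t):
        """Fetch point t, mirroring the shader's fmod(t, 2) channel selection."""
        texel = texels[t // 2]
        return texel[:2] if t % 2 == 0 else texel[2:]

    pts = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]
    texels = pack_points(pts)
    assert len(texels) == 2  # 3 points fit in 2 texels instead of 3
    assert all(unpack_point(texels, t) == pts[t] for t in range(3))
    ```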

    GPU-side point transformation

    Instead of transforming the points of the line in the C# script, it is better to postpone that calculation to the GPU. Points can be passed in world space and then, in the shader, multiplied by the view and projection matrices and scaled by the screen size to obtain their screen position. The origin parameter (vOrigin) can be removed and calculated in the shader too.
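    The shader does this with UNITY_MATRIX_VP and ComputeScreenPos. As a standalone Python sketch of the equivalent math, assuming an orthographic camera (the usual case in 2D; all names here are mine, not Unity's):

    ```python
    def world_to_screen(p, cam_pos, ortho_half_height, aspect, screen_w, screen_h):
        """World -> screen for an orthographic 2D camera: the world offset from
        the camera divided by the half-extents gives clip coordinates in [-1, 1],
        which are then remapped to [0, 1] and scaled by the screen resolution."""
        half_width = ortho_half_height * aspect
        clip_x = (p[0] - cam_pos[0]) / half_width
        clip_y = (p[1] - cam_pos[1]) / ortho_half_height
        return ((clip_x * 0.5 + 0.5) * screen_w, (clip_y * 0.5 + 0.5) * screen_h)

    # With an orthographic half-height of 180 world units on a 640x360 screen,
    # one world unit maps to one pixel, so [10, 20] lands 10 px right and
    # 20 px above the screen center [320, 180]
    sx, sy = world_to_screen((10.0, 20.0), (0.0, 0.0), 180.0, 640.0 / 360.0, 640, 360)
    assert round(sx) == 330 and round(sy) == 200
    ```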
     


    Last edited: Apr 18, 2021
  2. ThundThund

    ThundThund

    Joined:
    Feb 7, 2017
    Posts:
    244
    Last edited: Mar 31, 2021
  3. MelvMay

    MelvMay

    Unity Technologies

    Joined:
    May 24, 2013
    Posts:
    4,382
    Might want to post it here too.
     
    ThundThund likes this.
  4. ThundThund

    ThundThund

    Joined:
    Feb 7, 2017
    Posts:
    244
    Added some code fixes and unlit shaders.
     
  5. ThundThund

    ThundThund

    Joined:
    Feb 7, 2017
    Posts:
    244
    New commit:
    Fixed: the multi-line was not working properly with OpenGL due to a wrong texture sampler configuration.
    Now you can use standard shaders instead of Shadergraph.
    Standard shaders allow making the line unlit by enabling a checkbox in the material.
    Files moved to 2 folders: Shadergraph and Shaders.
    The .hlsl files are shared between both versions.
    The test scene has been updated. 2 new lines have been added which use the new standard shaders. A 2D point light has been added to demonstrate how light affects the lines, unless they are unlit.
     
  6. ThundThund

    ThundThund

    Joined:
    Feb 7, 2017
    Posts:
    244
    New commit:
    Fixed: The inherited scale was not properly calculated.
     
  7. vambier

    vambier

    Joined:
    Oct 1, 2012
    Posts:
    65
    What an awesome solution!!! I imported your project but I get the following errors when opening the SG_BresenhamMultiLine shadergraph:

    Shader error in 'hidden/preview/Branch_31483F37': 'ComputeScreenPos': no matching 1 parameter function at Assets/Plugins/LineRenderer2D/Assets/LineRenderer2D/Shaders/S_BresenhamMultiLine.hlsl(18) (on d3d11)

    Shader error in 'hidden/preview/CustomFunction_A7422E2F': 'ComputeScreenPos': no matching 1 parameter function at Assets/Plugins/LineRenderer2D/Assets/LineRenderer2D/Shaders/S_BresenhamMultiLine.hlsl(18) (on d3d11)

    Any idea what's causing this?
     
  8. ThundThund

    ThundThund

    Joined:
    Feb 7, 2017
    Posts:
    244
    Yes, in the S_BresenhamLine.hlsl shader you have to add an additional parameter to the calls to ComputeScreenPos, a -1.0f, like this:

    float2 vOrigin = ComputeScreenPos(projectionSpaceOrigin, -1.0f).xy * _ScreenParams.xy;

    The reason I haven't fixed that is that the HLSL version of the line renderer uses a different ComputeScreenPos function, which receives only 1 parameter. So I had to decide which of the two would break, in order to share the same shader file between both versions.
     
  9. betomaluje

    betomaluje

    Joined:
    Mar 23, 2019
    Posts:
    9
    Hi, first of all, amazing work! This is really cool. I wanted to make some sort of "tentacle" with this at runtime but I can't make it work. I've tried different approaches. First, I think I need to assign the positions and then move those positions. This is my script, but it's still not working (I don't know why):

    Code (CSharp):
    [RequireComponent(typeof(MultiLineRenderer2D))]
    public class WiggleLineRenderer2D : MonoBehaviour
    {
        [SerializeField] private Transform[] positions;

        [SerializeField] private float wiggleSpeed;
        [SerializeField] private float wiggleMagnitud;
        [SerializeField] private int wiggleOffset = 3;

        private MultiLineRenderer2D multiLineRenderer;
        private List<Vector2> lineRendererPoints = new List<Vector2>();

        private void Awake()
        {
            multiLineRenderer = GetComponent<MultiLineRenderer2D>();

            foreach (var pos in positions)
            {
                lineRendererPoints.Add(pos.position);
            }

            multiLineRenderer.Points = lineRendererPoints;

            multiLineRenderer.CurrentCamera = Camera.main;
        }

        private void LateUpdate()
        {
            var newPos = new Vector2();

            for (var i = 0; i < lineRendererPoints.Count; i++)
            {
                var rendererPoint = lineRendererPoints[i];
                newPos.x = rendererPoint.x;
                newPos.y = i % wiggleOffset * Mathf.Sin(Time.time * wiggleSpeed) * wiggleMagnitud;

                lineRendererPoints[i] = newPos;
            }

            multiLineRenderer.Points = lineRendererPoints;
            //multiLineRenderer.ApplyLayoutChanges();    don't know the difference but it works also without this line
            multiLineRenderer.ApplyPointPositionChanges();
        }
    }
    I can see the points change in the editor but still can't see them rendering properly (even though the Gizmos are there, moving).

    Am I doing anything wrong? Thanks for the help in advance.

    PS: I've tried both prefabs for multiline, SG and S, and it's the same outcome.
    PS 2: This script is attached directly to the prefab, the assigned Transforms are just children of this prefab, and "Positions Are Local Space" is checked.


    [EDIT] [SOLVED]

    Ok, the script works just fine! I had some Sorting Layer issues... So if anyone wants to use this script, feel free! Both SG and S work like a charm!
     
    Last edited: Aug 6, 2021
    GliderGuy and ThundThund like this.