
What exactly is LineRenderer doing to draw a line on the screen?

Discussion in 'General Graphics' started by TheLazyEngineer, Apr 27, 2020.

  1. TheLazyEngineer

    TheLazyEngineer

    Joined:
    Jul 22, 2019
    Posts:
    12
    Can someone explain to me how the LineRenderer component works?

    I understand it uses an array of 3D points that define the segments, but how exactly is Unity rendering this to the screen?

    From my understanding, there's no mesh, so I'm kind of confused about what's going on, because every time I've rendered something it has used a mesh and a material.

    The reason I'm asking is that I'd like to implement my own version of a Line Renderer, so I'm trying to figure out what Unity's LineRenderer does to draw on the screen. Can anyone explain?

    Thanks.
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    It’s a mesh.

    At render time it constructs a mesh for the camera rendering it.
     
  3. TheLazyEngineer

    TheLazyEngineer

    Joined:
    Jul 22, 2019
    Posts:
    12
    Ah! So it is! I thought maybe it was doing something else...
    Thank you for the clarification, I'm very new to this!
     
  4. TheLazyEngineer

    TheLazyEngineer

    Joined:
    Jul 22, 2019
    Posts:
    12

    BGolus,

    Another question for you regarding this, if you don't mind.
    Why does the Line Renderer component not have a mesh filter and mesh renderer component on it if it is using a mesh? My understanding is that any time you want to render a mesh you must have both a mesh filter component to hold the mesh and a mesh renderer component that references that filter to render it.

    I'm assuming LineRenderer does not use these components (since I do not see them on the GameObject). Does LineRenderer bypass this? If so, how?

    Based on what I've read, there's also the Graphics class with functions like DrawMesh(...) that it could be using?
    Or possibly the lower-level GL class?

    I'm trying to get an idea of my options when it comes to mesh rendering.
    Thank you.
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    If you want to render a mesh that you already have, or want to have rendered by a MeshRenderer, then yes, you need a MeshFilter and a MeshRenderer. But plenty of stuff in Unity renders without a MeshFilter. Particles, sprites, UI, text mesh, trails, and lines, just to name a few. None of these have a MeshFilter because they get rendered by their own internal systems. Internally it's something more like DrawMesh than anything else, except that for all of those other examples the mesh gets generated just before being rendered (basically during the camera's OnPreRender). The MeshRenderer is just unique in that it exposes the mesh being rendered as a separate component rather than having it be part of the renderer component. Many renderer components do allow you to access the generated mesh, and in some cases even directly modify it to some extent, but they didn't originally.

    Here's the thing. For a GPU to render something, it needs a mesh of some kind* otherwise it has nothing to render. So everything uses meshes, just not all of them are exposed to the user.

    * Compute shader and fully procedural rendering is a thing too, but fairly rare in modern games.
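
    For example, a minimal sketch of that DrawMesh-style path - building a mesh in code and submitting it each frame without any MeshFilter or MeshRenderer - might look like this (the triangle is just placeholder data; this is not how LineRenderer is implemented internally, only the same general idea):

    Code (CSharp):
    using UnityEngine;

    // Sketch: submit a procedurally built mesh every frame without a
    // MeshFilter/MeshRenderer. The mesh contents are placeholder data.
    public class DrawMeshExample : MonoBehaviour
    {
        public Material material; // assumed to be assigned in the Inspector

        Mesh mesh;

        void Start()
        {
            mesh = new Mesh();
            mesh.vertices = new[] { Vector3.zero, Vector3.up, Vector3.right };
            mesh.triangles = new[] { 0, 1, 2 };
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();
        }

        void Update()
        {
            // Queue the mesh for rendering this frame; no renderer component involved.
            Graphics.DrawMesh(mesh, transform.localToWorldMatrix, material, gameObject.layer);
        }
    }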
     
    DotusX and tonialatalo like this.
  6. TheLazyEngineer

    TheLazyEngineer

    Joined:
    Jul 22, 2019
    Posts:
    12
    Great info.
    Any recommendations on how to handle procedural / dynamic meshes that may change topology / geometry over time?

    Just thinking out loud here..

    Let's say I have a subdivision curve, c(t), where t is in the range [0, 1].
    The user controls the number of mesh elements by some level of refinement parameter.
    The curve itself is only represented by several Vector3 points, along with an algorithm that uses these points to evaluate a point on the curve and its normal / tangents at any parametric coordinate, 0 <= t <= 1.

    So this curve's mesh topology and geometry may change at runtime as the user controls the number of subdivisions with that refinement parameter.

    Whoever is responsible for building the mesh will need:
    1) the array of Vector3 control points
    2) the algorithm for calculating points on the curve given a parametric coordinate
    3) the algorithm for calculating the normals at a given parametric coordinate (in order to add thickness to the curve and define a triangle)

    Now, we could delegate this task of building the mesh to the CPU, but this would potentially have to be calculated on Update(), as the mesh is a function of the control points AND the level of refinement - so if the user messes with either of those, it will have to be recomputed the next frame. And the algorithms mentioned in (2) and (3) may be somewhat expensive!

    But, doing this on the CPU would allow me to just use a MeshRenderer and MeshFilter and be done with it! Easier, probably, but I'm worried it will be slow (I honestly have no reference as to how much the CPU can actually handle - but let's say this becomes an issue).
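
    For instance, a rough sketch of that CPU approach - rebuild the ribbon mesh only when the refinement (or the control points) change, and hand it to an ordinary MeshFilter - could look something like this, where EvaluatePoint / EvaluateNormal are hypothetical stand-ins for the algorithms in (2) and (3):

    Code (CSharp):
    using UnityEngine;

    // CPU approach sketch: rebuild a ribbon mesh for the curve only when the
    // refinement level changes, then let a normal MeshFilter/MeshRenderer draw it.
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class CurveMeshBuilder : MonoBehaviour
    {
        public Vector3[] controlPoints;
        [Range(2, 256)] public int refinement = 32;
        public float width = 0.1f;

        Mesh mesh;
        int lastRefinement = -1;

        void Update()
        {
            // Dirty check: only rebuild when the refinement changes.
            // (A real version would also watch the control points, e.g. via an event.)
            if (refinement != lastRefinement)
            {
                lastRefinement = refinement;
                Rebuild();
            }
        }

        void Rebuild()
        {
            if (controlPoints == null || controlPoints.Length < 2 || refinement < 2) return;

            if (mesh == null)
            {
                mesh = new Mesh();
                GetComponent<MeshFilter>().sharedMesh = mesh;
            }

            var vertices = new Vector3[refinement * 2];
            var triangles = new int[(refinement - 1) * 6];

            for (int i = 0; i < refinement; i++)
            {
                float t = i / (float)(refinement - 1);
                Vector3 p = EvaluatePoint(t);  // algorithm (2)
                Vector3 n = EvaluateNormal(t); // algorithm (3)
                vertices[i * 2]     = p - n * width * 0.5f;
                vertices[i * 2 + 1] = p + n * width * 0.5f;
            }

            for (int i = 0; i < refinement - 1; i++)
            {
                int v = i * 2, tri = i * 6;
                triangles[tri]     = v;     triangles[tri + 1] = v + 1; triangles[tri + 2] = v + 2;
                triangles[tri + 3] = v + 2; triangles[tri + 4] = v + 1; triangles[tri + 5] = v + 3;
            }

            mesh.Clear();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();
        }

        // Placeholder implementations; substitute the real subdivision-curve code.
        Vector3 EvaluatePoint(float t)  => Vector3.Lerp(controlPoints[0], controlPoints[controlPoints.Length - 1], t);
        Vector3 EvaluateNormal(float t) => Vector3.up;
    }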

    So, I'm thinking another route could be to make use of a geometry shader somehow?
    My knowledge of geometry shaders is limited, but if I had a way of supplying an array of Vector3s to a shader (maybe as a few float arrays or a texture), I could construct the mesh directly on the GPU, and it would be much faster.

    Not sure which way is generally preferred for these kinds of problems..

    Any insight would be helpful of course!
     
    tonialatalo likes this.
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    I'd say try it purely on the CPU and see how bad it is. There are also the new job-system-friendly mesh APIs that can make this much faster.
    https://docs.unity3d.com/2020.1/Documentation/ScriptReference/Mesh.MeshData.html
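
    A rough sketch of that writable MeshData path (Unity 2020.1+): allocate the writable data, fill the vertex and index buffers (in practice you'd do the filling inside Burst jobs), then apply it to a Mesh. Something like:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of the writable MeshData API linked above. The buffers are filled
    // directly here for brevity; normally that work would happen in jobs.
    public static class MeshDataSketch
    {
        public static void Fill(Mesh mesh, Vector3[] positions, int[] indices)
        {
            Mesh.MeshDataArray dataArray = Mesh.AllocateWritableMeshData(1);
            Mesh.MeshData data = dataArray[0];

            data.SetVertexBufferParams(positions.Length,
                new VertexAttributeDescriptor(VertexAttribute.Position));
            data.SetIndexBufferParams(indices.Length, IndexFormat.UInt32);

            var vertexBuffer = data.GetVertexData<Vector3>();
            for (int i = 0; i < positions.Length; i++) vertexBuffer[i] = positions[i];

            var indexBuffer = data.GetIndexData<uint>();
            for (int i = 0; i < indices.Length; i++) indexBuffer[i] = (uint)indices[i];

            data.subMeshCount = 1;
            data.SetSubMesh(0, new SubMeshDescriptor(0, indices.Length));

            // Applies the data to the mesh and disposes the MeshDataArray.
            Mesh.ApplyAndDisposeWritableMeshData(dataArray, mesh);
            mesh.RecalculateBounds();
        }
    }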

    The GPU is almost always guaranteed to be faster, though. This is something geometry shaders and/or tessellation shaders could probably do an amazing job at. Geometry shaders and tessellation shaders get a lot of (well-deserved) flak for being kind of terrible for GPUs, and a well-written compute shader would be much faster, but those are also a lot more work to use, and the worst geometry shader is still likely faster than a CPU approach.

    Something like Facebook’s Quill does all of the line rendering on the GPU with tessellation shaders from simple control mesh data on the PC, but the Quest version appears to do both depending on where they have the most free perf.
     
    tonialatalo likes this.
  8. TheLazyEngineer

    TheLazyEngineer

    Joined:
    Jul 22, 2019
    Posts:
    12
    Thanks! I'll explore both options. I'll post an update here with some performance studies after I implement it.
     
  9. TheLazyEngineer

    TheLazyEngineer

    Joined:
    Jul 22, 2019
    Posts:
    12
    Using only the CPU, I can calculate points and the first, second, and third derivatives of NURBS geometry at about 4000 locations on Update() while maintaining >60 fps. So, this seems reasonable to me. I can always implement an event system so that values are computed and cached, and only recalculated during a "Changed" event instead of every Update().

    However, it just seems like a much better solution to do this on the GPU... I need to do more research on how I can use shaders to render splines.

    One idea I had is to send in a mesh with only topology data (the triangles array) plus dummy vertex data, then use a simple vertex shader program to reposition the vertices in the proper place according to the spline algorithm. But I'm not quite sure how this will work with the rendering pipeline, because it's my understanding that the world-space positions of this dummy vertex data will determine whether it's even considered for rendering at all, due to frustum culling!
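
    One thing I've read is that frustum culling happens against the mesh's bounds, not against the shader-displaced vertices, so a workaround would be to expand the bounds to cover wherever the vertex shader might move things, and feed the control points in with a MaterialPropertyBlock. Rough sketch below - the "_ControlPoints" property name is just a placeholder and would have to match whatever the shader declares:

    Code (CSharp):
    using UnityEngine;

    // Sketch: keep the dummy mesh from being frustum-culled by giving it large
    // bounds, and upload the spline control points for the vertex shader to use.
    public class SplineShaderFeeder : MonoBehaviour
    {
        public MeshRenderer targetRenderer;
        public MeshFilter targetFilter;
        public Vector3[] controlPoints;

        MaterialPropertyBlock block;

        void LateUpdate()
        {
            if (controlPoints == null || controlPoints.Length == 0) return;

            // Culling uses these bounds, so make them generous enough to cover
            // anywhere the vertex shader might reposition the dummy vertices.
            targetFilter.sharedMesh.bounds = new Bounds(Vector3.zero, Vector3.one * 1000f);

            // SetVectorArray expects Vector4s; the shader-side array name is a placeholder.
            var points = new Vector4[controlPoints.Length];
            for (int i = 0; i < controlPoints.Length; i++) points[i] = controlPoints[i];

            if (block == null) block = new MaterialPropertyBlock();
            block.SetVectorArray("_ControlPoints", points);
            targetRenderer.SetPropertyBlock(block);
        }
    }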

    Anyway... more research to do here. Thanks for your input. If I make any progress going down this GPU route, I'll probably make another thread on the topic for further discussion.