Point cloud too slow: is it possible to pass to a shader an array of Vector position without a mesh?

Discussion in 'Shaders' started by andSol, May 8, 2016.

  1. andSol

    Joined:
    May 8, 2016
    Posts:
    22
    I have been using a custom mesh with a custom-crafted shader to render a very efficient point cloud in my game (the mesh is only a vehicle for getting the point positions to the shader; each point is assigned to a mesh vertex).

    However, as my point cloud grows in size, updating the position of any of the points becomes a huge performance bottleneck because, to the best of my knowledge, one cannot update a single vertex of a mesh without re-uploading the whole mesh - which is pretty expensive.

    So, I was wondering if there is a way of passing directly to a shader the information I am currently routing through the custom mesh - basically, passing the shader an array of Vector3 for the positions and another array with the color of each point.

    That way, in my understanding (correct me if I am wrong), it would become cheaper to update the points in my shader-based point cloud. Otherwise, does anyone have a better suggestion for how I could update the positions of the points in the point cloud without incurring huge mesh-update costs?
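
    To illustrate the bottleneck, here is a minimal sketch of the kind of update I mean (names are illustrative):

    Code (CSharp):
    using UnityEngine;

    public class PointCloud : MonoBehaviour
    {
        public Mesh cloudMesh; // the custom mesh whose vertices are my points

        // Moving even a single point means fetching the whole vertex array
        // (mesh.vertices returns a copy) and reassigning it, which
        // re-uploads every vertex to the GPU.
        void MovePoint(int index, Vector3 newPos)
        {
            Vector3[] verts = cloudMesh.vertices;
            verts[index] = newPos;
            cloudMesh.vertices = verts;
        }
    }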

    Thanks in advance for your time and for any ideas you might have.
     
  2. cblarsen

    Joined:
    Mar 10, 2007
    Posts:
    266
    This could maybe be done more efficiently with a geometry shader, but what I have done previously is to use intermediate rendertextures.
    - First I put all the information that varies for each particle into a simple mesh. That would typically be position, color, and size (which you can fit into vertex position, and either tangent or color). The uv coordinates are fixed and point to fixed positions in the rendertexture. The index array for the mesh is always the same 0,1,2,3,4, etc. sequence (in point mode).
    - I then render the mesh (in point mode) to a couple of intermediate rendertextures (SetRenderTarget with all target textures). The uv of the mesh is used for the position, while the vertex position of the mesh is used as an output value. So each pixel in one texture will store the position, and the corresponding pixel in the other will store color and size.
    - I then perform the actual rendering with one or more meshes that are completely static. They just read the variable info for each particle from the intermediate rendertextures.
    If you are only updating a few points, you could in principle render just those points to the intermediate rendertextures to update their information.
    Remember to call MarkRestoreExpected() on the rendertextures to preserve their contents from frame to frame.
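
    In rough, untested C# the setup could look something like this (texSize, bakeMaterial and pointMesh are just placeholder names):

    Code (CSharp):
    using UnityEngine;

    public class ParticleBaker : MonoBehaviour
    {
        public Material bakeMaterial; // material using the bake shader
        public Mesh pointMesh;        // the mesh carrying the per-particle data
        const int texSize = 256;      // placeholder; must have room for all particles

        RenderTexture posTex, infoTex;

        void Start()
        {
            posTex  = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBFloat);
            infoTex = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBFloat);
            posTex.filterMode = infoTex.filterMode = FilterMode.Point;
        }

        void BakeParticles()
        {
            // Bind both textures as simultaneous render targets (MRT).
            var targets = new RenderBuffer[] { posTex.colorBuffer, infoTex.colorBuffer };
            Graphics.SetRenderTarget(targets, posTex.depthBuffer);
            posTex.MarkRestoreExpected();  // keep last frame's contents instead of clearing
            infoTex.MarkRestoreExpected();
            bakeMaterial.SetPass(0);
            Graphics.DrawMeshNow(pointMesh, Matrix4x4.identity);
        }
    }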
     
  3. andSol

    Joined:
    May 8, 2016
    Posts:
    22
    Hi there, @cblarsen. Thanks for this insightful reply! What I have been doing is precisely building quads on the fly in the geometry shader, using the position of each vertex of the mesh the shader is attached to as a basis. That way, each vertex ends up representing a point in the point cloud. So:

    1) you said that a geometry shader could maybe be used to do this efficiently, but while I am already using a geometry shader, I still can't see how to pass the new positions to it efficiently (i.e. updating the mesh vertices is super costly);

    2) now turning to your implementation using rendertextures: conceptually it sounds like a great idea, but I am confused about how exactly you are passing the positions of the 'particles' to the shader if not through the mesh vertices. Is the information on their positions somehow stored in the rendertextures instead of in the mesh vertices?

    Would you be able to share a snippet of your implementation for the sake of illustration (especially the part about reading from the textures, if I understood it correctly)? What I am trying to do is exactly what you focused on in your last sentences: updating just a few points every once in a while.

    Many thanks
     
  4. cblarsen

    Joined:
    Mar 10, 2007
    Posts:
    266
    I don't own the rights to the code I wrote (my employer does), so, sorry, I can't show it, but I think I can explain.

    You are correct that I _do_ have to use mesh vertices. The point is that if only some particle positions change, I can keep the information on the graphics card in rendertextures and only update the pixels for which there is a change each frame.
    If all particles change position every frame, then you _have_ to update the mesh vertices or do the particle math in a compute shader (I know nothing about compute shaders in practice).

    Below I will give some more details about my RenderTexture algorithm, but I also wonder if part of your performance problem comes from your scripting. Do you, for example, use generic lists when building your vertex lists, or reallocate arrays every frame? If you do, that is probably going to hurt on top of the time it takes to send the vertices to the graphics card.
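
    For example (illustrative names, not my actual code), the allocation-free pattern would be:

    Code (CSharp):
    using UnityEngine;

    public class CloudUpdater : MonoBehaviour
    {
        Vector3[] cachedVerts; // allocated once, sized to the particle count

        void UpdateVerts(Mesh mesh)
        {
            // Bad: mesh.vertices = new List<Vector3>(points).ToArray();
            //      (allocates a new list and array every frame, feeding the GC)

            // Better: write the new positions into cachedVerts in place, then:
            mesh.vertices = cachedVerts; // still uploads, but creates no garbage
        }
    }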

    Part of my algorithm in pseudocode:
    Allocate a RenderTexture of format ARGBFloat, so each pixel can hold a position; it should have FilterMode.Point.
    Allocate another RenderTexture of whatever format can hold the additionalInfo.

    Then, for every frame:

    mesh.vertices = vertexPositions; // this array holds particle positions as usual
    mesh.tangents = additionalInfo; // Could be colors, scale, etc.
    mesh.uv = pixelPositions; // The uvs say where in the renderTexture I want to write this info. For an ordinary shader, this information would have to go into mesh.vertices, but I am saving a coordinate here :)

    Then my first shader looks something like
    Code (csharp):
    // Still pseudocode, haven't tested this

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
        float4 tan : TANGENT;
    };

    struct v2f
    {
        float4 vertex : POSITION;
        float3 particlePos : TEXCOORD0;
        float4 extraInfo : TEXCOORD1;
    };

    v2f vert( appdata v )
    {
        v2f o;
        o.vertex = float4( v.uv, 0, 1 ); // Write all the info to this pixel position in the rendertexture
        // (in a real shader the uv would need remapping from [0,1] to clip space [-1,1])
        o.particlePos = v.vertex.xyz;
        o.extraInfo = v.tan;
        return o;
    }

    struct perPixelOutput
    {
        float4 position : COLOR;   // goes to the first render target
        float4 extraInfo : COLOR1; // goes to the second render target
    };

    perPixelOutput frag( v2f i )
    {
        perPixelOutput result;
        result.position = float4( o.particlePos, 1);
        result.extraInfo = i.extraInfo;
        return result;
    }
    Once I have rendered the mesh to these two rendertextures, the positions and any other info are now on the graphics card.
    I can then use a previously created mesh that never changes to actually render.
    That mesh is structured like this:
    drawMesh.vertices = pixelPositions; // These are the same positions that went into the uv of the original mesh. However, these are constant, so no need to change them every frame
    // Since I didn't use a geometry shader, I actually had to duplicate each vertex 4 times and also add uvs like this
    drawMesh.uv = quadCornerUvs; // a repeating series of (0,0), (0,1), (1,1), (1,0)
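
    Building that static mesh once could look roughly like this (untested sketch; BuildDrawMesh is just an illustrative name):

    Code (CSharp):
    using UnityEngine;

    public static class DrawMeshBuilder
    {
        public static Mesh BuildDrawMesh(Vector2[] pixelPositions)
        {
            int n = pixelPositions.Length;
            var verts   = new Vector3[n * 4];
            var uvs     = new Vector2[n * 4];
            var tris    = new int[n * 6];
            var corners = new[] { new Vector2(0, 0), new Vector2(0, 1),
                                  new Vector2(1, 1), new Vector2(1, 0) };

            // Split across several meshes if you hit the 65k vertex limit.
            for (int i = 0; i < n; i++)
            {
                for (int c = 0; c < 4; c++)
                {
                    verts[i * 4 + c] = pixelPositions[i]; // same lookup coordinate for all four corners
                    uvs[i * 4 + c]   = corners[c];        // tells the vertex shader which corner this is
                }
                int v = i * 4, t = i * 6;
                tris[t]     = v; tris[t + 1] = v + 1; tris[t + 2] = v + 2;
                tris[t + 3] = v; tris[t + 4] = v + 2; tris[t + 5] = v + 3;
            }

            var mesh = new Mesh { vertices = verts, uv = uvs, triangles = tris };
            // The vertex positions are texture coordinates, not world positions,
            // so give the mesh large bounds to keep it from being frustum-culled.
            mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 10000f);
            return mesh;
        }
    }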

    and the vertex shader for the final render goes something like
    Code (csharp):
    v2f vert( drawAppData v )
    {
        v2f o;
        o.position = tex2Dlod( _MyFirstRenderTexture, float4( v.vertex.xy, 0, 0)); // Use the vertex position as uv
        o.uv = v.uv; // This is only one corner of a quad
        o.extraInfo = tex2Dlod( _MySecondRenderTexture, float4( v.vertex.xy, 0, 0));
        // (the corner offset and projection to clip space still need to be applied)
        return o;
    }
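
    For the few-points-per-frame case you asked about, the bake pass can then be run over a small scratch mesh holding only the changed particles (again an untested sketch; the names are placeholders):

    Code (CSharp):
    using UnityEngine;

    public static class PartialBake
    {
        // Re-bake only the particles that changed this frame. Because of
        // MarkRestoreExpected, all untouched pixels keep their old values.
        public static void BakeChangedParticles(Mesh scratchMesh, Material bakeMaterial,
            Vector3[] positions, Vector4[] info, Vector2[] pixelUvs)
        {
            scratchMesh.Clear();
            scratchMesh.vertices = positions; // new particle positions
            scratchMesh.tangents = info;      // new colors/sizes
            scratchMesh.uv       = pixelUvs;  // which pixels to overwrite

            var indices = new int[positions.Length];
            for (int i = 0; i < indices.Length; i++) indices[i] = i;
            scratchMesh.SetIndices(indices, MeshTopology.Points, 0);

            // Same render targets as the full bake (see the setup sketch above).
            bakeMaterial.SetPass(0);
            Graphics.DrawMeshNow(scratchMesh, Matrix4x4.identity);
        }
    }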
    Hope that made it a little clearer
     
  5. andSol

    Joined:
    May 8, 2016
    Posts:
    22
    Ah, many thanks for such a detailed example. Very cool idea, and yes, it got much clearer! I think I got the concept and will try to implement it soon, before jumping into any broader questions. The only thing that caught my attention while reading through the design is that in the first code snippet of your pseudo-ish code, the line

    Code (CSharp):
    result.position = float4( o.particlePos, 1);
    should actually be:

    Code (CSharp):
    result.position = float4( i.position, 1);
    Besides that, I will play around with the whole thing before asking any further questions. Thanks again!
     
  6. cblarsen

    Joined:
    Mar 10, 2007
    Posts:
    266
    You were right that it was wrong, but it should actually be
    Code (CSharp):
    result.position = float4( i.particlePos, 1);
    Happy coding!