Vertex shader vertex position interpolation data

Discussion in 'Shaders' started by uwdlg, Feb 25, 2019.

1. uwdlg

    Hi,

    I'm (ab)using Unity to visualize scientific fluid simulations. The data I'm working with is a collection of text files with comma-separated values describing positions of point cloud points, where each file represents the same point cloud at consecutive moments in time, a few milliseconds apart.
    My approach so far is to first create meshes for every file containing only vertices at the point positions (one-time pre-processing step). I then use a geometry shader which places billboard quads centered around the vertices and a fragment shader which discards pixels in the corners of each quad to get cheap round points without alpha blending and sorting issues. Finally I spawn the meshes in sequence with a bit of delay between each step.
    All that works very well, but I now had the crazy idea to interpolate the point positions between consecutive time steps. For an initial prototype, I stored the positions of each vertex in mesh i + 1 in one of the UV channels of mesh i, and then use this "target position" in my vertex shader to lerp the vertex positions. While this also goes swimmingly, it naturally increases the file size of my generated point cloud meshes (which are already around 40-80MB, up to 2 million vertices) by a factor of almost 2, meaning I would have several tens of gigabytes of meshes if I applied this to all 300+ time steps I have.
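In shader terms, the lerp step looks roughly like this (a simplified sketch; the _Progress property and the TEXCOORD1 channel are illustrative, and the geometry shader stage is omitted):
Code (csharp):
float _Progress; // 0..1 blend factor between this time step and the next

struct appdata {
    float4 vertex : POSITION;
    float3 targetPos : TEXCOORD1; // this vertex's position in mesh i + 1
};

struct v2f {
    float4 pos : SV_Position;
};

v2f vert(appdata v)
{
    v2f o;
    // blend between the current and next time step's positions
    float3 p = lerp(v.vertex.xyz, v.targetPos, _Progress);
    o.pos = UnityObjectToClipPos(float4(p, 1.0));
    return o;
}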
    My question: is there some other way to get the info I need to interpolate vertex positions into my vertex shader without having to store it in the mesh? Or a better way to do what I'm doing now?
    I thought about getting the vertex positions directly from the next mesh, but I don't see how I could do that efficiently.

    Thanks in advance,
    uwdlg
     
2. bgolus

    Ignoring the amount of data you have for a moment, a common technique for this kind of thing is to store the data in a texture (or multiple textures).

The idea would be to create an RGBAHalf Texture2D to store all of the positions, with rows as the vertex index and columns as the frame index (or vice versa). You can either store the "vertex index" texture position in a UV channel, i.e.:

    float animTextureU = ((float)vertexIndex + 0.5f) / (float)(vertexCount);
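On the C# side, that bake could look something like this (a quick sketch; assumes the mesh is already built):
Code (csharp):
// Sketch: bake each vertex's lookup coordinate into UV0.
var uvs = new Vector2[mesh.vertexCount];
for (int i = 0; i < mesh.vertexCount; i++)
    uvs[i] = new Vector2((i + 0.5f) / mesh.vertexCount, 0f);
mesh.uv = uvs;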

Or do it in the shader using the SV_VertexID semantic.
Code (csharp):
sampler2D _AnimTex;
float4 _AnimTex_TexelSize; // automatically set with xy = 1 / texture resolution and zw = texture resolution

struct v2f {
    float4 pos : SV_Position;
};

v2f vert(appdata_full v, uint vertexID : SV_VertexID)
{
    v2f o;

    // tex2Dlod takes a float4: xy = UV, w = explicit mip level
    float4 animTextureUV = float4(
        _Time.y, // animate the frame
        ((float)vertexID + 0.5) * _AnimTex_TexelSize.y, // set the vertex index
        0, // unused
        0); // mip level
    float4 animPos = tex2Dlod(_AnimTex, animTextureUV);

    o.pos = UnityObjectToClipPos(animPos.xyz);

    return o;
}
This won't necessarily decrease memory usage, but it should be a little easier to manage since you only need a single mesh and texture.

    To get around the amount of data, you would need to load the data from the disk as you go and keep updating the texture, or ping-pong between two different animation textures with a limited number of frames, assuming you can load the data in real time.
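For reference, building such a texture on the C# side could look roughly like this (a sketch; `frames` is assumed to be your parsed per-time-step position arrays, all with the same vertex count):
Code (csharp):
// Sketch: pack per-frame vertex positions into an RGBAHalf texture.
// Columns = frame index, rows = vertex index, matching the shader above.
Texture2D BuildAnimTexture(Vector3[][] frames, int vertexCount)
{
    var tex = new Texture2D(frames.Length, vertexCount, TextureFormat.RGBAHalf, false);
    tex.wrapMode = TextureWrapMode.Clamp;
    tex.filterMode = FilterMode.Point; // Bilinear would interpolate between frames in hardware

    var pixels = new Color[frames.Length * vertexCount];
    for (int f = 0; f < frames.Length; f++)
        for (int v = 0; v < vertexCount; v++)
        {
            Vector3 p = frames[f][v];
            pixels[v * frames.Length + f] = new Color(p.x, p.y, p.z, 1f);
        }

    tex.SetPixels(pixels);
    tex.Apply(false, true); // upload to the GPU and free the CPU-side copy
    return tex;
}
One caveat: with one row per vertex you hit the maximum texture dimension (16384 on most desktop GPUs) long before 2 million vertices, so a cloud that size would have to be split across several textures or packed differently.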
     
3. uwdlg

Wow, thanks for the quick and intriguing answer.
I forgot to mention that over the "lifetime" of the simulation, points present at one moment in time may vanish and new ones may appear (in fact, the number of points in a single frame ranges from about 7,000 to almost 2 million). I haven't given this much thought yet, as the input files don't store the points in any particular order right now, so to test whether the whole idea could work, I generated 20 time steps from a single large mesh by moving and rotating the vertices while I wait for new files with point IDs. The problem with varying vertex counts never occurred to me until now.
I still like the idea of using textures as vertex position lookups; is there some clever way I could deal with varying vertex counts?
     
4. bgolus

5. uwdlg

    Sorry for the delay in replying, I fell ill.
    That PDF is an interesting read, thank you very much for that. However, during my sickness, it was decided that interpolation is not actually needed and shall not be pursued further.
So, interpolation aside, would it still be beneficial to go the texture-lookup route (repositioning a single set of vertices) instead of swapping different meshes, especially in terms of real-time performance? Disk space should decrease, I think.
     
6. bgolus

Disk space may not be any different (it might even be worse), but the texture-based method may be slightly more efficient.
     
7. Rs

Oh bless you all! @uwdlg I had the same idea and that's why I came here. Thanks @bgolus for the distilled solution. I would love to know from @uwdlg: did you manage to implement this? How did it go? And do you have any code you can share and/or some videos of the results?
    Thanks