Question Can I capture all triangle vertices at the vertex shader stage?

Discussion in 'Shaders' started by DryerLint, Jan 28, 2023.

  1. DryerLint

    DryerLint

    Joined:
    Feb 7, 2016
    Posts:
    68
    Hey all,

    I am curious about whether it's possible to capture all of a primitive's vertices at the vertex shader stage, and not just the operated-upon vertex, without doing any redundant packing of vertex attributes, and without using a geometry shader.

    In searching the Unity forums, StackOverflow, etc., I've seen users describe a method that uses SV_VertexID to isolate each triangle vertex and store it in the output struct. Three additional fields (e.g., uv1, uv2, and uv3) are denoted in the vertex-to-fragment struct, and they're assigned by using a conditional statement. See the code below.

    Code (CSharp):
    vOut vert(appdata vIn, uint vertexID : SV_VertexID)
    {
        vOut o = (vOut)0; // zero-initialize so the unassigned uv fields are defined

        // ... The usual vertex setup goes here ...

        uint triangleVertex = vertexID % 3;

        if (triangleVertex == 0) o.uv1 = vIn.uv;
        if (triangleVertex == 1) o.uv2 = vIn.uv;
        if (triangleVertex == 2) o.uv3 = vIn.uv;

        return o;
    }
    This approach seems to work, but it's glitchy.

    I realize that, for this to work, the mesh can't have any shared vertices (otherwise SV_VertexID modulo 3 won't reliably map to 0, 1, and 2), and that's fine for my use case. However, I'm still having trouble: the data still seems to interpolate across the triangle, which is not what I want. I've tried the nointerpolation qualifier, but it often seems to zero out the data, and I can't explain why.

    My objective with all of this is to be able to perform my own barycentric UV interpolation at the fragment level. I recently asked about this in another thread, but I have since decided derivatives are not the answer (for converting world space coordinates to UVs, and back again).
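    Roughly, what I'm after at the fragment level is something like this (just a sketch; it assumes uv1/uv2/uv3 arrive un-blended, and that bary is a normally-interpolated float3 that each vertex writes as a one-hot value, so the rasterizer turns it into barycentric weights — all the names here are made up):

    ```hlsl
    // Sketch of manual barycentric UV interpolation in the fragment shader.
    // Assumes: uv1/uv2/uv3 carry the three corner UVs un-blended, and bary
    // was written per vertex as (1,0,0), (0,1,0) or (0,0,1), so after
    // rasterization it holds this fragment's barycentric weights.
    float4 frag(vOut i) : SV_Target
    {
        float2 uv = i.bary.x * i.uv1 + i.bary.y * i.uv2 + i.bary.z * i.uv3;
        return tex2D(_MainTex, uv);
    }
    ```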

    Can anybody offer me any insight?

    Thanks a lot.
     
  2. burningmime

    burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    EDIT: Ah, I see you mentioned that no verts will be shared. However, you're still getting zeros because nointerpolation doesn't merge the first nonzero value of each field. It takes all the values from a single vertex (the "leading" or provoking vertex, which the graphics API defines), where 2/3 of those values are zero.
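    To illustrate (rough sketch):

    ```hlsl
    struct v2f
    {
        float4 pos : SV_POSITION;
        // nointerpolation doesn't pick per field: the rasterizer takes ALL
        // of these from the one provoking ("leading") vertex of the triangle.
        nointerpolation float2 uv1 : TEXCOORD0;
        nointerpolation float2 uv2 : TEXCOORD1;
        nointerpolation float2 uv3 : TEXCOORD2;
    };
    // If vertex 0 is the provoking vertex, it only filled in uv1, so every
    // fragment of that triangle sees uv2 == uv3 == 0.
    ```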

    You could store the vertex data in a UAV/compute buffer and then access it from any point in the pipeline you want.
     
    Last edited: Jan 28, 2023
    DryerLint likes this.
  3. DryerLint

    DryerLint

    Joined:
    Feb 7, 2016
    Posts:
    68
    Thanks for the reply burningmime!

    I remember reading posts that allude to this leading vertex principle, but I didn't know it was responsible for the problems I was having. Thanks a lot for the clarification! I think I'll give up on trying to access a primitive's vertex list from the vantage point of the vertex shader.

    I'm definitely leaning more towards the ComputeBuffer idea now, even though I'm frustrated by the redundancy of it. As far as I understand it, bound mesh data (i.e., the vertex buffers that store the mesh attribute data) lives in a part of GPU memory that can't also be referenced as a compute shader buffer. Please correct me if I'm wrong!
     
  4. burningmime

    burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    You can bind a VB as ByteAddressBuffer. I haven't done it with the thing currently being drawn; I don't see why that would be a problem but there might be API or hardware limitations that prevent it. Here's some code I have for getting a GraphicsBuffer from a mesh: https://gitlab.com/burningmime/arch...olume-shadows/src/internal/ShadowMesh.cs#L116 .
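    The C# side is something along these lines (a sketch, not a drop-in: it needs a Unity version with Mesh.vertexBufferTarget / Mesh.GetVertexBuffer, i.e. 2021.2+, and the stream index and offsets depend on your mesh's layout; the shader property names are made up):

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch: expose a mesh's vertex buffer to a shader as a raw buffer.
    public static class MeshVertexBufferUtil
    {
        public static GraphicsBuffer BindVertexBuffer(Mesh mesh, Material mat)
        {
            // Ask Unity to also allocate the VB with raw (ByteAddressBuffer) access.
            mesh.vertexBufferTarget |= GraphicsBuffer.Target.Raw;

            int stream = mesh.GetVertexAttributeStream(VertexAttribute.Position);
            GraphicsBuffer vb = mesh.GetVertexBuffer(stream);

            // The shader needs the stride/offset to decode each vertex.
            mat.SetBuffer("_VertexData", vb); // ByteAddressBuffer in HLSL
            mat.SetInt("_VertexStride", mesh.GetVertexBufferStride(stream));
            mat.SetInt("_PositionOffset",
                mesh.GetVertexAttributeOffset(VertexAttribute.Position));

            return vb; // caller should Dispose() when done with it
        }
    }
    ```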

    Alternatively, you can access the compute buffer from the vertex shader. If you use DrawProcedural, you won't have a vertex or index buffer at all; you just manually look up the vertex from the compute buffer. This can be slower than using a traditional geometry pipeline, though, especially on older GPUs.
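    The vertex-pulling version looks roughly like this (a sketch; it assumes a StructuredBuffer of vertex data was bound from C# and the draw was issued with Graphics.DrawProcedural, and the struct layout/names are made up):

    ```hlsl
    // Sketch of vertex pulling: no vertex/index buffer is bound; each
    // invocation fetches its own data from a compute buffer by SV_VertexID.
    struct VertexData { float3 position; float2 uv; };
    StructuredBuffer<VertexData> _Vertices; // bound from C#, e.g. material.SetBuffer

    v2f vert(uint id : SV_VertexID)
    {
        VertexData v = _Vertices[id];
        v2f o;
        o.pos = UnityObjectToClipPos(float4(v.position, 1.0));
        o.uv = v.uv;
        return o;
    }
    ```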
     
    DryerLint likes this.
  5. DryerLint

    DryerLint

    Joined:
    Feb 7, 2016
    Posts:
    68
    Damn, in all my experience with the Mesh class, I never noticed the vertexBufferTarget field. I'm going to have to try that out. Thank you for bringing my attention to this, burningmime! Thanks to your tip, my current idea is to write my own rudimentary compute shader rasterizer that will be automatically executed upon a draw call (i.e., during OnPreRender()).

    The effect I'm trying to achieve:
    1. Given the current fragment's world position vec3, snap it to the voxel grid using a round();
    2. Using the normal and the world position, calculate the plane definition of the current triangle;
    3. Given the snapped voxel position (calculated in step 1), find the closest point, pNearest, on the plane;
    4. Calculate the new UV for pNearest, and sample the texture;
    5. Draw only if the fragment intersects with pNearest, and discard all other fragments;
    The rendered surface will look like it's made up of individual points. As the camera moves backwards, the surface converges to a solid appearance.
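    The steps above, sketched as a fragment shader (assuming interpolated world position and a flat normal in the v2f struct; _VoxelSize, _PointRadius and WorldToUV() are placeholders I'd still have to define, since the world-to-UV mapping depends on the surface):

    ```hlsl
    // Sketch of steps 1-5. _VoxelSize, WorldToUV() and _PointRadius are
    // placeholders; the real UV mapping depends on the parameterization.
    float4 frag(v2f i) : SV_Target
    {
        // 1. Snap the fragment's world position to the voxel grid.
        float3 snapped = round(i.worldPos / _VoxelSize) * _VoxelSize;

        // 2. Plane of the current triangle from the normal + a point on it.
        float3 n = normalize(i.worldNormal);
        float d = dot(n, i.worldPos);

        // 3. Closest point on that plane to the snapped voxel position.
        float3 pNearest = snapped - (dot(n, snapped) - d) * n;

        // 4. Map pNearest back to UV space and sample (placeholder mapping).
        float2 uv = WorldToUV(pNearest);
        float4 color = tex2D(_MainTex, uv);

        // 5. Keep only fragments close enough to pNearest; discard the rest.
        if (distance(i.worldPos, pNearest) > _PointRadius) discard;
        return color;
    }
    ```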