
Question Trying to read an arbitrary mesh's triangle data in a compute shader

Discussion in 'Shaders' started by Nomad_Artisan, May 17, 2023.

  1. Nomad_Artisan


    Jun 29, 2013
    For context, I'm working with Sebastian Lague's marching cubes (planet) project as a base. In it, he uses a RenderTexture with a volume depth to pass to various compute shaders using it as a 3d array of floats.
    I'm modifying the project so that I can check every element within this 3d array of floats to see if its xyz index, when treated as a vector3, is inside a mesh in the Unity scene.
    I've been able to get the mesh's vertex buffer working using this code:
Code (CSharp):
mesh.vertexBufferTarget |= GraphicsBuffer.Target.Raw;
var vertexBuffer = mesh.GetVertexBuffer(0);
MyDensityCompute.SetBuffer(0, "meshVertexBuffer", vertexBuffer);
MyDensityCompute.SetInt("vertexCount", mesh.vertexCount);
var locToWorldMatrix = theMeshGameObject.transform.localToWorldMatrix;
MyDensityCompute.SetMatrix("localToWorldMatrix", locToWorldMatrix);
Code (Compute Shader):
ByteAddressBuffer meshVertexBuffer;
int vertexCount;
float4x4 localToWorldMatrix;

float3 load_vertex(int i)
{
    int i_location = i * 56; // 56 = this mesh's vertex stride in bytes
    float3 vert = float3(0.0f, 0.0f, 0.0f);
    vert.x = asfloat(meshVertexBuffer.Load(i_location));
    vert.y = asfloat(meshVertexBuffer.Load(i_location + 4));
    vert.z = asfloat(meshVertexBuffer.Load(i_location + 8));
    return mul(localToWorldMatrix, float4(vert, 1)).xyz;
}
where i >= 0 and i < vertexCount
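(For reference, the hard-coded 56 is just this particular mesh's vertex stride. I believe you can query it at runtime instead of hard-coding it; untested sketch below, and "vertexStride"/"positionOffset" are made-up uniform names. GetVertexBufferStride needs a recent Unity version, 2021.2 or so.)

Code (CSharp):
// Untested sketch: query the vertex stride and position offset
// rather than hard-coding 56 for one specific vertex layout.
int stride = mesh.GetVertexBufferStride(0);
int posOffset = mesh.GetVertexAttributeOffset(UnityEngine.Rendering.VertexAttribute.Position);
MyDensityCompute.SetInt("vertexStride", stride);
MyDensityCompute.SetInt("positionOffset", posOffset);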

    I'm struggling with figuring out how to use the mesh's index buffer to access the indices of each triangle of the mesh. Here's what I have so far:
Code (CSharp):
mesh.indexBufferTarget |= GraphicsBuffer.Target.Raw;
mesh.indexBufferTarget |= GraphicsBuffer.Target.Index;
var indexBuffer = mesh.GetIndexBuffer();
MyDensityCompute.SetBuffer(0, "meshIndexBuffer", indexBuffer);
MyDensityCompute.SetInt("triangleCount", mesh.GetTriangles(0).Length);
Code (Compute Shader):
ByteAddressBuffer meshIndexBuffer;
int triangleCount;

int3 getTriangleIndices(int idx)
{
    int i_loc = idx * 16;
    int id1 = asint(meshIndexBuffer.Load(i_loc));
    int id2 = asint(meshIndexBuffer.Load(i_loc + 4));
    int id3 = asint(meshIndexBuffer.Load(i_loc + 8));
    return int3(id1, id2, id3);
}
where idx >= 0 and idx < triangleCount.
I then call load_vertex, passing the x, y, or z value from the int3 returned by getTriangleIndices, and Unity hard crashes.

    Any direction on where to go from here would be greatly appreciated, thank you!
  2. burningmime


    Jan 25, 2014
One thing that stands out is that you are calculating the index offsets wrong. If you are sure that the indices are 32-bit (many meshes have 16-bit indices; you can check using Mesh.indexFormat), you should multiply by 12 (not 16) in your getTriangleIndices function, since each triangle is three 4-byte indices. A more optimized version would be:
uint3 triangle = meshIndexBuffer.Load3(idx * 12);

    If the mesh has 16-bit indices, then it gets a mite more complicated since you'll need to load 8 bytes of data and do the shift.

Code (csharp):
uint3 loadTriangleIndices(uint nTriangle)
{
#if defined(INDEX_FORMAT_16)
    // stolen from:
    uint offsetBytes = nTriangle * 6;
    uint dwordAlignedOffset = offsetBytes & ~3;
    uint2 four16BitIndices = meshIndexBuffer.Load2(dwordAlignedOffset);
    uint3 indices;
    if (dwordAlignedOffset == offsetBytes)
    {
        indices.x = four16BitIndices.x & 0xffff;
        indices.y = (four16BitIndices.x >> 16) & 0xffff;
        indices.z = four16BitIndices.y & 0xffff;
    }
    else
    {
        indices.x = (four16BitIndices.x >> 16) & 0xffff;
        indices.y = four16BitIndices.y & 0xffff;
        indices.z = (four16BitIndices.y >> 16) & 0xffff;
    }
    return indices;
#elif defined(INDEX_FORMAT_32)
    uint offsetBytes = nTriangle * 12;
    return meshIndexBuffer.Load3(offsetBytes);
#else
    #error "Must define INDEX_FORMAT_16 or INDEX_FORMAT_32"
#endif
}
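On the C# side you'd then pick the variant based on Mesh.indexFormat. A rough, untested sketch (it assumes a matching multi_compile pragma in the .compute file, and ComputeShader.EnableKeyword, which needs a recent Unity, 2021.2+):

Code (CSharp):
// Untested sketch: select the shader variant matching the mesh's index format.
// Assumes: #pragma multi_compile INDEX_FORMAT_16 INDEX_FORMAT_32 in the .compute file.
if (mesh.indexFormat == UnityEngine.Rendering.IndexFormat.UInt16)
{
    MyDensityCompute.EnableKeyword("INDEX_FORMAT_16");
    MyDensityCompute.DisableKeyword("INDEX_FORMAT_32");
}
else
{
    MyDensityCompute.EnableKeyword("INDEX_FORMAT_32");
    MyDensityCompute.DisableKeyword("INDEX_FORMAT_16");
}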

    EDIT: Took another look, and this is also wrong...

Code (csharp):
MyDensityCompute.SetInt("triangleCount", mesh.GetTriangles(0).Length);
    That gets a copy of the indices on the CPU (very slow) and then gets the length of that array which is 3 times as many triangles as the mesh actually has. To get the number of triangles in the first submesh, you can do...

Code (csharp):
MyDensityCompute.SetInt("triangleCount", (int)(mesh.GetIndexCount(0) / 3));
    Note that it divides by 3, and that it only gets the count instead of copying the data.
    Last edited: May 17, 2023
    Nomad_Artisan and Unifikation like this.
  3. Nomad_Artisan


    Jun 29, 2013
Thank you for this great answer. It clears up a lot of the confusion I was having. I'm currently testing my code with one each of Unity's built-in cube and sphere meshes, both of which do use 16-bit indices.
This loadTriangleIndices method works great for the cube, but it fails for the sphere with the device reset/removed error shown in the attachment.
Does this mean I'm out of luck using something as simple as a primitive sphere? Or is there a way to tell Unity that this compute shader is expected to take longer than normal, and to let the GPU spend as much time as it needs on it?

    Attached Files:

  4. burningmime


    Jan 25, 2014
    You probably have an infinite loop in your compute shader.
  5. Nomad_Artisan


    Jun 29, 2013
    There is no infinite loop in the compute shader.
    The value of triangle count for the sphere ends up being 2,304.
If I pass a lower arbitrary value for "triangleCount", such as 100, the dispatch call works with the sphere mesh without crashing, but obviously I'm not using all of the information I need.
I'm wondering if I am not handling the disposal of the buffers properly, or if I'm disposing of them too soon.
Currently I'm looping over the meshes in my scene; within each iteration I create local variables for the buffers, call Dispatch on my compute shader, and then release both buffers on the next two lines. Could this be the source of the error?
Is it unsafe to dynamically create local buffers within a loop?
If the compute shader's dispatch is asynchronous, is there a way for me to force my program to wait for the compute shader to finish its work before continuing the C# function?
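One idea I might try (untested sketch; resultBuffer and the thread-group counts are placeholders from my project) is forcing a sync with a blocking readback before releasing anything:

Code (CSharp):
// Untested sketch: a blocking readback stalls the CPU until the GPU
// has finished all work submitted before it, including the dispatch.
MyDensityCompute.Dispatch(0, groupsX, groupsY, groupsZ);
var request = UnityEngine.Rendering.AsyncGPUReadback.Request(resultBuffer);
request.WaitForCompletion(); // blocks the main thread until the GPU work is done
// Only now release the per-mesh buffers.
vertexBuffer.Release();
indexBuffer.Release();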

    Thanks again for your replies.
    Last edited: May 17, 2023
  6. Nomad_Artisan


    Jun 29, 2013
I ran a few more tests and did some research. If I understood what I read correctly, dispatching a compute shader doesn't execute the shader code immediately; the work is queued and the GPU runs it at some point during Unity's frame. Is that correct?

If so, I think I have a possible solution. I don't need this code to finish in a single frame, so I'm going to split the compute shader's execution across multiple frames. I tested the idea by giving the compute shader a start and end range to iterate over instead of going from 0 to triangleCount.
Each range works individually, so long as I'm not dispatching too many of them in Start().
I'm going to try dispatching the compute shader with a different range each Update() until all triangles have been processed.
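A rough sketch of the idea (untested; rangeSize, the uniform names, and the thread-group count are placeholders from my project; needs using System.Collections):

Code (CSharp):
// Untested sketch: process one range of triangles per frame via a coroutine
// started from Start() instead of dispatching everything at once.
private IEnumerator ProcessTrianglesOverFrames(int triangleCount, int rangeSize)
{
    for (int start = 0; start < triangleCount; start += rangeSize)
    {
        int end = Mathf.Min(start + rangeSize, triangleCount);
        MyDensityCompute.SetInt("rangeStart", start);
        MyDensityCompute.SetInt("rangeEnd", end);
        MyDensityCompute.Dispatch(0, threadGroups, 1, 1);
        yield return null; // give the GPU a frame before the next range
    }
}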

    I'll post with my results.