
Question: Rendering point meshes

Discussion in 'General Graphics' started by P_jvillar, Feb 11, 2022.

  1. P_jvillar

    P_jvillar

    Joined:
    May 28, 2019
    Posts:
    19
    Hi, I am working in Unity 2021.1.28f1 and I have to display some meshes made up only of vertices (they don't have any faces), with millions of points. All of them use the same material with a simple vertex-color shader. My problem is that the draw call count is very high, and Unity doesn't let me make the objects static (because they have no faces).

    Is there any way I could batch them together? I am using the standard render pipeline; would LWRP or HDRP help improve render performance?

    Thanks,
    Javi
     
  2. mabulous

    mabulous

    Joined:
    Jan 4, 2013
    Posts:
    198
  3. P_jvillar

    P_jvillar

    Joined:
    May 28, 2019
    Posts:
    19
    Hi, thanks for the help, but for point topology the static batching utility doesn't work, since Unity only looks for tris and quads. I will try combining some meshes.
     
  4. mabulous

    mabulous

    Joined:
    Jan 4, 2013
    Posts:
    198
    Something like this (I didn't try it, but I think it should work):

    Code (CSharp):

    GameObject batched_go = new GameObject("batched", typeof(MeshFilter), typeof(MeshRenderer));
    // Position the GameObject wherever you want the origin of the batched point cloud
    // to be relative to the current placement of your point clouds. For best numerical
    // accuracy, place it at the center of all the points.

    Matrix4x4 batchFromWorld = batched_go.transform.worldToLocalMatrix;

    MeshFilter[] filters = your_root_object.GetComponentsInChildren<MeshFilter>();

    int total_vertex_count = 0;
    foreach (MeshFilter filter in filters) {
      // Use the index count rather than mesh.vertexCount in case not all vertices
      // are to be rendered. If you have multiple submeshes, iterate over those too.
      total_vertex_count += (int)filter.sharedMesh.GetIndexCount(0);
    }

    Vector3[] batched_vertices = new Vector3[total_vertex_count];
    Color32[] batched_colors = new Color32[total_vertex_count];
    int[] batched_indices = new int[total_vertex_count];

    int i = 0;
    foreach (MeshFilter filter in filters) {
      Matrix4x4 worldFromIndividual = filter.transform.localToWorldMatrix;
      Matrix4x4 batchFromIndividual = batchFromWorld * worldFromIndividual;
      int[] indices = filter.sharedMesh.GetIndices(0); // loop over submeshes too if more than one
      Vector3[] vertices = filter.sharedMesh.vertices; // cache: these properties copy the array on every access
      Color32[] colors = filter.sharedMesh.colors32;
      for (int j = 0; j < indices.Length; j++) {
        batched_vertices[i] = batchFromIndividual.MultiplyPoint(vertices[indices[j]]);
        batched_colors[i] = colors[indices[j]];
        batched_indices[i] = i;
        i++;
      }
    }

    Mesh batched_mesh = new Mesh();
    batched_mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32; // needed for more than 65535 vertices
    batched_mesh.subMeshCount = 1;
    batched_mesh.SetVertices(batched_vertices);
    batched_mesh.SetColors(batched_colors);
    batched_mesh.SetIndices(batched_indices, MeshTopology.Points, 0);
    batched_go.GetComponent<MeshFilter>().sharedMesh = batched_mesh;
     
  5. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    727
    If you are still interested in a solution, I would very much recommend using a compute shader and writing to a texture using atomics. This will probably be the fastest possible way to render points.
     
  6. P_jvillar

    P_jvillar

    Joined:
    May 28, 2019
    Posts:
    19
    Hi, do you know of any example that shows how to use it that I could follow? Is it useful for a few meshes of millions of points? Is there a way to handle culling or LODs?
    Thanks
     
  7. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    727
    Hmm, maybe you can explain more about what you want. Culling and LODs with point clouds? That is challenging because point clouds are not watertight, and LODs exacerbate the issue. LODs themselves are not very difficult, though, as long as you mean the simplest case, where you just render a different point cloud depending on the distance/screen-space height.

    For the compute shader: basically, you want to use a 64-bit uint texture. In your compute shader, you get the current thread index and use it to index into an array of world positions for the points. You then convert the point to view space, and once you have a screen-space pixel position, you can use atomicMin to write to that pixel. The uint you write is set up such that the most significant bits contain the view-space depth of the point and the least significant bits contain the point color (or other data).
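
    The packed-value trick described above can be illustrated off the GPU too. Here is a minimal C# sketch of the packing (hypothetical helper names, not from the thread; it assumes view-space depth in the high 32 bits and an RGBA32 color in the low 32 bits, so that taking the minimum of two packed values keeps the nearer point):

    Code (CSharp):

    // Non-negative float bit patterns sort the same way as the floats themselves,
    // so a smaller packed value means a nearer point.
    static ulong Pack(float viewDepth, uint rgba) {
        uint depthBits = (uint)System.BitConverter.SingleToInt32Bits(viewDepth);
        return ((ulong)depthBits << 32) | rgba;
    }

    static uint UnpackColor(ulong packed) { return (uint)packed; } // low 32 bits

    // Conceptually what the per-pixel atomicMin does:
    ulong pixel = ulong.MaxValue;                            // cleared to "infinitely far"
    pixel = System.Math.Min(pixel, Pack(5.0f, 0x11223344)); // far point
    pixel = System.Math.Min(pixel, Pack(2.0f, 0xAABBCCDD)); // near point wins
    // UnpackColor(pixel) now returns the near point's color, 0xAABBCCDD.

    On the GPU the same comparison happens atomically per pixel, so the write order of the points doesn't matter.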
     
  8. P_jvillar

    P_jvillar

    Joined:
    May 28, 2019
    Posts:
    19
    Thanks for helping; I'm very new to shaders and rendering.
    Right now what I have is a point cloud with a few hundred million points. I have it divided into different meshes (for frustum culling) and into different qualities (point counts, for LODs) for performance reasons (it's going to be shown in VR).
    Would the compute shader have all the points allocated at all times, or would it only take the ones that need to be rendered?
    I am really asking whether there is a better way (better performance) to display the points than the MeshRenderer.
     
  9. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    727
    Ah ok VR, so in that case in your compute shader you will write to 2 textures instead of one.

    Oh I thought you meant occlusion culling. For frustum culling this is much simpler (the culling part). You just need to cull on the CPU and upload the results to the GPU for the compute shader to process.
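
    The CPU-side frustum culling could look roughly like this (an untested sketch; `chunks` is an assumed array holding one Renderer per point-cloud chunk, each with valid bounds):

    Code (CSharp):

    using UnityEngine;

    public class PointCloudCuller : MonoBehaviour {
        public Renderer[] chunks; // one renderer per point-cloud chunk

        void Update() {
            // Extract the six frustum planes of the camera once per frame.
            Plane[] planes = GeometryUtility.CalculateFrustumPlanes(Camera.main);
            foreach (Renderer chunk in chunks) {
                // Draw only chunks whose bounds intersect the view frustum.
                chunk.enabled = GeometryUtility.TestPlanesAABB(planes, chunk.bounds);
            }
        }
    }

    With the compute-shader approach you would do the same per-chunk test, but instead of toggling renderers you would upload only the visible chunks' point buffers before dispatching.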

    Also, you can definitely get away with a lot of points... but a few hundred million is a LOT. Not unheard of for point cloud rendering, of course, but I would see if there is any way that count can be reduced. Even with very fast methods, I'm not sure how realtime that will be in VR (also, mobile VR or PC VR?).

    Yes, basically any way you approach this will be faster than MeshRenderers. This is a literal worst-case scenario for how MeshRenderers are set up to be used. A slower method than the custom one I mentioned would be something like DrawMeshInstancedIndirect, but even that will still be far faster than MeshRenderers. Your biggest problem is that rendering that many small triangles is going to kill performance, so it's better to avoid triangles altogether and write straight to a texture (using a compute shader).

    All this being said, compute shaders are more complex than day to day Unity stuff, but if you are willing to learn it a bit you will benefit greatly from the performance.

    Also if you have an image, it will help to explain your use case better (and maybe others can chime in as well).
     
  10. P_jvillar

    P_jvillar

    Joined:
    May 28, 2019
    Posts:
    19
    It is PC VR.

    I'm sorry, but for privacy reasons I can't share any images.

    I will try to find some tutorials and start working from there.

    Thanks
     
  11. P_jvillar

    P_jvillar

    Joined:
    May 28, 2019
    Posts:
    19
    Would a DM be possible? It would be great if you could help us directly, working together on a pilot project. We have a small budget for it that I would like to share with you.
     
  12. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    727
    Sure, you can DM me.