
Other Point cloud selection via pixel color

Discussion in 'Shaders' started by facePuncher7, May 19, 2023.

  1. facePuncher7

    facePuncher7

    Joined:
    Mar 9, 2013
    Posts:
    16
    I posted this in Unity Answers but just realized here may be a better location. Mods, please nuke if necessary.

    *****

    I currently have a custom point cloud shader for Unity.

    The point clouds are represented as a collection of meshes, each point being a vertex in a mesh. The shader displays a billboard/quad with a transparent texture at each vertex location.

    This is what one of our point clouds looks like with the shader:





    You can see in the close-up view that each point is a quad with a kind of gradient circle texture.

    I have developed the ability for the user to select a point by indexing all the points in an octree structure and traversing it based on camera/mouse input.

    This method is rather heavy, though, both for indexing and for traversing, and I believe it would be possible instead to develop a shader that records the positions of vertices in a texture buffer (placing XYZ world positions into the buffer instead of RGB color values).

    Then we could simply reference the XYZ buffer texture at Input.mousePosition and retrieve the color, which we would interpret as XYZ, and we would have the world position of the point where the user clicked.

    The issues I'm having in implementing this idea are numerous; it seems to require shader knowledge much deeper than I have. I am able to color pixels based on world coordinates using:

    float4 frag(VertexOutput o) : COLOR {
        // Transforms the object-space origin to world space, so every
        // fragment of a given mesh gets that mesh's pivot position.
        float4 worldPos = mul(unity_ObjectToWorld, float4(0, 0, 0, 1));
        float4 returnCol = float4(worldPos.rgb * _Scaler, 1.0);
        return returnCol;
    }
    But I am having no success getting it to color based on the vertex positions, only polygons. All vertex pixels are colored black.

    Even if that portion worked, I would still have the problem of integrating it into the existing shader, which I assume would somehow need to output the XYZ values to a separate buffer while continuing to send the regular RGB values for rendering. I'm very unsure how best to structure this and then how to execute it.
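    [Editor's note: one way to structure "XYZ to a separate buffer, RGB to the normal one" on the shader side is multiple render targets (MRT), where a single fragment function writes to two buffers at once. A rough sketch — the VertexOutput fields (uv, worldPos) and _MainTex are assumed, not from the shader above:]

    // Sketch of a multiple-render-target fragment shader: regular color
    // goes to SV_Target0, world position to SV_Target1. Assumes the
    // vertex shader already provides uv and worldPos in VertexOutput.
    struct FragOutput {
        float4 color    : SV_Target0; // normal RGB output for display
        float4 worldPos : SV_Target1; // XYZ stored in rgb
    };

    FragOutput frag(VertexOutput o) {
        FragOutput f;
        f.color    = tex2D(_MainTex, o.uv);       // the usual billboard texture
        f.worldPos = float4(o.worldPos.xyz, 1.0); // picking data
        return f;
    }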

    Please let me know if my issue makes sense, if not I can try to explain more.
     
  2. burningmime

    burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    Your idea is a good one, and is used quite a bit (in fact, that's how Unity selects the object you click on in the editor). However, you are definitely going to run into some precision/numerical issues if you're not careful with the texture formats. The default texture format stores 8 bits per channel in R, G, and B, so you can represent a 256x256x256 grid of points. However, note that it stores these as values between 0 and 1, so if you want to use that, you'll need to scale your positions into that range. Any value you output below 0 will be clamped to 0, and any value above 1 will be clamped to 1 -- which is likely why all you're seeing in the output texture is black.
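    [Editor's note: to make the scaling/clamping concrete, here is a small Python sketch of that 8-bit round trip. The cloud bounds are made-up numbers for illustration:]

    ```python
    # Sketch of the 8-bit quantization described above: world positions are
    # scaled into [0, 1] against known cloud bounds, stored as 0..255 bytes,
    # and decoded back. The bounds below are assumptions, not real data.
    BOUNDS_MIN, BOUNDS_MAX = -50.0, 50.0  # assumed cloud extent per axis

    def encode(pos):
        """World position -> 8-bit RGB, clamping like the GPU does."""
        out = []
        for p in pos:
            t = (p - BOUNDS_MIN) / (BOUNDS_MAX - BOUNDS_MIN)  # scale to [0, 1]
            t = min(max(t, 0.0), 1.0)                          # clamp
            out.append(round(t * 255))
        return tuple(out)

    def decode(rgb):
        """8-bit RGB -> world position (with quantization error)."""
        return tuple(BOUNDS_MIN + (c / 255) * (BOUNDS_MAX - BOUNDS_MIN)
                     for c in rgb)

    print(encode((0.0, 10.0, -80.0)))  # -80 is outside the bounds: clamps to 0
    print(decode(encode((0.0, 10.0, 0.0))))  # recovers positions to ~0.2 units
    ```

    The recovered positions are only accurate to the grid cell size (100 units / 255 steps here), which is why a float format or an ID texture is the better choice for real picking.
    
    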

    If you want to store the full world precision, you could use R32G32B32A32_SFloat. Then you could put whatever numbers you want in there, and it would have the same floating-point precision as on the CPU. This is fairly well supported on most desktop GPUs; I'm not sure about mobile. You could also quantize them to some sort of grid and store the integer coordinates.

    But if I were doing this, I'd probably store the index of each point in a R32_UInt texture. You just write out the point ID to the texture, and whatever the user clicks on is the point they selected (which I assume you can convert back to a world position on the CPU if you need it).
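    [Editor's note: on the shader side the ID approach could look something like this sketch — getting a per-point index into the fragment stage (e.g. from SV_VertexID in the vertex shader, or an index baked into a mesh attribute) is assumed:]

    // Sketch: write each point's index into an R32_UInt render target.
    // pointID is assumed to be filled in by the vertex shader and passed
    // through unchanged (uints are not interpolated).
    struct VertexOutput {
        float4 pos : SV_POSITION;
        nointerpolation uint pointID : TEXCOORD0;
    };

    uint frag(VertexOutput o) : SV_Target {
        // Reserve 0 for "no point here" (the clear value); IDs start at 1.
        return o.pointID + 1;
    }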

    Yes, you will need to execute it as a custom pass into a separate render target. The way to do this differs by pipeline and Unity version (and woe betide you if you're using URP, since the best-practice way to do it is different in all of Unity 2021, 2022, and 2023 lol). Basically, you will need to get a CommandBuffer and then manually draw the point meshes with the correct material. CommandBuffer is a pretty low-level thing which lets you directly execute graphics commands.

    Once you have drawn it, you'll also need to find a way to get the data back from the GPU onto the CPU. The best solution for this is AsyncGPUReadback, but note that it takes a frame between when the user clicks and when you get the data back. At 60 FPS, or even 30 FPS, this is imperceptible to the user, but it makes your input-processing code more complicated. The other option is a synchronous readback, which causes a pipeline stall (usually manifesting as a framerate drop/hitch); that may or may not be fine depending on your needs.
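    [Editor's note: a rough Unity C# sketch of those two steps together — the CommandBuffer draw and the async readback. The API calls are standard Unity, but the surrounding structure (material, mesh list, pipeline hookup) is assumed, and integration differs per pipeline/version as the post above notes. Treat this as an outline, not drop-in code:]

    // Sketch: draw the point meshes with an ID-writing material into an
    // R32_UInt render target, then read back the pixel under the mouse.
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Experimental.Rendering;

    public class PointPicker : MonoBehaviour
    {
        public Material pickingMaterial; // shader that outputs point IDs
        public Mesh[] pointMeshes;       // assumed: one mesh per cloud chunk
        RenderTexture pickTarget;

        void Start()
        {
            pickTarget = new RenderTexture(Screen.width, Screen.height, 24,
                GraphicsFormat.R32_UInt);
        }

        void Pick(Vector2 mousePos)
        {
            var cmd = new CommandBuffer { name = "Point picking" };
            cmd.SetRenderTarget(pickTarget);
            cmd.ClearRenderTarget(true, true, Color.clear); // ID 0 = nothing
            foreach (var mesh in pointMeshes)
                cmd.DrawMesh(mesh, Matrix4x4.identity, pickingMaterial);
            Graphics.ExecuteCommandBuffer(cmd);
            cmd.Release();

            // Read back just the 1x1 region under the mouse; the callback
            // fires a frame or so later. A y-flip may be needed per platform.
            AsyncGPUReadback.Request(pickTarget, 0,
                (int)mousePos.x, 1, (int)mousePos.y, 1, 0, 1,
                request =>
                {
                    if (request.hasError) return;
                    uint id = request.GetData<uint>()[0];
                    if (id != 0)
                        Debug.Log($"Picked point index {id - 1}");
                });
        }
    }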