Obtaining a point cloud from depth information

Discussion in 'General Graphics' started by dkirilov, Oct 26, 2017.

  1. dkirilov

    Joined:
    May 25, 2017
    Posts:
    12
    Hi,

    I've been trying to extract a point cloud with depth information from Unity for a while now, but have had some trouble. I found some links online to help me get started and I feel as if I'm almost there, but there is a slight problem: what should be straight lines appear to be curved. I was hoping someone could help me locate the issue / point me in the right direction.

    To start off, to obtain the depth information, I used this approach: http://answers.unity3d.com/questions/877170/render-scene-depth-to-a-texture.html

    TLDR for the link: the shader and the script produce a grayscale image representing scene depth, where black is closest to the camera and white is farthest. The code and shader I used are the same as the ones in the reply from the user who posted that question.
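
    For context, the camera-side setup from that link boils down to roughly the sketch below (the shader name "Custom/DepthGrayscale" is only a placeholder here for whatever the linked shader is actually called):

    Code (CSharp):
    using UnityEngine;

    // Rough sketch of the depth-capture setup: the camera renders its depth texture
    // and a post-process blit turns it into the grayscale image described above.
    [RequireComponent(typeof(Camera))]
    public class DepthToGrayscale : MonoBehaviour
    {
        private Material depthMaterial;

        void Start()
        {
            // Ask Unity to generate the depth texture so the shader can sample _CameraDepthTexture.
            GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
            depthMaterial = new Material(Shader.Find("Custom/DepthGrayscale"));
        }

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // Blit through the depth shader so the camera's output becomes the grayscale depth image.
            Graphics.Blit(source, destination, depthMaterial);
        }
    }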

    Running the code produces this image in Unity which looks proper:



    From there, I obtain the render texture in the same way as the link above and store the pixel information in an array. Here is a quick reference:

    Code (CSharp):
    int resolutionX = Screen.width;
    int resolutionY = Screen.height;
    // Render the camera into a temporary render texture
    RenderTexture tempRt = new RenderTexture(resolutionX, resolutionY, 0);
    camera.targetTexture = tempRt;
    camera.Render();
    // Read the rendered pixels back to the CPU
    RenderTexture.active = tempRt;
    Texture2D tex2d = new Texture2D(resolutionX, resolutionY, TextureFormat.ARGB32, false);
    tex2d.ReadPixels(new Rect(0, 0, resolutionX, resolutionY), 0, 0);
    string[] output = new string[resolutionX * resolutionY];
    Color[] pixelInfo = tex2d.GetPixels();
    From here, I run the following code on each pixel to try to translate it into real x, y, and z coordinates, which I then write out to a file that becomes my point cloud.

    Code (CSharp):
    float r, g, b;
    double x, y, z, fl, range;
    double realX, realY, realZ;
    // Obtain X and Y pixel coordinates from the flat array index i
    double pixelX = i % resolutionX;
    double pixelY = i / resolutionX;
    range = pixelInfo[i].r;
    x = (pixelX / resolutionX) - 0.5;
    y = (-(pixelY - resolutionY) / resolutionY) - 0.5;
    fl = -0.5 / (Math.Tan((Fov / 2) * Math.PI / 180));
    z = -fl;
    double vecLength = Math.Sqrt((x * x) + (y * y) + (z * z));
    // r = g = b because we are getting the value from the depth grayscale image
    r = (int)(pixelInfo[i].r * 255);
    g = (int)(pixelInfo[i].g * 255);
    b = (int)(pixelInfo[i].b * 255);
    // unitize the vector
    x /= vecLength;
    y /= vecLength;
    z /= vecLength;
    // multiply vector components by range to obtain real x, y, z
    realX = x * range;
    realY = y * range * -1;
    realZ = z * range;
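
    For completeness, the write-out step is nothing fancy; it is roughly the sketch below, where points is assumed to be a List<Vector3> that the loop above fills with (realX, realY, realZ):

    Code (CSharp):
    using System.IO;
    using System.Text;
    using UnityEngine;

    // Write one "x y z" line per point; Meshlab imports this as a plain ASCII (.xyz) point cloud.
    StringBuilder sb = new StringBuilder();
    foreach (Vector3 p in points)   // 'points' is assumed: filled with (realX, realY, realZ) above
    {
        sb.AppendLine(p.x + " " + p.y + " " + p.z);
    }
    File.WriteAllText(Application.dataPath + "/pointcloud.xyz", sb.ToString());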

    When reading the file into Meshlab, all looks more or less okay, EXCEPT my straight lines are not as straight as they should be.

    Image of the scene in Meshlab, rotated to show curvature.



    As you can see, the rectangle, which is a cube in the Unity world, is curved when it should be straight, and I have no idea why. Any help is appreciated. If anything is unclear I'll gladly clarify. Thanks!
     
    sdyby2006 likes this.
  2. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Two pointers:
    1. The depth buffer stores depth, not distance, so you should normalize the z component to 1 instead of normalizing the length of the entire vector (see the sketch below).
    2. The values in the depth buffer are not stored linearly; this gives better precision close to the camera.
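
    Putting both pointers together, the reconstruction loop could look roughly like the sketch below. Everything extra here is an assumption on top of the original code: depth01 is the grayscale value, nearClip/farClip are the camera's clip planes, and the conversion uses the standard non-reversed depth-buffer relation. If the depth shader already outputs Linear01Depth, only the z-normalization change is needed.

    Code (CSharp):
    // Convert the non-linear depth buffer value to linear eye-space depth
    // (skip this and use eyeDepth = depth01 * farClip if the shader already wrote Linear01Depth).
    double eyeDepth = (nearClip * farClip) / (farClip - depth01 * (farClip - nearClip));

    // Build the view ray through this pixel, exactly as before...
    double rayX = (pixelX / resolutionX) - 0.5;
    double rayY = 0.5 - (pixelY / resolutionY);
    double rayZ = 0.5 / Math.Tan((Fov / 2.0) * Math.PI / 180.0);

    // ...but scale it so its z component is 1 (depth is measured along the view axis,
    // not along the ray) instead of normalizing its full length.
    realX = (rayX / rayZ) * eyeDepth;
    realY = (rayY / rayZ) * eyeDepth;
    realZ = eyeDepth;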
     
    sdyby2006 and dkirilov like this.
  3. dkirilov

    Joined:
    May 25, 2017
    Posts:
    12
    Normalizing the z component straightened the lines :D! Thank you!!!
     
    douglas-healy likes this.
  4. DVD_Rodriguez

    Joined:
    Feb 25, 2019
    Posts:
    6
    Hello, maybe one of you can help me. I have two textures: one is the depth info and the other is the associated color. I want to represent both as a point cloud in a scene. My idea was to create a shader that combines the position from the first with the color from the second, but that didn't go very well. Any ideas?
     
  5. DaveL99

    Joined:
    Jul 5, 2018
    Posts:
    22
    Can you give any more specifics about what didn't go so well? :)

    The way I am doing it is that I have an OnRenderImage function on my camera that runs a shader which reads directly from the depth buffer, transforms each point back into camera space, and outputs the results to another render texture (ARGBFloat format, same dimensions as the original colour/depth buffers).

    For my usage, I want to save the point cloud data out to disk, so I read both the original colour texture and the new point-cloud info texture back on the CPU and combine them at that point.
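
    In case it helps, the camera side of that boils down to something like this sketch (the depthToViewSpaceShader field is a placeholder for whatever shader reconstructs camera-space positions from the depth buffer; it is not the exact code):

    Code (CSharp):
    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class PointCloudCapture : MonoBehaviour
    {
        public Shader depthToViewSpaceShader;   // assign the position-reconstruction shader in the inspector
        private Material mat;
        private RenderTexture pointTexture;

        void Start()
        {
            // The depth texture is needed so the shader can sample _CameraDepthTexture.
            GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
            mat = new Material(depthToViewSpaceShader);
            // Float format so each pixel can hold a full camera-space position.
            pointTexture = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGBFloat);
        }

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // Write camera-space positions into the float texture...
            Graphics.Blit(source, pointTexture, mat);
            // ...and pass the colour image through unchanged.
            Graphics.Blit(source, destination);
        }
    }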
     
  6. DVD_Rodriguez

    Joined:
    Feb 25, 2019
    Posts:
    6
    I'm sending the two textures over a network to a client app and trying to render them in an OnRenderObject() function, but the visualization isn't correct (it seems like the two textures don't match). Maybe it was a problem with the shader, but in the server app the shader works fine. Right now I can't give too many details because I'm away from the office. Thanks for the response.
     
  7. ademord

    Joined:
    Mar 22, 2021
    Posts:
    49
    @dkirilov could you share the way you did the point cloud extraction?
     
  8. sdyby2006

    Joined:
    Mar 31, 2021
    Posts:
    1
    @dkirilov At present, I am dealing with problems related to a depth camera. Could you share your relevant handling methods? Thank you.