How to get point cloud in ARKit

Discussion in 'AR' started by Loui_Studios, Sep 10, 2020.

  1. Loui_Studios

    Loui_Studios

    Joined:
    Feb 19, 2016
    Posts:
    6
    I'm trying to get the point cloud generated by the LiDAR sensor on the iPad Pro. What's the simplest way to access the points?
     
  2. KyryloKuzyk

    KyryloKuzyk

    Joined:
    Nov 4, 2013
    Posts:
    1,142
    As far as I know, ARMeshManager can currently only generate meshes.
    You can use ARPointCloudManager to get the point cloud instead.
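    A minimal sketch of reading those points might look like this (assuming ARFoundation 4.x; PointCloudReader and the serialized field are placeholder names, not from any official sample):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class PointCloudReader : MonoBehaviour
    {
        [SerializeField] ARPointCloudManager _pointCloudManager;

        void OnEnable()  => _pointCloudManager.pointCloudsChanged += OnPointCloudsChanged;
        void OnDisable() => _pointCloudManager.pointCloudsChanged -= OnPointCloudsChanged;

        void OnPointCloudsChanged(ARPointCloudChangedEventArgs args)
        {
            foreach (var pointCloud in args.updated)
            {
                // positions is a nullable NativeSlice<Vector3> in session space.
                if (!pointCloud.positions.HasValue)
                    continue;

                foreach (var position in pointCloud.positions.Value)
                    Debug.Log(position);
            }
        }
    }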
     
  3. Loui_Studios

    Loui_Studios

    Joined:
    Feb 19, 2016
    Posts:
    6
    Is there a way to do that, like Frame.PointCloud in ARCore?
     
  4. KyryloKuzyk

    KyryloKuzyk

    Joined:
    Nov 4, 2013
    Posts:
    1,142
  5. Misnomer

    Misnomer

    Joined:
    Jul 15, 2013
    Posts:
    28
    The problem is that ARPointCloud contains only feature points and has no colors, so it's not suitable for reproducing Apple's dense point cloud example. The solution I'm trying to work out now is basically to reproduce what Apple does, but in Unity: take the ARCameraManager frame and the AROcclusionManager depth frame, project color from the camera pixels onto the depth image, and save the result as colored points in a mesh with MeshTopology.Points.

    So far, though, I'm struggling to get any visually good results, let alone results with good performance. Performance-wise, there is an ARKitBackground.shader which seems to contain some calls to native Metal functions; I guess you could try translating the code Apple has in their shader to HLSL. By Apple's example I mean this one: https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth. I'll share any code and progress I make, and would be glad if you do the same.
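    By MeshTopology.Points I mean something like this minimal sketch (the class and method names here are just illustrative, not my actual code):

    Code (CSharp):
    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;
    using UnityEngine.Rendering;

    public class PointCloudMeshBuilder : MonoBehaviour
    {
        // Builds a renderable point mesh from projected vertices and their colors.
        public void Build(List<Vector3> vertices, List<Color> colors)
        {
            var mesh = new Mesh { indexFormat = IndexFormat.UInt32 }; // allow > 65k points
            mesh.SetVertices(vertices);
            mesh.SetColors(colors);
            // One index per vertex; MeshTopology.Points renders each one as a point.
            mesh.SetIndices(Enumerable.Range(0, vertices.Count).ToArray(), MeshTopology.Points, 0);
            GetComponent<MeshFilter>().mesh = mesh;
        }
    }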
     
    Last edited: Sep 11, 2020
  6. Loui_Studios

    Loui_Studios

    Joined:
    Feb 19, 2016
    Posts:
    6
    I managed to access the point cloud from ARPointCloudManager, but it doesn't appear to be using the depth sensor, just feature points like you said. Is there something I'm missing? How come I'm only getting points from optical data and not from the superior LiDAR sensor?
     
  7. Misnomer

    Misnomer

    Joined:
    Jul 15, 2013
    Posts:
    28
    That's a question for the Unity devs; probably it's too much of a hassle. You can try going with the approach I outlined earlier.
     
  8. Misnomer

    Misnomer

    Joined:
    Jul 15, 2013
    Posts:
    28
    @Loui_Studios, I managed to create a point cloud using the image from the iPad camera and the depth image from the LiDAR. All you need to do is acquire the depth image, the camera image, and Unity's scene camera, then map each pixel of the depth image to a pixel of the camera image (which gives you its color) and to a screen pixel of the scene camera (which, together with the depth value, gives you its world position).

    Code (CSharp):
    // Requires: using System.Linq; using UnityEngine;
    // _depthTexture is an RFloat texture: the depth in meters lives in the red channel.
    var depthValues = _depthTexture.GetPixels().Select(p => p.r).ToArray();

    for (int x = 0; x < DepthWidth; x++)
    {
        for (int y = 0; y < DepthHeight; y++)
        {
            // Map the depth pixel to the matching camera-image pixel (for its color)...
            var colX = Mathf.RoundToInt((float)x * _camTexture2D.width / DepthWidth);
            var colY = Mathf.RoundToInt((float)y * _camTexture2D.height / DepthHeight);

            // ...and to the matching screen pixel of the scene camera (for its position).
            var pixelX = Mathf.RoundToInt((float)x * _mainCam.pixelWidth / DepthWidth);
            var pixelY = Mathf.RoundToInt((float)y * _mainCam.pixelHeight / DepthHeight);

            var depth = depthValues[x + y * DepthWidth];

            // Screen pixel plus depth in meters gives a world-space point.
            var scrToWorld = _mainCam.ScreenToWorldPoint(new Vector3(pixelX, pixelY, depth));

            _colors.Add(_camTexture2D.GetPixel(colX, colY));
            _vertices.Add(scrToWorld);
        }
    }
     
  9. Rich_XR

    Rich_XR

    Joined:
    Jan 28, 2020
    Posts:
    9
    Hi @Misnomer, did you manage to re-create Apple's entire example project in Unity?
     
  10. Misnomer

    Misnomer

    Joined:
    Jul 15, 2013
    Posts:
    28
    Hey! I copy-pasted their code from the Metal shader into Unity, but it didn't work correctly out of the box. Basically they do the same thing, only on the GPU, and probably faster thanks to the native calls. If you wish, I can post it here as a starting point.

    Unity meshing got fixed in the latest release, so we switched to making the scan with that, because drawing the point cloud would entail too much pain with optimization. There is also a big question of how you are going to draw the points on the iPad: all the point cloud visualizers I've seen use geometry shaders, which are not supported by Metal. If I understood correctly, you can do the same thing using compute shaders, but you will have to write them yourself; see the sketch below. The other option is VFX Graph, but I doubt it will swallow the millions of points you get from scanning a living room (256 × 192 per frame, i.e. 1,474,560 points per second at 30 fps).
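    For reference, the C# side of the no-geometry-shader route could look roughly like this (a sketch only: the _Positions/_Colors buffer names are hypothetical, and the matching vertex shader that indexes them via SV_VertexID is not shown):

    Code (CSharp):
    using UnityEngine;

    public class ProceduralPointRenderer : MonoBehaviour
    {
        [SerializeField] Material _pointMaterial; // its vertex shader reads the buffers by SV_VertexID
        ComputeBuffer _positions, _colors;
        int _count;

        public void SetPoints(Vector3[] positions, Color[] colors)
        {
            _count = positions.Length;
            _positions?.Release();
            _colors?.Release();
            _positions = new ComputeBuffer(_count, sizeof(float) * 3);
            _colors = new ComputeBuffer(_count, sizeof(float) * 4);
            _positions.SetData(positions);
            _colors.SetData(colors);
            _pointMaterial.SetBuffer("_Positions", _positions);
            _pointMaterial.SetBuffer("_Colors", _colors);
        }

        void Update()
        {
            if (_count == 0) return;
            // Draws _count points without any mesh or geometry shader.
            Graphics.DrawProcedural(_pointMaterial, new Bounds(Vector3.zero, Vector3.one * 100f),
                MeshTopology.Points, _count);
        }

        void OnDestroy()
        {
            _positions?.Release();
            _colors?.Release();
        }
    }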
     
  11. borisysenbaert

    borisysenbaert

    Joined:
    Jan 13, 2020
    Posts:
    2
    Hello, I'm trying to make something similar. I would like to scan a room and save the scan with textures. Could you share more of your code? I can't seem to get it to work for me.
     
  12. borisysenbaert

    borisysenbaert

    Joined:
    Jan 13, 2020
    Posts:
    2
    That was my first thought too, to save the generated mesh from the ARMeshManager, but I would like to texture it. Do you know if this is possible?
     
    lele1592 likes this.
  13. Misnomer

    Misnomer

    Joined:
    Jul 15, 2013
    Posts:
    28
    Yes, we did that, but it's difficult. You basically have to project a camera image onto your mesh somehow. We did it by saving an image along with the camera position and rotation info, then texturing as a post-processing step using that info. It was done in Blender. Theoretically you can do the same in Unity, but be prepared to spend a lot of time on it.
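    A rough sketch of the capture step (the file names and the plain-text pose format are just what we improvised, adapt as needed):

    Code (CSharp):
    using System.IO;
    using UnityEngine;

    public class CaptureForTexturing : MonoBehaviour
    {
        [SerializeField] Camera _arCamera;

        // Saves the current camera image plus the pose needed to re-project it later.
        public void Capture(Texture2D cameraImage, int index)
        {
            var dir = Application.persistentDataPath;
            File.WriteAllBytes(Path.Combine(dir, $"frame_{index}.png"), cameraImage.EncodeToPNG());

            var t = _arCamera.transform;
            File.WriteAllText(Path.Combine(dir, $"frame_{index}.txt"),
                $"{t.position.x} {t.position.y} {t.position.z} " +
                $"{t.rotation.x} {t.rotation.y} {t.rotation.z} {t.rotation.w} " +
                $"{_arCamera.fieldOfView}");
        }
    }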
     
  14. MartinToutirais

    MartinToutirais

    Joined:
    Mar 25, 2021
    Posts:
    6
    Hey @Misnomer, thanks a lot for the precious information you're sharing. I'm trying to texture my mesh (scanned with ARMeshManager), but I don't know where to begin.

    At first I tried to create a point cloud using the image from the iPad camera and the depth image from the LiDAR with your code example, but I have issues getting the depth and color data. Have you got a Git repo, or can you share your code for this part in more detail? Thanks a lot.
     
  15. Misnomer

    Misnomer

    Joined:
    Jul 15, 2013
    Posts:
    28
    Hey Martin, I don't have this project in Git since it's a commercial one. But I took all the code used to get the depth and color info from Unity's examples. The one you need is here: https://github.com/Unity-Technologi...es/blob/main/Assets/Scripts/CpuImageSample.cs
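    The core pattern from that sample, from memory (a sketch assuming ARFoundation 4.1+, where XRCpuImage.Convert accepts the texture's raw data directly; the class and field names are placeholders):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    public class DepthAndColorGrabber : MonoBehaviour
    {
        [SerializeField] ARCameraManager _cameraManager;
        [SerializeField] AROcclusionManager _occlusionManager;

        Texture2D _camTexture2D, _depthTexture;

        void Update()
        {
            if (_cameraManager.TryAcquireLatestCpuImage(out var colorImage))
                using (colorImage)
                    _camTexture2D = ToTexture(colorImage, TextureFormat.RGBA32, _camTexture2D);

            if (_occlusionManager.TryAcquireEnvironmentDepthCpuImage(out var depthImage))
                using (depthImage)
                    _depthTexture = ToTexture(depthImage, TextureFormat.RFloat, _depthTexture);
        }

        static Texture2D ToTexture(XRCpuImage image, TextureFormat format, Texture2D texture)
        {
            if (texture == null || texture.width != image.width || texture.height != image.height)
                texture = new Texture2D(image.width, image.height, format, false);

            // Convert the CPU image into the texture's raw buffer, flipped to Unity's orientation.
            var conversion = new XRCpuImage.ConversionParams(image, format, XRCpuImage.Transformation.MirrorY);
            image.Convert(conversion, texture.GetRawTextureData<byte>());
            texture.Apply();
            return texture;
        }
    }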
     
  16. MartinToutirais

    MartinToutirais

    Joined:
    Mar 25, 2021
    Posts:
    6
    Misnomer likes this.
  17. tabearebekka

    tabearebekka

    Joined:
    Dec 6, 2019
    Posts:
    3
    Thanks for the code snippet! How do you access the _mainCam?
     
  18. Misnomer

    Misnomer

    Joined:
    Jul 15, 2013
    Posts:
    28
    Hey, it depends on your use case, but usually you get it via Camera.main.
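    E.g.:

    Code (CSharp):
    Camera _mainCam;

    void Start()
    {
        // Works as long as the AR camera in the scene is tagged "MainCamera".
        _mainCam = Camera.main;
    }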
     
  19. Ikaro88

    Ikaro88

    Joined:
    Jun 6, 2016
    Posts:
    300
  20. aditya0998

    aditya0998

    Joined:
    Jul 6, 2021
    Posts:
    3
    Happening for me as well: the RGB values are messed up. R and G are good, but the B channel turns the image into greyscale.
     
  21. aditya0998

    aditya0998

    Joined:
    Jul 6, 2021
    Posts:
    3
    Sorry, I'm a noob in this context, but can you please explain where we get _camTexture2D from, and what _colors.Add in the last line refers to?