
Resolved Output Camera Data in ROS2 (Pointcloud camera)

Discussion in 'Robotics' started by d_pepley, Dec 21, 2021.

  1. d_pepley

    Joined:
    Oct 2, 2020
    Posts:
    18
    Hello,

    I was wondering if there is a way to send a camera's RGB and depth data out of Unity via ROS2. I know how to establish a ROS2 connection, and I have played with RGB and depth capture in the Perception package. However, I am not sure how to connect directly to a camera's output and send that data using ROS2.
     
  2. vidurVij-Unity

    Unity Technologies

    Joined:
    Feb 26, 2020
    Posts:
    8
    To clarify, do you want to send image data from a camera into Unity via ROS2, or capture camera data directly in Unity and send it out over ROS2?
     
  3. d_pepley

    Joined:
    Oct 2, 2020
    Posts:
    18
    Receive camera data from Unity and send it to ROS2. I will need to send two types of data: RGB and depth. I am making some progress on both, but have hit a few roadblocks. I think I can solve the RGB data transfer with a previous answer I found on the forum, but if you have any insight it would be helpful. The depth data (point cloud) is a whole different challenge. I am trying approaches such as using the depth buffer in a custom shader that gets its own render pass, but so far that method restricts me to 8-bit values, since the color channels are 0-255. Is there a way to get the depth buffer info out of Shader Graph and read it in a C# script?
     
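    A minimal sketch of one way to read depth back on the CPU, assuming the custom shader writes linear depth into the R channel of a floating-point (RFloat) render texture, which avoids the 0-255 quantization of 8-bit color channels. The class and field names (DepthReadback, depthCamera) are placeholders, not anything from this thread:

    Code (CSharp):
    using UnityEngine;

    // Sketch: read a single-channel float render texture back on the CPU.
    // Assumes a custom shader/material on depthCamera writes linear depth (meters)
    // into the R channel of an RFloat render texture.
    public class DepthReadback : MonoBehaviour
    {
        public Camera depthCamera;   // camera rendering with the depth material (placeholder)
        public int width = 640;
        public int height = 480;

        RenderTexture rt;
        Texture2D cpuTex;

        void Start()
        {
            rt = new RenderTexture(width, height, 24, RenderTextureFormat.RFloat);
            depthCamera.targetTexture = rt;
            cpuTex = new Texture2D(width, height, TextureFormat.RFloat, false);
        }

        // Returns one float per pixel (the distance the shader wrote).
        public float[] ReadDepth()
        {
            RenderTexture.active = rt;
            cpuTex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
            cpuTex.Apply();
            RenderTexture.active = null;
            return cpuTex.GetRawTextureData<float>().ToArray();
        }
    }

    Note that ReadPixels stalls the main thread while it copies from the GPU; AsyncGPUReadback.Request is an alternative worth looking at if the synchronous copy becomes a bottleneck.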
  4. d_pepley

    Joined:
    Oct 2, 2020
    Posts:
    18
    I have determined a method of getting depth/point cloud data out of Unity fairly efficiently. The method involves creating a custom shader that writes the depth buffer to the R color channel. Use this to create a custom material where the color value equals distance from the camera. Assign that material as a material override and send the depth camera's output to a render texture. Then, since you have the depth at each pixel, you can calculate a 3D point cloud from the camera's view (see the sketch below).

    Calculating the 3D location for every pixel is computationally intensive (even at a resolution as low as 640x480). To deal with this, I am doing the processing inside an IJobParallelFor job to parallelize it. With that, I was able to get a camera sending point cloud data out via ROS2 at 15 Hz with my simulation running at roughly 60 Hz. There is a small stutter, but not enough to be an issue.
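    A minimal sketch of the per-pixel back-projection inside an IJobParallelFor, along the lines described above. The pinhole-model math, the focal-length derivation from the camera's vertical FOV, and the struct and field names are assumptions for illustration, not the exact code from this thread:

    Code (CSharp):
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    // Sketch: convert per-pixel linear depth into camera-space 3D points in parallel.
    struct DepthToPointsJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<float> depth;   // linear depth per pixel, meters (assumption)
        public NativeArray<Vector3> points;           // output xyz per pixel
        public int width;
        public float fx, fy;                          // focal lengths in pixels
        public float cx, cy;                          // principal point (image center)

        public void Execute(int i)
        {
            int u = i % width;        // pixel column
            int v = i / width;        // pixel row
            float d = depth[i];
            // Pinhole back-projection; axis conventions depend on how the
            // shader and camera are set up, so treat the signs as placeholders.
            points[i] = new Vector3((u - cx) * d / fx, (v - cy) * d / fy, d);
        }
    }

    public static class DepthToPoints
    {
        public static void Convert(NativeArray<float> depth, NativeArray<Vector3> points,
                                   int width, int height, float verticalFovDeg)
        {
            // Focal length derived from the camera's vertical field of view (assumption).
            float fy = (height / 2f) / Mathf.Tan(verticalFovDeg * Mathf.Deg2Rad / 2f);
            float fx = fy;            // square pixels assumed
            var job = new DepthToPointsJob
            {
                depth = depth, points = points, width = width,
                fx = fx, fy = fy, cx = width / 2f, cy = height / 2f
            };
            // Batch size of 64 is an arbitrary starting point; tune as needed.
            job.Schedule(depth.Length, 64).Complete();
        }
    }

    If the Burst package is installed, adding [BurstCompile] to the job struct usually speeds this up substantially, and allocating the NativeArray buffers once and reusing them each frame avoids per-frame allocations.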
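    For the ROS2 side, a hedged sketch of publishing those points as a sensor_msgs/PointCloud2 through the ROS-TCP-Connector. The topic name and frame id are placeholders, and the generated C# message and field names are assumptions based on the sensor_msgs definition; they can differ between connector versions:

    Code (CSharp):
    using Unity.Robotics.ROSTCPConnector;
    using RosMessageTypes.Sensor;
    using RosMessageTypes.Std;
    using UnityEngine;

    // Sketch: pack xyz points into a PointCloud2 message and publish it.
    public class PointCloudPublisher : MonoBehaviour
    {
        public string topic = "/unity/points";      // placeholder topic name
        ROSConnection ros;

        void Start()
        {
            ros = ROSConnection.GetOrCreateInstance();
            ros.RegisterPublisher<PointCloud2Msg>(topic);
        }

        // Call with the xyz positions produced by the depth-to-points job.
        public void PublishCloud(Vector3[] pts)
        {
            const int pointStep = 12;               // 3 floats (x, y, z), 4 bytes each
            var data = new byte[pts.Length * pointStep];
            for (int i = 0; i < pts.Length; i++)
            {
                System.Buffer.BlockCopy(System.BitConverter.GetBytes(pts[i].x), 0, data, i * pointStep,     4);
                System.Buffer.BlockCopy(System.BitConverter.GetBytes(pts[i].y), 0, data, i * pointStep + 4, 4);
                System.Buffer.BlockCopy(System.BitConverter.GetBytes(pts[i].z), 0, data, i * pointStep + 8, 4);
            }

            var msg = new PointCloud2Msg
            {
                header = new HeaderMsg { frame_id = "camera_link" },  // placeholder frame
                height = 1,
                width = (uint)pts.Length,
                fields = new PointFieldMsg[]
                {
                    new PointFieldMsg { name = "x", offset = 0, datatype = PointFieldMsg.FLOAT32, count = 1 },
                    new PointFieldMsg { name = "y", offset = 4, datatype = PointFieldMsg.FLOAT32, count = 1 },
                    new PointFieldMsg { name = "z", offset = 8, datatype = PointFieldMsg.FLOAT32, count = 1 },
                },
                is_bigendian = false,
                point_step = pointStep,
                row_step = (uint)(pointStep * pts.Length),
                data = data,
                is_dense = true,
            };
            ros.Publish(topic, msg);
        }
    }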