Render camera image into byte array

Discussion in 'Scripting' started by hassanbot, Dec 13, 2019.

  1. hassanbot

    hassanbot

    Joined:
    Sep 16, 2018
    Posts:
    9
    Hi!

    I'm building a simulator, which I want to hook up to an external control system through network communication (ROS/TCP/UDP/WebSockets). After stepping the physics I gather some data (positions of bodies, velocities of motors, etc.) and send it to the controller. This works fine with simple data types, but I'm not sure what the best practice would be for sending a complete camera image.

    My controller can take a camera image in the form of a byte array (byte[]), so what I need is to write the pixel values from the Unity camera image to the byte array. The camera should not be the one rendering to screen, but a second one attached to a moving vehicle. I also want to attach a custom shader to render the depth values instead of color.

    My current approach is this:

    Code (CSharp):
    using System;
    using UnityEngine;

    class CameraSensor : MonoBehaviour
    {
      public Camera cam;
      private Material mat;
      private RenderTexture renderTexture;
      public int resolutionWidth;
      public int resolutionHeight;
      public int bytesPerPixel;
      private byte[] rawByteData;
      private Texture2D texture2D;
      private Rect rect;

      private void Start()
      {
        mat = new Material(Shader.Find("Custom/DepthGrayscale"));
        renderTexture = new RenderTexture(resolutionWidth, resolutionHeight, 24);
        cam.targetTexture = renderTexture;
        rawByteData = new byte[resolutionWidth * resolutionHeight * bytesPerPixel];
        texture2D = new Texture2D(resolutionWidth, resolutionHeight, TextureFormat.RGB24, false);
        rect = new Rect(0, 0, resolutionWidth, resolutionHeight);
      }

      // My own callback after physics has stepped
      private void PostStepCallback()
      {
        cam.Render();
      }

      // Called by Unity after this camera renders; applies the depth shader and reads the pixels back
      private void OnRenderImage(RenderTexture source, RenderTexture destination)
      {
        Graphics.Blit(source, destination, mat);
        RenderTexture.active = renderTexture;
        texture2D.ReadPixels(rect, 0, 0); // blocks until the GPU has finished rendering
        Array.Copy(texture2D.GetRawTextureData(), rawByteData, rawByteData.Length);
        ... // Code to send rawByteData to controller
      }
    }
    This seems to work okay, but I do have some issues with performance, and the whole approach feels a little convoluted. Is there a better way to do this?
     
  2. tonemcbride

    tonemcbride

    Joined:
    Sep 7, 2010
    Posts:
    1,089
    That's the way I would do it too: create a render texture, render to it and then grab the pixels. Unfortunately the only synchronous way to do that currently is ReadPixels, which blocks until the GPU has finished rendering and is pretty slow!

    This article has some nice workarounds and new ideas on how to achieve better performance on that very problem: https://medium.com/google-developers/real-time-image-capture-in-unity-458de1364a4c

    A quick fix is to defer the 'ReadPixels'/'GetRawTextureData' step to the end of the frame (or to the next frame), so the GPU has already finished its work naturally and the read doesn't stall it.
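    Something like this coroutine should do it (just a rough, untested sketch reusing the 'renderTexture', 'texture2D', 'rect' and 'rawByteData' fields from your script; the method name is a placeholder and you'll need 'using System.Collections;' for IEnumerator):

    Code (CSharp):
    private IEnumerator ReadBackAtEndOfFrame()
    {
      // Wait until all rendering for this frame is finished, so ReadPixels no longer stalls the GPU
      yield return new WaitForEndOfFrame();

      RenderTexture.active = renderTexture;
      texture2D.ReadPixels(rect, 0, 0);
      RenderTexture.active = null;
      Array.Copy(texture2D.GetRawTextureData(), rawByteData, rawByteData.Length);
      // ... send rawByteData to the controller here
    }
    Start it with StartCoroutine(ReadBackAtEndOfFrame()) from your PostStepCallback instead of reading inside OnRenderImage.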

    The article goes into much more depth and suggests moving the 'ReadPixels' to a place where the GPU is already idle so it's not forcing everything to complete first.
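    There's also AsyncGPUReadback (Unity 2018.1+, check SystemInfo.supportsAsyncGPUReadback first), which avoids the stall entirely: you request the pixels and get a callback a few frames later. Another rough, untested sketch along the same lines (method name is a placeholder, fields taken from your script):

    Code (CSharp):
    using UnityEngine.Rendering; // for AsyncGPUReadback

    private void PostStepCallback()
    {
      cam.Render();
      // Ask the GPU for the pixels; the callback fires a few frames later without blocking the main thread
      AsyncGPUReadback.Request(renderTexture, 0, TextureFormat.RGBA32, OnReadbackComplete);
    }

    private void OnReadbackComplete(AsyncGPUReadbackRequest request)
    {
      if (request.hasError)
      {
        Debug.LogError("GPU readback failed");
        return;
      }
      // RGBA32 is 4 bytes per pixel, so rawByteData must be sized accordingly
      request.GetData<byte>().CopyTo(rawByteData);
      // ... send rawByteData to the controller here
    }
    The trade-off is that the image arrives a couple of frames late, which may or may not matter for your controller.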
     
  3. hassanbot

    hassanbot

    Joined:
    Sep 16, 2018
    Posts:
    9
    Thanks, I'll take a look and give it a try!
     
  4. Boantnara

    Boantnara

    Joined:
    Oct 29, 2018
    Posts:
    1
    Hi @hassanbot, did you ever find a working solution for this? I am currently trying to do exactly the same as you and am struggling with performance issues...

    Thanks a lot!