ARFoundation: I need RenderTexture to Texture2D every single frame

Discussion in 'AR' started by Technoise, Dec 20, 2018.

  1. Technoise

    Technoise

    Joined:
    Nov 20, 2014
    Posts:
    7
    Hi!

    I need to convert a RenderTexture to a Texture2D every single frame.

    How do I get the camera texture in ARFoundation?

    I don't need a RenderTexture, only a Texture or Texture2D.

    ReadPixels and Apply are too heavy to run every single frame.
     
    Last edited: Dec 20, 2018
  2. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    Sounds like you're looking for the camera image API.
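    For reference, a rough sketch of how that can be wired up to produce a Texture2D every frame, following the documented CPU camera image pattern (this assumes the ARFoundation preview API used later in this thread: ARSubsystemManager, CameraImage, CameraImageConversionParams; namespaces and member names may differ slightly between package versions):

    // Requires "Allow 'unsafe' Code" in Player Settings because of GetUnsafePtr().
    using System;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using UnityEngine;
    using UnityEngine.XR.ARExtensions;   // CameraImage / CameraImageConversionParams (assumed namespace)
    using UnityEngine.XR.ARFoundation;

    public class CameraImageToTexture : MonoBehaviour
    {
        Texture2D m_Texture;

        void OnEnable()
        {
            ARSubsystemManager.cameraFrameReceived += OnCameraFrameReceived;
        }

        void OnDisable()
        {
            ARSubsystemManager.cameraFrameReceived -= OnCameraFrameReceived;
        }

        unsafe void OnCameraFrameReceived(ARCameraFrameEventArgs eventArgs)
        {
            // Grab the latest CPU-side camera image (no RenderTexture or ReadPixels involved).
            CameraImage image;
            if (!ARSubsystemManager.cameraSubsystem.TryGetLatestImage(out image))
                return;

            var conversionParams = new CameraImageConversionParams
            {
                inputRect = new RectInt(0, 0, image.width, image.height),
                outputDimensions = new Vector2Int(image.width, image.height),
                outputFormat = TextureFormat.RGB24
            };

            // Convert the raw camera image into an RGB byte buffer.
            int size = image.GetConvertedDataSize(conversionParams);
            var buffer = new NativeArray<byte>(size, Allocator.Temp);
            image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);

            // Dispose the CameraImage as soon as possible; it holds native resources.
            image.Dispose();

            if (m_Texture == null)
            {
                m_Texture = new Texture2D(
                    conversionParams.outputDimensions.x,
                    conversionParams.outputDimensions.y,
                    conversionParams.outputFormat,
                    false);
            }

            m_Texture.LoadRawTextureData(buffer);
            m_Texture.Apply();

            buffer.Dispose();
        }
    }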
     
  3. Technoise

    Technoise

    Joined:
    Nov 20, 2014
    Posts:
    7
    Hi!

    Thank you for the answer, it works!

    I have one more question.

    I tested it on an iPhone XS Max in Portrait mode.

    However, when I get a single frame image, its resolution is W: 1920, H: 1440.

    I think it's a landscape-orientation image.

    I want to get a portrait-mode image from the camera image.

    Does the camera image API have any function to produce an image that matches what I see on screen?
     
  4. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    The image is the raw image that comes from the camera sensor, so it will be in whatever orientation the camera is mounted onto the phone, usually a landscape orientation. There is currently no functionality to apply a rotation to the resulting image.

    Can you provide more details regarding your use case? What are you ultimately doing with the image?
     
  5. Technoise

    Technoise

    Joined:
    Nov 20, 2014
    Posts:
    7

    Thank you for the answer.

    I use the image for a computer vision project, and the project only supports portrait mode.

    The vision project analyzes the image, detects a target, shows what it is, and tracks it on the image.

    Right now it works with the ARKit Plugin (using OnRenderImage and RenderTexture to Texture2D.ReadPixels). That has a heavy cost >.<

    I'm going to use ARFoundation, so I'm trying to use CameraImage.

    Do you have plans to support rotation or orientation output for the camera image?
     
  6. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    My assumption was that if you are passing the image to a computer vision processing algorithm, then you would be able to transform (e.g., rotate) the image as part of the post processing. I was trying to avoid building a general purpose image transformation library into ARFoundation, since that seems beyond the scope of something like ARFoundation.

    If there is a need to support a rotation transformation, I would consider adding it. Does your "vision project" not support this sort of input image transformation? Is it your own, or is it part of a third party/open source project?
     
  7. Technoise

    Technoise

    Joined:
    Nov 20, 2014
    Posts:
    7
    Thank you for the answer!

    I solved it by using OpenCV.

    I have another question.

    == ARFoundation CameraImage Code ==
    CameraImage image;
    if (!ARSubsystemManager.cameraSubsystem.TryGetLatestImage(out image))
        return;
    ================================

    == output ==
    image.width : 1920
    image.height : 1440

    I tested on an iPhone X (screen resolution 1125 x 2436).

    Why is the output image size like that?

    I can see extra image content at the sides of the output that I can't see on the display.

    Can I set up the output image size?


    I think I can crop the image using CameraImageConversionParams.inputRect.

    Can you show me how to get a part of the output image?
     
  8. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    The output image is whatever iOS gives us. There are usually multiple available resolutions. You can enumerate and select them with the Camera Configuration API.

    I'm not sure what that means. Can you post a screenshot or explain further?

    Yes, you can select a subrectangle if you only care about part of the image. inputRect is just a RectInt, so simply select the (x, y) and (width, height) of the subrectangle you'd like to extract from the original.
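    For example, a rough sketch of cropping a centered square out of the full image with those params (the centered crop is just an illustration, not something specific from this thread; which corner (x, y) refers to depends on the image's coordinate origin):

    // Hypothetical example: extract a centered square region from the full camera image.
    int cropSize = Mathf.Min(image.width, image.height);

    var conversionParams = new CameraImageConversionParams
    {
        // (x, y) is the origin of the subrectangle inside the full image,
        // (width, height) is its size.
        inputRect = new RectInt(
            (image.width  - cropSize) / 2,
            (image.height - cropSize) / 2,
            cropSize,
            cropSize),

        // Keep the cropped region at its original resolution.
        outputDimensions = new Vector2Int(cropSize, cropSize),
        outputFormat = TextureFormat.RGB24
    };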
     
  9. Technoise

    Technoise

    Joined:
    Nov 20, 2014
    Posts:
    7
    Thank you for the answer.

    I uploaded a screenshot.

    It's an iPhone X screenshot.

    "Screen : 1125 - 2436" is the device resolution.
    "TexScreen : 1280 - 720" is the output image resolution.

    void Start()
    {
        ARSubsystemManager.cameraSubsystem.SetCurrentConfiguration(
            new CameraConfiguration(new Vector2Int(1280, 720), 60));
    }

    var conversionParams = new CameraImageConversionParams
    {
        // Get the entire image
        inputRect = new RectInt(0, 0, image.width, image.height),

        // Keep the full resolution (no downsampling)
        outputDimensions = new Vector2Int(image.width, image.height),

        // Choose RGB24 format
        outputFormat = TextureFormat.RGB24,

        // Flip across the vertical axis (mirror image)
        //transformation = CameraImageTransformation.MirrorY
    };

    I used this code in OnCameraFrameReceived.

    I marked on the image what I was asking about when I said "I can see extra side image on output but I can't see on display."

    I tested on iPhone X, iPhone XS Max, iPhone 7, and iPhone 8S.

    The iPhone 7 has no problem; the display and the image output are exactly the same.

    However, the iPhone X and iPhone XS Max have the problem shown in the file I uploaded.


    Thank you for helping me.
     

    Attached Files:

    Last edited: Jan 7, 2019
  10. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    Thanks for the screenshot. This is the expected behavior. The image as rendered to the background is transformed by a display matrix, which accounts for differences in the aspect ratio of different devices. The camera image, on the other hand, is not. Apple explains the difference in their docs:
    Also, it's recommended to pass SetCurrentConfiguration a configuration that you got back from cameraSubsystem.Configurations(), since you may only set it to one of the supported formats. There's no guarantee that every device will always support 1280x720 as you have in your code snippet.
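    As a sketch, enumerating the supported configurations and selecting one might look something like this (assuming CameraConfiguration exposes its resolution as a Vector2Int; the property name is an assumption and may differ by package version):

    void Start()
    {
        CameraConfiguration? lowest = null;

        // Only configurations reported by the device are guaranteed to be accepted.
        foreach (var config in ARSubsystemManager.cameraSubsystem.Configurations())
        {
            // Track the lowest-resolution configuration ('resolution' property assumed).
            if (lowest == null ||
                config.resolution.x * config.resolution.y <
                lowest.Value.resolution.x * lowest.Value.resolution.y)
            {
                lowest = config;
            }
        }

        if (lowest.HasValue)
            ARSubsystemManager.cameraSubsystem.SetCurrentConfiguration(lowest.Value);
    }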
     
  11. Technoise

    Technoise

    Joined:
    Nov 20, 2014
    Posts:
    7
    Thank you for the answer.

    I didn't resize the camera image to match the aspect ratio in my screenshot because it's just a test for now.

    I just want to see the output result.

    Maybe I didn't explain it well, so I uploaded the CameraImage output and the device screen.

    I marked a red rectangle on the CameraImage output. It isn't visible on the device screen, but the CameraImage contains it.

    When I make a screenshot from the RenderTexture, it looks just like the device screen.

    So is it possible that some parts of the camera image are hidden on screen because of the differences in aspect ratio between devices?

    I use cameraSubsystem.Configurations() to get the lowest supported resolution and use that, thank you for the recommendation.

    Do you have plans to support lower resolutions like 640 x 360 for mobile?
     

    Attached Files:

    • sh.jpg (333.6 KB)
  12. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    You did explain it well, and I understand the issue. This is the expected behavior, according to Apple's documentation.

    You can extract a sub rectangle (the inputRect in the CameraImageConversionParams) from the original image if that's what you're after.

    The available configurations come from Apple; ARFoundation is just exposing them. If you would like a lower resolution image to pass to OpenCV, you can select any outputDimensions in the CameraImageConversionParams that are smaller than the inputRect, and the converter will downsample it for you.
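    For instance, a rough sketch of downsampling the full image to half its width and height before handing it to OpenCV (the exact factor is just an example):

    var conversionParams = new CameraImageConversionParams
    {
        // Use the full camera image as the source...
        inputRect = new RectInt(0, 0, image.width, image.height),

        // ...but ask the converter to downsample it to half the width and height.
        outputDimensions = new Vector2Int(image.width / 2, image.height / 2),

        outputFormat = TextureFormat.RGB24
    };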
     
  13. Technoise

    Technoise

    Joined:
    Nov 20, 2014
    Posts:
    7
    Thank you for the answer!

    I have another question about the same project, for iOS and Android.

    I made a native plugin and I send the image data to native code every single frame.

    Performance on iOS is not bad, but Android is much too slow.

    Android device: Galaxy S8

    Fixed-width-ratio image size:
    iOS: 720p
    Android: 360p

    iOS: 10-30 FPS
    Android: less than 5 FPS

    I tried 720p, but Android gave a "too slow" error, so now I'm trying 360p.

    Are Android native calls much slower than iOS native calls?

    Do you have any tips for sending data to native code quickly?

    Thank you.
     
  14. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    1. Can you measure the actual time the converter takes? You should be able to use the Unity Profiler for this, or use Time.realtimeSinceStartup and Debug.Log the result (see the timing sketch after this list). 5 fps is about 200 ms, which would be surprisingly slow, so let's make sure we are measuring the correct thing. On a Pixel (1), it takes 2-4 ms to run the converter.
    2. The time can vary depending on the input to the converter. Downsampling the image (i.e., if outputDimensions is not the same as the inputRect) is usually faster, but can actually take longer if outputDimensions is only slightly less than inputRect. What are your conversion parameters?
    3. Almost all of the work is in converting the image to an RGB format. Depending on your application this may not be necessary. Many computer vision applications only require a grayscale image, for instance. If this is the case, try converting it to grayscale, or just use the raw data (the image planes). The raw image does not require any computation and should not take any time at all.
    4. If you do need an RGB image, but don't mind being a frame behind, there is an asynchronous version of the conversion API.
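    A minimal way to time just the conversion step, as mentioned in point 1 (a sketch; image, conversionParams, and buffer are set up as in the earlier snippets, and only the Convert call is measured so other per-frame work isn't included):

    float start = Time.realtimeSinceStartup;

    // The call being measured: converting the CameraImage into the byte buffer.
    image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);

    float elapsedMs = (Time.realtimeSinceStartup - start) * 1000f;
    Debug.Log("CameraImage.Convert took " + elapsedMs.ToString("F2") + " ms");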
     
  15. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    Also, can you explain what you mean by Android giving a "too slow" error?

    What is the "too slow" error?