
Difference between AR view and return from cameraManager.TryGetLatestImage

Discussion in 'AR' started by lukechadwick, Sep 5, 2019.

  1. lukechadwick

    lukechadwick

    Joined:
    Jun 8, 2019
    Posts:
    3
    I'm working on an ARFoundation app on iPhone, and the aspect ratio of the image displayed in the 'live preview' differs from that of the image returned by TryGetLatestImage. As far as I can tell, the live feed crops the top/bottom so that it fills the screen on my device.

    However, given that my app saves these photos, I would rather have the full frame in the live preview (with a black border if necessary) so that both images are consistent.

    I'm a little unsure how to accomplish this. Sending the camera output to a Render Texture (and canvas) sized to the correct aspect ratio results in squashing (and the UI components go nuts as well).

    I am currently using ARFoundation 2.1.1 and ARKit XR Plugin 2.1.1.
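
    For reference, the way I'm pulling frames looks roughly like this, following the CPU image docs (a trimmed, untested sketch against the ARFoundation 2.1 API; it needs "Allow 'unsafe' Code" enabled in Player Settings, the class name is mine, and the MirrorY transform is just whatever your app needs for an upright image):

    Code (CSharp):
    using System;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    public class FullFrameGrabber : MonoBehaviour
    {
        [SerializeField] ARCameraManager cameraManager;  // assign in the Inspector

        unsafe void Update()
        {
            // The CPU image is the full sensor frame, not the cropped screen view.
            if (!cameraManager.TryGetLatestImage(out XRCameraImage image))
                return;

            using (image)
            {
                var conversionParams = new XRCameraImageConversionParams
                {
                    inputRect = new RectInt(0, 0, image.width, image.height),
                    outputDimensions = new Vector2Int(image.width, image.height),
                    outputFormat = TextureFormat.RGBA32,
                    transformation = CameraImageTransformation.MirrorY
                };

                int size = image.GetConvertedDataSize(conversionParams);
                var buffer = new NativeArray<byte>(size, Allocator.Temp);
                image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);

                // buffer now holds the uncropped RGBA frame at the sensor's aspect ratio.
                buffer.Dispose();
            }
        }
    }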
     
  2. simon-oooh

    simon-oooh

    Joined:
    Apr 11, 2019
    Posts:
    3
    I am having a similar issue. It looks like the ARCameraManager is tied closely to the device screen size and crops the camera input to fill it. I am also looking to run an AR camera but get the AR view at the same aspect ratio and resolution as the native camera input.
     
  3. FreshlyBrewed

    FreshlyBrewed

    Joined:
    Nov 12, 2019
    Posts:
    4
    Similar issue here. I am running on an LG G6 at 1440x2880 in portrait; however, TryGetLatestImage gives me a 640x480 image. This is not only a different aspect ratio but also a much lower resolution. The resolution is fine for my needs, but I am still curious why this is happening.

    Using AR Foundation 3.0.1 with ARCore XR Plugin 3.0.1
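
    For what it's worth, 640x480 seems to be just the default camera configuration; the provider usually exposes higher-resolution ones that can be selected (a rough, untested sketch against the ARFoundation 3.0 camera-configuration API; the class name is mine):

    Code (CSharp):
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class CameraConfigSelector : MonoBehaviour
    {
        [SerializeField] ARCameraManager cameraManager;  // assign in the Inspector

        void Start()
        {
            // Enumerate the configurations the provider supports and pick the
            // largest one. This should run once the AR session is initialized.
            using (var configurations = cameraManager.GetConfigurations(Allocator.Temp))
            {
                if (configurations.Length == 0)
                    return;

                var best = configurations[0];
                foreach (var config in configurations)
                {
                    if (config.width * config.height > best.width * best.height)
                        best = config;
                }
                cameraManager.currentConfiguration = best;
            }
        }
    }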
     
  4. jsaalva

    jsaalva

    Joined:
    Nov 19, 2019
    Posts:
    5
    One more with the same problem. I access the camera image on the CPU through the TryGetLatestImage method (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@1.0/manual/cpu-camera-image.html). After converting to RGBA32 and performing postprocessing for object localization, I get coordinates with respect to the image, but I need to map these coordinates onto the same image shown on the device screen, which has a different size and aspect ratio.

    I understand that this transformation is done by the displayMatrix (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@2.2/api/UnityEngine.XR.ARFoundation.ARCameraFrameEventArgs.html), which would be the equivalent of ARCore's DisplayUvCoords, but I couldn't find information on how it works. In ARCore I think this could be done directly with this method:

    https://developers.google.com/ar/re...eARCore/Frame/CameraImage#transformcoordinate

    Vector2 TransformCoordinate(
        Vector2 coordinate,
        DisplayUvCoordinateType sourceType,
        DisplayUvCoordinateType targetType
    )

    Can anyone provide some information on how to do this coordinate mapping?
     
  5. vincentfretin

    vincentfretin

    Joined:
    Jul 4, 2019
    Posts:
    7
    Hi @jsaalva

    displayMatrix seems to be used by ARCameraBackground
    Library/PackageCache/com.unity.xr.arfoundation@2.2.0-preview.6/Runtime/AR/ARCameraBackground.cs
    which passes it to the shaders
    Library/PackageCache/com.unity.xr.arkit@2.2.0-preview.6/Runtime/iOS/Resources/ARKitShader.shader
    Library/PackageCache/com.unity.xr.arcore@2.2.0-preview.6/Runtime/Android/Resources/ARCoreShader.shader
    The flip and rotation are done in the shader.
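
    From reading those shaders, the matrix appears to map the screen-quad UVs into camera-image UVs with a row-vector multiply. In C# terms it would be something like this (my interpretation, not verified on every device; going from the image back to the screen would then mean inverting the matrix):

    Code (CSharp):
    using UnityEngine;

    static class DisplayMatrixUtil
    {
        // Mirrors the shaders' row-vector multiply: imageUv = screenUv * displayMatrix.
        public static Vector2 ScreenUvToImageUv(Vector2 screenUv, Matrix4x4 m)
        {
            var v = new Vector3(screenUv.x, screenUv.y, 1f);
            return new Vector2(
                v.x * m.m00 + v.y * m.m10 + v.z * m.m20,
                v.x * m.m01 + v.y * m.m11 + v.z * m.m21);
        }

        // Going the other way (a detected point in the camera image back to the
        // screen) should then just be the inverse transform.
        public static Vector2 ImageUvToScreenUv(Vector2 imageUv, Matrix4x4 m)
        {
            return ScreenUvToImageUv(imageUv, m.inverse);
        }
    }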

    The MRTK also looks at the displayMatrix to do the rotation; see
    https://github.com/microsoft/MRLigh...ls/CameraCapture/CameraCaptureARFoundation.cs

    I only found all this out recently. I'm actually using different code right now to do the rotation:

    Code (CSharp):
    static readonly Matrix4x4 invertZM = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, new Vector3(1, 1, -1));

    // localTransform is some Matrix4x4 from object detection with OpenCV on the
    // camera image (the x axis is already flipped).
    Matrix4x4 myMatrix = localTransform * invertZM;

    // Compensate for the current screen orientation with a rotation around z.
    float rotZ = 0;
    switch (Screen.orientation) {
        case ScreenOrientation.Portrait:
            rotZ = 90;
            break;
        case ScreenOrientation.LandscapeLeft:
            rotZ = 180;
            break;
        case ScreenOrientation.LandscapeRight:
            rotZ = 0;
            break;
        case ScreenOrientation.PortraitUpsideDown:
            rotZ = -90;
            break;
    }
    Quaternion rotation = Quaternion.Euler(0, 0, rotZ);
    Matrix4x4 m = Matrix4x4.Rotate(rotation);
    myMatrix = m * myMatrix;
    But I'm wondering if there may be an additional offset encoded in the displayMatrix on some devices.
    I don't have any issue with an iPad 2018 (6th gen) or a Samsung S8, but on an iPhone X the detected marker appears with some y offset in my case.
     
  6. jsaalva

    jsaalva

    Joined:
    Nov 19, 2019
    Posts:
    5
    Hi @vincentfretin, thanks for the info

    My application works with a fixed orientation at the moment, so screen rotations do not cause me problems.

    Regarding what you wrote about the offset: the image size that TryGetLatestImage returns by default is 640x480 (4:3), and several resolutions can be selected:
    https://github.com/Unity-Technologi...ster/Assets/Scripts/CameraConfigController.cs

    The problem arises when the aspect ratio of the image you process and that of the screen are not the same; in this case the offset should be taken into account. If, on the contrary, the aspect ratio is the same and only the resolution changes, normalizing the coordinates should be enough, I believe.

    In the case of the iPad, the screen is 4:3 (1.33), so it matches the ratio of the ARCore image.

    Another point to consider when the aspect ratios do not match is that, because the image is cropped, certain parts of the camera image have no correspondence on the screen.
    So I think the best approach is to maintain the aspect ratio between both images, either by resizing the image before processing it, or simply by adding padding on the screen, although I don't know whether there is a quick way to do the latter in ARFoundation or whether it is necessary to modify the background renderer.
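
    To make the offset concrete, the mapping for a centered aspect-fill crop would be something like this (an untested sketch; it assumes the background crops symmetrically and ignores the 90-degree rotation the display matrix also applies in portrait):

    Code (CSharp):
    using UnityEngine;

    static class AspectMap
    {
        // Map a normalized camera-image coordinate to a normalized screen
        // coordinate, assuming the background center-crops the image to fill
        // the screen (aspect-fill). Aspect = width / height for each.
        public static Vector2 ImageToScreen(Vector2 imageUv, float imageAspect, float screenAspect)
        {
            if (imageAspect > screenAspect)
            {
                // Image is relatively wider: the sides are cropped.
                float visible = screenAspect / imageAspect;  // fraction of image width shown
                float offset = (1f - visible) * 0.5f;
                return new Vector2((imageUv.x - offset) / visible, imageUv.y);
            }
            else
            {
                // Image is relatively taller: top and bottom are cropped.
                float visible = imageAspect / screenAspect;  // fraction of image height shown
                float offset = (1f - visible) * 0.5f;
                return new Vector2(imageUv.x, (imageUv.y - offset) / visible);
            }
        }
    }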
     
  7. cribin

    cribin

    Joined:
    Jul 11, 2018
    Posts:
    9
    Did anyone find a solution for this? I'm doing some computer vision processing on the CPU image and want to back-project detected points in the 2D image to world space; however, the camera image and the screen image differ, so the result looks incorrect.
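
    What I'm experimenting with: once the point is mapped from image space into screen space (see the aspect-ratio discussion above), back-projection can go through an AR raycast (a sketch assuming ARFoundation 3.x; the names are mine, and it assumes a detected plane exists under the point):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    public class BackProjector : MonoBehaviour
    {
        [SerializeField] ARRaycastManager raycastManager;  // on the AR Session Origin

        static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

        // screenUv: the detected point after mapping from image space into
        // normalized screen space, origin at the bottom left.
        public Vector3? BackProject(Vector2 screenUv)
        {
            var screenPoint = new Vector2(screenUv.x * Screen.width, screenUv.y * Screen.height);

            if (raycastManager.Raycast(screenPoint, s_Hits, TrackableType.PlaneWithinPolygon))
                return s_Hits[0].pose.position;  // world-space hit on the nearest plane

            return null;
        }
    }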