
Correct Way to Read Physical Camera FoV

Discussion in 'AR' started by HulloImJay, Mar 30, 2020.

  1. HulloImJay

    HulloImJay

    Joined:
    Mar 26, 2012
    Posts:
    89
    So in several AR apps I've found it necessary to add extra cameras to layer over the main ARFoundation-controlled one. But I still needed them to match that main camera. For example, to place world-space UI which is always drawn on top of the meshes and effects in AR, I use a second camera with the same properties which draws only those UI layers.

    The main challenge I faced is how to correctly copy the properties of the ARFoundation camera. In particular, the field of view doesn't seem to be exposed anywhere obvious: the camera.fieldOfView value does not reflect the real/physical FoV because the projection matrix is being assigned directly. And if the second camera has the wrong field of view, the content it overlays is positioned incorrectly everywhere except at screen centre.

    So for a couple years I've been using this code, which is based on some math I found somewhere I cannot recall, and reverse-engineers the projection matrix to get the FoV. "PrimaryCamera" is the one controlled by ARFoundation and "UICamera" is the one I'm using to overlay.

    Code (CSharp):
    float t = PrimaryCamera.projectionMatrix.m11;
    float fov = Mathf.Atan(1.0f / t) * 2.0f * Mathf.Rad2Deg;
    UICamera.fieldOfView = fov;
    (Aside: I've also noticed this calculated value changes during runtime, I believe (plz correct if wrong) because the camera is auto-focusing, changing the physical FoV. So I'm doing this each frame.)
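    As a sketch, that per-frame sync could live in a small component like this (the class and field names are my own, not from any Unity API):

    Code (CSharp):
    using UnityEngine;

    // Hypothetical helper: re-derives the AR camera's vertical FoV from its
    // projection matrix every frame and copies it to the overlay camera.
    public class SyncOverlayFov : MonoBehaviour
    {
        public Camera PrimaryCamera; // the ARFoundation-controlled camera
        public Camera UICamera;      // the overlay camera

        void LateUpdate ()
        {
            // m11 of a symmetric projection matrix is cot(vertical FoV / 2),
            // so atan(1 / m11) * 2 recovers the vertical FoV in radians.
            float t = PrimaryCamera.projectionMatrix.m11;
            UICamera.fieldOfView = Mathf.Atan(1.0f / t) * 2.0f * Mathf.Rad2Deg;
        }
    }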

    I believe this works, but golly it's a bit ugly, right?

    My question is this: Is there an easier/better way that I'm missing? This seems like very important data to expose and I feel like I must be missing the obvious somewhere.
     
  2. sam598

    sam598

    Joined:
    Sep 21, 2014
    Posts:
    60
    There are two important things to know about AR Foundation, and how ARKit and ARCore handle the camera frustum.

    1) As you have noticed, the camera's field of view is constantly changing. This is because of several factors, including auto focus and lens stabilization. In the case of iPhones, there are floating lens elements inside the camera that are constantly moving, and the exact positions of these elements are not known. Because of this, both the camera's focal length and distortion are constantly changing. The API compensates for this under the hood by constantly re-estimating the camera's frustum, and provides an undistorted rectilinear image that can easily be composited with CG elements.

    2) The AR projection matrix is asymmetrical. Since the lens elements are not fixed the center of the lens is not centered with the camera sensor, so the image is slightly skewed. Most of the time you will not notice this, but it can lead to a misalignment of several degrees. This is why the API provides a complete camera projection matrix instead of just the field of view.

    For the most accurate and simplest result, I would recommend copying over the entire projection matrix every frame instead of just the field of view.

    Code (CSharp):
    SecondaryCamera.projectionMatrix = PrimaryCamera.projectionMatrix;
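    Wrapped into a component, that one-liner might look like this (the class name is just an example):

    Code (CSharp):
    using UnityEngine;

    // Example component: mirrors the AR camera's full (asymmetric)
    // projection matrix onto the overlay camera every frame.
    public class SyncProjection : MonoBehaviour
    {
        public Camera PrimaryCamera;   // ARFoundation-controlled
        public Camera SecondaryCamera; // overlay

        void LateUpdate ()
        {
            SecondaryCamera.projectionMatrix = PrimaryCamera.projectionMatrix;
        }
    }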
     
    HulloImJay likes this.
  3. HulloImJay

    HulloImJay

    Joined:
    Mar 26, 2012
    Posts:
    89
    Sorry for the huge delay responding. Thanks so much for this clarification. I hadn't thought about distortion at all.

    I do hit one key problem when trying to use the entire projection matrix, though. It seems to cause my UI to not render at all, with no errors in the console.

    Here's my very basic attempt, responding to the "frameReceived" event of the ARCameraManager.

    Code (CSharp):
    void GetFrame (ARCameraFrameEventArgs args) {
        if (args.projectionMatrix.HasValue)
            cameraRig.UICamera.projectionMatrix = args.projectionMatrix.Value;
    }
     
  4. sam598

    sam598

    Joined:
    Sep 21, 2014
    Posts:
    60
    Okay, so here is where things get a bit more tricky.

    The projection matrix from ARCameraFrameEventArgs is the "raw" projection matrix. It does not take into account your rendering screen dimensions or aspect ratio, so it will not work if you copy it directly from the API into a camera.

    If you look in the ARCameraManager.cs script you can see that this function
    Code (CSharp):
    subsystem.TryGetLatestFrame(cameraParams, out frame)
    is used to create a new XRCameraFrame struct. The projectionMatrix from that struct is a combination of the "raw" projection matrix from the AR API, and the screen settings from the XRCameraParams struct that is fed into the function.

    You could try to copy the flow from the ARCameraManager.cs script to get a proper projection matrix. As an alternative you could copy the final projection matrix in the primary camera to the secondary camera.
    Code (CSharp):
    SecondaryCamera.projectionMatrix = PrimaryCamera.projectionMatrix;
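
    The first option could be sketched like this (untested; it follows the pattern in ARCameraManager.cs, and the arCameraManager reference and class name are assumptions):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    // Sketch: build XRCameraParams from the current screen settings and ask
    // the camera subsystem for a frame whose projection matrix matches them.
    public class ProjectionFromSubsystem : MonoBehaviour
    {
        public ARCameraManager arCameraManager; // assumed scene reference
        public Camera SecondaryCamera;

        void Update ()
        {
            var subsystem = arCameraManager.subsystem;
            if (subsystem == null)
                return;

            var cameraParams = new XRCameraParams
            {
                zNear = SecondaryCamera.nearClipPlane,
                zFar = SecondaryCamera.farClipPlane,
                screenWidth = Screen.width,
                screenHeight = Screen.height,
                screenOrientation = Screen.orientation
            };

            if (subsystem.TryGetLatestFrame(cameraParams, out XRCameraFrame frame)
                && (frame.properties & XRCameraFrameProperties.ProjectionMatrix) != 0)
            {
                SecondaryCamera.projectionMatrix = frame.projectionMatrix;
            }
        }
    }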