
Question: Spawning an Object with ARFoundation Using OpenCV Pose Estimation Results

Discussion in 'AR' started by RahulJain07, Aug 30, 2022.

  1. RahulJain07

     Joined: Sep 9, 2020
     Posts: 15
    Hi, I am using OpenCV edge detectors to determine the 6D pose of an object and then augment a model of the same object in Unity. The points below describe the process I am following to accomplish the task.

    1. Accessing the camera intrinsics (fx, fy, cx, cy) using ARCameraManager.TryGetIntrinsics
    2. Accessing the CPU image using ARCameraManager.TryAcquireLatestCpuImage
    3. Passing the above CPU image and camera intrinsics to OpenCV for pose estimation, which outputs Tvec (in meters) and Rvec (an axis-angle rotation vector)
    4. Spawning the object's CAD model with ARFoundation at the Tvec and Rvec from the above step. I am using the following code snippet to convert the pose from OpenCV to Unity coordinates:

    // x, y, z from step 3's Rvec
    void SetRotation(float x, float y, float z)
    {
        // The magnitude of the rotation vector is the angle in radians; convert to degrees
        float theta = Mathf.Sqrt(x * x + y * y + z * z) * Mathf.Rad2Deg;
        // Flip x and z to move the axis from OpenCV's right-handed frame to Unity's left-handed frame
        Vector3 axis = new Vector3(-x, y, -z);
        Quaternion rot = Quaternion.AngleAxis(theta, axis);
        Cylinder.transform.localRotation = rot;
    }

    // x, y, z from step 3's Tvec
    void SetTranslation(float x, float y, float z)
    {
        // Negate y: OpenCV's y axis points down, Unity's points up
        Vector3 trans = new Vector3(x, -y, z);
        Cylinder.transform.localPosition = trans;
    }
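
    For reference, steps 1 and 2 above can be sketched roughly as follows. This is only a sketch assuming ARFoundation 4.x; the class name PoseInputGrabber and the field m_CameraManager are placeholders of mine, not anything from the project:

    ```csharp
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    public class PoseInputGrabber : MonoBehaviour
    {
        // Assumed to be assigned in the Inspector
        public ARCameraManager m_CameraManager;

        void Update()
        {
            // Step 1: camera intrinsics (pinhole model: fx, fy, cx, cy)
            if (m_CameraManager.TryGetIntrinsics(out XRCameraIntrinsics intrinsics))
            {
                float fx = intrinsics.focalLength.x;
                float fy = intrinsics.focalLength.y;
                float cx = intrinsics.principalPoint.x;
                float cy = intrinsics.principalPoint.y;
                // ... hand fx, fy, cx, cy to the OpenCV side
            }

            // Step 2: latest CPU image; XRCpuImage is a native resource and must be disposed
            if (m_CameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
            {
                using (image)
                {
                    // ... convert/copy the image buffer and pass it to OpenCV
                }
            }
        }
    }
    ```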

    The augmentation from the above process is perfect in rotation, but there is a slight offset in the y and z translation.

    Possible causes (self-evaluation):
    OpenCV pose estimation depends heavily on the camera intrinsics. I see that the intrinsics received from ARCameraManager.TryGetIntrinsics (based on a pinhole model) are quite different from the physical device (Android) intrinsics I receive from the camera2 API (camera2.characteristics). Since ARFoundation presumably applies its own intrinsic matrix at the time of augmentation, should I use the actual device intrinsics when determining the pose? If yes, what changes are needed on the ARFoundation side at the time of augmentation?

    Also, if you spot any other mistake in the process that could lead to the translation offset, please guide me.

    Thanks.
     
  2. RahulJain07

     Joined: Sep 9, 2020
     Posts: 15
    I resolved the issue. The glitch was that I had used a constant camera intrinsics value taken from an earlier run of the application via ARCameraManager.TryGetIntrinsics. Later I noticed that each time I start the application, TryGetIntrinsics gives different values of fx, fy, cx, cy on the same device (a Pixel 6 in my case). Reading the values fresh on each run and then running the OpenCV pose estimation gave me accurate results.
    Nevertheless, I have noticed a lot of threads on how to bring OpenCV translation and rotation vectors into Unity, so here is an actual working solution.
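
    To make that concrete, querying the intrinsics at runtime (rather than hard-coding them) and packing them into the 3x3 pinhole camera matrix that OpenCV expects could look like this sketch; the helper name BuildCameraMatrix is mine, not an API:

    ```csharp
    using UnityEngine.XR.ARSubsystems;

    // Builds the row-major 3x3 pinhole camera matrix
    // [ fx  0  cx ]
    // [  0 fy  cy ]
    // [  0  0   1 ]
    // from the intrinsics ARFoundation reports for the current session,
    // so the values are never stale from a previous run.
    static double[] BuildCameraMatrix(XRCameraIntrinsics intrinsics)
    {
        return new double[]
        {
            intrinsics.focalLength.x, 0,                        intrinsics.principalPoint.x,
            0,                        intrinsics.focalLength.y, intrinsics.principalPoint.y,
            0,                        0,                        1
        };
    }
    ```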

    // x, y, z from OpenCV, in meters
    void SetTranslation(float x, float y, float z)
    {
        Vector3 trans = new Vector3(x, -y, z);
        transform.localPosition = trans;
    }

    Just negate the y value from OpenCV, since OpenCV's coordinate system has y pointing downward while Unity's points upward. Note that the values are all in meters (the default unit in Unity).

    For rotation there are several possible conversion methods, but the following is the one I found easiest to understand and implement. Take the axis-angle components of the OpenCV rotation vector and apply them in Unity with this method:

    // x, y, z: axis-angle components from OpenCV
    void SetRotation(float x, float y, float z)
    {
        // The magnitude of the rotation vector is the angle in radians; convert to degrees
        float theta = Mathf.Sqrt(x * x + y * y + z * z) * Mathf.Rad2Deg;
        // Flip x and z to move the axis into Unity's left-handed frame
        Vector3 axis = new Vector3(-x, y, -z);
        Quaternion rot = Quaternion.AngleAxis(theta, axis);
        transform.localRotation = rot;
    }
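
    Putting the two together, a single helper could apply both conversions at once. This is just a sketch (SetPose is my own name, not an API); note that Quaternion.AngleAxis normalizes the axis internally, so passing the unnormalized vector is fine:

    ```csharp
    using UnityEngine;

    // tvec and rvec are the raw OpenCV outputs:
    // tvec in meters, rvec an axis-angle rotation vector in radians
    void SetPose(Vector3 tvec, Vector3 rvec)
    {
        // Translation: negate y (OpenCV y points down, Unity y points up)
        transform.localPosition = new Vector3(tvec.x, -tvec.y, tvec.z);

        // Rotation: the magnitude of rvec is the angle in radians
        float theta = rvec.magnitude * Mathf.Rad2Deg;
        // Flip x and z to move the axis into Unity's left-handed frame
        Vector3 axis = new Vector3(-rvec.x, rvec.y, -rvec.z);
        transform.localRotation = Quaternion.AngleAxis(theta, axis);
    }
    ```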

    Thanks a lot, folks.