How is XR.InputTracking.GetLocalPosition() implemented?

Discussion in 'AR/VR (XR) Discussion' started by SoxwareInteractive, Jul 2, 2018.

  1. SoxwareInteractive

    Joined:
    Jan 31, 2015
    Posts:
    541
    Hi,
    I'm trying to get a better understanding of how the XR.InputTracking.GetLocalPosition() method calculates its position. As far as I understand, this method returns the predicted position the XR node will have at the moment the rendered frame's photons are presented to the user. Could you share some details about how this prediction is implemented?

    In the case of OpenVR (Vive), is this method using OpenVR's GetDeviceToAbsoluteTrackingPose() behind the scenes (which supports prediction via the "fPredictedSecondsToPhotonsFromNow" parameter)?
    (Unity's XR method is implemented in C++, so its implementation isn't visible in Unity's C# reference repo.)

    How does Unity calculate the "fPredictedSecondsToPhotonsFromNow" parameter? I tried computing it based on the example shown in the OpenVR docs, but the resulting output is about one frame behind what XR.InputTracking.GetLocalPosition() returns:
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR;
    using Valve.VR;

    public class Test : MonoBehaviour
    {
        private TrackedDevicePose_t[] trackedDevicePose = new TrackedDevicePose_t[16];

        // Update is called once per frame
        private void Update()
        {
            // ------------------------
            // Unity's implementation
            // ------------------------
            Vector3 unityPosition = InputTracking.GetLocalPosition(XRNode.Head);

            // ------------------------
            // OpenVR implementation
            // ------------------------
            float fPredictedSecondsFromNow = GetPredictedSecondsFromNow();

            OpenVR.System.GetDeviceToAbsoluteTrackingPose(ETrackingUniverseOrigin.TrackingUniverseStanding, fPredictedSecondsFromNow, trackedDevicePose);
            // Device index 0 is always the HMD.
            SteamVR_Utils.RigidTransform rigidTransform = new SteamVR_Utils.RigidTransform(trackedDevicePose[0].mDeviceToAbsoluteTracking);
            Vector3 openVrPosition = rigidTransform.pos;

            // Summary:
            // openVrPosition is around one frame behind unityPosition. Why??
        }

        // Source: https://github.com/ValveSoftware/openvr/wiki/IVRSystem::GetDeviceToAbsoluteTrackingPose
        private float GetPredictedSecondsFromNow()
        {
            float fSecondsSinceLastVsync = 0;
            ulong frameCounter = 0;
            OpenVR.System.GetTimeSinceLastVsync(ref fSecondsSinceLastVsync, ref frameCounter);

            ETrackedPropertyError error = ETrackedPropertyError.TrackedProp_Success;
            float fDisplayFrequency = OpenVR.System.GetFloatTrackedDeviceProperty(0, ETrackedDeviceProperty.Prop_DisplayFrequency_Float, ref error);
            float fFrameDuration = 1.0f / fDisplayFrequency;
            float fVsyncToPhotons = OpenVR.System.GetFloatTrackedDeviceProperty(0, ETrackedDeviceProperty.Prop_SecondsFromVsyncToPhotons_Float, ref error);

            // Remaining time in the current frame plus the fixed vsync-to-photon latency.
            float fPredictedSecondsFromNow = fFrameDuration - fSecondsSinceLastVsync + fVsyncToPhotons;

            return fPredictedSecondsFromNow;
        }
    }
    What I'm ultimately trying to do is recreate the XR.InputTracking.GetLocalPosition() method using just raw OpenVR methods, because I want to send the tracking pose over the network and then do the correct prediction (plus latency correction) on the receiving client.
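    For example, on the sending side I imagine something along these lines, reusing the helpers from the code above (the latency value is just a hypothetical placeholder I'd measure at runtime, and I haven't verified this approach):
    Code (CSharp):
    // Sketch (unverified assumption): widen the prediction window by the measured
    // one-way network latency, so the sampled pose is correct for the moment it
    // is actually displayed on the receiving client.
    float oneWayLatencySeconds = 0.030f; // hypothetical placeholder; measure at runtime
    float fPredictedSeconds = GetPredictedSecondsFromNow() + oneWayLatencySeconds;
    OpenVR.System.GetDeviceToAbsoluteTrackingPose(
        ETrackingUniverseOrigin.TrackingUniverseStanding,
        fPredictedSeconds,
        trackedDevicePose);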

    Any help would be very appreciated :) Thanks!
     
  2. StayTalm_Unity

    Unity Technologies

    Joined:
    May 3, 2017
    Posts:
    182
    Hello,
    You are correct that GetLocalPosition relies on some prediction, and in fact the prediction can differ slightly depending on the SDK.
    For OpenVR, we get our tracked positions via this call:
    Code (CSharp):
    class IVRCompositor
    {
        ...
        /** Scene applications should call this function to get poses to render with
        * (and optionally poses predicted an additional frame out to use for gameplay).
        * This function will block until "running start" milliseconds before the start
        * of the frame, and should be called at the last moment before needing to
        * start rendering.
        *
        * Return codes:
        * - IsNotSceneApplication (make sure to call VR_Init with VRApplication_Scene)
        * - DoNotHaveFocus (some other app has taken focus - this will throttle the
        *   call to 10hz to reduce the impact on that app)
        */
        virtual EVRCompositorError WaitGetPoses( VR_ARRAY_COUNT(unRenderPoseArrayCount) TrackedDevicePose_t* pRenderPoseArray, uint32_t unRenderPoseArrayCount,
            VR_ARRAY_COUNT(unGamePoseArrayCount) TrackedDevicePose_t* pGamePoseArray, uint32_t unGamePoseArrayCount ) = 0;
    WaitGetPoses is a blocking call that unblocks just in time for the next rendering pass, in order to give us the most accurate rendering poses possible.

    This is called right before the previous frame's rendering pass, and we use the pGamePoseArray poses for the next Update pass, which could be why you feel like you are one frame behind. Sadly, though, this means that the prediction is done internally by OpenVR, so neither you nor I have access to the exact prediction timing.
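
    To make the call pattern concrete, here is a rough sketch using the C# bindings. This is only an illustration of the API shape, not our native code, and actually calling WaitGetPoses from a script while the built-in VR integration is running would interfere with it:
    Code (CSharp):
    using UnityEngine;
    using Valve.VR;

    // Sketch only: mirrors the two-array call pattern described above.
    public class WaitGetPosesSketch : MonoBehaviour
    {
        // Poses predicted for the frame about to be rendered.
        private TrackedDevicePose_t[] renderPoses = new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];
        // Poses predicted one frame further out, intended for the next gameplay Update.
        private TrackedDevicePose_t[] gamePoses = new TrackedDevicePose_t[OpenVR.k_unMaxTrackedDeviceCount];

        private void SamplePoses()
        {
            // Blocks until "running start" before the next frame, then fills both arrays.
            EVRCompositorError error = OpenVR.Compositor.WaitGetPoses(renderPoses, gamePoses);
            if (error != EVRCompositorError.None)
                return;

            // The game pose (one frame further out than the render pose) is what
            // lines up with what GetLocalPosition returns on the following Update.
            var headPose = new SteamVR_Utils.RigidTransform(
                gamePoses[OpenVR.k_unTrackedDeviceIndex_Hmd].mDeviceToAbsoluteTracking);
            Debug.Log("Predicted head position for the next Update: " + headPose.pos);
        }
    }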
     
  3. SoxwareInteractive

    Joined:
    Jan 31, 2015
    Posts:
    541
    Hi @StayTalm_Unity,
    thank you very much for your detailed response. That was a great help!