Coordinate Transformation issue - Going from Captured Frame to Screen Coordinates

Discussion in 'AR/VR (XR) Discussion' started by chris_unity888, Sep 9, 2019.

  1. chris_unity888

     Joined: Sep 9, 2019
     Posts: 1

    Hi! I'm running an ML model in Core ML on a camera frame passed through a C++ bridge from Unity to Swift. The model returns a pose with positions normalized from 0 to 1. However, since the camera frame's resolution (and aspect ratio) doesn't match the screen's, the prediction ends up a bit squished: when the subject is in the center of the screen it lines up fine, but when they're towards the top of the frame the predicted points land noticeably off.

    Is there an easy way to go from camera-frame coordinates to screen/viewport coordinates? I've been searching around but haven't been able to find anything definitive.

    I feel like I could manually calculate the delta, but that's making some assumptions about how the camera frame is projected onto what I imagine is the AR Camera Background.
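    For example, here's roughly the manual calculation I had in mind. It's an untested sketch in Swift (that's where the Core ML prediction comes back), and it assumes the camera frame is aspect-filled onto the screen, i.e. uniformly scaled to cover it and then center-cropped, which is exactly the assumption about the AR Camera Background that I'm not sure holds:

    Code (Swift):

        import CoreGraphics

        /// Remaps a normalized point (0...1) in the camera image to a normalized
        /// point in screen/viewport space, assuming the image is uniformly scaled
        /// to cover the screen and then center-cropped (aspect fill).
        func imageToViewport(_ p: CGPoint,
                             imageSize: CGSize,
                             screenSize: CGSize) -> CGPoint {
            let imageAspect = imageSize.width / imageSize.height
            let screenAspect = screenSize.width / screenSize.height

            if imageAspect > screenAspect {
                // Image is wider than the screen: the sides get cropped.
                let visible = screenAspect / imageAspect   // fraction of the image width that is shown
                let offset = (1 - visible) / 2
                return CGPoint(x: (p.x - offset) / visible, y: p.y)
            } else {
                // Image is taller than the screen: the top/bottom get cropped.
                let visible = imageAspect / screenAspect   // fraction of the image height that is shown
                let offset = (1 - visible) / 2
                return CGPoint(x: p.x, y: (p.y - offset) / visible)
            }
        }

    That also completely ignores device orientation and any rotation applied to the camera image. I suspect ARKit's ARFrame.displayTransform(for:viewportSize:) is the proper way to get this mapping on the native side, but I don't know how that interacts with whatever Unity's AR Camera Background is doing, so this could be the wrong approach entirely.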

    Thanks!