Hi! I'm running a Core ML model on a frame passed from Unity to Swift through a C++ bridge. The model returns a pose with positions normalized from 0 to 1. However, since the camera resolution differs from the screen size, the prediction comes out slightly squished: when the subject is in the center of the screen it lines up fine, but toward the top of the screen it's scaled a bit incorrectly.

Is there an easy way to go from camera-frame coordinates to screen/viewport coordinates? I've been searching around but haven't found anything decisive. I feel like I could manually calculate the delta, but that makes assumptions about how the camera frame is projected onto what I imagine is the AR Camera Background. Thanks!
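For reference, the manual delta calculation I have in mind would look something like the sketch below. It assumes the camera frame is aspect-filled onto the screen (scaled to cover, centered, overflow cropped), which is my guess at what the AR Camera Background does; the function name and parameters are just my own naming:

```python
def frame_to_screen_norm(u, v, frame_w, frame_h, screen_w, screen_h):
    """Map normalized camera-frame coords (u, v) in [0, 1] to normalized
    screen coords, assuming the frame is aspect-filled: scaled uniformly
    to cover the screen, centered, with the overflow cropped."""
    frame_aspect = frame_w / frame_h
    screen_aspect = screen_w / screen_h
    if frame_aspect > screen_aspect:
        # Frame is wider than the screen: width overflows and is cropped,
        # so x stretches about the center while y maps 1:1.
        return (u - 0.5) * (frame_aspect / screen_aspect) + 0.5, v
    else:
        # Frame is taller than the screen: height overflows and is cropped,
        # so y stretches about the center while x maps 1:1.
        return u, (v - 0.5) * (screen_aspect / frame_aspect) + 0.5

# A centered point should stay centered regardless of aspect mismatch:
# frame_to_screen_norm(0.5, 0.5, 640, 480, 390, 844) -> (0.5, 0.5)
```

But I'd rather use whatever transform the framework already applies (e.g. a display matrix on the AR background, if one is exposed) than bake in this aspect-fill assumption myself.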