Hi, I would like to simulate something similar to Timewarp on Oculus or Late Stage Reprojection (LSR) on HoloLens. For this, I captured camera poses from AR glasses and loaded them into Unity. I then move the camera from one pose to the next and capture the resulting images. My goal is to warp the image rendered from the first pose so that it matches the image from the second pose (both camera poses are known, as in Timewarp or LSR).

To apply such an image warp, I need to compute a homography, which in turn requires the camera intrinsics (calibration) matrix. As a first step, I set the physical camera parameters as in the attached image and computed the focal lengths in pixels using the formula:

fx[pixels] = fx[mm] * img_width[pixels] / sensor_width[mm]

where I set fx[mm] = 25 and sensor_width[mm] = 36 in Unity (and similarly for fy). I set the image resolution to 1920x1080, so I capture images of this size for my simulation. These are just representative values; I have no idea yet how to apply this in a real system, but for the simulation I just want to capture the images from Unity and apply a perspective transformation using OpenCV.

However, I discovered that the captured images look different depending on how the Game view is scaled. This seems to depend on the Gate Fit parameter of the physical camera.

My questions are:

1. What is the right way to think about Gate Fit? Depending on how it is set and how the Game view is scaled, I get different images, so I can't tell whether my transform is wrong or my capture is.

2. Is it correct to build a calibration matrix from the parameters of the Unity physical camera? For the optical center (the principal point, in pixels) in the matrix, I use cx = img_width/2 and cy = img_height/2.

Thanks.
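For reference, here is a minimal sketch of what I have in mind (hedged: it assumes a 16:9 sensor gate, i.e. sensor height = 36 * 9/16 = 20.25 mm so that pixels come out square, the principal point at the image center, and a rotation-only warp H = K R K^(-1); the direction/sign of R depends on the pose convention exported from Unity, and the 2-degree yaw is just an illustrative value):

```python
import numpy as np

def intrinsics(f_mm, sensor_w_mm, sensor_h_mm, width_px, height_px):
    """Build the pinhole calibration matrix K from physical camera parameters."""
    fx = f_mm * width_px / sensor_w_mm     # fx[px] = f[mm] * W[px] / sensor_w[mm]
    fy = f_mm * height_px / sensor_h_mm    # fy[px] = f[mm] * H[px] / sensor_h[mm]
    cx, cy = width_px / 2.0, height_px / 2.0  # principal point at image center
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def rotation_homography(K, R):
    """Homography mapping pixels of pose 1 into pose 2 for a pure rotation R.
    This is depth-independent; a translation between the poses would require
    per-pixel depth (or a plane assumption) and cannot be a single homography."""
    return K @ R @ np.linalg.inv(K)

# Assumed 16:9 gate so sensor height = 36 * 9/16 = 20.25 mm (square pixels)
K = intrinsics(25.0, 36.0, 36.0 * 9.0 / 16.0, 1920, 1080)

# Illustrative relative pose: a 2-degree yaw between the two captures
theta = np.deg2rad(2.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
H = rotation_homography(K, R)

# The warp itself would then be applied with OpenCV, e.g.:
# warped = cv2.warpPerspective(img1, H, (1920, 1080))
```

Note that this only handles the rotational part of the pose change; for translation, the warp depends on scene depth, which is also why reprojection techniques like LSR work best for (or are restricted to) rotational corrections.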