Hi folks! I'm not really sure whether this is better posted here or in the XR discussion forum, so I'm rolling the dice.

I've been thinking about how to combine the magic of Cinemachine with a VR game. One thing I'd like to be able to do is move through a few different perspectives: world overview, third-person follow, and first person. Cinemachine brings so much to the table that if this were a non-VR game, I'd just set up a few vcams at various points in the scene and adjust their priorities at runtime to blend between them.

The complication is how integral the camera rig setup is to VR. Typically in VR you'd see the "center eye anchor" as the main camera, with the left- and right-hand anchors positioned relative to the CEA, and the "tracking space" defining your safe play zone. I suspect the entire OVRCameraRig would have to move from one vcam to another, but then how would you handle smooth blending between them?

I've got a million questions about leveraging Cinemachine in VR. Before I burn a couple of weekends on prototyping and research, here are the main ones:

- Do I make my vcams children of the CEA? Does that cause motion sickness?
- Does head tracking break when a vcam has a Follow target? What if I set Follow but leave Look At null?
- What if I want *some* player guidance via a set Look At target, while still letting the player move their head freely (effectively, the world at the center eye updates along the guided movement while the player retains full head movement within it)?

Are there any resources out there from people who have thought about and solved these problems before? Is there anyone on the internet with insight into the scope of the problem here, or any thought leadership on this topic?
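For context, here's roughly the non-VR pattern I have in mind, just swapping vcam priorities at runtime and letting the Cinemachine Brain blend. This is only a sketch; the class and field names are my own, and in VR I assume the blend would somehow have to drive the rig root instead of Camera.main:

```csharp
using UnityEngine;
using Cinemachine;

// Sketch: several vcams in the scene, and we pick which one is
// live by raising its Priority. The CinemachineBrain on the main
// camera then blends between them using its default blend settings.
public class PerspectiveSwitcher : MonoBehaviour
{
    public CinemachineVirtualCamera overviewCam;
    public CinemachineVirtualCamera thirdPersonCam;
    public CinemachineVirtualCamera firstPersonCam;

    const int LivePriority = 20;
    const int IdlePriority = 10;

    // Demote every vcam, then promote the requested one so the
    // Brain blends toward it.
    public void SwitchTo(CinemachineVirtualCamera target)
    {
        overviewCam.Priority = IdlePriority;
        thirdPersonCam.Priority = IdlePriority;
        firstPersonCam.Priority = IdlePriority;
        target.Priority = LivePriority;
    }
}
```

In VR, my worry is that the Brain normally writes the blended pose onto the camera it lives on, which would fight the HMD's head tracking on the center eye anchor. That's why I suspect the whole OVRCameraRig has to be the thing that moves.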