
[SOLVED] Trouble using multiple cameras.

Discussion in 'AR/VR (XR) Discussion' started by ThaBullfrog, Mar 16, 2017.

  1. ThaBullfrog

    Joined:
    Mar 14, 2013
    Posts:
    49
    I am trying to replicate Google Maps' locomotion method. Well, not the movement itself, but the way it renders a static scene in your peripheral vision to reduce or eliminate VR sickness.


    A summary of what I think the problem is up here, with a more detailed explanation of exactly what I am trying to do below:

    The VR cameras have a lot of hidden functionality. I would expect two cameras separated slightly, but that's not what I get; I get one camera that apparently renders both eyes. Okay, I can split this into two cameras by telling two separate cameras to target only the left eye or only the right eye. However, they still aren't physically separated in the game world. That separation is hidden functionality.

    Where can I find this functionality? Clearly the SteamVR plugin or OpenVR (I'm guessing the latter) is doing some magic. I need to see how this is done.
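    For reference, the eye split above looks roughly like this (a minimal sketch; the component and field names are just placeholders for however you wire it up in the hierarchy):
    Code (csharp):
    using UnityEngine;

    // Two ordinary cameras, each restricted to rendering one eye of the HMD.
    // Unity / OpenVR still drives their tracking and projection behind the scenes,
    // which is the hidden functionality I'm asking about.
    public class SplitEyeCameras : MonoBehaviour
    {
        public Camera leftEye;
        public Camera rightEye;

        void Awake()
        {
            leftEye.stereoTargetEye = StereoTargetEyeMask.Left;   // render left eye only
            rightEye.stereoTargetEye = StereoTargetEyeMask.Right; // render right eye only
        }
    }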

    More detail:

    I have four cameras: LeftEye, RightEye, LeftEyePeripheral, and RightEyePeripheral.

    LeftEyePeripheral and RightEyePeripheral render an empty scene with nothing but a skybox and a plane with a grid texture as the floor.

    LeftEye and RightEye actually render my game world. Each of them targets a 1512x1680 render texture, which is in turn displayed on a canvas. They are put on the canvas using a mask shader that makes everything but the center of the image transparent (in other words, I cut off the peripheral part of their images). The canvases use "Screen Space - Camera". My thought was that this would display the LeftEye and RightEye images, cut out into a circle shape, on top of the peripheral images.
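    For concreteness, the render-texture-to-canvas wiring is roughly the following (a sketch; the field names and the mask material hookup are just how I have it set up, so treat them as placeholders):
    Code (csharp):
    using UnityEngine;
    using UnityEngine.UI;

    // One instance per eye: the eye camera draws into a RenderTexture, and a
    // RawImage on a Screen Space - Camera canvas shows that texture in front of
    // the matching peripheral camera.
    public class EyeToCanvas : MonoBehaviour
    {
        public Camera eyeCamera;        // LeftEye or RightEye
        public Camera peripheralCamera; // LeftEyePeripheral or RightEyePeripheral
        public RawImage canvasImage;    // RawImage using the circular mask material

        void Start()
        {
            // 1512x1680 is the per-eye resolution mentioned above.
            var rt = new RenderTexture(1512, 1680, 24);
            eyeCamera.targetTexture = rt;  // eye camera now renders into the texture
            canvasImage.texture = rt;      // canvas displays it over the peripheral view

            // The canvas is rendered by the peripheral camera.
            var canvas = canvasImage.canvas;
            canvas.renderMode = RenderMode.ScreenSpaceCamera;
            canvas.worldCamera = peripheralCamera;
        }
    }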

    Three weird things happen. First, when the LeftEye and RightEye cameras are given a target render texture, they no longer move with the VR headset. This is easily fixed by setting them as children of the peripheral cameras (snippet below). Second, the cameras lose their offset, so in the final product the image in the circle has no depth. Third, the plane distance of the canvas actually affects what the circle looks like in game. This third one is the weirdest. Why would that matter? The circle is scaled up when it's farther away, so it should look the same no matter what.
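    The re-parenting fix for the first issue can just be done in the editor hierarchy; from code it would be something like this (a quick sketch with my own names):
    Code (csharp):
    using UnityEngine;

    // Re-parent the eye cameras under the already-tracked peripheral cameras
    // so they follow the headset again after being given a target texture.
    public class ReparentEyeCameras : MonoBehaviour
    {
        public Transform leftEye, rightEye;
        public Transform leftEyePeripheral, rightEyePeripheral;

        void Awake()
        {
            leftEye.SetParent(leftEyePeripheral, false);
            rightEye.SetParent(rightEyePeripheral, false);
        }
    }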

    So how can I retain depth, send the images to render textures, and then draw those render textures on top of their respective peripheral cameras' views?

    Wow, that was long. This is far more complex than I imagined it would be so if you got through all that, thank you for your time. Hopefully, you can point me in the right direction.
     
  2. ThaBullfrog

    Joined:
    Mar 14, 2013
    Posts:
    49
    Solved it. Both the LeftEye and RightEye cameras had to be offset from center by half of the distance between the player's eyes. That distance was found using the following:
    Code (csharp):
    float eyeSeparation = Vector3.Distance(
        UnityEngine.VR.InputTracking.GetLocalPosition(UnityEngine.VR.VRNode.LeftEye),
        UnityEngine.VR.InputTracking.GetLocalPosition(UnityEngine.VR.VRNode.RightEye));
    Then the canvas images also had to be offset by the same distance (note: I accidentally used pixel units on my first attempt; they need to be offset using world units).
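    Putting the fix together, it looks roughly like this (a sketch with my own component and field names; adjust to whatever your hierarchy uses):
    Code (csharp):
    using UnityEngine;
    using UnityEngine.VR; // Unity 5.x / 2017 namespace; newer versions moved this to UnityEngine.XR

    // Offsets each eye camera from center by half of the tracked eye separation
    // so the masked circle keeps its stereo depth. The canvas images then need
    // the same offset applied in world units (not pixels), as noted above.
    public class EyeSeparationOffset : MonoBehaviour
    {
        public Transform leftEye;  // LeftEye camera (child of LeftEyePeripheral)
        public Transform rightEye; // RightEye camera (child of RightEyePeripheral)

        void Update()
        {
            // Distance between the tracked eye positions (the player's IPD).
            float separation = Vector3.Distance(
                InputTracking.GetLocalPosition(VRNode.LeftEye),
                InputTracking.GetLocalPosition(VRNode.RightEye));

            // Each eye sits half the separation from center along its parent's local X axis.
            leftEye.localPosition = new Vector3(-separation * 0.5f, 0f, 0f);
            rightEye.localPosition = new Vector3(separation * 0.5f, 0f, 0f);
        }
    }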