Oculus Rift: Setting up tunneling

Discussion in 'AR/VR (XR) Discussion' started by Boow_, Apr 5, 2016.

  1. Boow_

    Joined: Sep 12, 2014
    Posts: 1

    Hi all,

    I'm currently trying to implement "tunneling", a technique that greatly reduces simulation sickness during locomotion in VR. I highly advise taking a look at the following video and trying the demo (if you can get it to work, as it's built for version 0.4 of Oculus's SDK).

    [embedded video]

    While trying to reproduce what can be seen in the video, I ran into several problems (using Unity 5.3.4f1):
    • You cannot change the viewport rect or FOV of a camera while the Oculus is plugged in (my first try, which worked without the Oculus, was to use a second camera on top of the "main camera" with a higher depth and the "right" viewport rect and FOV).
    • There is no documentation or examples of using one camera per eye (via the camera's Target Eye field): after some experimentation, I think that even if both cameras are set at the same position by the Oculus, they somehow manage to maintain the interpupillary distance (a sketch of this setup follows the list).
    • When rendering a camera to a texture, the camera stops having its fields set by the Oculus Rift (its position and rotation stop changing).
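    For reference, here is roughly what our per-eye experiment looked like (a minimal sketch; the class and field names are just ours, and I'm not certain this is the intended use of the API):

```csharp
using UnityEngine;

// Minimal sketch of the one-camera-per-eye experiment described above.
// Class and field names are ours; this is what we pieced together, not an
// official recipe.
public class PerEyeCameraRig : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;

    void Awake()
    {
        // Camera.stereoTargetEye is the scripting counterpart of the
        // inspector's "Target Eye" dropdown.
        leftEye.stereoTargetEye = StereoTargetEyeMask.Left;
        rightEye.stereoTargetEye = StereoTargetEyeMask.Right;

        // Even with both cameras on the same local position, the headset
        // still seems to apply the interpupillary offset per eye.
        leftEye.transform.localPosition = Vector3.zero;
        rightEye.transform.localPosition = Vector3.zero;
    }
}
```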
    We tried two solutions. In each of them we had two cameras: the main camera, which is the one that gets blurred while moving and should fill the outer part of the screen, and the "center camera", which shows in the center of the screen.

    The first solution we tried, which works fine when running without an Oculus, was to set the central camera's viewport rect to (x: 0.2, y: 0.2, w: 0.6, h: 0.6) and its FOV to 38 (with the main camera's FOV set to 60).
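    In code, that first setup was essentially the following (a minimal sketch; the names and the clear-flags choice are ours and may need tweaking):

```csharp
using UnityEngine;

// Sketch of the first solution: a full-screen main camera with a smaller,
// sharper "center" camera drawn on top of it. This works without the Rift;
// in VR, the viewport rect and FOV get overridden.
public class TunnelingCameras : MonoBehaviour
{
    public Camera mainCamera;   // gets blurred while moving, fills the screen
    public Camera centerCamera; // sharp inset in the middle of the screen

    void Awake()
    {
        mainCamera.fieldOfView = 60f;

        centerCamera.depth = mainCamera.depth + 1;             // render on top
        centerCamera.clearFlags = CameraClearFlags.Depth;      // keep the main image around the inset
        centerCamera.rect = new Rect(0.2f, 0.2f, 0.6f, 0.6f);  // x, y, width, height
        centerCamera.fieldOfView = 38f;
    }
}
```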

    The second solution, knowing that we couldn't change the center camera's viewport rect, was to use a (fragment) shader to set the alpha of the "external" pixels (defined by a mask) to 0. This does not work, for reasons probably obvious to someone who knows shaders better than I do, and I'd greatly appreciate it if someone who knows a shader-based solution could point me in the right direction.
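    To show what I mean, here is the C# side of the attempt; the "Hidden/TunnelMask" shader name is hypothetical, and writing that shader correctly is exactly where we're stuck:

```csharp
using UnityEngine;

// The C# half of the shader attempt. "Hidden/TunnelMask" is a hypothetical
// shader (not shown): the idea is that it keeps pixels where the mask is
// white and discards the rest, so the main camera's image would stay
// visible around the center.
[RequireComponent(typeof(Camera))]
public class TunnelMaskEffect : MonoBehaviour
{
    public Texture2D mask; // white disc in the center, black outside
    private Material maskMaterial;

    void Awake()
    {
        maskMaterial = new Material(Shader.Find("Hidden/TunnelMask"));
        maskMaterial.SetTexture("_MaskTex", mask);
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Runs on the center camera after it renders; blits its image
        // through the mask shader.
        Graphics.Blit(source, destination, maskMaterial);
    }
}
```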

    We are currently working on a third solution using two cameras per eye (one for the central part and one for the external part): we render the central one to a texture and then display it on a quad right in front of the other camera. It should end up working, but it's a clunky solution and it ends up being quite messy.
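    Simplified to a single eye, that work-in-progress looks something like this (a rough sketch; the LateUpdate is our manual-tracking workaround for the render-texture problem from the list above):

```csharp
using UnityEngine;
using UnityEngine.VR;

// Sketch of the third solution, simplified to one eye: render the center
// camera to a texture and show that texture on a quad floating in front of
// the main camera. Since a camera with a targetTexture stops being tracked,
// we copy the head pose onto it by hand every frame.
public class CenterViewOnQuad : MonoBehaviour
{
    public Camera centerCamera; // narrow-FOV camera for the sharp center
    public Renderer quad;       // quad parented just in front of the eye camera

    private RenderTexture centerTexture;

    void Start()
    {
        centerTexture = new RenderTexture(1024, 1024, 24);
        centerCamera.targetTexture = centerTexture;
        quad.material.mainTexture = centerTexture;
    }

    void LateUpdate()
    {
        centerCamera.transform.localPosition = InputTracking.GetLocalPosition(VRNode.CenterEye);
        centerCamera.transform.localRotation = InputTracking.GetLocalRotation(VRNode.CenterEye);
    }
}
```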

    If anyone has any insight into the best way to reproduce how the cameras behave in the video, we'd highly appreciate it.
    Thanks for reading.