Polarized 3D Footage

Discussion in 'AR/VR (XR) Discussion' started by AJ-at-VRSFX, Sep 10, 2015.

  1. AJ-at-VRSFX

    Joined:
    Jul 10, 2014
    Posts:
    9
    This article shows how to use Unity's Stereoscopic Rendering feature to render 3D footage in an Oculus Rift app:
    http://gamasutra.com/blogs/NBaron/20150902/252357/Quadbuffered_stereoscopic_3D_with_Unity3D.php

    The article instructs us to render each eye to its own texture, but Unity's documentation, under "Automatic stereo display", says "It is not required to have two Cameras":
    http://docs.unity3d.com/Manual/VROverview.html

    Is this true of 3D footage mapped to a surface? If we can render footage to just a single sphere instead of one sphere per eye, we'll see a performance increase and we won't have to worry about keeping the eyes in sync.

    If Unity's Stereoscopic Rendering feature handles this for us automatically, what format does the footage need to be in to render correctly?
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    The rest of that quoted line is: "It is not required to have two Cameras for stereoscopic displays. Any camera that has no render texture is automatically rendered in stereo to your device."

    There are still two cameras in the end, but Unity handles them for you behind the scenes. The benefit is that many things only have to be calculated once and then rendered twice, saving quite a bit of performance, but the assumption is that both eyes see the same objects, just from slightly different perspectives. That's great for a 3D scene, but for stereo video footage each eye needs to see a different sphere, so you still need two cameras, for now.
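
    Roughly, the setup looks something like this. It's just a sketch, not the only way to do it: the "LeftEye" / "RightEye" layer names are made up, and it assumes you've already built two inward-facing spheres around the camera rig, one textured with the left-eye video and one with the right-eye video, each on its own layer.

    Code (CSharp):
    using UnityEngine;

    // Sketch of a per-eye sphere setup for stereo video.
    // Assumes two spheres with flipped normals surrounding the cameras:
    // the left-eye sphere on a "LeftEye" layer, the right-eye sphere on "RightEye".
    public class PerEyeSphereSetup : MonoBehaviour
    {
        public Camera leftCamera;
        public Camera rightCamera;

        void Start()
        {
            int leftLayer = LayerMask.NameToLayer("LeftEye");
            int rightLayer = LayerMask.NameToLayer("RightEye");

            // Left camera sees everything except the right-eye sphere,
            // and only renders to the left eye of the HMD.
            leftCamera.cullingMask = ~(1 << rightLayer);
            leftCamera.stereoTargetEye = StereoTargetEyeMask.Left;

            // Right camera is the mirror image.
            rightCamera.cullingMask = ~(1 << leftLayer);
            rightCamera.stereoTargetEye = StereoTargetEyeMask.Right;
        }
    }

    Both cameras sit at the same position and the headset still drives their tracking; the stereoTargetEye setting is what stops each one from rendering into the other eye's view.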

    I can't find it anymore, but Unity has said they'll look at adding support for stereo video; there just aren't concrete plans yet.