How to implement rendering logic for stereoscopic 360 video

Discussion in 'AR/VR (XR) Discussion' started by dimatomp, Dec 5, 2018.

  dimatomp

    Joined:
    Oct 20, 2016
    Posts:
    16
    Hello,

    I am trying to implement my own 360 stereo renderer rather than use an existing solution; the reasons are specific to my scenario. I understand that a 360 stereo image is usually produced by rendering two cubemaps (one per eye) and converting them to an equirectangular projection - my question mainly concerns the first part.
    So far I know of three implementations of cubemap rendering logic:
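    For reference, the cubemap-to-equirectangular step boils down to mapping each output pixel to a view direction and sampling the cubemap along it. A minimal sketch of that mapping (Python, assuming a y-up, z-forward convention and normalized pixel coordinates; function name is mine):

    ```python
    import math

    def equirect_to_direction(u, v):
        """Map normalized equirectangular coordinates (u, v in [0, 1])
        to a unit view direction (x, y, z), y-up, z-forward."""
        # longitude spans -pi..pi across the image width,
        # latitude spans -pi/2..pi/2 across the height
        lon = (u - 0.5) * 2.0 * math.pi
        lat = (v - 0.5) * math.pi
        x = math.cos(lat) * math.sin(lon)
        y = math.sin(lat)
        z = math.cos(lat) * math.cos(lon)
        return (x, y, z)
    ```

    The image center (u = v = 0.5) maps to the forward axis, and the returned vector is always unit length, so it can index a cubemap directly.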
    • Camera.RenderToCubemap renders all six cubemap faces according to the provided stereo separation, target eye, etc. I believe it works correctly, but it does not render world-space UI (see the issue), which is crucial in my particular case.
    • The Helios asset package simply places the camera at a -IPD offset for the left eye and +IPD for the right eye. It is not obvious to me why this works at all: for example, if the viewer looks to the left, they will see the left image 2*IPD "closer" than the right image, when it should be 2*IPD "to the left" of the right image.
    • VR Panorama asset package: I have no experience with it. Helios was a disappointment for me because of its really poor design; I am not sure VR Panorama is any better, and I don't want to buy it without a preview.
    There is also a Google article about 360 video: it suggests rendering the equirectangular image column by column. While this approach obviously works, it is pretty expensive in terms of rendering time - especially if you want to render 8K stereo and would therefore have to perform thousands of passes.
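    The point of the per-column approach is that the stereo eye offset must rotate with the viewing direction, rather than stay fixed as in the Helios scheme. A minimal sketch of the eye placement per column (Python, y-up, yaw measured from +z; the IPD value and function name are assumptions for illustration):

    ```python
    import math

    IPD = 0.064  # assumed interpupillary distance in metres

    def eye_positions(yaw):
        """For an image column looking along `yaw` (radians, 0 = +z),
        offset each eye by half the IPD along the local 'right' axis,
        which is perpendicular to the forward direction."""
        # forward = (sin yaw, 0, cos yaw); right is forward rotated -90 deg
        right = (math.cos(yaw), 0.0, -math.sin(yaw))
        half = IPD / 2.0
        left_eye = (-half * right[0], 0.0, -half * right[2])
        right_eye = (half * right[0], 0.0, half * right[2])
        return left_eye, right_eye
    ```

    Looking forward (yaw = 0) the eyes are offset along x, as expected; looking 90 degrees to the side, the offset swings onto the z axis, which is exactly the behavior a fixed-offset camera pair cannot reproduce.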

    If I am wrong anywhere above, comments are appreciated. Any suggestions on the right way to render 360 stereo content?
     
    Last edited: Dec 5, 2018