Documentation on stereo 360 capture?

Discussion in '2018.1 Beta' started by marcusestes, Jan 10, 2018.

  1. marcusestes

    marcusestes

    Joined:
    Feb 2, 2017
    Posts:
    17
    I'm interested in the new experimental stereo 360 capture. Can we get a quick write-up on it in the forums in advance of official documentation?
     
  2. JeanSmall

    JeanSmall

    Joined:
    Aug 2, 2017
    Posts:
    5
    I second that. All we have to work with is the RenderToCubemap method extension. So far I've been able to render to a RenderTexture correctly for the right and left eyes.
    One of the great things about this feature is rendering offline stereo video of a real-time VR experience, but from what I've read, found, and tried so far, I can only get one side of the cubemap to disk using C# alone in Unity.

    So is there more to it, or do we have to write a native plugin to access the cubemap in DirectX/OpenGL in order to get the full texture to disk? Or is something being written to that effect at Unity?
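
    For reference, here is a minimal sketch of how one face of a cubemap RenderTexture can be read back and saved from pure C#, which is roughly what I've been attempting. The variable names (`cubemap`, the output path) are just illustrative, not from any official sample:

    ```csharp
    using UnityEngine;

    public class SaveCubemapFace : MonoBehaviour
    {
        public RenderTexture cubemap; // a RenderTexture with dimension = Cube,
                                      // filled by Camera.RenderToCubemap

        void SaveFace()
        {
            // Bind one cubemap face as the active render target.
            Graphics.SetRenderTarget(cubemap, 0, CubemapFace.PositiveX);

            // Read the active render target back into a Texture2D.
            var tex = new Texture2D(cubemap.width, cubemap.height,
                                    TextureFormat.RGBA32, false);
            tex.ReadPixels(new Rect(0, 0, cubemap.width, cubemap.height), 0, 0);
            tex.Apply();

            // Encode and write to disk (illustrative path).
            System.IO.File.WriteAllBytes("face_px.png", tex.EncodeToPNG());
        }
    }
    ```

    Doing this per face and per eye gets individual faces to disk, but it still leaves the job of stitching the faces into a full panorama.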
     
  3. mikesf

    mikesf

    Unity Technologies

    Joined:
    Jul 14, 2017
    Posts:
    14
    There's a stereo 360 capture blog post showing how to do this (coming this month).
    There are two stereo 360 capture APIs in 2018.1 which work together to capture stereo 360:

    Camera.RenderToCubemap: https://docs.unity3d.com/2018.1/Documentation/ScriptReference/Camera.RenderToCubemap.html
    RenderTexture.ConvertToEquirect: https://docs.unity3d.com/2018.1/Documentation/ScriptReference/RenderTexture.ConvertToEquirect.html

    This is also integrated with our frame recorder tool if you want to capture 360 video.
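
    A minimal sketch of how the two APIs fit together, based on the documentation pages linked above. The field names and texture sizes here are assumptions for illustration; the equirect texture is laid out over/under (left eye on top, right eye on bottom):

    ```csharp
    using UnityEngine;

    public class Stereo360Capture : MonoBehaviour
    {
        public Camera captureCamera;        // camera to capture from
        public RenderTexture cubemapLeft;   // RenderTexture with dimension = Cube
        public RenderTexture cubemapRight;  // RenderTexture with dimension = Cube
        public RenderTexture equirect;      // 2D RenderTexture, e.g. 4096x4096

        void LateUpdate()
        {
            captureCamera.stereoSeparation = 0.064f; // eye separation in meters

            // Render each eye into its own cubemap (63 = all six faces).
            captureCamera.RenderToCubemap(cubemapLeft, 63,
                Camera.MonoOrStereoscopicEye.Left);
            captureCamera.RenderToCubemap(cubemapRight, 63,
                Camera.MonoOrStereoscopicEye.Right);

            // Convert both cubemaps into one over/under equirectangular texture.
            cubemapLeft.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Left);
            cubemapRight.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Right);
        }
    }
    ```

    The resulting equirect texture can then be read back and encoded per frame, or fed to the frame recorder tool mentioned above for 360 video.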
     
  4. JeanSmall

    JeanSmall

    Joined:
    Aug 2, 2017
    Posts:
    5
    Awesome, thanks!
     
  5. 3ch3l0n

    3ch3l0n

    Joined:
    Dec 15, 2014
    Posts:
    1
    Hi, any news on that stereo 360 blog post?
     
  6. LeonhardP

    LeonhardP

    Unity Technologies

    Joined:
    Jul 4, 2016
    Posts:
    3,132
  7. tenshidev1

    tenshidev1

    Joined:
    Dec 6, 2017
    Posts:
    1
    Hi - do you know the maximum size of equirectangular RenderTexture that can be output? Is it limited to 8K, or is higher possible?
     
  8. mikesf

    mikesf

    Unity Technologies

    Joined:
    Jul 14, 2017
    Posts:
    14
  9. phili_maas

    phili_maas

    Joined:
    Dec 11, 2016
    Posts:
    21
    This is really great and looks promising!
    From the blog article and Google's ODS document, I'm not entirely sure I understand correctly how it works.
    Is the camera still basically capturing a cubemap, but per cubemap direction it offsets the vertices in the shader to produce correct stereo pixel pairs for each eye, even at the far corner of the cubemap?
    I ask because I get some cut-off edges on cube primitives that stretch over two cubemap sides. I guess the more vertices, the more accurate the offset in the shader? Unless it would tessellate low-poly objects beforehand?
    Also a question regarding post FX and camera-based effects, which are always an issue: I get some seams in the gradients at the cubemap edges.
    But other than working around those limitations, I think it is incredible how fast one can output 8192x8192 px with this technique and get good enough stereo images!
    Best,
    Phil

    [Attached image: upload_2018-2-2_11-17-12.png]
     
  10. mikesf

    mikesf

    Unity Technologies

    Joined:
    Jul 14, 2017
    Posts:
    14
    @phili_maas The amount of vertex ODS offset differs between the left and right eyes, hence the different top and bottom images. The offset also depends on your camera's stereoSeparation value (defaults to 0.064 m, i.e. 64 mm, the average human IPD).

    There's also a projection component in the ODS offset calculation which distorts vertices closer to the camera more than distant ones. Better tessellation would smooth the ODS offset function but not change the amount of offset.

    There's also the process of mapping the cubemaps to an equirectangular map, which can affect the stereo results.
    It's also important to tune your capture results by viewing them in stereo on a VR device rather than just looking at the equirect images.
     
  11. phili_maas

    phili_maas

    Joined:
    Dec 11, 2016
    Posts:
    21
    @mikesf Thanks for your reply. Yes, I checked in VR, of course. The artifacts I'm seeing are not just differences in stereo; the cube is basically cut off in a straight vertical line in one eye. I know it is hard to see in my cropped image. That's why I was assuming it has to do with the cubemap process: as I understand it, it takes one side of the cube at a time, offsets the vertices per eye, and then converts to panoramic at the end. The cut-off cube also lines up with the cubemap seams I see in the sky gradient.