
Different content in each eye

Discussion in 'AR/VR (XR) Discussion' started by BigRedSwitch, Jun 11, 2015.

  1. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Hi all,

    We've been using the standard Oculus plugin for a while, and aside from the juddering problems caused by the most recent Oculus runtimes, we've not had any issues with it.

    We're keen to move to Unity 5.1, but one thing we've come up against while looking to move is that there are no longer two cameras in the editor view. As a result, situations where we play different content to each eye (stereo video, for example) no longer work.

    Is there an obvious way to get around this that we're missing at the moment? There must still be a way to talk to the different eye cameras independently?

    Thanks in advance for any help!

    SB
     
  2. thep3000

    thep3000

    Unity Technologies

    Joined:
    Aug 9, 2013
    Posts:
    400
    Hey,

    We are aware that there is currently no way to draw different things to each eye. We're working on a solution at the shader level: we will set a specific uniform constant shader variable to the eye index (if that variable exists in the shader), and your shader can render differently based on that value. We'd like to know whether this is sufficient for your purposes, or whether you need something higher level.
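    Until such a built-in uniform exists, a script can publish an eye index to all shaders itself. This is only a sketch: the uniform name `_EyeIndex` is an assumption (not a Unity or Oculus API), and it assumes OnPreRender fires once per eye, left first.

```csharp
using UnityEngine;

// Stopgap sketch: manually publish an eye index to all shaders.
// Attach to the VR camera. "_EyeIndex" is a hypothetical uniform name;
// shaders that declare it can branch on its value.
public class EyeIndexUniform : MonoBehaviour
{
    private int m_currentEye; // 0 = left eye, 1 = right eye (assumed order)

    void OnPreRender()
    {
        Shader.SetGlobalFloat("_EyeIndex", m_currentEye);
        m_currentEye = 1 - m_currentEye; // toggle for the next eye's render
    }
}
```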
     
  3. Poopypoo

    Poopypoo

    Joined:
    Apr 18, 2015
    Posts:
    18
    I think it would be very useful if this were available as a Unity function, so that we could detect in OnPostRender which eye is actually rendering.
     
  4. sleekdigital

    sleekdigital

    Joined:
    May 31, 2012
    Posts:
    133
    @thep3000 So in the case of stereo video, we would create two shaders, one for the left eye and one for the right? And in the shader code we would essentially turn rendering on/off based on the uniform constant? I guess there are also other approaches depending on the details; I'm just trying to understand how we would use this shader-level approach. Currently I just use layers and set the culling mask on each camera, as described here...
    http://bernieroehl.com/360stereoinunity/
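    In script form, that layer-based approach boils down to something like the sketch below. The layer names "LeftEye" and "RightEye" are assumptions; you'd create them in the project's Tags & Layers settings.

```csharp
using UnityEngine;

// Sketch of the per-eye layer trick: two cameras, each culling away the
// other eye's layer. Layer names "LeftEye"/"RightEye" are placeholders.
public class PerEyeCulling : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        int left  = 1 << LayerMask.NameToLayer("LeftEye");
        int right = 1 << LayerMask.NameToLayer("RightEye");

        // Each camera sees everything except the opposite eye's layer.
        leftCamera.cullingMask  = ~right;
        rightCamera.cullingMask = ~left;
    }
}
```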
     
  5. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Hey thep3000,

    Maybe...

    We produce cross-platform applications on Oculus, iOS and Android (in various forms), a chunk of which are stereo-rendered movies. The format of these movies is a top/bottom split for the L/R eyes. We then map these onto spheres in our scene. With the old Oculus SDK, we simply changed the layers the cameras could see, so that each eye saw the right section of the video on the appropriate sphere.

    Would the shader approach allow us to do something similar? We really don't want to be hacking through all our old code to change this to a completely different methodology...
     
  6. BradHerman

    BradHerman

    Joined:
    Jan 10, 2013
    Posts:
    21
    We need higher level. We need layers with Left and Right. This is how we do stereo movies: we have one video file with side-by-side frames, and two objects in the same place, one for each eye, with offset UVs.
    We use different video decoding shaders across 4 platforms. We are not going to hack 4 different 3rd-party shaders to support stereo rendering flags.
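    For side-by-side frames, the per-eye UV offset itself needs no custom shader work; standard material tiling/offset can do it, as in this sketch (it assumes both quads already share the side-by-side video texture, and that layers restrict each quad to one eye's camera):

```csharp
using UnityEngine;

// Sketch: two overlapping quads share one side-by-side video texture;
// each samples its own half via UV tiling/offset. Per-eye visibility is
// assumed to be handled separately (e.g. with layers and culling masks).
public class SideBySideUV : MonoBehaviour
{
    public Renderer leftQuad;   // shown only to the left eye
    public Renderer rightQuad;  // shown only to the right eye

    void Start()
    {
        leftQuad.material.mainTextureScale   = new Vector2(0.5f, 1f);
        leftQuad.material.mainTextureOffset  = new Vector2(0f, 0f);   // left half
        rightQuad.material.mainTextureScale  = new Vector2(0.5f, 1f);
        rightQuad.material.mainTextureOffset = new Vector2(0.5f, 0f); // right half
    }
}
```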
     
    MarioV72 and Gruguir like this.
  7. Poopypoo

    Poopypoo

    Joined:
    Apr 18, 2015
    Posts:
    18
    I also need higher level. Some way to draw different elements to each eye, whether by moving an object's position for each eye, or a layer, or something. I did try just changing the position of an object in the pre-render, but this did not seem to create a difference between the two eyes.
     
    MarioV72 likes this.
  8. edwebb

    edwebb

    Joined:
    Nov 15, 2013
    Posts:
    2
    @thep3000 Reiterating what others here have said. Our approach is also to use separate layers for each eye. So being able to set the culling mask for the different eye cameras would be ideal. Using shaders would only be partially successful for us.
     
  9. Kaspar-Daugaard

    Kaspar-Daugaard

    Unity Technologies

    Joined:
    Jan 3, 2011
    Posts:
    150
    Hi,

    We will bring back an option to have multiple cameras and target each one at a separate eye. We will also support the shader eye index as mentioned. I'm aware this is high priority and will see if we can get multiple camera support into a patch release.
     
  10. JDMulti

    JDMulti

    Joined:
    Jan 4, 2012
    Posts:
    384
    Today I tried to port our main VR application to Unity 5.1. I couldn't find the culling mask option for each eye and found this topic via Google.

    I hope this can be integrated really soon. We are one of the biggest companies in the Netherlands delivering a VR application for real estate companies, and our application can't be used with 5.1 since it relies mostly on a culling mask per eye. For now I'll keep using 4.6 until a per-eye culling mask is possible in the 5.x cycle.
    But it's good to see that you guys have noticed it and are working on a solution.
     
    Last edited: Jul 7, 2015
  11. sleekdigital

    sleekdigital

    Joined:
    May 31, 2012
    Posts:
    133
    @JDMulti, You can still do this with 5.x ... you just have to stick with your third party integration instead of using the new built-in VR support.
     
  12. JDMulti

    JDMulti

    Joined:
    Jan 4, 2012
    Posts:
    384
    Yes, I know, but the whole point of changing to 5.x was the built-in VR support. You're right, though: 5.x is usable, just with the SDKs. ;)
     
  13. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Except that, according to Oculus, the old integration for Unity is now legacy and will be discontinued at some point in the future. So workarounds like that won't keep working for long.

    This situation is exactly what I feared with 'first party' VR support being integrated into Unity. It's a completely closed-off system, meaning that without using individual product SDKs and writing your own wrappers for Unity, we are now at the whims of the Unity release/patch schedule to fix oversights such as not supporting stereoscopic video. Worse, with VR being so fast-paced and mostly unstandardized, I foresee Unity VR easily lagging behind hardware and software updates, not to mention it being impossible for developers to address bugs themselves (for example, last year's mess-up with chromatic offsets in one of Oculus's releases, where I believe people eventually hex-edited DLLs to fix it).

    This is one area where I wish UT had made the first-party support a pure plugin and released the source on BitBucket, as they've done with the Unity UI. That would at least give developers some ability to fix things themselves.
     
  14. JDMulti

    JDMulti

    Joined:
    Jan 4, 2012
    Posts:
    384
    Before that happens, I hope they've implemented support for culling masks per eye. At the moment we really rely on this feature for a VR application of ours that sells like crazy, and I can't contemplate losing it before SDK support is dropped. It would be a total disaster. The crisis in Europe hit us hard and VR is pulling us out of it. It sounds strange, but it really is; without it... no idea.

    Is it true that features in patch releases are not on the Unity Roadmap? It would be nice to have a tab for patches as well, besides the major releases. Just an idea.
     
  15. sleekdigital

    sleekdigital

    Joined:
    May 31, 2012
    Posts:
    133
    I expect it will work long enough to hold people over until Unity gets support for this built in. But my point was that you don't have to switch back to 4.6 as the same old integration still works in 5.x.
     
    Noisecrime likes this.
  16. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Oh, I agree with your point, but mine was that it's not going to be long before the legacy integration is too outdated in itself to be practical as a workaround for issues like this, hence my wishing that Unity VR support would be opened up.
     
  17. Jodon

    Jodon

    Joined:
    Sep 12, 2010
    Posts:
    434
    We had a strategy for rendering one view to the Oculus and another view to the main monitor: we rendered an ultra-wide view so that half of it appears on your desktop and half on the VR headset. It appears you can no longer do this with the Unity VR integration, since it takes over all aspects of your camera. Can we get support for achieving this? Here's an article I wrote after GGJ2015 which explains our setup and how we achieved it.

     
  18. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Another great use case that an increasing number of games and experiences are exploring. While it could be argued that in some cases networked machines (or even just networked Unity instances on the same machine) would solve this, in the example shown, where so much content/data is clearly shared by both views, it would definitely be nice to get native support so it can run on a single machine.

    However, I'd argue that rather than supporting the old style (though extended monitor usage is still quite important), perhaps a better solution would be to leverage the already-supported `VR.VRSettings.showDeviceView`: instead of showing the left eye's view, the function could be enhanced/overloaded to accept a camera reference or render texture, allowing developers to display any content on the monitor screen.
     
  19. thep3000

    thep3000

    Unity Technologies

    Joined:
    Aug 9, 2013
    Posts:
    400
    Regarding the original issue: L/R rendering masks are coming in 5.1.2p1 this week. I'll update with more details when it lands.

    Regarding the discussion on displaying different content on the main screen: we have been investigating this -- many of our platforms support some form of "multi-view", and a standard cross-platform way of doing this in Unity is being worked on now. This scenario is planned to be supported. I don't have a timeline, but will update when we know more.
     
  20. thep3000

    thep3000

    Unity Technologies

    Joined:
    Aug 9, 2013
    Posts:
    400
    5.1.2p1 was released today, but there was an issue with stereoscopic rendering that was caught too late to patch. So for now it is recommended that users stay on 5.1.2f1.

    Rendering different content in each eye is supported in 5.1.2p1 though. If you want to get a jump start using this feature, given the known monoscopic rendering issue, here is a sample project that works with 5.1.2p1+. p2 will officially support this feature, and we're aiming to have some documentation merged by 5.1.3f1's release.

    Details:
    - Camera now has a drop down for Target Eye: Both, Left, Right
    - If you want to render different objects to different eyes, use two identical cameras, but change the target eye of one to Left, the other to Right. You can then use different culling masks on each camera.
    - See attached project for an example.
    - Note that using two separate cameras with different target eyes like this is only recommended for special use cases like those outlined in this thread. You'll be giving up optimizations by rendering like this.
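    A minimal script version of that setup might look like the sketch below. It assumes the `Camera.stereoTargetEye` property that accompanies the new Target Eye dropdown; the layer names are placeholders you would define yourself.

```csharp
using UnityEngine;

// Sketch: configure two co-located cameras so each renders one eye with
// its own culling mask. Layer names "LeftEyeOnly"/"RightEyeOnly" are
// placeholders, not built-in layers.
public class StereoEyeCameras : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        leftCamera.stereoTargetEye  = StereoTargetEyeMask.Left;
        rightCamera.stereoTargetEye = StereoTargetEyeMask.Right;

        // Hide the other eye's layer from each camera.
        leftCamera.cullingMask  &= ~(1 << LayerMask.NameToLayer("RightEyeOnly"));
        rightCamera.cullingMask &= ~(1 << LayerMask.NameToLayer("LeftEyeOnly"));
    }
}
```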
     

    Attached Files:

  21. sleekdigital

    sleekdigital

    Joined:
    May 31, 2012
    Posts:
    133
  22. snowgundam

    snowgundam

    Joined:
    Apr 17, 2013
    Posts:
    7
    But in my Unity editor, when I check for updates in the Help menu, it shows "current version 5.1.2f1 is up to date". How do I get the 5.1.2p1 version? Thanks a lot!
     
  23. EdBlais

    EdBlais

    Unity Technologies

    Joined:
    Nov 18, 2013
    Posts:
    311
    Patch-releases (marked with a p, as with 5.1.2p1) are not automatically updated to and will not be found by the Updater. They can be found here: https://unity3d.com/unity/qa/patch-releases

    The reason for this is that the patches contain only a small number of bug fixes, and they come out every week. We do our best to make sure that patches are stable, but because there is only a week to make fixes and test, sometimes the fixes don't solve the entire problem.

    Official releases (marked with an f, as with 5.1.2f1) are stable and have been tested throughout the patch cycle. They include all the fixes from the patches of the previous official release as well as some additional fixes.
     
  24. snowgundam

    snowgundam

    Joined:
    Apr 17, 2013
    Posts:
    7
    @EdBlais: Thanks for your answer, and for the Unity staff's hard work fixing this issue! :)
     
  25. JVaughan

    JVaughan

    Joined:
    Feb 6, 2013
    Posts:
    23
    I'm glad this is being worked on. I've been integrating 3D video lately, and not having control over the eyes has made my job impossible in Unity 5.1.2f1. Thankfully there is already a patch, so we're in good shape!
     
  26. Taylor-Libonati

    Taylor-Libonati

    Joined:
    Jan 20, 2013
    Posts:
    16
    I've been trying out this feature today. It seems to work when I press Play, although Unity crashes when I stop playing in the editor. Standalone apps also crash on close. It seems to be fixed when I add a third camera that renders to both eyes. Still experimenting, though.
     
  27. EdBlais

    EdBlais

    Unity Technologies

    Joined:
    Nov 18, 2013
    Posts:
    311
    This is a known issue with the OVRPlugin and the Runtime provided by Oculus: http://forum.unity3d.com/threads/psa-5-1-3-known-issues.350644/

    This will be fixed soon. If you turn off MSAA or linear lighting it should fix the problem for now.
     
  28. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
    On the off chance that anyone is interested, here's a component and shader which renders different content (red and blue fill in this case) to each eye. Hope it helps.

    Attach the script to the Camera Rig's centerEyeAnchor and make sure that the shader is included in your project.

    **Note** that I haven't used this with the patch mentioned earlier in this thread; this was the solution I came up with before the issue was officially addressed.
     

    Attached Files:

    Last edited: Sep 23, 2015
    Untoldecay likes this.
  29. edwardrmiller

    edwardrmiller

    Joined:
    May 2, 2015
    Posts:
    8
    Could you clarify this for me? Am I creating a new camera, naming it CenterEyeAnchor, adding the script to it, and then adding the shader to a plane's texture within the scene? I'm specifically looking to use this with stereo video.

    Many thanks!
     
    Last edited: Sep 23, 2015
  30. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
    If you have added the Oculus OVRCameraRig to your scene, you'll see that it has a couple of sub-objects in the hierarchy, one of which is named "CenterEyeAnchor." You'll attach the script to that.

    The script will automatically load the shader - you don't have to create a material, etc. If the script is attached to the CenterEyeAnchor - again, part of the OVRCameraRig, not something you'd create - everything "Should Just Work."

    This is supposed to be a *VERY* basic example of showing different content for each eye.
     
  31. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
    If you want to use this for 3D video, and have a texture for each eye available, just Graphics.Blit() the appropriate eye texture in OnRenderImage() - you don't even need the shader. All you need to know is which eye is rendering. Note that, while Update() is called once per frame, OnPreRender() and OnRenderImage() are called twice per frame - once per eye. The example script I provided tracks which eye is currently being rendered with the m_currentEye private field. 0 == Left, 1 == right.
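    A sketch of that idea, assuming you already have per-eye textures populated from your video decoder (the texture fields and the left-first eye order are assumptions):

```csharp
using UnityEngine;

// Sketch: full-screen stereo video by blitting a per-eye texture.
// Assumes OnRenderImage runs twice per frame (left eye first) when this
// is attached to the eye anchor's camera; texture fields are placeholders.
public class StereoVideoBlit : MonoBehaviour
{
    public Texture leftEyeTexture;   // fed by your video decoder
    public Texture rightEyeTexture;

    private int m_currentEye;        // 0 = left, 1 = right

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Graphics.Blit(m_currentEye == 0 ? leftEyeTexture : rightEyeTexture, dest);
        m_currentEye = 1 - m_currentEye; // next call renders the other eye
    }
}
```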
     
    Untoldecay and edwardrmiller like this.
  32. edwardrmiller

    edwardrmiller

    Joined:
    May 2, 2015
    Posts:
    8
    Thanks for the support! This is incredibly useful for integrating stereo video into CG scenes, similar to what Kite and Lightning are doing for their VR projects.
     
  33. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
    Okay, I misunderstood. If you already have a quad or something in your scene, and you just want to texture it differently for each eye, then you were pretty close:

    Attach my script to the CenterEyeAnchor and, in the OnPreRender event, set the texture based on which eye is rendering. My note about Graphics.Blit() was based on the assumption that you wanted your video to occupy the whole screen.

    I hadn't seen the Kite and Lightning stuff before - looks pretty slick.
     
    Untoldecay likes this.
  34. JDMulti

    JDMulti

    Joined:
    Jan 4, 2012
    Posts:
    384
    With the native dual-camera setup in Unity, is it possible to have a CenterEyeAnchor-like solution? When I put both separate cameras in an empty game object, I don't know how to track the center between the two eyes. With the Oculus rig this was easy to do; with the native Unity dual-camera setup I have no clue.

    Does anyone have experience with this?
     
  35. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
    I haven't messed with the "native dual camera setup." However, it should be pretty simple to do this.

    The OVR Camera rig dynamically creates a camera for each eye at runtime. For each frame, the engine:

    1. Calls Update() for the scene
    2. Calls OnPreRender(), OnRenderImage() for each eye.

    My script just tracks which eye is currently being rendered, and adjusts rendering based on that. The variable "m_currentEye" indicates which eye is being rendered. In OnPreRender(), the line

    m_currentEye = 1 - m_currentEye;

    toggles the value of m_currentEye (0, 1, 0, 1, ...). That should be enough information for you to determine what to render and to which camera. In my case, I'm using a very simple shader that just fills the texture with red or blue.

    I imagine that the native dual camera setup works in a similar fashion - two *Render() calls for each *Update(). If so, this script should serve as a usable starting point.

    I'm attaching another version with the OVR references removed, since it wasn't being used in this case anyway, and the shader. Shader is unchanged from previous post.

    If this isn't clear, feel free to email me (david lively at gmail, no spaces).
     

    Attached Files:

    Poppel likes this.
  36. IsaacChan

    IsaacChan

    Joined:
    Oct 14, 2015
    Posts:
    10
    I am new to Unity and found this topic via Google. Unity versions update so fast that some of the information I find online is completely useless, even though it's only a year old.
     
  37. vilcans

    vilcans

    Joined:
    Mar 31, 2015
    Posts:
    9
    I found this thread after looking for ways to render stereoscopic textures. I would be happy with accessing the eye index in the shader (as @thep3000 suggested back in June). That would be great for performance too. Will it appear any time soon?
     
  38. JDMulti

    JDMulti

    Joined:
    Jan 4, 2012
    Posts:
    384
    Is there any support for enabling and disabling head tracking, other than pulling out the USB cable of the Oculus tracking camera? I can't find anything other than InputTracking.Recenter() to call in the Update function, but I guess that's not a good idea.
     
  39. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
  40. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
    I haven't seen built-in support for this, but see my example script and shader above. It's pretty easy to track which eye is rendering yourself. (Really - it's 5-10 lines of code and a global shader uniform.)
     
  41. vilcans

    vilcans

    Joined:
    Mar 31, 2015
    Posts:
    9
    Yup, that's basically what I ended up doing. It would be better with a shader variable though, as then we'd only need one camera, and Unity only has to do frustum culling once.
     
  42. davidlively

    davidlively

    Joined:
    Aug 1, 2015
    Posts:
    12
    One thing that I noticed: if you're running in the editor with the game view open, you may get three renders per update instead of two.
     
  43. jerrytian

    jerrytian

    Joined:
    Feb 27, 2016
    Posts:
    5
    Update: I just found that when moving your head too quickly, the matrix value is not stable enough to reliably indicate which eye is rendering.

    Maybe this is documented elsewhere, but for the moment I cannot find any definitive answer on it.

    The first step is to find out which eye is rendering. @davidlively's script gave me a hint, and after adding log output and playing with it a lot, there is a simple way.

    First, hook the OnPreRender event of the CameraRig game object; then check the worldToCameraMatrix member (a Matrix4x4) of the camera passed in.

    When rendering the left eye, the matrix's m03 member will be negative, and vice versa.
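    In script, that check could look like the sketch below. The sign convention (left eye => negative m03) is this post's empirical observation and may vary between SDK/runtime versions; the `_EyeIndex` uniform name is also just a placeholder.

```csharp
using UnityEngine;

// Sketch: infer the current eye from the view matrix translation, per the
// observation above. The sign convention is empirical, not documented.
public class EyeFromViewMatrix : MonoBehaviour
{
    void OnPreRender()
    {
        Camera cam = Camera.current;
        bool isLeftEye = cam.worldToCameraMatrix.m03 < 0f;
        // "_EyeIndex" is a hypothetical uniform for shaders to branch on.
        Shader.SetGlobalFloat("_EyeIndex", isLeftEye ? 0f : 1f);
    }
}
```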
     
    Last edited: Apr 18, 2016
  44. dm_reflekt

    dm_reflekt

    Joined:
    Mar 20, 2013
    Posts:
    8
    Hi thep3000. Currently I am using the two-cameras solution with different culling masks and target eyes. However, I would like to benefit from Unity's VR stereo optimisations. Are there plans to support this, for example by having a culling mask per eye or some other solution?

    Also, if I use davidlively's solution (a global shader int), will I still get the optimisations?
     
  45. travis-cossairt-uo

    travis-cossairt-uo

    Joined:
    Jun 9, 2016
    Posts:
    5
    I've successfully got a stereo 360 video playing on the Gear VR using the two-cameras-with-culling-masks solution described earlier. However, I notice quite a bit of ghosting/crosstalk around some edges that I do not see for the same video in the Oculus native video player. Any suggestions on how to diagnose/debug this? Both of my cameras are at (0,0,0), each set to one specific eye, culling out all but the one sphere playing the 360 video. I'm tiling my video with 0.5 tiling (and a 0.5 offset for the lower half of the video).
     
  46. Gabriel_rise

    Gabriel_rise

    Joined:
    Aug 17, 2016
    Posts:
    2
    Hi, I've created a stereo 360 video player for Gear VR using culling masks and two spheres with a 0.5 offset for one video. I'm using the Easy Movie Texture plugin. It's not working for 4K videos on my Samsung S6: it will only play video on one sphere, and the other looks black. So I had to use two 2048x1024 videos, one for each sphere, but they play out of sync. Has anyone had the chance to do something similar? Thanks.
     
  47. Yesitsdave

    Yesitsdave

    Joined:
    Mar 19, 2014
    Posts:
    9
    I'm seeing the same issue. Did you ever figure this out?
     
  48. Yesitsdave

    Yesitsdave

    Joined:
    Mar 19, 2014
    Posts:
    9
    In case anyone encounters this issue, it turned out to be having the eye images swapped.
     
    behram likes this.
  49. Selzier

    Selzier

    Joined:
    Sep 23, 2014
    Posts:
    652
    Is there any way to render different content to each eye in Unity 5.6 native VR modes, such as GearVR, Daydream, Cardboard?
     
  50. JDMulti

    JDMulti

    Joined:
    Jan 4, 2012
    Posts:
    384
    Yes, there is: if you create a VR dual-camera setup, you can use culling masks to hide certain content from certain cameras. You only need to set, per camera, which eye is being rendered. However, note that for now it's not recommended to use the Vulkan API, as it has its flaws at the moment; this should be fixed soon.

    If you want some help with this, I could help you out. You don't need a shader, however, to get what you described working.