
How to do foreground/background cameras with VR?

Discussion in 'AR/VR (XR) Discussion' started by JoeStrout, May 14, 2018.

  1. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    In High Frontier, in any outside space view, we use two cameras: a background camera that renders nearby planets/moons (as well as the Sun) at a greatly reduced scale, and a foreground camera (set to Don't Clear) that renders all the nearby objects at normal scale. This ensures that nearby objects (e.g. a spaceship) properly occlude background objects (planet, moon) even though, in fact, they might all be occupying the same mathematical space.
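
    In code, that desktop setup boils down to something like this (a rough sketch; the camera fields and the "Background" layer name are just illustrative, not the actual High Frontier code):

    Code (CSharp):
    using UnityEngine;

    public class TwoCameraSetup : MonoBehaviour
    {
        public Camera backgroundCamera; // planets/moons at reduced scale
        public Camera foregroundCamera; // nearby objects at normal scale

        void Awake()
        {
            // Background camera renders first and clears to the skybox.
            backgroundCamera.depth = 0;
            backgroundCamera.clearFlags = CameraClearFlags.Skybox;
            backgroundCamera.cullingMask = LayerMask.GetMask("Background");

            // Foreground camera renders second; "Don't Clear" keeps the
            // background pixels, so near objects always draw over far ones.
            // (Clearing depth only, CameraClearFlags.Depth, is the common
            // alternative if the two scenes' depths conflict.)
            foregroundCamera.depth = 1;
            foregroundCamera.clearFlags = CameraClearFlags.Nothing;
            foregroundCamera.cullingMask = ~LayerMask.GetMask("Background");
        }
    }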

    Now I'm thinking about doing something similar in VR (for Go). But I don't understand how to do that, as the cameras in VR are weird and not entirely under my control.

    Anybody know how to set up this sort of foreground/background camera setup in VR? Or is there an entirely different approach I should take instead?
     
  2. Arkade

    Arkade

    Joined:
    Oct 11, 2012
    Posts:
    654
    Hey Joe
    I don't know exactly how to do this, but (a) I'm interested to know too, and (b) there's one thing I'd check you're thinking about. First I'll check my understanding of what you're saying.

    You might have a 2 m tall astronaut in the foreground and a 1.2 m diameter planet in the background that you wish to appear as if it were 12 Mm away (12 thousand kilometres -- I dunno :D). About right?

    So the thing I can offer is setting the scale. I allow my players to change scale in my Cardboard/DayDream app and, after a couple of bugfixes, the GoogleVR libraries handled this fine. It correctly changed the interpupillary distance (e.g. such that the world 'felt' larger when the player was small). So at the very least, I'd imagine you'd have to set the player's scale for the planet render to an appropriately tiny proportion to ensure the depth is correct for your planet. Obviously these numbers raise red flags about numerical accuracy (which I ran into with mine), so you'd likely have to fiddle a bit to ensure nothing wonky happened!
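
    To illustrate with made-up numbers: an Earth-sized planet (12,742 km across) modeled at 1.2 m implies a scale of roughly 1.2 / 12,742,000 ≈ 1e-7. A minimal sketch of the scale change (assuming your rig's cameras sit under one parent transform):

    Code (CSharp):
    using UnityEngine;

    public class RigScaler : MonoBehaviour
    {
        public Transform cameraRig; // parent of the VR camera(s)

        // Uniformly scaling the rig's parent scales head motion and the
        // effective interpupillary distance together, which is what makes
        // the world 'feel' bigger or smaller.
        public void SetWorldScale(float scale)
        {
            // e.g. SetWorldScale(1e-7f) for the planet render; beware
            // float precision at scales this extreme.
            cameraRig.localScale = Vector3.one * scale;
        }
    }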

    I guess the obvious question to ask is whether you've done the obvious -- "just try it"? It /might/ be that two cameras set as VR and one with Don't Clear set will just do what you want!

    Look forward to hearing how it all goes :)
     
  3. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    I haven't tried it. You're right, I should do that!

    I will say, though, that the scale itself isn't the problem — it's getting the layering right. If you just scale your planet down to 1.2 m diameter and put it in the background, all visible to the same camera, then there's a good chance that when the astronaut stretches his hand out, he's actually going to poke his fingers through the planet. Oops! Illusion totally broken. This is the problem that forced me to resort to two cameras in High Frontier.
     
  4. Arkade

    Arkade

    Joined:
    Oct 11, 2012
    Posts:
    654
    Yeah, you say scale isn't the problem now, but you know with stereoscopic vision the user can 'see' depth. So merely layering isn't enough ... if you 'see' what I mean (I know, 'deep', right? :p)
     
  5. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Yes, that's a good point. Though the user can only see depth out to about 20 m or so, you could fake it, just not with the extreme example we were imagining above. Put it at something like 100 m out, and I bet the user wouldn't know the difference.
     
    Arkade likes this.
  6. Arkade

    Arkade

    Joined:
    Oct 11, 2012
    Posts:
    654
    Interesting. Fair enough, I don't know what the limits are. Is that a known number or a guess?

     
  7. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    The 20 m is a known number (my background is in visual perception). The 100 m is just expanding that generously, because why not? (And there will always be some people whose perception is a bit better or worse than average.)
     
    Arkade likes this.
  8. solidearthvr

    solidearthvr

    Joined:
    Jan 23, 2017
    Posts:
    50
    Hi Joe,

    Why not just use a HDR skybox for the background image?
     
  9. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Because it's not an image. It's a model — a procedurally-generated planet that rotates, has weather, day/night cycle, etc.
     
  10. Arkade

    Arkade

    Joined:
    Oct 11, 2012
    Posts:
    654
    Oh nice. Wow, with that background, I bet your VR game will rock the viewers' socks off! I'll look forward to it! (Erm, man, it's hard not to make bad 'vision' puns all the time when talking about VR!)

    I'll have to evaluate my stuff to see what I might be able to fiddle with. I guess that number informs LOD settings quite a lot!
     
  11. Innovine

    Innovine

    Joined:
    Aug 6, 2017
    Posts:
    522
    You can see depth at hundreds of meters when you include parallax. I don't know about Unity, but UE4 switches to mono rendering beyond 750 m.

    To avoid poking through the scene, you just render the distance camera first, then the near camera without clearing the frame buffer. Everything near will be drawn on top, no problem. This works fine, but be aware that you can't change the FOV of the distance camera; you need to move it closer/further and change its scale to get the desired size.
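
    For example, keeping the angular size right as you move it looks something like this (a sketch; the class and the "real" numbers are made up, roughly Earth seen from the Moon's distance, and it assumes a mesh 1 unit across at scale 1):

    Code (CSharp):
    using UnityEngine;

    public class ApparentSizeKeeper : MonoBehaviour
    {
        public Transform viewer;                // e.g. the distance camera
        public float realDiameter = 12742000f;  // metres
        public float realDistance = 384400000f; // metres

        void LateUpdate()
        {
            // Wherever we actually place the object, keep the same
            // diameter/distance ratio so its angular size is unchanged.
            float fakeDistance = Vector3.Distance(viewer.position, transform.position);
            float fakeDiameter = realDiameter * (fakeDistance / realDistance);
            transform.localScale = Vector3.one * fakeDiameter;
        }
    }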

    You might also want to look into putting the distant scene on its own layer. The accuracy of lights depends on the camera draw distance, so reducing this for the near camera gets you crisp, detailed shadows in your cockpit. The disadvantage is that your lights won't cast shadows across the layers, so you need two light sources and a fake ship model in the distant scene which doesn't render but only casts shadows, to throw a shadow onto the distant scenery.
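
    In Unity, the "doesn't render but only casts shadows" part maps to the Renderer's Shadows Only mode; a sketch:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class ShadowProxy : MonoBehaviour
    {
        void Awake()
        {
            // Invisible to cameras, but still casts shadows onto the
            // distant scenery; attach to the duplicate ship model on
            // the distant-scene layer.
            foreach (Renderer r in GetComponentsInChildren<Renderer>())
                r.shadowCastingMode = ShadowCastingMode.ShadowsOnly;
        }
    }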

    I think maybe you guys need to actually try out some scenes with multiple cameras before thinking some more about it.
     
    Arkade likes this.
  12. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Rendering the distance camera first, then the near camera without clearing the frame buffer, is exactly what this thread is about. How exactly do you do that in VR? I don't seem to have control over the camera setup in VR — or if I do, I don't know how. Here's my current setup:

    [attached image: upload_2018-5-20_14-56-18.png (screenshot of the OVR camera rig hierarchy)]

    Do I duplicate the entire camera rig? Do I duplicate just the LeftEyeAnchor and RightEyeAnchor cameras within this rig? What about the CenterEyeAnchor?

    I can't find any documentation on how all these things actually work, so it's hard to know how to approach something like this.
     
  13. Innovine

    Innovine

    Joined:
    Aug 6, 2017
    Posts:
    522
    Click on the actual camera, not the components of the camera rig. In the inspector there's a Depth setting. Lower numbers are rendered first.
    There's a "culling mask" where you can get the camera to only render a particular layer, which is also very useful. There's an option to clear the frame buffer or just the depth buffer too.

    To make a second camera, just create a camera. It moves automatically, at least for me. I don't bother with any complex camera rig stuff. Just enabling OpenVR in the Project Settings/Player menu and using a single camera seems to work just fine for me. If I'm missing out on something, someone please let me know.


    In short, I do this (sketched in code below):
    Create two cameras, under an Empty.
    Change the depth for each camera.
    Put my near scene on one layer, and my distant scene on another.
    Change culling masks so cameras show their intended layer only.
    Enable OpenVR in the player settings.

    Done. Working multi-camera VR scene.
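
    In script form it's roughly this (a sketch; the layer names are assumptions, and OpenVR itself is still enabled in the Player settings, not from here):

    Code (CSharp):
    using UnityEngine;

    public class MultiCameraVRSetup : MonoBehaviour
    {
        void Awake()
        {
            // Two cameras under this empty; with VR enabled they both
            // follow the HMD automatically.
            Camera farCam = new GameObject("FarCamera").AddComponent<Camera>();
            Camera nearCam = new GameObject("NearCamera").AddComponent<Camera>();
            farCam.transform.SetParent(transform, false);
            nearCam.transform.SetParent(transform, false);

            farCam.depth = 0;                             // renders first
            farCam.clearFlags = CameraClearFlags.Skybox;
            farCam.cullingMask = LayerMask.GetMask("DistantScene");

            nearCam.depth = 1;                            // renders second
            nearCam.clearFlags = CameraClearFlags.Depth;  // keep far pixels
            nearCam.cullingMask = LayerMask.GetMask("NearScene");
        }
    }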

    The lighting and shadowing take a few more steps, but I have a UI camera and a HUD system camera on top of this and it's working for me. I haven't considered other VR hardware than my Vive yet, but I don't expect that to be much trouble :)
     
    Last edited: May 21, 2018
  14. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Sorry, which actual camera are you talking about? There are six in my current setup. Three of them are for in-editor testing, so I guess we can ignore those, but which of the three cameras that are part of the OVR rig do you mean?

    Yes, I know all that. Did you read the first post in the thread? I have used this technique in desktop apps in the past. I understand how it works. I just don't understand how to apply it to the VR rig.

    Really? Are you saying that the three cameras (Left Eye, Right Eye, and Center, which is the one that has the fade component attached and I think is also used for ray-casting) used in all the Oculus VR samples aren't actually needed? But if so, why are they there?

    Well, it could be our confusion comes from differences between the Vive SDK and the Oculus SDK. And then of course Unity's layering their XR stuff on top, which should eventually reduce confusion, but initially adds to it. :)
     
  15. Innovine

    Innovine

    Joined:
    Aug 6, 2017
    Posts:
    522
    No idea. A single unity camera works fine for me. Let me know your results.
     
  16. Arkade

    Arkade

    Joined:
    Oct 11, 2012
    Posts:
    654
    Ah, that's interesting. GoogleVR used to use multiple cameras, but when Unity-integrated VR came in, we had to remove the extra cameras. I wonder how this differs between the different VR platforms? From @Innovine's comment, it sounds like OpenVR is similar to GoogleVR. It'd seem very curious and rather annoying if one VR platform weren't integrated in the same way by Unity (making me wonder if we're missing something).

    @JoeStrout are you 100% sure that you still need all that stuff? (Might I humbly suggest a quick test in a clean project to validate?)
     
  17. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    No, I'm not certain at all. I will try to do that clean-sheet test today.
     
  18. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    OK! So it turns out the complex three-camera rig is not needed for Oculus Go either. I made a simple scene with just a single camera, SDK set to Oculus Go, and camera set to Target Eye = Both. It works fine. (And now I guess I need to go through my starter project, which was adapted from the Oculus samples, and simplify it.)

    However, when I tried a layered camera setup, it didn't work. Here's the setup:
    • Background camera with Depth=0, Clear=Skybox, and Culling Mask set to draw only the Background layer.
    • Main camera with Depth=1, Clear=Don't Clear, and Culling Mask set to draw everything except Background.
    • Big ol' sphere (a "background planet") set to layer Background. Plane and cube set to layer Default.
    When I run this within the editor, it works perfectly: the big sphere and skybox appear behind the plane and cube (even though, geometrically speaking, the sphere actually intersects the plane).

    But when I build and run on Oculus Go, I see only the sphere; the plane and cube are invisible. It's as if it's showing me only the Background camera, and not rendering the Main Camera at all. :(

    Anybody have any ideas at this point?
     
  19. Innovine

    Innovine

    Joined:
    Aug 6, 2017
    Posts:
    522
    Is it possible the camera is pointing the wrong way? For me, all cameras set to "Both" move according to the HMD transform, and so ignore the initial rotation you give them. Maybe yours jumps to a weird angle when you put the HMD on. Try adding objects around the camera in different directions.

    Other thoughts include checking that the far clipping plane distance is adequate.
     
  20. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    I'm pretty sure it's not just looking in the wrong direction, since with my setup I'd be able to see at least the plane (and I did look around anyway).

    The near/far clipping planes are quite adequate.
     
  21. Martin_H

    Martin_H

    Joined:
    Jul 11, 2015
    Posts:
    4,436
    Not really; it's frustrating how little info there seems to be out there for the Go or even Gear VR. As you might have noticed, I started going through your threads to see if I can find some useful stuff, since I'm working with the same platform at the moment. I believe the 2 extra cameras on the OVR rig are fallbacks for when single-pass stereo rendering isn't available on a device, but I'm not sure.

    What I would like to know is where I can put camera image effects, or if those are even supported at all in single-pass stereo VR? But I don't want to derail your thread. As far as I could find out so far, it's not possible with the official post FX stack at the moment, and the hardware would struggle a lot with it anyway.

    Have you tried looking on the Oculus developer forums? I have the impression their staff is much more active there than here.
     
  22. RustleJ

    RustleJ

    Joined:
    Mar 26, 2018
    Posts:
    1
    I'm doing the same thing with a Vive, but I have a different problem. I had your problem at first, though, and managed to fix it.

    SteamVR has an object labelled Camera Rig with a child called Camera (head), and a child in that called Camera (eye). The confusing thing was that the Camera (head) object has a camera attached to it as well as the Camera (eye). Any settings I change on the head are overwritten on play, and the child/parent relationship changes so that Camera (eye) becomes the parent (with Camera Rig staying the overall parent). I had to make sure I changed the Camera (eye) settings only.

    I then duplicated it and chucked in the scaling script that works on a normal setup. I can see both layers fine doing that. The thing to check is whether settings on the cameras are resetting on play. That might sound a little too simple, but it took me an embarrassingly long time to realize...
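
    In script form, the "change Camera (eye) only" fix looks something like this (a sketch; the object name matches the default [CameraRig] prefab, and the layer name is an assumption):

    Code (CSharp):
    using UnityEngine;

    public class EyeCameraConfig : MonoBehaviour
    {
        void Start()
        {
            // Settings applied to Camera (head) get overwritten on play,
            // so find and configure the eye camera specifically.
            GameObject eyeObj = GameObject.Find("Camera (eye)");
            if (eyeObj != null)
            {
                Camera eyeCam = eyeObj.GetComponent<Camera>();
                eyeCam.clearFlags = CameraClearFlags.Depth;          // layered setup
                eyeCam.cullingMask = LayerMask.GetMask("NearScene"); // assumed layer
            }
        }
    }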