
Project DK2² // Oculus - VR/AR

Discussion in 'AR/VR (XR) Discussion' started by Frankster, Sep 9, 2015.

  1. Frankster

    Frankster

    Joined:
    May 6, 2013
    Posts:
    12
    Maybe it's a dumb question, but I'm rather new to Unity development and Oculus/VR ... The Oculus package gives me a default controller which is pretty easy to use. Just drop it into the scene and done ... beautiful VR :D Anyway, I'm wondering if having 2 cameras (because of stereo 3D) means that everything has to be calculated/rendered twice, which would also mean that in the end I'll have to be super careful with draw calls, vertex counts, camera filters etc. So is it really the case that everything must be calculated/rendered twice, or is this nonsense? I know this is actually a super basic question, but I was googling like crazy and neither the Unity forums nor the Oculus forums could help me so far. So thanks a lot in advance! (I can't use the Profiler because I don't have Pro.)
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    "everything has to be calculated/rendered twice"

    Yes and no. With the native VR support in Unity there are many things that are only calculated once, like real-time shadow maps, visibility, and LODs. However, everything does get rendered twice. There's a ton of work going on right now on the side of the graphics engine, the graphics APIs, the graphics drivers, and the GPU hardware to figure out how to do stereo rendering as efficiently as possible. The straightforward, dumb method (what everyone was doing like 6 months ago) was just to have two cameras in the correct positions and render everything twice as two back-to-back renders. More recently there's been work on issuing a single draw call that still renders to both eyes, perhaps at the same time. I think Unity sits somewhere in between: a lot of the obvious things are done once, but the draw calls are still doubled.
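
    To make that "two cameras, back-to-back renders" approach concrete, here's a rough sketch of what such a rig looks like in Unity. This is just an illustration of the brute-force method, not what the native VR integration does internally, and the field names and IPD value are placeholders:

    Code (CSharp):
    using UnityEngine;

    // Rough sketch of the brute-force stereo approach: two cameras offset by
    // half the eye separation, each rendering the whole scene into its own
    // half of the screen. Every draw call happens once per eye.
    public class SimpleStereoRig : MonoBehaviour
    {
        public Camera leftEye;
        public Camera rightEye;
        public float eyeSeparation = 0.064f; // ~64 mm IPD, placeholder value

        void Start()
        {
            // Offset each eye camera along the rig's local X axis.
            leftEye.transform.localPosition  = new Vector3(-eyeSeparation * 0.5f, 0f, 0f);
            rightEye.transform.localPosition = new Vector3( eyeSeparation * 0.5f, 0f, 0f);

            // Side by side: each eye gets half of the final render target.
            leftEye.rect  = new Rect(0.0f, 0f, 0.5f, 1f);
            rightEye.rect = new Rect(0.5f, 0f, 0.5f, 1f);
        }
    }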

    So yes, keeping draw calls and vertex counts low is somewhat important, though mostly draw calls, as modern hardware is kind of stupid fast when it comes to high poly counts for static geometry. Camera filters matter too, but not really because they're rendered twice. They are, but since each pass only covers half the screen they aren't much worse than a single full-screen view. Where camera filters become problematic is that most of them are pretty expensive and already eat up a few precious ms, and for VR you're rendering at a higher resolution than your average 1080p desktop display, even on the 1080p screen the DK2 uses. Because of the lens distortion in VR you generally want to render at a slightly higher resolution to keep the center pixels from getting blurry.
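
    Since that extra resolution is a knob you control, here's a quick sketch of where it lives with Unity's native VR integration (assuming a version that exposes UnityEngine.VR.VRSettings; 1.3 is just an example value, tune it against your frame budget):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.VR;

    // Raises the per-eye render target above the panel's native resolution to
    // compensate for the lens distortion blurring the center of the view.
    public class RenderScaleSetup : MonoBehaviour
    {
        [Range(0.5f, 2.0f)]
        public float renderScale = 1.3f; // 1.0 = panel resolution per eye, >1.0 supersamples

        void Start()
        {
            VRSettings.renderScale = renderScale;
        }
    }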

    So those bloom, screen space ambient occlusion, motion blur, and god ray filters that take ~4 ms in your 60 fps (16.66 ms) 1080p PC game aren't a big deal there, but they'll now cost more than ~8 ms on your 90 fps (11.11 ms) Oculus CV1 or Vive, whose 2160x1200 panel is rendered at roughly 2808x1560. Hopefully those are some really pretty, flat triangles you're rendering in the ~3 ms you have left.
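
    If you want to keep an eye on that budget while testing, a trivial watcher like this (my own helper, nothing official) makes the math visible at runtime:

    Code (CSharp):
    using UnityEngine;

    // Logs a warning whenever a frame exceeds the per-frame budget:
    // 1000 ms / 90 fps = ~11.11 ms, so ~8 ms of image effects leaves ~3 ms.
    public class FrameBudgetWatcher : MonoBehaviour
    {
        public float targetFps = 90f;

        void Update()
        {
            float budgetMs = 1000f / targetFps;              // ~11.11 ms at 90 fps
            float frameMs  = Time.unscaledDeltaTime * 1000f; // last frame's real time

            if (frameMs > budgetMs)
                Debug.LogWarning(string.Format("Frame took {0:F2} ms, budget is {1:F2} ms", frameMs, budgetMs));
        }
    }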
     
  3. Frankster

    Frankster

    Joined:
    May 6, 2013
    Posts:
    12
    That helps! Thank you very much! Since we're talking about performance and "abilities" (in terms of what the limits are today), I'd like to share what we're actually heading for. We'd like to combine AR and VR with this self-made setup (see picture below). The 2 cameras give us 60 fps at small HD (1280x720). We have to zoom in a bit to get the full FOV inside the Oculus, but the quality is still OK.

    Since we are more or less completely new to this whole VR/interactive world, I'd like to get an opinion from a more experienced developer (if he/she happens to read this). In the end we want to combine both VR and AR in one interactive(!) application. We're aware that we'll have to calibrate and undistort the video footage from our cameras (we've got this working so far, but only for the camera footage). Our next goal is to combine the "Oculus VR world" with the "real footage". But this should actually "just" be the framework/foundation for an interactive application in which we walk through our office, showing animated geometries/high-poly models we made, just like being in a virtual CG studio. I have no idea whether this might be too much for current graphics cards/CPUs (but our professional background is CG/3D computer graphics, so we do own pretty good hardware). We also want to use OpenCV to develop a (SLAM-like) tracking routine so that we can "walk" through the office without using the Oculus tracking camera.

    So in the end we'll have to combine: 1. realtime VR, 2. realtime AR (camera undistortion + point-cloud tracking), plus the user-facing framework itself on top. Is this as crazy as it sounds? Maybe ... but it's great fun anyway. This whole new VR media stuff makes us feel like being 16 again when "Toy Story" came out, and we love discovering every part of it.
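
    For the "real footage inside the headset" step, this is roughly the direction we're experimenting with in Unity (just a sketch of the idea, not our working code; the device name, the quad setup and the separate OpenCV undistortion are placeholders/assumptions):

    Code (CSharp):
    using UnityEngine;

    // Puts the feed of one USB camera onto a quad parked in front of one eye.
    // One instance per camera/eye; undistortion and tracking would happen elsewhere.
    public class PassthroughFeed : MonoBehaviour
    {
        public string deviceName;   // name of the left or right USB camera; empty = default device
        public Renderer targetQuad; // quad positioned in front of the corresponding eye camera
        public int width = 1280;
        public int height = 720;
        public int fps = 60;

        WebCamTexture feed;

        void Start()
        {
            feed = new WebCamTexture(deviceName, width, height, fps);
            targetQuad.material.mainTexture = feed;
            feed.Play();
        }

        void OnDisable()
        {
            if (feed != null)
                feed.Stop();
        }
    }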
     

    Attached Files:

    Last edited: Sep 14, 2015