Accelerate Multi-Cam rendering with VR techniques?

Discussion in 'General Graphics' started by sean_virtualmarine, Mar 1, 2019.

  1. sean_virtualmarine

    Joined:
    Dec 1, 2017
    Posts:
    11
    Hey everyone,

    I'm designing for a rather demanding hardware setup where I've got more than 180° worth of screens rendering all the time.

    I've been creating one Unity window that spans all the screens (5 x 1920x1080) and assigning a camera to each screen. This works, but it means my scene is rendered five times every frame, which gets pretty expensive.
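
    For reference, the setup looks roughly like this (a minimal sketch; the yaw spacing and FOV numbers are illustrative assumptions, not my exact values):

    Code (CSharp):
    // Minimal sketch of the current approach: five cameras, each rendering
    // one fifth of a single window spanned across 5 x 1920x1080 displays.
    using UnityEngine;

    public class SpannedWindowCameras : MonoBehaviour
    {
        public Camera[] cameras = new Camera[5]; // assigned in the inspector

        void Start()
        {
            // Each slice is 1920x1080, so the per-camera aspect is 16:9.
            const float aspect = 16f / 9f;
            // Horizontal FOV per camera; 5 x 45 degrees covers a 225 degree arc.
            const float hFovDeg = 45f;
            // Unity's Camera.fieldOfView is vertical, so convert.
            float vFovDeg = 2f * Mathf.Rad2Deg *
                Mathf.Atan(Mathf.Tan(hFovDeg * Mathf.Deg2Rad * 0.5f) / aspect);

            for (int i = 0; i < cameras.Length; i++)
            {
                cameras[i].rect = new Rect(i / 5f, 0f, 1f / 5f, 1f); // 1/5 slice
                cameras[i].fieldOfView = vFovDeg;
                // Fan the cameras out around the viewer.
                cameras[i].transform.localRotation =
                    Quaternion.Euler(0f, (i - 2) * hFovDeg, 0f);
            }
        }
    }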

    My question is: Is there a way to leverage any of the work that's gone into VR with double-wide RenderBuffers, stereo instancing, etc., and expand it out to more than two eyes' worth of screens?

    Ideally I'd have only one camera, but going that high on FOV makes for a warped image.
     
  2. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    There is the stereo rendering option that recent hardware offers, but I'm not sure whether it is exposed in Unity. They use it for VR themselves, but I don't know whether you can let just any two cameras use that technique.

    Instead of just one camera, you could go for two and then transform those two RenderTextures into the five actual views. Combined with single-pass stereo, this would remove a lot of draw call overhead compared to rendering five times. But even without that, just rendering twice and reprojecting should offer a great reduction in draw calls.
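
    As a rough sketch of that idea (the camera names, texture sizes, and the way displays are split between the two textures are all assumptions):

    Code (CSharp):
    // Sketch: render the scene twice into wide RenderTextures, then let
    // cheap per-display quads sample from them instead of re-rendering.
    using UnityEngine;

    public class TwoCameraReprojection : MonoBehaviour
    {
        public Camera wideLeft;          // covers roughly the left half of the arc
        public Camera wideRight;         // covers roughly the right half
        public Material[] displayQuads;  // one material per physical display mesh

        void Start()
        {
            var leftRT = new RenderTexture(4096, 1080, 24);
            var rightRT = new RenderTexture(4096, 1080, 24);
            wideLeft.targetTexture = leftRT;
            wideRight.targetTexture = rightRT;

            // Each display quad samples whichever texture covers its view;
            // here the first half use the left texture, the rest the right.
            // A display straddling both textures is where seams show up.
            for (int i = 0; i < displayQuads.Length; i++)
                displayQuads[i].mainTexture =
                    (i < displayQuads.Length / 2) ? leftRT : rightRT;
        }
    }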
     
  3. pdehn

    Joined:
    Oct 17, 2013
    Posts:
    6
    I've been using the "one camera reprojected onto multiple displays" approach recently and can vouch that it made a big difference in performance (it also solved some issues with using off-axis view matrices, but that's its own problem).

    [Attached image: multidisplay_setup.png]
    Basically, I fit a single camera so that its frustum covers multiple displays and render it to a texture. Each real display then gets a "dummy camera" that doesn't render anything in the scene; I just render a mesh to each of those cameras that copies the appropriate region of the render texture.
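
    In sketch form (the layer name, region values, and use of the built-in texture tiling/offset are assumptions; this is the straight linear mapping, with the distortion caveat further down):

    Code (CSharp):
    // Sketch of one display's "dummy camera": it culls out the real scene and
    // only sees a quad whose material shows a sub-region of the shared texture.
    using UnityEngine;

    public class DisplaySlice : MonoBehaviour
    {
        public Camera dummyCamera;      // renders only the quad, not the scene
        public Renderer quad;           // fills this camera's view
        public RenderTexture sharedRT;  // what the real scene camera renders into
        public Rect region = new Rect(0f, 0f, 0.2f, 1f); // which part to show

        void Start()
        {
            dummyCamera.cullingMask = LayerMask.GetMask("DisplayQuads");
            var mat = quad.material;
            mat.mainTexture = sharedRT;
            // Built-in tiling/offset picks the region out of the texture.
            mat.mainTextureScale = new Vector2(region.width, region.height);
            mat.mainTextureOffset = new Vector2(region.x, region.y);
        }
    }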

    Optimizing camera placement is one problem: my possible display arrangements (ex. shown above) were fairly simple to solve for, but the solutions don't generalize. I'd guess 2-3 cameras could cover your displays if they're laid out in a single row/column, maybe 1 if it's something like two rows of two/three displays.

    The tricky bit with reprojection is that a straight linear mapping from the quadrilateral region of the render texture to the view space is going to be distorted. In my case, a grid mesh with ~20x10 verts/UVs isn't noticeably distorted, but something like a UV lookup texture might give better results in other setups.
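
    A sketch of how such a grid could be generated: each vertex tiles the display camera's view, and its UV comes from projecting the matching world point back through the source camera. All the names here are assumptions, and you'd scale/position the object so the grid fills the dummy camera's view:

    Code (CSharp):
    // Build a grid mesh whose UVs reproject the source camera's image
    // into this display's frustum, approximating the nonlinear mapping.
    using UnityEngine;

    [RequireComponent(typeof(MeshFilter))]
    public class ReprojectionGrid : MonoBehaviour
    {
        public Camera sceneCamera;   // the wide source camera
        public Camera displayCamera; // defines this physical display's frustum
        public int cols = 20, rows = 10;

        void Start()
        {
            var verts = new Vector3[(cols + 1) * (rows + 1)];
            var uvs = new Vector2[verts.Length];

            for (int y = 0; y <= rows; y++)
            for (int x = 0; x <= cols; x++)
            {
                int i = y * (cols + 1) + x;
                float u = (float)x / cols, v = (float)y / rows;
                // World point on a plane in front of the display camera...
                Vector3 world = displayCamera.ViewportToWorldPoint(
                    new Vector3(u, v, 10f)); // z = distance from camera
                // ...projected back into the source camera gives the UV.
                Vector3 src = sceneCamera.WorldToViewportPoint(world);
                uvs[i] = new Vector2(src.x, src.y);
                verts[i] = new Vector3(u - 0.5f, v - 0.5f, 0f); // flat unit grid
            }

            var tris = new int[cols * rows * 6];
            int t = 0;
            for (int y = 0; y < rows; y++)
            for (int x = 0; x < cols; x++)
            {
                int i = y * (cols + 1) + x;
                tris[t++] = i; tris[t++] = i + cols + 1; tris[t++] = i + 1;
                tris[t++] = i + 1; tris[t++] = i + cols + 1; tris[t++] = i + cols + 2;
            }

            var mesh = new Mesh();
            mesh.vertices = verts;
            mesh.uv = uvs;
            mesh.triangles = tris;
            GetComponent<MeshFilter>().mesh = mesh;
        }
    }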
     
  4. sean_virtualmarine

    Joined:
    Dec 1, 2017
    Posts:
    11
    Thanks for your replies!

    My challenge is more that I'm inside a cockpit looking outwards. So, rather than having all the cameras pointed in roughly the same direction, they're pointed all over: E, NE, N, NW, W, and possibly SW, S, SE depending on the build.

    I don't think I can get one camera to cover that whole field of view, since it's an FOV > 180°.

    Edit: Oh! I get it. Split my views down until I can use the minimum number of cameras, then chop up what's left.

    So, like, two 90° cameras, or three 60° cameras, and chop up the result so each of the five original monitors has what it needs. Might that not produce weird seams wherever a screen needs to sample more than one texture?
     
    Last edited: Mar 7, 2019
  5. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Right, that is the idea. You won't make it with one camera, but you could probably do with fewer than five. There are various issues you can run into:
    - Seams, which can probably be solved by mimicking bilinear filtering between the RenderTextures in the shader.
    - An overly smooth result, because bilinear filtering effectively gets applied a second time during the reprojection.
    - Straight lines might look wiggly because of the reprojection.

    The last two can be reduced by rendering the smaller set of cameras at a larger resolution than required and effectively sampling down for the actual views. This could be exposed as a quality setting. Still, on faster GPUs it's cheaper to render at a higher resolution than to repeat the draw calls multiple times.
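
    For illustration, supersampling the source cameras might look something like this (the base resolution, scale factor, and MSAA level are assumptions):

    Code (CSharp):
    // Sketch: render a source camera at a multiple of the display resolution
    // with bilinear filtering, so the reprojection pass effectively downsamples.
    using UnityEngine;

    public class SupersampledSource : MonoBehaviour
    {
        public Camera sourceCamera;
        [Range(1f, 3f)] public float supersample = 2f; // could be a quality setting

        void Start()
        {
            int w = (int)(1920 * supersample);
            int h = (int)(1080 * supersample);
            var rt = new RenderTexture(w, h, 24);
            rt.filterMode = FilterMode.Bilinear; // smooths the downsample
            rt.antiAliasing = 4;                 // optional MSAA on top
            sourceCamera.targetTexture = rt;
        }
    }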