
HDRP Camera Stacking Now Supported?

Discussion in 'Graphics Experimental Previews' started by Korindian, Jul 28, 2019.

  1. Korindian

    Korindian

    Joined:
    Jun 25, 2013
    Posts:
    475
    Camera Stacking was supposedly removed from HDRP, but according to the HDRP 6.9 docs:

    https://docs.unity3d.com/Packages/c...s.high-definition@6.9/manual/HDRP-Camera.html

    Quote from that page:
    "Cameras capture and display your world to the user. Customize and manipulate your Cameras to present your Unity Project however you like. You can use an unlimited number of Cameras in a Scene and set them to render in any order, at any position on the screen."

    Edit (for clarity): I tested this in 2019.2 with HDRP 6.9.1. Camera stacking works with multiple cameras, but has a high performance cost of anywhere from 1 to 2 ms per camera on both the CPU and GPU.

    Post-processing seems to be applied to the other cameras only when the highest-depth camera has it. This means we currently cannot have a UI in World Space or Screen Space - Camera that is unaffected by post-processing.

    Is Camera Stacking now a supported part of HDRP? Will it stay or be removed?
     
    Last edited: Aug 1, 2019
  2. Korindian

    Korindian

    Joined:
    Jun 25, 2013
    Posts:
    475
    There's been a lot of confusion about camera stacking in HDRP. Can we get an official word on this @Remy_Unity @SebLagarde ?

    Especially being able to choose which cameras get affected by post processing.
     
    xDavidLeon, Sharlei and Rich_A like this.
  3. Lawina

    Lawina

    Joined:
    Feb 28, 2019
    Posts:
    14
    LWRP as well.
     
  4. jeffcrouse

    jeffcrouse

    Joined:
    Apr 30, 2010
    Posts:
    13
    I am also interested in this.

    If it's not supported in HDRP, can someone suggest another way to specify which objects are/aren't affected by a post-processing volume?
     
  5. SebLagarde

    SebLagarde

    Unity Technologies

    Joined:
    Dec 30, 2015
    Posts:
    557
    Hi,

    The documentation is correct. HDRP supports many cameras (which you can use to render to texture), but it doesn't say that it supports camera stacking.

    So, the official answer:
    HDRP supports multi-camera (i.e. split screen).
    HDRP does not support camera stacking.

    We have, however, patched HDRP in 7.2.0 so it can support camera stacking within a set of constraints (i.e. we correctly manage the clearing of depth/color). We are also working on a prototype tool to compose or stack multiple cameras. There is no ETA for this tool, but it means some users could come up with a custom script for it.
    A big warning: the cost of camera stacking is very heavy (on the CPU), and it is not recommended in a game context. Prefer custom passes / custom post-processes instead.
    Also, in HDRP, if you want to draw UI after post-processing, there is a Render Pass mode on the Unlit shader and in Shader Graph that does exactly that: it is named After Postprocess (no second camera needed).
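    For the custom pass route, here's a minimal sketch (the `Execute(CustomPassContext)` signature shown is from newer HDRP versions and has changed across releases, so treat it as illustrative; a Custom Pass Volume component in the scene is assumed):

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    // Minimal full-screen custom pass: draws a full-screen material over the
    // camera color buffer, e.g. to overlay UI without a second camera.
    class FullscreenOverlayPass : CustomPass
    {
        public Material fullscreenMaterial; // assign a material using a full-screen shader

        protected override void Execute(CustomPassContext ctx)
        {
            if (fullscreenMaterial == null)
                return;

            // Target the camera color buffer and draw a full-screen quad.
            CoreUtils.SetRenderTarget(ctx.cmd, ctx.cameraColorBuffer);
            CoreUtils.DrawFullScreen(ctx.cmd, fullscreenMaterial);
        }
    }
    ```

    Add the pass to a Custom Pass Volume component and pick its injection point (e.g. After Post Process) in the Inspector.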

    Hope that helps.
     
  6. AlexTuduran

    AlexTuduran

    Joined:
    Nov 26, 2016
    Posts:
    11
    @SebLagarde
    Hi. When you say it's very heavy on the CPU, do you mean that rendering UI elements on an overlay canvas or with a custom pass / custom post-FX is significantly more efficient than rendering that UI on top of the scene with a separate camera that renders only the UI?

    I'm asking because my project is set up so that I render the UI with a UI-only camera, so the UI lives in world space and shows up in VR. What's the correct single-camera alternative if you want your UI to show up in VR as well?
     
    keeponshading likes this.
  7. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    548
    I have exactly the same issue, with the addition that our interactive World Space UIs are rendered to texture and applied to the car screens. There are up to 8 interactive screens in the car.
    So they don't need TAA, because of smearing. This is working with the AfterPostprocess approach described above.
    But they need tonemapping and DOF, because when the car interior has DOF, the screens need it too.

    For us this is the last showstopper to finally switch to HDRP.
     
    Last edited: Mar 9, 2020
  8. AlexTuduran

    AlexTuduran

    Joined:
    Nov 26, 2016
    Posts:
    11
    @keeponshading Just found out how powerful the custom pass / full-screen pass features are. I recommend taking a look at the following link, as it's a Unity project that exercises these features and manages to get some pretty amazing full-screen and per-object effects. You'll need at least Unity 2020.1.0a24 - I have 2020.1.0a25 and it works fine.

    https://github.com/alelievr/HDRP-Custom-Passes

    Now, speaking strictly about your car screens: you can render each screen separately with a different camera into a render texture (make sure the RT has alpha) and plug these RTs into a Lit material as Base Map and/or Emissive Color. In the material, set Surface Type to Transparent, Rendering Pass to Default, and Blending Mode to Alpha; check Transparent Depth Prepass, Transparent Depth Postpass, Transparent Writes Motion Vectors, and Depth Write; and set Depth Test to LessEqual. Also check Receive SSR if the Material Type is set to something that looks glossy (Standard, Iridescence, Translucent, etc.).

    Additionally, crank up the Smoothness and/or Coat Mask to make the surface glossy. Since we're talking about glass, you can enable refraction by setting the Refraction Model to Box or Thin. Of course, put the material(s) on quads so they show up in the scene and render as geometry.
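    If it helps, here's a small script sketch for the render-texture part of the setup above (`_BaseColorMap` is the HDRP Lit shader's Base Map property; resolution and camera/layer setup are illustrative):

    ```csharp
    using UnityEngine;

    // Sketch: render a UI-only camera into a RenderTexture with an alpha
    // channel, then feed that texture into the HDRP Lit material on a quad.
    public class ScreenToQuad : MonoBehaviour
    {
        public Camera uiCamera;     // culling mask set to the UI-only layer
        public Renderer screenQuad; // quad with the transparent Lit material

        void Start()
        {
            // ARGB32 keeps an alpha channel, which you need for alpha clipping later.
            var rt = new RenderTexture(1024, 1024, 24, RenderTextureFormat.ARGB32);
            uiCamera.targetTexture = rt;

            // _BaseColorMap is the Lit shader's Base Map texture property.
            screenQuad.material.SetTexture("_BaseColorMap", rt);
        }
    }
    ```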

    HDRP-Transparency-Lit-Material-Settings.jpg

    By doing this, you'll effectively get a glossy refractive surface that, although it looks transparent, writes into the depth buffer, and is hence subject to all post-processing effects that use the depth buffer, Depth of Field being one of them. Also, because you're using the Lit shader, it will respond well to the lighting in your scene (reflections included).

    Check the 2 uploaded photos for a reference of what you could get. The main difference between the two is that in the second one I've used alpha clipping.

    Why use alpha clipping? Since your transparent surface now writes into the depth buffer, everything behind it no longer matters to the DOF (you can see that in the in-focus areas, the background is also in focus, which is wrong).

    HDRP-Transparency-DOF.jpg

    In the second capture, alpha clipping is enabled, so the shader writes into the depth buffer only where the alpha is greater than some threshold (0.22 in my case). This way, you'll get correct DOF on your UI and also on whatever is in the background.

    HDRP-Transparency-DOF-AlphaClipping.jpg

    All the best.
     
  9. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    548
  10. AlexTuduran

    AlexTuduran

    Joined:
    Nov 26, 2016
    Posts:
    11
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,425
    FYI, camera stacking is a bad concept anyway, and not one that most commercial games use. They favour drawing to buffers, which they can then draw with another camera or composite, things like that.

    When you have multiple cameras with stacking in Unity, Unity redoes all the work per camera (and always has), from sorting lights to culling; it's a huge waste of render and CPU time.

    It's better to ask, from this point, how to achieve your ambitions without an extra camera (you will find all things are possible, just different and much faster to execute).
     
    Rich_A likes this.
  12. Grimreaper358

    Grimreaper358

    Joined:
    Apr 8, 2013
    Posts:
    639
    hippocoder likes this.
  13. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,425
    I haven't checked but I'm hoping that URP and HDRP gain parity with that sort of thing so I can port easily if needs be.
     