Question: Depth issues with an 'opaque-transparent-opaque-transparent' setup

Discussion in 'Universal Render Pipeline' started by Waarten, Apr 13, 2021.

  1. Waarten

    Waarten

    Joined:
    Aug 24, 2018
    Posts:
    38
    I'm trying to render, in order, the following:
    1. OPAQUE mesh (terrain)
    2. TRANSPARENT blob shadows
    3. OPAQUE props (trees and such)
    4. TRANSPARENT fog
    Every step is a "Render Objects" feature in my forward renderer. Steps 2 and 4 need access to the depth that was written in steps 1 and 3, respectively. What should the render events be for these individual steps?

    Constraints:
    • I want to render these things in the order above.
    • In 2, I need the depth as it was just after 1.
    • In 4, I need the depth as it was just after 3.
    I've been trying to get this to work for the last few days, but the results I'm getting do not make any sense. I'm either not getting the right depth buffer, or an outdated one.

    I've built a test case to explore what's going on.

    test_case.jpg

    The plane is (1), opaque geometry.
    The blue quad is (2), transparent and visualizing depth.
    The opaque cube is (3) and the red quad is (4).

    Let's go:

    Test case A

    We'll make sure our forward renderer doesn't render anything, except through our own custom 'Render Objects' features. For the event, we'll choose 'AfterRenderingOpaques' for now, which is the default Unity suggests. Click here to see the render pipeline:

    render_after_opaque.png

    Our things will now be rendered in the correct order (1, 2, 3, 4), but we quickly see that something goes wrong. When rendering (2), the transparent blue quad, it takes its depth from _CameraDepthTexture, which somehow has depth info, even from the cube in (3)! This mysterious depth comes, I think, from the last frame.

    render_after_opaque_result.png

    So what has happened here? At AfterRenderingOpaques, which is when we execute step 2, Unity has not yet populated _CameraDepthTexture for the CURRENT frame (this happens later, at the green arrow). We'll need our depth from (1) in _CameraDepthTexture before we start rendering step 2.

    We now have two options. The first option is to let Unity handle that for us. We'll just start rendering (2) after Unity has written to _CameraDepthTexture, which will be somewhere between the end of the opaque queue and the start of the transparent queue. So we keep step 1 in AfterRenderingOpaques, but bump step 2 to BeforeRenderingTransparents.

    However, this solution does not scale. We can do this once, but we'll need another update of _CameraDepthTexture between steps 3 and 4. Where would those steps go then?

    So that leaves us with the only other option we have: manually copy over the depth just before we need it.

    Test case B

    We'll create a custom render feature like so (C# script on pastebin): https://pastebin.com/LGwC7Zzt
    At least, that's what I think it should look like.
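    For anyone who can't open the pastebin, here is a minimal sketch of the general idea. The class and field names, the '_CameraDepthAttachment' source handle and the use of URP's 'Hidden/Universal Render Pipeline/CopyDepth' shader are assumptions on my part, so treat this as an illustration rather than the exact script:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    // Sketch of a feature that copies the current depth into _CameraDepthTexture
    // at an arbitrary point in the frame. Names are illustrative.
    public class CopyDepthFeature : ScriptableRendererFeature
    {
        class CopyDepthPass : ScriptableRenderPass
        {
            Material _copyDepthMaterial;            // URP's copy-depth shader material
            RenderTargetHandle _depthSourceHandle;  // where the depth currently lives
            RenderTargetHandle _depthDestHandle;    // the global _CameraDepthTexture

            public CopyDepthPass(Material copyDepthMaterial)
            {
                _copyDepthMaterial = copyDepthMaterial;
                _depthSourceHandle.Init("_CameraDepthAttachment");
                _depthDestHandle.Init("_CameraDepthTexture");
            }

            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                CommandBuffer cmd = CommandBufferPool.Get("Custom Copy Depth");
                // The CopyDepth shader is expected to read its input from _CameraDepthAttachment,
                // so bind the source there before blitting into the destination.
                cmd.SetGlobalTexture("_CameraDepthAttachment", _depthSourceHandle.Identifier());
                cmd.Blit(_depthSourceHandle.Identifier(), _depthDestHandle.Identifier(), _copyDepthMaterial);
                context.ExecuteCommandBuffer(cmd);
                CommandBufferPool.Release(cmd);
            }
        }

        [SerializeField] RenderPassEvent _event = RenderPassEvent.AfterRenderingOpaques;
        CopyDepthPass _pass;

        public override void Create()
        {
            // Shader name assumed; this is URP's internal copy-depth shader.
            var shader = Shader.Find("Hidden/Universal Render Pipeline/CopyDepth");
            _pass = new CopyDepthPass(CoreUtils.CreateEngineMaterial(shader))
            {
                renderPassEvent = _event
            };
        }

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            renderer.EnqueuePass(_pass);
        }
    }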

    This will allow us to use Unity's built-in shader for copying over the depth texture whenever we want. We'll then add such a copy feature between steps 1 and 2.
    The renderer now looks like this (click to expand):

    renderer_b.png

    And the result looks very promising! As we can see, there's no leftover depth from the 'previous' frame, and we're using depth as it is after step 1.

    render_after_b_result.png

    BUT!

    Now things are becoming really spooky. Don't continue reading if you're easily scared.

    Even though step 2 has depth from just after (1), it still seems to be lagging a frame behind! This becomes apparent when we move the camera quickly!

    [see image in reply because of 5-image upload limit]

    This would suggest that the frames would be doing some sort of interlacing dance:
    • Frame 1 step 1: write depth
    • Frame 1 step 2: ???
    • Frame 1 step 3: write depth
    • Frame 1 step 4: ???
    • Frame 2 step 1: write depth
    • Frame 2 step 2: read depth as it was in FRAME 1, STEP 1
    • Frame 2 step 3: write depth
    • Frame 2 step 4: read depth as it was in FRAME 1, STEP 3
    I'm getting this effect consistently, and it doesn't matter whether I use 'BeforeRenderingOpaques', 'AfterRenderingOpaques', 'BeforeRenderingTransparents' or 'AfterRenderingTransparents'.

    What boggles my mind is that:
    • Depth is read 'from' the correct stage within the frame, but still lagging at least 1 full frame!
    • Adding another depth copy between steps 3 and 4 seems to make the lag WORSE
    Note that this lag becomes more than just a nuisance; it fully breaks down visually in depth-sensitive situations, such as smooth zooming, as demonstrated later in this thread.

    This test case was built from a clean Unity project (version 2020.3.3 LTS) and can be downloaded HERE (be sure to open SampleScene.unity).

    In short, my question is: what causes this lag, and how can I prevent it?
    • Am I getting data from an old buffer instead of the latest?
    • Am I not using the right input/output for my copy operation?
    • Is there some sort of process interfering?
    I'm running out of ideas, so any help is very much appreciated.

    Thanks!
     
  2. Waarten

    Waarten

    Joined:
    Aug 24, 2018
    Posts:
    38
    Here's the image that shows the delay in the depth buffer as mentioned above. Here, I'm moving the camera quickly from left to right:
    moving_the_camera.png

    And a GIF showing that this causes big artifacts in some situations:

    scene_depth_issue.gif

    EDIT: added a better description
     
  3. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    @Waarten the pipeline doesn't make more than one copy of depth for a single camera. When you require the depth texture, it will be available after rendering all opaque geometry, like you pointed out.

    We can't handle this automatically without knowing the dependency graph of resources for the passes. It could be something that we consider doing in the future but as of today that's not supported.

    In this case I recommend that you add an extra copy of depth yourself.
     
  4. Waarten

    Waarten

    Joined:
    Aug 24, 2018
    Posts:
    38
    Hello @phil_lira

    My question is more specific than that:
    As outlined, I am already adding an extra depth copy myself. However, the problem that arises is that the resulting depth 'lags' behind by one (or multiple?) frames.

    The above GIF with the trees shows the same effect (but I cannot share that project because of production reasons).

    You can take a look at the provided test project and see it, but note that it might be hard to see in this specific case. Panning left-right shows the lagging effect, as well as zooming (the striped cubes take longer to 'stabilize' than is warranted by the camera smoothing alone).
     
  5. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    For performance reasons, by default the pipeline makes depth available after rendering opaques, which can lead to the situation you have if you try to access depth before the copy happens.

    If you have a pass that requires depth earlier, you can call `ConfigureInput` to ask for the depth to be available before that pass. https://docs.unity3d.com/Packages/c...endering_Universal_ScriptableRenderPassInput_

    There's not yet an example of how to use that API, but you can check usage here: https://github.com/Unity-Technologi...rFeatures/ScreenSpaceAmbientOcclusion.cs#L189
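    A minimal sketch of how a custom pass could ask for depth this way (the class name and the chosen event here are just placeholders):

    Code (CSharp):
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    // Placeholder pass that needs _CameraDepthTexture to be valid before it runs.
    class MyDepthDependentPass : ScriptableRenderPass
    {
        public MyDepthDependentPass()
        {
            renderPassEvent = RenderPassEvent.AfterRenderingOpaques;

            // Tell the renderer this pass needs the depth texture as input,
            // so the pipeline makes it available before the pass executes.
            ConfigureInput(ScriptableRenderPassInput.Depth);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            // _CameraDepthTexture can now be sampled by the shaders used in this pass.
        }
    }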

    We are also working on exposing a way to request a depth prepass from the pipeline; with a depth prepass you will always be guaranteed that the depth is from the current frame. https://portal.productboard.com/8uf...ort?utm_medium=social&utm_source=portal_share
     
  6. Waarten

    Waarten

    Joined:
    Aug 24, 2018
    Posts:
    38
    Thanks @phil_lira

    I'm not sure whether a depth prepass would be useful in this case, unless the feature also means that I can do two different depth prepasses, right? Because in this example, I need the depth 'as it is' at two different 'moments' of the frame.

    Regarding ConfigureInput: that's great to hear! I'd rather let Unity handle this in the intended way. Thanks for pointing to the right place in the example code.

    Regarding the delay issue:
    I have now found out that changing the code

    Code (CSharp):
    _depthDestHandle.Init("_CameraDepthTexture");
    to
    Code (CSharp):
    _depthDestHandle.Init(new RenderTargetIdentifier("_CameraDepthTexture"));
    solves the delay issue. I now understand that the second form is the correct way of referencing the render target. But I still don't understand why the first form doesn't find the current depth, yet does find the previous depth.
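    For context, here is roughly where that call sits in the copy feature (field name as in the sketch earlier in this thread; your script may differ):

    Code (CSharp):
    // Destination handle for the copy pass.
    RenderTargetHandle _depthDestHandle;

    void InitDestinationHandle()
    {
        // Old (string overload), the version that showed the one-frame delay:
        // _depthDestHandle.Init("_CameraDepthTexture");

        // New (RenderTargetIdentifier overload), no delay in my tests:
        _depthDestHandle.Init(new RenderTargetIdentifier("_CameraDepthTexture"));
    }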

    This still puzzles me; @phil_lira, could you perhaps shed some light on this?
     
  7. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    I'm not entirely sure, to be honest. I'd have expected both to work the same way regarding the delay. One of these overloads was added recently by the XR team.

    It needs further investigation, but I'm not sure if that's worth it, as we plan to deprecate this API in favor of RTHandle. RTHandle allows much more flexibility and dynamic viewport scaling, and is a pre-step towards us knowing the frame dependencies, which would improve the original issue you have.
     
  8. Waarten

    Waarten

    Joined:
    Aug 24, 2018
    Posts:
    38
    Thanks for the response. I wasn't aware of RTHandle and I do not yet know whether and how it would affect this case. I will look into it at a later time.

    For me this is still not fully resolved, as the 1-frame delay issue remains unexplained. From a tech artist's/graphics programmer's perspective, it would be really useful to know exactly what happens here, as it would give me more insight into URP's inner workings.

    But due to the reality of production, I'll keep using the workaround I found for now.
     
  9. transporter_gate_studios

    transporter_gate_studios

    Joined:
    Oct 17, 2016
    Posts:
    219
    I get a '_CameraDepthAttachment not found' error when using the script you posted for copying the depth. Any ideas as to why? I'm trying to use this to get my overlay camera to have depth for post-process effects that use it, while also clearing the depth so geometry can overlap the base camera.