Multiple cameras depth buffer incorrect in shader

Discussion in 'General Graphics' started by Yacker, Oct 19, 2019.

  1. Yacker

    Yacker

    Joined:
    Oct 24, 2018
    Posts:
    34
    Basically, I'm using multiple cameras to render a sort of in-world post-processing effect, so I have them set up to render different layers. This mostly works, but the effect needs to rely on the depth buffer, and in the shader the depth buffer from the previous camera seems to be discarded, even though the second camera is set to "Don't Clear" and actually renders correctly (it doesn't draw in front of things it shouldn't).

    Example images, with red representing the depth buffer:

    [Image: same camera]

    [Image: different cameras]

    Is there any way around this? Could I be doing something wrong?
    Here's the shader, for reference:

    Code (CSharp):

    Shader "Custom/Example"
    {
        Properties
        {
        }
        SubShader
        {
            Tags {
                "Queue" = "Transparent"
                "RenderType" = "Transparent"
                "LightMode" = "ForwardBase"
            }
            Blend SrcAlpha OneMinusSrcAlpha
            LOD 100
            ZWrite Off
            ZTest LEqual

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                #pragma multi_compile_fwdadd_fullshadows
                #include "HLSLSupport.cginc"

                // No interpolators needed; the position goes out via SV_POSITION.
                struct v2f
                {
                };

                sampler2D_float _CameraDepthTexture;
                float4 _CameraDepthTexture_TexelSize;

                v2f vert (appdata_full v, out float4 outpos : SV_POSITION)
                {
                    v2f o;
                    outpos = UnityObjectToClipPos(v.vertex);
                    return o;
                }

                fixed4 frag (v2f i, UNITY_VPOS_TYPE screenPos : VPOS) : SV_Target
                {
                    // VPOS is in pixels; divide by the screen size to get UVs.
                    float4 screenUV = screenPos / _ScreenParams;

                    float rawZ = tex2D(_CameraDepthTexture, screenUV.xy).r;

                    // Visualize the raw depth in the red channel.
                    return float4(rawZ, 0, 0, 1);
                }
                ENDCG
            }
        }
        Fallback "Transparent/VertexLit"
    }
     
    Last edited: Oct 19, 2019
  2. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    158
    If I understand what you're trying to do correctly, I think you simply want to use "_LastCameraDepthTexture" instead of "_CameraDepthTexture".
     
  3. Yacker

    Yacker

    Joined:
    Oct 24, 2018
    Posts:
    34
    _LastCameraDepthTexture has the same effect, unfortunately.
     
  4. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    158
    The other thing I can think of off the top of my head is that the depth value is encoded non-linearly to make better use of its limited precision, so you need to use LinearEyeDepth() or Linear01Depth() to convert it to an actual depth value.
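    A minimal sketch of that decode, assuming the VPOS-derived UVs from the shader above:
    Code (CSharp):

    // Raw depth is non-linear; Linear01Depth (from UnityCG.cginc) remaps it
    // to a linear 0-1 range between the near and far planes.
    float rawZ = tex2D(_CameraDepthTexture, screenUV.xy).r;
    return float4(Linear01Depth(rawZ), 0, 0, 1);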
     
  5. Yacker

    Yacker

    Joined:
    Oct 24, 2018
    Posts:
    34
    The issue isn't how it's encoded; the issue is that it just seems to return 0 when rendered on the second camera. The images were posted with a very high near plane and a very small far plane to accentuate the issue.
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    First:
    depth texture != depth buffer

    Just because your second camera is set to Don't Clear, which ensures the depth and color buffers from the first camera are retained, doesn't mean the depth texture comes along: the depth texture is a separate thing that's either generated in a separate pass before each camera renders and copied to a texture (forward rendering), or pulled from the gbuffer's depth buffer (deferred). Each camera that renders will set the camera depth texture either to the one it generates, or to "null" (really an empty texture).

    The whole _LastCameraDepthTexture and _CameraDepthTexture thing doesn't work the way anyone seems to expect. I don't think I've ever seen _LastCameraDepthTexture be anything but identical to the current _CameraDepthTexture, so whatever specific use case it was created for is either broken by now, or so narrow that it never actually works for anyone but whatever it was originally written for.

    My workaround for this is to set the depth texture to a custom global texture reference using a command buffer.
    Code (csharp):

    using UnityEngine;
    using UnityEngine.Rendering;

    // Attach to the first camera (the one whose depth texture you want to keep).
    public class KeepDepthTexture : MonoBehaviour
    {
        CommandBuffer keepDepthTexture;

        void OnEnable()
        {
            if (keepDepthTexture == null) {
                keepDepthTexture = new CommandBuffer();
                keepDepthTexture.name = "Keep MainCamera Depth Texture";
                // Bind this camera's depth texture to a custom global name that
                // shaders on later cameras can sample.
                keepDepthTexture.SetGlobalTexture("_MainCameraDepthTexture", BuiltinRenderTextureType.Depth);

                Camera cam = GetComponent<Camera>();
                cam.depthTextureMode |= DepthTextureMode.Depth;
                cam.AddCommandBuffer(CameraEvent.BeforeForwardAlpha, keepDepthTexture);
            }
        }
    }
     
    DrummerB and Yacker like this.
  7. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    158
    I only mention it because it may only seem to be 0 when it is not. A lot of valid values in a 24-bit depth buffer will come out as exactly 0 when written directly into an 8-bit colour channel, yet still have enough precision to decode to the correct depth value.

    Of course, I was assuming Unity was doing something reasonable, but per bgolus's answer it seems to be a piece of built-in Unity black magic that stopped working at some point, or possibly never worked.
     
  8. Yacker

    Yacker

    Joined:
    Oct 24, 2018
    Posts:
    34
    This doesn't seem to resolve it either, unfortunately; same problem still.
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Did you change your shaders to use the new texture name, or are you still using _CameraDepthTexture?
     
  10. Yacker

    Yacker

    Joined:
    Oct 24, 2018
    Posts:
    34
    Yes, the shader uses the new texture name.
     
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    And this is on the camera that has the working depth texture? There's a chance something is clearing the depth texture someplace then, which is annoying. But that can be worked around by creating another render texture of type RFloat at the same resolution as the main camera, and using CopyTexture to copy it over.
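    A minimal sketch of that workaround, assuming it runs on the first camera; the component and texture names here are illustrative, not from this thread:
    Code (CSharp):

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch: copy the depth texture into an RFloat render texture we own,
    // so a later camera can't clear it out from under us.
    public class CopyCameraDepth : MonoBehaviour
    {
        CommandBuffer copyDepth;
        RenderTexture depthCopy;

        void OnEnable()
        {
            Camera cam = GetComponent<Camera>();
            cam.depthTextureMode |= DepthTextureMode.Depth;

            // RFloat matches the single-channel float camera depth texture.
            depthCopy = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 0, RenderTextureFormat.RFloat);
            depthCopy.Create();

            copyDepth = new CommandBuffer();
            copyDepth.name = "Copy MainCamera Depth Texture";
            copyDepth.CopyTexture(BuiltinRenderTextureType.Depth, depthCopy);
            copyDepth.SetGlobalTexture("_MainCameraDepthTexture", depthCopy);
            cam.AddCommandBuffer(CameraEvent.BeforeForwardAlpha, copyDepth);
        }

        void OnDisable()
        {
            if (copyDepth != null)
                GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeForwardAlpha, copyDepth);
            if (depthCopy != null)
                depthCopy.Release();
        }
    }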
     
    Yacker likes this.
  12. Yacker

    Yacker

    Joined:
    Oct 24, 2018
    Posts:
    34
    That works! Thank you very much for your help.
     
  13. Yacker

    Yacker

    Joined:
    Oct 24, 2018
    Posts:
    34
    Update: moving this code back to my main project (I was using an isolated "testing" project to rule out external influence), and now I get a warning that breaks it:
    CommandBuffer: built-in render texture type 3 not found while executing Keep MainCamera Depth Texture (SetGlobalTexture)


    Second update: I just realized I can just use _CameraDepthTexture to get the depth texture to copy, so the command buffer is completely unnecessary. Never mind. Thank you again!
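    A minimal sketch of that simplification, assuming the RFloat depthCopy texture from the earlier workaround (the exact call site is an assumption):
    Code (CSharp):

    // Unity binds the first camera's depth texture globally as _CameraDepthTexture;
    // grab it from script and copy it before the second camera renders.
    Texture depth = Shader.GetGlobalTexture("_CameraDepthTexture");
    if (depth != null)
        Graphics.CopyTexture(depth, depthCopy);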
     
    Last edited: Oct 21, 2019
  14. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Are you using deferred rendering? I think you may need to use BuiltinRenderTextureType.ResolvedDepth instead in that case.
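    If so, that would be a one-line change to the post 6 command buffer (an untested sketch):
    Code (CSharp):

    // ResolvedDepth is the resolved depth buffer from the deferred path.
    keepDepthTexture.SetGlobalTexture("_MainCameraDepthTexture", BuiltinRenderTextureType.ResolvedDepth);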
     
    envy82 and opamped like this.
  15. quizcanners

    quizcanners

    Joined:
    Feb 6, 2015
    Posts:
    109
    Had a similar problem with my depth-based shader (a shadow decal): turning on a second camera would break the projection. The solution in post 6 did let me get the correct depth texture in my shader:
    Code (CSharp):

    keepDepthTexture.SetGlobalTexture("_MainCameraDepthTexture", BuiltinRenderTextureType.Depth);

    but it didn't solve the issue.
    It turned out that _WorldSpaceLightPos0.xyz changed whenever the second camera rendered anything. I have only one light in my scene, a directional one.

    The fix was to set the light direction as a global vector and use that instead:
    Code (CSharp):

    // Shader.SetGlobalVector needs a property name; "_StableLightDirection" is a
    // placeholder for whatever name the shader actually samples.
    Shader.SetGlobalVector("_StableLightDirection", -light.transform.forward);

    But it did look like depth was the issue, which puzzled me a lot.

    So just gonna leave this here in case someone is having similar issues.