URP Render Feature depth mask and _CameraDepthTexture

Discussion in 'Universal Render Pipeline' started by Jonathan_L, Jun 8, 2020.

  1. Jonathan_L

    Jonathan_L

    Joined:
    Jan 26, 2016
    Posts:
    43
    Hi, I am trying to create a render feature that renders a mask and the depth value of an object to a texture. The render feature does a pass after the depth prepass, rendering selected objects to this texture using a layer mask. The shader for the render feature's material writes the color rgba(1, depth, 0, 0) to the texture. The graph is shown below.

    upload_2020-6-8_0-13-14.png

    I created another pass to test that the depth was properly being stored in the texture. The below image shows the G value.

    upload_2020-6-8_0-16-0.png

    The problem is that the depth values here are different from those stored in _CameraDepthTexture. Here are the R values from _CameraDepthTexture (other objects in the scene are included).

    upload_2020-6-8_0-17-59.png

    As you can see, these values are very different, so I can't compare the depths of objects in the layer mask against the depths found in _CameraDepthTexture. Does anybody know what I am getting wrong here? I thought one might be the inverse of the other, but I tried that and it didn't work. Is there something I can fix in the shader graph? Maybe the way I am calculating the depth in the shader graph is off? Or maybe there is a built-in Unity function I can call when doing the second pass?

    This is the code I am working on right now that I will use to do the depth comparison. For now, I used it to output the two sample images above.
    Code (CSharp):

    void OutlinePass_float(float2 UV, out float4 Out)
    {
        float4 sceneColor = SAMPLE_TEXTURE2D_X(_CameraColorTexture, sampler_CameraColorTexture, UnityStereoTransformScreenSpaceTex(UV));
        float4 sceneDepth = SAMPLE_TEXTURE2D_X(_CameraDepthTexture, sampler_CameraDepthTexture, UnityStereoTransformScreenSpaceTex(UV));
        float4 outlineMask = SAMPLE_TEXTURE2D_X(_OutlineMaskTexture, sampler_OutlineMaskTexture, UnityStereoTransformScreenSpaceTex(UV));

        //Out = outlineMask.g;   // mask depth (G channel of the render feature texture)
        Out = sceneDepth.r;      // camera depth (R channel of _CameraDepthTexture)
    }
     
  2. Jonathan_L

    Jonathan_L

    Joined:
    Jan 26, 2016
    Posts:
    43
    Found the solution. Using the Scene Depth node with sampling set to "Raw" was the trick. Also, encoding the float into a float2 when rendering to the texture helps with precision.

    upload_2020-6-8_21-16-39.png

    Code (CSharp):

    void OutlinePass_float(float2 UV, out float4 Out)
    {
        float4 sceneColor = SAMPLE_TEXTURE2D_X(_CameraColorTexture, sampler_CameraColorTexture, UnityStereoTransformScreenSpaceTex(UV));
        float sceneDepth = SAMPLE_TEXTURE2D_X(_CameraDepthTexture, sampler_CameraDepthTexture, UnityStereoTransformScreenSpaceTex(UV)).r;
        float4 outlineMask = SAMPLE_TEXTURE2D_X(_OutlineMaskTexture, sampler_OutlineMaskTexture, UnityStereoTransformScreenSpaceTex(UV));

        // The mask depth was encoded into the G and B channels when the feature rendered it.
        float outlineMaskDepth = DecodeFloatRG(outlineMask.gb);

        Out = sceneDepth;
    }
    The encoding and decoding helper functions can be found in UnityCG.cginc.
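    As a rough illustration (plain Python, not Unity code), the idea behind the two-channel RG encoding is to store the value's fraction in the first channel and the next 8 bits of precision in the second; the names below are my own, only sketching the scheme that EncodeFloatRG/DecodeFloatRG implement:

    def encode_float_rg(v):
        """Pack a [0, 1) float into two 8-bit-friendly channels (R, G)."""
        r = v % 1.0
        g = (v * 255.0) % 1.0
        r -= g / 255.0          # remove the part already carried by g
        return r, g

    def decode_float_rg(r, g):
        """Recover the original value from the two channels."""
        return r + g / 255.0

    r, g = encode_float_rg(0.123456)
    print(decode_float_rg(r, g))  # ~0.123456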

    Edit:

    I found that the Scene Depth node doesn't work as I wanted when I uncheck the layer under the "Opaque Layer Mask" filter setting in my URP renderer asset. This is because the Scene Depth node samples from the depth texture; it doesn't calculate the depth of the fragment, so the depth is only available if the object was rendered to the depth texture. To solve this I replaced the "Scene Depth" node above with a custom subgraph node, "Object Depth Raw".

    upload_2020-6-9_22-44-59.png

    The Z Buffer Params node is also a custom subgraph, which just outputs Unity's built-in _ZBufferParams.

    Code (CSharp):

    void ZBufferParams_float(out float4 Out)
    {
        Out = _ZBufferParams;
    }
    This is what I was originally trying to find - a way to get the depth of a fragment without sampling the depth texture. Now it would be great if I could find some function that converts any object/world-space coordinate into that float4 "Screen Position". But for now, this works for what I need.
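    The world-to-screen conversion asked about above is just the standard view-projection multiply followed by a perspective divide; here is a hedged sketch of that math in plain Python with NumPy (the function name and argument layout are my own, not a Unity API):

    import numpy as np

    def world_to_screen(world_pos, view_proj, width, height):
        """Project a world-space point to pixel coordinates plus NDC depth.

        view_proj is the combined view-projection matrix (column-vector
        convention); returns (x_px, y_px, ndc_depth).
        """
        p = view_proj @ np.array([*world_pos, 1.0])   # to clip space
        ndc = p[:3] / p[3]                            # perspective divide -> [-1, 1]
        x = (ndc[0] * 0.5 + 0.5) * width              # remap to pixels
        y = (ndc[1] * 0.5 + 0.5) * height
        return x, y, ndc[2]

    # With an identity view-projection, the origin maps to the screen center.
    print(world_to_screen((0.0, 0.0, 0.5), np.eye(4), 1920, 1080))  # (960.0, 540.0, 0.5)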
     
    Last edited: Jun 10, 2020
  3. transporter_gate_studios

    transporter_gate_studios

    Joined:
    Oct 17, 2016
    Posts:
    219
    Do you know how to make a renderer feature only apply to a specific camera in the stack?
     
  4. peaj_metric

    peaj_metric

    Joined:
    Sep 15, 2014
    Posts:
    146
    I know it's a late reply, but:
    You can create multiple renderers (one per camera) with different render features.