
Proper usage of DecodeDepthNormal to extract world-space normal

Discussion in 'Shaders' started by Orangy-Tang, May 13, 2019.

  1. Orangy-Tang

    Orangy-Tang

    Joined:
    Jul 24, 2013
    Posts:
    39
    I've been trying to extract world-space normals in a shader from the DepthTextureMode.DepthNormals pre-pass. However, my results don't seem to be correct: after extracting the normal and converting it to RGB for display, my floor plane, which should have normal = (0, 1, 0), shows an RGB of (188, 255, 188) rather than the (128, 255, 128) I'd expect.

    I've stripped it back to this shader but can't explain the discrepancy:
    Code (ShaderLab):
    Shader "Custom/DecodeDepthUnlit"
    {
        Properties
        {
            [MaterialToggle]
            _ShowNormals ("Show Normals", Int) = 0
        }

        SubShader
        {
            Tags
            {
                "RenderType" = "Transparent"
                "Queue" = "Transparent-1"
            }
            ZWrite Off
            ZTest Always

            Pass
            {
            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _CameraDepthNormalsTexture;
                int _ShowNormals; // 1=show normals, 0=show depth

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float4 scrPos : TEXCOORD1;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.scrPos = ComputeScreenPos(o.pos);
                    o.uv = v.texcoord;
                    return o;
                }

                half4 frag (v2f i) : SV_Target
                {
                    float3 viewSpaceNormal;
                    float viewDepth;

                    // Perspective divide to get 0..1 screen UVs for the pre-pass texture.
                    float2 screenPosition = i.scrPos.xy / i.scrPos.w;
                    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, screenPosition), viewDepth, viewSpaceNormal);

                    // unity_MatrixInvV rotates view-space directions into world space.
                    float3 worldNormal = mul((float3x3)unity_MatrixInvV, viewSpaceNormal);

                    if (_ShowNormals == 1)
                    {
                        // remap from [-1..+1] to [0..1]
                        float3 col = (worldNormal * 0.5) + 0.5;
                        return float4(col, 1.0);
                    }
                    else
                    {
                        // viewDepth is already in range [0..1]
                        return float4(viewDepth, viewDepth, viewDepth, 1.0);
                    }
                }
            ENDCG
            }
        }

        FallBack "Diffuse"
    }
    I suspect I've been staring at this so long that I'm overlooking something stupid. Does anyone have any pointers?

    Thanks.
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    If your project is using linear color space, you'll need to do some color conversions for the final on-screen color to match your expectations. A value of 0.5 in the shader will result in something around 188/255 on screen; to get 128/255 on screen, the shader needs to output a linear value of ~0.214. Also be aware that the normals from the _CameraDepthNormalsTexture are fairly low quality, since the texture stores a full 3D vector in just two 8-bit values.

    Try:
    float3 col = GammaToLinearSpace(worldNormal * 0.5 + 0.5);
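    As a sanity check on those numbers: the exact sRGB transfer function is what the hardware applies when writing a linear value to an sRGB render target, and GammaToLinearSpace is a close approximation of its inverse. The helper below is hand-rolled for illustration, not a Unity built-in:
    Code (ShaderLab):
    // Exact linear -> sRGB encode, handy for checking the arithmetic by hand.
    float LinearToSrgb (float c)
    {
        return c <= 0.0031308 ? c * 12.92
                              : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
    }
    // LinearToSrgb(0.5)   ≈ 0.735  ->  0.735 * 255 ≈ 188  (what you're seeing)
    // LinearToSrgb(0.214) ≈ 0.502  ->  0.502 * 255 ≈ 128  (what you expected)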
     
  3. tomekkie2

    tomekkie2

    Joined:
    Jul 6, 2012
    Posts:
    973
    How do I get DecodeDepthNormal to work in the Universal Render Pipeline?
    Does it work with URP, or only with the standard pipeline?
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    The URP doesn’t support the camera depth normals texture, so there’s no reason to use the DecodeDepthNormal function.
     
  5. tomekkie2

    tomekkie2

    Joined:
    Jul 6, 2012
    Posts:
    973
    Thanks, but I mean the specific texture created using a ScriptableRendererFeature:
    depthNormalsMaterial = CoreUtils.CreateEngineMaterial("Hidden/Internal-DepthNormalsTexture");
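    For reference, that hidden shader writes the same packing the built-in pipeline uses for this texture. The encode side, copied here from a 2019-era UnityCG.cginc (worth double-checking against the copy in your own install), stereographically projects the view-space normal into .xy and RG-packs the linear 0..1 depth into .zw, so anything reading the texture needs the matching decode:
    Code (ShaderLab):
    // Packs a 0..1 depth value across two 8-bit channels.
    inline float2 EncodeFloatRG (float v)
    {
        float2 kEncodeMul = float2(1.0, 255.0);
        float kEncodeBit = 1.0 / 255.0;
        float2 enc = kEncodeMul * v;
        enc = frac(enc);
        enc.x -= enc.y * kEncodeBit;
        return enc;
    }

    // Stereographic projection of a view-space normal into two channels.
    inline float2 EncodeViewNormalStereo (float3 n)
    {
        float kScale = 1.7777;
        float2 enc = n.xy / (n.z + 1.0);
        enc /= kScale;
        enc = enc * 0.5 + 0.5;
        return enc;
    }

    // Normal in .xy, depth in .zw: this is what the DepthNormals texture holds.
    inline float4 EncodeDepthNormal (float depth, float3 normal)
    {
        float4 enc;
        enc.xy = EncodeViewNormalStereo(normal);
        enc.zw = EncodeFloatRG(depth);
        return enc;
    }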
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    If you’re trying to add a feature the renderer doesn’t normally support, then you’ll probably also need to add the code needed to support it. All of the functions you’re looking for are in the UnityCG.cginc file, but that doesn’t get imported into SRP shaders by default, and may cause some issues if you try to include it.
     
  7. tomekkie2

    tomekkie2

    Joined:
    Jul 6, 2012
    Posts:
    973
    Yes, issues, that is exactly what happens when trying to include UnityCG.cginc in URP shaders.
    So there are no standard include files with the DecodeDepthNormal function for URP, meaning I should copy and paste the function code into my shader file?
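    Specifically, these would be the three functions to copy, taken from a 2019-era UnityCG.cginc (double-check against the copy shipped with the Unity version in use, in case the constants change between versions):
    Code (ShaderLab):
    // Unpacks a depth value that was RG-packed across two 8-bit channels.
    inline float DecodeFloatRG (float2 enc)
    {
        float2 kDecodeDot = float2(1.0, 1.0 / 255.0);
        return dot(enc, kDecodeDot);
    }

    // Reverses the stereographic projection to recover the view-space normal.
    inline float3 DecodeViewNormalStereo (float4 enc4)
    {
        float kScale = 1.7777;
        float3 nn = enc4.xyz * float3(2.0 * kScale, 2.0 * kScale, 0.0)
                  + float3(-kScale, -kScale, 1.0);
        float g = 2.0 / dot(nn.xyz, nn.xyz);
        float3 n;
        n.xy = g * nn.xy;
        n.z = g - 1.0;
        return n;
    }

    // Splits a DepthNormals sample into 0..1 view depth and a view-space normal.
    inline void DecodeDepthNormal (float4 enc, out float depth, out float3 normal)
    {
        depth = DecodeFloatRG(enc.zw);
        normal = DecodeViewNormalStereo(enc);
    }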