
World normal from scene depth

Discussion in 'Shaders' started by Phantom_X, Feb 24, 2021.

  1. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    314
    Hey,
    I am trying to compute world normals using only the scene depth. I am using URP, so there is no normal buffer available. In my shader I have access to the scene depth, so I tried computing the normals like so:


    Code (CSharp):
        float vectorLength = 0.001;
        float2 y = float2(0, vectorLength);
        float2 x = float2(vectorLength, 0);

        float depth1 = SampleDepth(_CameraDepthTexture, sampler_ScreenTextures_linear_clamp, uv + y).r;
        float depth2 = SampleDepth(_CameraDepthTexture, sampler_ScreenTextures_linear_clamp, uv + x).r;

        float3 p1 = float3(y, depth1 - depth);
        float3 p2 = float3(x, depth2 - depth);

        float3 normal = cross(p1, p2);
        normal.z = -normal.z;

        return normalize(normal);
    However, I realized this produces something more like a screen-space normal than a world normal. Does anyone know how I could get the world normal?

    Thanks!
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    The camera depth texture is by definition in screen space. To get a world space normal you need to first reconstruct the world position from the depth texture. If you search for "unity world position from depth texture" you'll find this project or one of many forks others have made.
    https://github.com/keijiro/DepthInverseProjection

    But to get a normal you need to sample the depth at multiple offset positions, which you already know. The method used in keijiro's example to get the world position can't be used for that, as the interpolated ray only works for the current pixel and not the neighbors you actually care about. Really you want to calculate the inverse view projection matrix in a C# script and pass it to the shaders, like this project (which does it for both eyes so it supports VR, but that's probably not something you need):
    https://github.com/chriscummings100.../WorldSpacePostEffect/WorldSpacePostEffect.cs
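
    As a rough sketch of that C# side (the class and property names here are illustrative, not from the linked project; with URP you'd hook RenderPipelineManager.beginCameraRendering instead of OnPreRender):

    Code (CSharp):
        using UnityEngine;

        // Sketch: compute the inverse view-projection matrix each frame and
        // expose it to all shaders as a global. Names are illustrative.
        [RequireComponent(typeof(Camera))]
        public class SetInverseViewProjection : MonoBehaviour
        {
            private Camera _camera;

            void OnEnable()
            {
                _camera = GetComponent<Camera>();
            }

            void OnPreRender()
            {
                // Use the GPU projection matrix so platform differences
                // (flipped Y, reversed Z) are baked in before inverting.
                Matrix4x4 proj = GL.GetGPUProjectionMatrix(_camera.projectionMatrix, false);
                Matrix4x4 viewProj = proj * _camera.worldToCameraMatrix;
                Shader.SetGlobalMatrix("_InvViewProjection", viewProj.inverse);
            }
        }

    A shader can then reconstruct the world position for any pixel by transforming its NDC position (screen UV remapped to -1..1, plus the raw depth) by that matrix and dividing the result by its w component.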

    After that, you'll probably notice really terrible artifacts on the edges of objects. These two articles go into methods of improving the values you get from the normal reconstruction, when possible.
    https://wickedengine.net/2019/09/22/improved-normal-reconstruction-from-depth/
    https://atyuwen.github.io/posts/normal-reconstruction/

    Lastly, be aware that the normals you're going to get are the normals of the actual geometry being rendered. They will not be smooth unless your mesh density is way, way higher than usual.
     
  3. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    314
    Hey, thanks for the reply!

    I should've mentioned this is not for a post-process shader; it's a regular transparent shader. I don't really need super accurate results, and I noticed the normals won't be smooth, but that will be good enough for my use case.

    That being said, I tried using the world position like you suggested.

    Here's the function I used to convert the depth to a world position (pixel depth is the view-space Z):

    Code (CSharp):
        float3 ProjectedWorldPos(float3 worldPos, float sceneDepth, float pixelDepth)
        {
            float3 pos = worldPos - _WorldSpaceCameraPos;
            float depthDiff = sceneDepth / pixelDepth;

            pos.xyz *= depthDiff;
            pos.xyz += _WorldSpaceCameraPos;

            return pos;
        }
    Then I sample this multiple times with offsets, like I previously did:

    Code (CSharp):
        float2 uv = data.screenUV.xy;
        float vectorLength = 0.001;
        float2 y = float2(0, vectorLength);
        float2 x = float2(vectorLength, 0);

        float depth1 = SampleDepth(_CameraDepthTexture, sampler_ScreenTextures_linear_clamp, uv + y).r;
        float depth2 = SampleDepth(_CameraDepthTexture, sampler_ScreenTextures_linear_clamp, uv + x).r;

        float3 worldUV_c = ProjectedWorldPos(data.worldPosition, distortedData.a, data.pixelDepth);
        float3 worldUV_x = ProjectedWorldPos(data.worldPosition + float3(vectorLength, 0, 0), depth2, data.pixelDepth);
        float3 worldUV_z = ProjectedWorldPos(data.worldPosition + float3(0, 0, vectorLength), depth1, data.pixelDepth);

        float3 p1 = worldUV_z - worldUV_c;
        float3 p2 = worldUV_x - worldUV_c;

        float3 normal = normalize(cross(p1, p2));
    It's a bit closer, but still not right... any idea what is wrong?

     
    Last edited: Feb 25, 2021
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Using the interpolated world position of the current fragment will let you get the world position of the depth sampled at that screen position. It won't let you get the world position for other positions. The interpolated world position gives you the direction from the camera to that pixel position, and you don't know the offset needed to get to other pixel positions. That arbitrary vectorLength variable is just that, completely arbitrary, and represents totally different offsets in world space vs UV space. "0.001" is a different distance in world space depending on how far away the position is. And "0.001" in normalized screen space is a different distance in x and y depending on the aspect ratio.
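
    To make the offsets consistent, you can work in pixels instead of an arbitrary UV constant. A sketch (not from any of the linked projects) using the _TexelSize variable Unity provides alongside any declared texture:

    Code (CSharp):
        // _CameraDepthTexture_TexelSize.xy is (1/width, 1/height), so these
        // offsets are exactly one pixel regardless of resolution or aspect ratio.
        float2 texel = _CameraDepthTexture_TexelSize.xy;
        float2 offsetX = float2(texel.x, 0.0); // one pixel right
        float2 offsetY = float2(0.0, texel.y); // one pixel up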

    Ultimately it doesn't matter whether you're doing this as a post process or on an object; you still need to calculate the inverse view projection matrix in a C# script and pass it to the shaders to calculate the world position of pixel positions within the depth texture.
     
  5. Przemyslaw_Zaworski

    Przemyslaw_Zaworski

    Joined:
    Jun 9, 2017
    Posts:
    328
    Maybe this will be helpful: a C# script and post-processing shader which shows proper world-space positions calculated from the depth buffer. Add the script to the Main Camera and assign the shader.

    Code (CSharp):
        using UnityEngine;

        public class DepthBufferToWorldSpace : MonoBehaviour
        {
            public Shader DepthBufferToWorldSpaceShader;

            private Camera _MainCamera;
            private Material _Material;

            void Start()
            {
                _Material = new Material(DepthBufferToWorldSpaceShader);
                _MainCamera = GetComponent<Camera>();
                _MainCamera.depthTextureMode = DepthTextureMode.Depth;
            }

            void OnRenderImage (RenderTexture source, RenderTexture destination)
            {
                Matrix4x4 m = GL.GetGPUProjectionMatrix(_MainCamera.projectionMatrix, false);
                m[2, 3] = m[3, 2] = 0.0f; m[3, 3] = 1.0f;
                Matrix4x4 ProjectionToWorld = Matrix4x4.Inverse(m * _MainCamera.worldToCameraMatrix) * Matrix4x4.TRS(new Vector3(0, 0, -m[2, 2]), Quaternion.identity, Vector3.one);
                _Material.SetMatrix("unity_ProjectionToWorld", ProjectionToWorld);
                Graphics.Blit (source, destination, _Material);
            }
        }
    Code (CSharp):
        Shader "DepthBufferToWorldSpace"
        {
            SubShader
            {
                Pass
                {
                    CGPROGRAM
                    #pragma vertex VSMain
                    #pragma fragment PSMain
                    #pragma target 5.0

                    float4x4 unity_ProjectionToWorld;
                    sampler2D _CameraDepthTexture;

                    void VSMain (inout float4 vertex:POSITION, inout float2 uv:TEXCOORD0, out float3 direction:TEXCOORD1)
                    {
                        vertex = UnityObjectToClipPos(vertex);
                        direction = mul(unity_ProjectionToWorld, float4(vertex.xy, 0.0, 1.0)).xyz - _WorldSpaceCameraPos;
                    }

                    void PSMain (float4 vertex:POSITION, float2 uv:TEXCOORD0, float3 direction:TEXCOORD1, out float4 fragColor:SV_TARGET)
                    {
                        float depth = 1.0 / (_ZBufferParams.z * tex2D(_CameraDepthTexture, uv.xy).r + _ZBufferParams.w);
                        float3 worldspace = direction * depth + _WorldSpaceCameraPos;
                        fragColor = float4(worldspace, 1.0);
                    }

                    ENDCG
                }
            }
        }
     
    Phantom_X and bgolus like this.
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    That C# script is useful for getting a projection-to-world matrix which, if I'm not misunderstanding it, lets you use the normalized screen position as the input. But the shader itself isn't, since it uses the interpolated ray method, which isn't applicable to this use case for the reasons I mentioned above. Unless you've got some tricks for that, @Przemyslaw_Zaworski. I guess it might be plausible with some derivative hackery, but I remember trying that before and couldn't get it to work reliably.
     
  7. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    314
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    unity_CameraInvProjection has always existed, but it never actually matched the inverse of the "real" projection matrix. I've honestly never been entirely sure what it is. As best I can tell it's the perspective matrix, but with totally different near & far planes. However, I realized there was a way to merge the techniques from several of the above shaders to actually get something that works without any script.
    [attached image: upload_2021-2-25_16-46-39.png]
    Code (csharp):
        Shader "Unlit/DepthToWorldNormal"
        {
            SubShader
            {
                Tags { "RenderType"="Transparent" "Queue"="Transparent" }
                LOD 100

                Pass
                {
                    Cull Off
                    ZWrite Off

                    CGPROGRAM
                    #pragma vertex vert
                    #pragma fragment frag

                    #include "UnityCG.cginc"

                    struct appdata
                    {
                        float4 vertex : POSITION;
                    };

                    struct v2f
                    {
                        float4 pos : SV_POSITION;
                    };

                    sampler2D _MainTex;
                    float4 _MainTex_ST;

                    v2f vert (appdata v)
                    {
                        v2f o;
                        o.pos = UnityObjectToClipPos(v.vertex);
                        return o;
                    }

                    Texture2D _CameraDepthTexture;
                    float4 _CameraDepthTexture_TexelSize;

                    float3 rayFromScreenUV(in float2 uv, in float4x4 InvMatrix)
                    {
                        float x = uv.x * 2.0 - 1.0;
                        float y = uv.y * 2.0 - 1.0;
                        float4 position_s = float4(x, y, 1.0, 1.0);
                        return mul(InvMatrix, position_s * _ProjectionParams.z).xyz;
                    }

                    float3 viewSpacePosAtPixelPosition(float2 pos)
                    {
                        float rawDepth = _CameraDepthTexture.Load(int3(pos, 0)).r;
                        float2 uv = pos * _CameraDepthTexture_TexelSize.xy;
                        float3 ray = rayFromScreenUV(uv, unity_CameraInvProjection);
                        return ray * Linear01Depth(rawDepth);
                    }

                    fixed4 frag (v2f i) : SV_Target
                    {
                        float3 vpl = viewSpacePosAtPixelPosition(i.pos.xy + float2(-1, 0));
                        float3 vpr = viewSpacePosAtPixelPosition(i.pos.xy + float2( 1, 0));
                        float3 vpd = viewSpacePosAtPixelPosition(i.pos.xy + float2( 0,-1));
                        float3 vpu = viewSpacePosAtPixelPosition(i.pos.xy + float2( 0, 1));

                        float3 viewNormal = normalize(-cross(vpu - vpd, vpr - vpl));
                        float3 WorldNormal = mul((float3x3)unity_MatrixInvV, viewNormal);

                        // if needed, this will detect the sky
                        // float rawDepth = _CameraDepthTexture.Load(int3(i.pos.xy, 0)).r;
                        // if (rawDepth == 0.0)
                        //     WorldNormal = float3(0,0,0);

                        return WorldNormal.xyzz;
                    }
                    ENDCG
                }
            }
        }
    Basically I'm computing a ray direction like Keijiro's & @Przemyslaw_Zaworski's examples, but for each sample position rather than in the vertex shader.

    I can't guarantee it'll work on OpenGL platforms, but it should work everywhere else, including with the built-in rendering paths!
     
  9. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    314
    It's quite a bit more calculation than what I expected from the start, but I must say it works quite well, and not having to rely on a script is a big plus for me!

    Big thanks for the help!

    Btw, you say it might not work on OpenGL; is it because of the _CameraDepthTexture.Load?
     
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Minor improvement over the previous. This implements the "improved normal reconstruction" from one of the links above, and adds some comments. It should also work with OpenGL.
    Code (CSharp):
        Shader "WorldNormalFromDepthTexture"
        {
            SubShader
            {
                Tags { "RenderType"="Transparent" "Queue"="Transparent" }
                LOD 100

                Pass
                {
                    Cull Off
                    ZWrite Off

                    CGPROGRAM
                    #pragma vertex vert
                    #pragma fragment frag

                    #include "UnityCG.cginc"

                    struct appdata
                    {
                        float4 vertex : POSITION;
                    };

                    struct v2f
                    {
                        float4 pos : SV_POSITION;
                    };

                    v2f vert (appdata v)
                    {
                        v2f o;
                        o.pos = UnityObjectToClipPos(v.vertex);
                        return o;
                    }

                    UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
                    float4 _CameraDepthTexture_TexelSize;

                    // inspired by keijiro's depth inverse projection
                    // https://github.com/keijiro/DepthInverseProjection
                    // constructs view space ray at the far clip plane from the vpos
                    // then multiplies that ray by the linear 01 depth
                    float3 viewSpacePosAtPixelPosition(float2 vpos)
                    {
                        float2 uv = vpos * _CameraDepthTexture_TexelSize.xy;
                        float3 viewSpaceRay = mul(unity_CameraInvProjection, float4(uv * 2.0 - 1.0, 1.0, 1.0) * _ProjectionParams.z).xyz;
                        float rawDepth = SAMPLE_DEPTH_TEXTURE_LOD(_CameraDepthTexture, float4(uv, 0.0, 0.0));
                        return viewSpaceRay * Linear01Depth(rawDepth);
                    }

                    // inspired by János Turánszki's improved normal reconstruction technique
                    // https://wickedengine.net/2019/09/22/improved-normal-reconstruction-from-depth/
                    // this is a minor optimization over the original, using only 2 comparisons instead of 8
                    // at the cost of two additional vector subtractions
                    half4 frag (v2f i) : SV_Target
                    {
                        // get current pixel's view space position
                        half3 viewSpacePos_c = viewSpacePosAtPixelPosition(i.pos.xy + float2( 0.0, 0.0));

                        // if depth is at the far plane, then assume skybox
                        // if (abs(viewSpacePos_c.z) >= _ProjectionParams.z)
                        //     return 0;

                        // get view space position at 1 pixel offsets in each major direction
                        half3 viewSpacePos_l = viewSpacePosAtPixelPosition(i.pos.xy + float2(-1.0, 0.0));
                        half3 viewSpacePos_r = viewSpacePosAtPixelPosition(i.pos.xy + float2( 1.0, 0.0));
                        half3 viewSpacePos_d = viewSpacePosAtPixelPosition(i.pos.xy + float2( 0.0,-1.0));
                        half3 viewSpacePos_u = viewSpacePosAtPixelPosition(i.pos.xy + float2( 0.0, 1.0));

                        // get the difference between the current and each offset position
                        half3 l = viewSpacePos_c - viewSpacePos_l;
                        half3 r = viewSpacePos_r - viewSpacePos_c;
                        half3 d = viewSpacePos_c - viewSpacePos_d;
                        half3 u = viewSpacePos_u - viewSpacePos_c;

                        // pick the horizontal and vertical diff with the smallest z difference
                        half3 h = abs(l.z) < abs(r.z) ? l : r;
                        half3 v = abs(d.z) < abs(u.z) ? d : u;

                        // get view space normal from the cross product of the two smallest offsets
                        half3 viewNormal = normalize(cross(h, v));

                        // transform normal from view space to world space
                        half3 WorldNormal = mul((float3x3)unity_MatrixInvV, viewNormal);

                        // visualize normal (assumes you're using linear space rendering)
                        return half4(GammaToLinearSpace(WorldNormal.xyz * 0.5 + 0.5), 1.0);
                    }
                    ENDCG
                }
            }
        }
     
  12. Beauque

    Beauque

    Joined:
    Mar 7, 2017
    Posts:
    61
    Can this be reproduced using only Shader Graph or ASE?
    I am working on a custom decal shader, and I guess this would be the way to override the projector mesh's vertex normals.
     
  13. PanoTron

    PanoTron

    Joined:
    Oct 21, 2015
    Posts:
    5
    Any luck? I am also making a custom decal Shader Graph, as the built-in URP decal projector is lacking in performance and features.
     
  14. TheCelt

    TheCelt

    Joined:
    Feb 27, 2013
    Posts:
    742
    Sorry for reviving an old post, but it seems Unity has a function called SampleSceneNormals, though it doesn't seem to do anything in Shader Graph.

    I tried with a custom function node:
    [attached image: upload_2022-12-9_3-34-33.png]


    But apparently it's an undeclared identifier, even though it does exist, so it's a bit confusing.

    They also have this node:

    [attached image: upload_2022-12-9_3-38-36.png]

    But this just gives a fully black output when I link it to the base color... None of this is documented anywhere, but it seems like there is some built-in scene normal texture that doesn't require complex calculations on the depth texture. I cannot figure it out at all; it's frustrating!
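
    (One likely cause of the "undeclared identifier" error is that nothing includes the URP file declaring SampleSceneNormals. A sketch of a Custom Function (File) body, assuming URP and using an illustrative function name:)

    Code (CSharp):
        // SampleSceneNormals is declared in this URP include; without it the
        // compiler reports an undeclared identifier.
        #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareNormalsTexture.hlsl"

        void GetSceneNormal_float(float2 ScreenUV, out float3 Normal)
        {
            // Reads _CameraNormalsTexture; this comes back black unless the
            // renderer is actually generating that texture (see the reply below).
            Normal = SampleSceneNormals(ScreenUV);
        }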
     
  15. wwWwwwW1

    wwWwwwW1

    Joined:
    Oct 31, 2021
    Posts:
    769
    Hi, it seems that the normals texture does not exist by default.

    You can try enabling the depth normals texture by adding SSAO, or an empty renderer feature that requests it (sketched below), to the renderer feature list.
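
    A sketch of such an empty renderer feature (the class and pass names here are mine; assumes URP 10+, where ScriptableRenderPass.ConfigureInput exists):

    Code (CSharp):
        using UnityEngine.Rendering;
        using UnityEngine.Rendering.Universal;

        // "Empty" feature: its only job is to request the Normal input so URP
        // runs the DepthNormals prepass and _CameraNormalsTexture is generated.
        public class RequestNormalsTextureFeature : ScriptableRendererFeature
        {
            class RequestNormalsPass : ScriptableRenderPass
            {
                public RequestNormalsPass()
                {
                    renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
                    // Asking for the Normal input is what triggers the prepass.
                    ConfigureInput(ScriptableRenderPassInput.Normal);
                }

                public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
                {
                    // Intentionally empty; the input request above does the work.
                }
            }

            RequestNormalsPass _pass;

            public override void Create()
            {
                _pass = new RequestNormalsPass();
            }

            public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
            {
                renderer.EnqueuePass(_pass);
            }
        }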