(Using Unity 2020.3.12f1) Hi everyone, I am trying to reconstruct the world space position from the depth normals texture for custom lights. Most of the information I've found is for image effect shaders, but I would like to achieve this in a per-object fashion. From what I understand, reconstructing the world space position involves the following steps:

1. Retrieve depth from the depth normals texture and remap it from (0, 1) to (-1, 1).
2. Create a clip space position with X and Y set to the screen position (remapped to (-1, 1)), Z set to the remapped depth, and W set to 1.
3. Calculate the inverse view projection matrix in a separate C# script and set it as a global shader property for access in the shader.
4. Multiply the inverse VP matrix with the clip space position to get the world space position.
5. Divide the world space position's XYZ by its W value to compensate for perspective.

This seems to be the same process used in this tutorial, although in SRP rather than the built-in renderer I'm currently using. However, my current implementation of the above doesn't seem to be working properly.

In camera script:

Code (CSharp):
    void Start()
    {
        cam = GetComponent<Camera>();
        cam.depthTextureMode = DepthTextureMode.DepthNormals;
    }

    private void Update()
    {
        // Code for MVP matrices:
        // https://answers.unity.com/questions/12713/how-do-i-reproduce-the-mvp-matrix.html
        bool d3d = SystemInfo.graphicsDeviceVersion.IndexOf("Direct3D") > -1;
        Matrix4x4 V = cam.worldToCameraMatrix;
        Matrix4x4 P = cam.projectionMatrix;
        if (d3d)
        {
            // Invert Y for rendering to a render texture
            for (int i = 0; i < 4; i++)
            {
                P[1,i] = -P[1,i];
            }
            // Scale and bias from OpenGL -> D3D depth range
            for (int i = 0; i < 4; i++)
            {
                P[2,i] = P[2,i] * 0.5f + P[3,i] * 0.5f;
            }
        }
        Matrix4x4 VP = P * V;
        Matrix4x4 VP_I = VP.inverse;
        Shader.SetGlobalMatrix("VP_I", VP_I);
    }

In shader:

Code (CSharp):
    // Pass settings
    Cull Front
    ZTest Always
    ZWrite Off

    fixed4 frag (v2f i) : SV_Target
    {
        // Other code here

        // Retrieve depth from the depth normals texture
        float2 scrUV = i.screenPos.xy / i.screenPos.w;
        float4 depthnormal = tex2D(_CameraDepthNormalsTexture, scrUV);
        float depth;
        float3 normal;
        DecodeDepthNormal(depthnormal, depth, normal);

        // Remap depth to (-1, 1)
        depth = depth * 2.0 - 1.0;

        // Clip space position
        float4 posCS = float4(scrUV * 2.0 - 1.0, depth, 1.0);

        // Calculate world space position using inverse VP matrix
        float4 posWS = mul(VP_I, posCS);

        // Compensate for perspective
        posWS.xyz /= posWS.w;

        // ... lighting calculations using posWS, then return ...
    }

When I apply this shader to a basic sphere, this is the result I get:

Before: [screenshot]
After: [screenshot]

I'm not too sure what I can do to fix this problem, as the depth, clip space position, and matrices all seem to be correct. Thank you in advance.
I know basically every example out there says you need to use a script to pass the inverse view projection matrix to your shader to reconstruct the world position from the depth buffer... but you don't. Not even for post processing. Unity already passes that to the shader. You also don't need the inverse view projection matrix at all unless it's for a post process. For an object in the world, all you need is the world position of the surface. In the example below I use the camera relative world position, which can fix some possible precision issues when you're far from the world origin, but it's fine to pass the world position and subtract the camera position in the fragment shader if you prefer.

Code (CSharp):
Shader "Unlit/WorldPosFromDepth"
{
    Properties { }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" "IgnoreProjector"="True" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 projPos : TEXCOORD0;
                float3 camRelativeWorldPos : TEXCOORD1;
            };

            UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.projPos = ComputeScreenPos(o.pos);
                o.camRelativeWorldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)).xyz - _WorldSpaceCameraPos;
                return o;
            }

            bool depthIsNotSky(float depth)
            {
            #if defined(UNITY_REVERSED_Z)
                return (depth > 0.0);
            #else
                return (depth < 1.0);
            #endif
            }

            half4 frag (v2f i) : SV_Target
            {
                float2 screenUV = i.projPos.xy / i.projPos.w;

                // sample raw depth from the depth texture
                float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV);

                // get linear eye depth from the raw depth
                float sceneZ = LinearEyeDepth(depth);

                // calculate the view plane vector
                // note: Something like normalize(i.camRelativeWorldPos.xyz) is what you'll
                // see other examples do, but that is wrong! You need a vector that is at a
                // 1 unit view depth, not a 1 unit magnitude.
                float3 viewPlane = i.camRelativeWorldPos.xyz / dot(i.camRelativeWorldPos.xyz, unity_WorldToCamera._m20_m21_m22);

                // calculate the world position
                // multiply the view plane by the linear depth to get the camera relative world space position
                // add the world space camera position to get the world space position from the depth texture
                float3 worldPos = viewPlane * sceneZ + _WorldSpaceCameraPos;

                half4 col = 0;

                // draw a grid where it's not the sky
                if (depthIsNotSky(depth))
                    col.rgb = saturate(2.0 - abs(frac(worldPos) * 2.0 - 1.0) * 100.0);

                return col;
            }
            ENDCG
        }
    }
}

You'll also notice I'm using the _CameraDepthTexture here, and not the _CameraDepthNormalsTexture. You can use the camera depth normals texture if you want, but its depth information is much, much lower precision, as it stores the depth value as a 16 bit integer packed into two 8 bit color channels. The _CameraDepthTexture is a 32 bit float. So most likely you'll have to sample both textures to do your lighting.
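One practical note on top of that post: the shader above samples _CameraDepthTexture, while the camera script in the question only requests DepthNormals. If nothing else in the scene (e.g. a shadow-casting directional light) already forces Unity to render a depth texture, you'd need to request it yourself. A minimal sketch of that, assuming a script attached to the camera (the class name here is just for illustration):

Code (CSharp):
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class EnableDepthTexture : MonoBehaviour // hypothetical name
{
    void Start()
    {
        // OR the flag in rather than assigning, so a DepthNormals request
        // (or anything else already set) keeps working alongside the
        // _CameraDepthTexture this enables.
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }
}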
What a much simpler method! Thank you for the help; hopefully this will work perfectly. Just for future reference, and out of curiosity: is there any way to properly pass an inverse VP matrix to the shader? From my debugging, the built-in Unity variable and the inverse matrix I calculated seem to be different. Was it simply a mistake on my part?
To calculate the inverse matrix in C#, you want to use the GL.GetGPUProjectionMatrix() function.
https://docs.unity3d.com/ScriptReference/GL.GetGPUProjectionMatrix.html

Code (csharp):
// the second argument should be true if the camera is rendering into a render texture
Matrix4x4 VP = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false) * cam.worldToCameraMatrix;
Matrix4x4 VP_I = VP.inverse;

However, you kind of don't want or need to do this. Unity already passes the inverse projection matrix and inverse view matrix to the shader. This example shader calculates the surface normal from the depth buffer, and it calculates the view position using the inverse projection matrix as part of that:
https://gist.github.com/bgolus/a07ed65602c009d5e2f753826e8078a0
That won't work for orthographic cameras, but Unity's own screen space shadow shader does handle that!
https://github.com/TwoTailsGames/Un...sExtra/Internal-ScreenSpaceShadows.shader#L63
The key detail is that the unity_CameraInvProjection matrix both my example and Unity's code use is camera.projectionMatrix.inverse passed in as-is, without going through GetGPUProjectionMatrix. That might seem crazy, but it means the matrix is exactly the same no matter what hardware you're running on, which actually makes it a bit easier to use when you're trying to reconstruct from the screen UV.
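To make that concrete, the reconstruction in the linked gist boils down to something like the following (a condensed sketch, paraphrased rather than copied; perspective cameras only, and it assumes _CameraDepthTexture is declared with the UnityCG.cginc macros as in the earlier shader):

Code (csharp):
// Reconstruct the view space position for a screen UV using unity_CameraInvProjection.
// Because that matrix is the plain camera.projectionMatrix.inverse (OpenGL convention),
// the clip space point below is built the same way on every platform.
float3 viewSpacePosAtScreenUV(float2 uv)
{
    // a ray from the camera through this pixel, ending on the far plane
    // (_ProjectionParams.z is the far clip distance)
    float3 viewSpaceRay = mul(unity_CameraInvProjection, float4(uv * 2.0 - 1.0, 1.0, 1.0) * _ProjectionParams.z).xyz;

    // scale the far plane ray by the 0..1 linear depth to land on the surface
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    return viewSpaceRay * Linear01Depth(rawDepth);
}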
@bgolus, thanks for showing this method. Could you clarify what a "1 unit view depth" is? I'm having trouble understanding how it differs from a vector with a magnitude of 1. Also, what is stored in unity_WorldToCamera._m20_m21_m22?
As I'm assuming you understand, a normalized vector has a magnitude of 1. Lots of things in graphics programming / shaders rely on normalized vectors. The viewPlane isn't a normalized vector, though. It's the offset from the camera to a flat plane that is 1 unit in front of the view. An easier to understand version might be something like this:

Code (csharp):
// transform the world space view direction (camRelativeWorldPos) from world space to view space
float3 viewSpaceViewDir = mul((float3x3)unity_WorldToCamera, i.camRelativeWorldPos.xyz);

// divide the view space view dir by its z so that it represents the offset from the
// camera to a plane that is 1 unit in front of the camera view
float3 viewSpaceViewPlane = viewSpaceViewDir / abs(viewSpaceViewDir.z);

// transform the view plane back into world space
float3 viewPlane = mul((float3x3)unity_CameraToWorld, viewSpaceViewPlane);

The bit of code in the example shader is an optimized version of all that. unity_WorldToCamera._m20_m21_m22 is the camera's forward vector in world space, and the dot product of an arbitrary vector with a normalized vector gets you the length of the arbitrary vector along the normalized vector.
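A small numeric example may help make the distinction concrete (made-up values):

Code (csharp):
// camera forward in world space:             f = (0, 0, 1)
// camera relative world pos of the surface:  v = (3, 0, 4)

// view depth of v:  dot(v, f) = 4    (how far "in front of" the camera v is)
// magnitude of v:   length(v) = 5    (not the same thing)

// viewPlane    = v / dot(v, f) = (0.75, 0, 1)   -> view depth of exactly 1
// normalize(v) = v / length(v) = (0.6, 0, 0.8)  -> magnitude 1, but view depth only 0.8

// so with a linear eye depth (sceneZ) of, say, 8:
// viewPlane * 8    = (6, 0, 8)     -> view depth 8, exactly on the surface the depth buffer saw
// normalize(v) * 8 = (4.8, 0, 6.4) -> view depth 6.4, the wrong position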