Hi everyone. I am currently working on a distance field raymarching shader (see this shadertoy for an introduction). I want the raymarched objects to be able to read Unity's depth buffer, so that they can be occluded by traditional mesh objects. Here is how I do this:

Code (csharp):

// Fragment shader:
// Convert from depth buffer value to true distance from camera.
// ro = ray origin (camera position), duv = depth buffer uv (flipped y)
float depth = tex2D(_CameraDepthTexture, duv).r;
float4 projPos = float4(duv.x * 2 - 1, duv.y * 2 - 1, depth * 2 - 1, 1.0f);
float4 posvs = mul(_CameraClipToWorld, projPos);
posvs /= posvs.w;
depth = length(posvs.xyz - ro);

// Image effect script:
EffectMaterial.SetMatrix("_CameraClipToWorld",
    (CurrentCamera.projectionMatrix * CurrentCamera.worldToCameraMatrix).inverse);

This works, but I suspect it does more work than necessary. Do I really need to reconstruct the absolute world-space position from the depth buffer (a full matrix multiply per pixel) and then run a square-root calculation just to recover the "true depth"? I am having trouble coming up with a more performant solution. Any help would be appreciated.
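For reference, here is a CPU-side sketch of the reconstruction the fragment shader performs, so the math is easy to check outside the shader. It assumes an OpenGL-style projection (clip-space z in [-1, 1], matching the `depth * 2 - 1` remap above) and puts the camera at the origin so that `_CameraClipToWorld` reduces to the inverse projection matrix; the names and the sample point are illustrative, not from my actual project.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # OpenGL-style projection matrix (clip z maps to [-1, 1] after the
    # perspective divide), matching the depth * 2 - 1 remap in the shader.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

proj = perspective(60, 16 / 9, 0.3, 1000.0)
ro = np.zeros(3)  # camera at origin looking down -z, so world space == view space

# Forward-project a known point to see what lands in the depth buffer.
p_world = np.array([2.0, -1.0, -25.0])
clip = proj @ np.append(p_world, 1.0)
ndc = clip[:3] / clip[3]          # x, y, z each in [-1, 1]
duv = (ndc[:2] + 1) / 2           # depth-buffer uv
depth = (ndc[2] + 1) / 2          # value stored in _CameraDepthTexture

# The shader's reconstruction: uv/depth -> NDC -> inverse matrix -> distance.
proj_pos = np.array([duv[0] * 2 - 1, duv[1] * 2 - 1, depth * 2 - 1, 1.0])
pos = np.linalg.inv(proj) @ proj_pos
pos = pos[:3] / pos[3]
dist = np.linalg.norm(pos - ro)

print(dist)  # equals the true distance to p_world
```

Running this recovers the original point's distance from the camera exactly, which confirms the reconstruction is correct; my question is only whether the per-pixel matrix multiply it requires can be avoided.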