# Optimization: reconstructing distance along ray from depth

Discussion in 'Shaders' started by FlaflaTwo, Feb 17, 2016.

1. ### FlaflaTwo

Joined:
Apr 5, 2014
Posts:
15
Hi everyone. I am currently working on a distance field raymarching shader (see this shadertoy for an introduction). I want the raymarched objects to be able to access Unity's depth buffer (so that they can be occluded by traditional mesh objects). Here is how I do this:

Code (csharp):
```
// Convert from depth buffer to true distance from camera
// ro = ray origin (camera pos), duv = depth buffer uv (flipped y)
float depth = tex2D(_CameraDepthTexture, duv).r;
float4 projPos = float4(duv.x * 2 - 1, duv.y * 2 - 1, depth * 2 - 1, 1.0f);
float4 posWorld = mul(_CameraClipToWorld, projPos); // clip -> world space
posWorld /= posWorld.w; // perspective divide
depth = length(posWorld.xyz - ro);

// Image Effect Script:
EffectMaterial.SetMatrix("_CameraClipToWorld", (CurrentCamera.projectionMatrix * CurrentCamera.worldToCameraMatrix).inverse);
```
This works, but I feel there are many unnecessary calculations going on here. Do I really need to reconstruct the absolute world-space position from the depth buffer (a matrix multiply per pixel), then run a square-root calculation to find the "true depth"? I am having trouble coming up with a more performant solution. Any help would be appreciated.
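The reconstruction above can be sanity-checked numerically. Here's a minimal Python/NumPy sketch of the same steps (project a point, unproject it through the inverse matrix, take the distance to the ray origin); the projection matrix, camera setup, and all names here are illustrative, not Unity's:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    # OpenGL-style perspective projection (clip z in [-1, 1]),
    # matching the depth * 2 - 1 remap in the shader above.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

# Camera at the origin looking down -z (view matrix = identity for simplicity),
# so the inverse view-projection is just the inverse projection.
proj = perspective(60.0, 16 / 9, 0.1, 100.0)
clip_to_world = np.linalg.inv(proj)

# Project a world-space point, as the rasterizer would, to get its NDC values.
p_world = np.array([1.0, 0.5, -10.0, 1.0])
clip = proj @ p_world
ndc = clip / clip[3]  # x, y, z each in [-1, 1]; z is what the depth buffer encodes

# Reconstruct: unproject NDC back to world space, divide by w, measure distance.
reconstructed = clip_to_world @ np.array([ndc[0], ndc[1], ndc[2], 1.0])
reconstructed /= reconstructed[3]
ro = np.zeros(3)  # ray origin = camera position
dist = np.linalg.norm(reconstructed[:3] - ro)
print(dist)  # matches the true distance of p_world from the camera
```

This confirms the round trip is exact; the question in the thread is whether the per-pixel matrix multiply can be avoided.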

2. ### bgolus

Joined:
Dec 7, 2012
Posts:
7,894
Code (csharp):
```
#include "UnityCG.cginc"
...
```

3. ### FlaflaTwo

Joined:
Apr 5, 2014
Posts:
15
This doesn't work correctly. LinearEyeDepth returns the z position from the depth buffer in view space (that is, every point on a plane perpendicular to the camera's forward axis gets the same value from LinearEyeDepth, even though perspective makes the "true" distance along a view ray longer toward the edges). I am trying to find the linear distance along the view ray, which is different.

EDIT: Here is a (professionally made) diagram:
- r is what I want
- d is what the depth buffer returns through LinearEyeDepth
- black lines are the camera frustum

Last edited: Feb 17, 2016
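The distinction between d and r in the diagram can be shown numerically. A minimal Python sketch (the points and camera convention are made up for illustration; camera at the origin looking down -z):

```python
import numpy as np

# Two view-space points on the same plane z = -10.
center = np.array([0.0, 0.0, -10.0])
edge   = np.array([4.0, 3.0, -10.0])

eye_depth = lambda p: -p[2]   # "d": what a linearized depth buffer encodes
ray_dist  = np.linalg.norm    # "r": distance along the view ray

print(eye_depth(center), ray_dist(center))  # equal at the center of the view
print(eye_depth(edge), ray_dist(edge))      # d stays flat, r grows toward the edges
```

Both points have the same eye depth (10), but the off-center point is farther along its view ray (sqrt(125) ≈ 11.18), which is exactly the discrepancy the diagram shows.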
4. ### bgolus

Joined:
Dec 7, 2012
Posts:
7,894
For non-image effects you can get the real distance by passing the view-space viewDir from the vertex shader and doing length(viewDir / viewDir.z) * depth. For image effects I'm not sure there's a valid view matrix to pull from; I don't know whether the geometry constructed to render to the screen is built in world, view, or clip space.

Code (csharp):
```
// Vertex shader:
o.viewDir = normalize(mul(UNITY_MATRIX_MV, v.vertex).xyz); // get normalized view dir
o.viewDir /= o.viewDir.z; // rescale vector so z is 1.0

// Fragment shader:
float depth = LinearEyeDepth(tex2D(_CameraDepthTexture, duv).r); // get linear depth
depth *= length(i.viewDir); // scale by the interpolated vector's length
```

If the geometry isn't in clip space this should work, otherwise you'll get junk.

Doing the normalize and divide in the vertex shader should be fine in this case. Usually you don't want to normalize in the vertex shader, because the interpolated value in the fragment shader won't be correct and will almost never be unit length, so you usually need to normalize again anyway. In this case the points you're interpolating between are on the view plane and you want the unnormalized interpolated value anyway.
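The formula length(viewDir / viewDir.z) * depth can be checked against a direct distance measurement. A minimal Python sketch (the view-space point is made up for illustration; camera looks down -z):

```python
import numpy as np

p = np.array([4.0, 3.0, -10.0])     # a view-space position
view_dir = p / p[2]                  # rescale so z == 1 (the sign cancels in the length)
d = -p[2]                            # linear eye depth of that point
r = d * np.linalg.norm(view_dir)     # bgolus's formula: distance along the view ray
print(r, np.linalg.norm(p))          # both equal the true euclidean distance
```

Because view_dir has z == 1, scaling it by the depth lands exactly on the point, so its length times the depth is the true ray distance.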

5. ### FlaflaTwo

Joined:
Apr 5, 2014
Posts:
15
Thanks, your technique works! It needed a few alterations for my specific shader (I actually pass the view ray to the shader via each vertex's z coordinate, using a custom blit operation in the image effect script, instead of calculating the ray directly in the vertex shader). In any case, I was able to perform the same transformation I described in the original post with an obvious performance increase.

Can you elaborate on why this works though? I am struggling to put the whole picture together conceptually.

6. ### bgolus

Joined:
Dec 7, 2012
Posts:
7,894
I actually thought of an optimization. The normalize is completely unnecessary, you only need the divide by z.

Your "professionally made" diagram is actually a good basis for explaining what's going on. Imagine drawing a line on that diagram straight forward from the center. At the center, the depth and the distance are the same; the vector would be (0, 0, depth). At the point your r is pointing to, the z value is still the depth; you just need to know the proper x and y for your vector. If you take the view dir vector and scale it so its z equals the depth, you have your position. The easiest way to do that is to first scale the vector so z is one, which is trivially done by dividing the whole vector by its z value. Then either multiply by the depth and take the length, or, as in my example, take the length of that z == 1 vector and multiply it by the depth.

The above code could be changed to something like this to make it more understandable:

Code (csharp):
```
float3 viewPos = mul(UNITY_MATRIX_MV, v.vertex).xyz; // view space position, the unnormalized view direction
float3 viewDir = viewPos / viewPos.z; // view dir rescaled so z == 1
float3 viewDepth = viewDir * depth; // view vector with z == depth
float dist = length(viewDepth); // distance along the view ray
```
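Those four steps can be mirrored numerically to see that the result equals the true distance to the point. A minimal Python sketch (the view-space position is an arbitrary example):

```python
import numpy as np

view_pos = np.array([2.0, -1.0, -8.0])  # view space position, the unnormalized view direction
view_dir = view_pos / view_pos[2]       # rescaled so z == 1
depth = -view_pos[2]                    # linear eye depth of that point
view_depth = view_dir * depth           # vector scaled so z == depth (up to sign convention)
dist = np.linalg.norm(view_depth)
print(dist, np.linalg.norm(view_pos))   # identical: the scaled vector lands on the point
```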