
Struggling to separate camera from depth texture

Discussion in 'Shaders' started by PixelizedPlayer, Feb 6, 2019.

  1. PixelizedPlayer


    Feb 27, 2013

I've been trying to separate my camera's world position from having an impact on my depth effect, but I'm very confused about how to do this. I opted to compare the fragment distances between my object and the depth texture in world space, but it makes no difference how deep the object is below the plane.

    This is what i am currently using for my fragment shader:

    Code (csharp):
    v2f vert (appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex); //local to clip
        o.screenPos = ComputeScreenPos(o.pos); // clip to screen pos
        o.worldSpacePos = mul(unity_ObjectToWorld, o.pos); //world position of vertex
        return o;
    }
    half4 frag(v2f i) : SV_Target {
        //sample depth texture
        float depth = tex2Dproj(_CameraDepthTexture, i.screenPos).r;
        // from perspective to linear distribution
        depth = LinearEyeDepth(depth);
        //far clip minus near clip
        float cameraRange = _ProjectionParams.z - _ProjectionParams.y;
        //depth texture to world height
        float textureWorldDepth = depth * cameraRange;
        // how deep is the depth texture fragment relative to this object's fragment
        float worldDepth = i.worldSpacePos.y - textureWorldDepth;
        // limit opacity down to x units [todo: move to property]
        float maxDepthTransluency = 10;
        // gradient colour for depth from 0 to max, e.g. (10 units)
        depth = clamp(worldDepth / maxDepthTransluency, 0, 1);
        return float4(depth, depth, depth, .5);
    }

    Here you can see it in action. The colour should change the deeper it goes, up to a 10 unit difference, but nothing happens:
  2. bgolus


    Dec 7, 2012
    Okay ... let's look over what you're doing.

    So far so good. You now have the linear depth. Next all you need is to ...

    Wait, no, that's not ... oh no. The linear depth is already in world space units, there's no need to scale it. Easy thing to fix, just don't do that. But, what's this comment about world height? Height isn't a thing here yet ...

    Ah ha. I think I understand what you're trying to do, and where the misunderstanding is coming from.

    Let’s step back and talk about what the depth texture is. I suspect you're thinking about it as a world space height value, something like depth in the sense of some world space water plane.

    It is not.

    The depth texture is a representation of the screen space depth of each pixel, depth being the distance along the camera's forward view axis. In other words, it is intrinsically linked to the camera; there's no separating it. The depth texture itself stores the depth value in a 1.0 to 0.0 range that is non-linear, for various reasons related to perspective projection matrices that aren't important right now. The LinearEyeDepth function converts that non-linear 1.0 to 0.0 value into world space units. That is to say, if you put a screen facing quad as a child of an unscaled camera game object, the linear depth value for that quad across its entire surface will be the same as the z value you see on the quad's transform. This is different than distance, by the way; see the below image representing the difference between depth and distance for a point in the camera's view:
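    To make that depth vs. distance distinction concrete, here's a minimal fragment shader sketch (a sketch only; `i.worldPos` is an assumed world space position passed from the vertex shader, not something from the thread's code):

    ```hlsl
    half4 frag (v2f i) : SV_Target {
        // Depth: distance along the camera's forward axis only.
        // In-shader view space z is negative in front of the camera, hence the negation.
        float3 viewPos = mul(UNITY_MATRIX_V, float4(i.worldPos, 1.0)).xyz;
        float eyeDepth = -viewPos.z;

        // Distance: straight-line distance from the camera to the fragment.
        float dist = length(i.worldPos - _WorldSpaceCameraPos);

        // eyeDepth == dist only for a point directly on the view axis;
        // everywhere else dist > eyeDepth.
        return half4(eyeDepth.xxx / 100.0, 1.0);
    }
    ```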

    The most common way the depth texture is used is to test it against the current mesh's depth at that pixel. For that you pass the linear view depth (or eye depth, as it's called in Unity's shader functions) of the vertices to the fragment shader and compare it against the linear depth you get from the depth texture. You can look at the built-in particle shaders; here's someone who did a quick breakdown of that:

    You don't need to wrap your stuff in the #ifdef SOFT_PARTICLES_ON; that's just something controlled by quality settings.
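    The core of that depth-fade pattern looks roughly like this (a sketch, not the exact built-in particle shader code; `_FadeDistance` is an assumed material property):

    ```hlsl
    v2f vert (appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.screenPos = ComputeScreenPos(o.pos);
        // Store this vertex's linear eye depth in the unused screenPos.z.
        COMPUTE_EYEDEPTH(o.screenPos.z);
        return o;
    }

    half4 frag (v2f i) : SV_Target {
        // Linear eye depth of whatever the depth texture saw at this pixel.
        float sceneDepth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, i.screenPos).r);
        // Fade out as this surface gets close to the scene geometry behind it.
        float fade = saturate((sceneDepth - i.screenPos.z) / _FadeDistance);
        return half4(1, 1, 1, fade);
    }
    ```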
  3. PixelizedPlayer


    Feb 27, 2013

    Thank you for explaining rather than just giving code; I much prefer that. I was not aware it was already in world space, and wasn't really familiar with what eye space meant. So that means world coordinates relative to the camera, basically?

    This is what i have now:

    Code (CSharp):
    v2f vert (appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex); //local to clip
        o.screenPos = ComputeScreenPos(o.pos); // clip to screen pos
        COMPUTE_EYEDEPTH(o.screenPos.z);
        return o;
    }
    half4 frag(v2f i) : SV_Target {
        float depth = tex2Dproj(_CameraDepthTexture, i.screenPos).r; //sample depth texture
        depth = LinearEyeDepth(depth); // from perspective to linear distribution
        float fade = saturate(_FadeFactor * (depth - i.screenPos.z) + _MinimumFade);
        return (float4(1, 1, 1, 1 * fade) * _Colour);
    }
    Don't know if this is how water fade is usually done, but it works now :) I don't suppose you know of any resources on making water visuals for shaders, at least theory-wise? I can only find theory on vertex functions for waves, but not on the visuals.

    Thanks for the help!
  4. bgolus


    Dec 7, 2012
    World space scale, but not world coordinates. It's relative to the camera's position and orientation: a view space position of (0,0,0) is the camera's position, and (0,0,-1) is one unit in front of the camera along its forward vector, regardless of what position or orientation the camera has. (The in-shader view space z is inverted vs. Unity's coordinate system.)
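    A minimal sketch of that relationship (assuming some world space position `worldPos`; not code from the thread's shaders):

    ```hlsl
    // World space to view space: rotate and translate into the camera's frame.
    float3 viewPos = mul(UNITY_MATRIX_V, float4(worldPos, 1.0)).xyz;

    // A point one unit straight ahead of the camera lands at (0, 0, -1),
    // no matter where the camera sits or how it's rotated. Linear eye depth
    // is therefore the negated view space z:
    float eyeDepth = -viewPos.z;
    ```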

    Yep, that works.