
Question Depth Comparison in Fragment Shader

Discussion in 'General Graphics' started by DanWeston, Aug 18, 2023.

  1. DanWeston

    DanWeston

    Joined:
    Sep 9, 2014
    Posts:
    32
I am experimenting with a rendering process to cut holes out of geometry using a Depth Texture generated from other geometry on a specific layer.

    The generated Depth Texture is sampled and compared against the depth of the fragment being rendered, and the fragment is clipped if it isn't occluded. This works well at certain camera angles:
    depthmask.png

    However, at certain distances and angles the depth test doesn't produce accurate results:
    broken_depth_01.png
    broken_depth_02.png

    I considered that this might be an issue with using Linear Eye Depth values alone, so I experimented with Linear01 depth. Comparing those values led to no clipping in the Scene view and broken clipping in the Game view.
    broken_depth_03.png

    I am sampling my Depth Texture in the same manner as DeclareDepthTexture.hlsl, and acquiring the fragment depth via the position calculated with TransformWorldToView in my vert function.
    Code (CSharp):
    //Vert
    ...
    o.positionVS = TransformWorldToView(TransformObjectToWorld(v.vertex.xyz));
    ...
    //Vert end

    //Frag
    const float2 uv = i.positionCS.xy / _ScaledScreenParams.xy;
    const float customDepth = SAMPLE_TEXTURE2D_X(_CustomDepthTexture,
                        sampler_CustomDepthTexture,
                        UnityStereoTransformScreenSpaceTex(uv)).r;

    // Get the non-normalised eye depth
    const float linearFragDepth = -i.positionVS.z;

    // Convert to linear eye depth
    const float fragmentDepth = LinearEyeDepth(linearFragDepth, _ZBufferParams);
    //Frag end

    I then compare the Depth like so:
    Code (CSharp):
    const float depthTestTolerance = 0.01;
    if (customDepth > 0 &&
        (fragmentDepth - customDepth) < depthTestTolerance)
    {
        clip(-1);
    }

    The linear approach I attempted looked like this:
    Code (CSharp):
    const float fragmentLinear01EyeDepth = Linear01Depth(fragmentDepth, _ZBufferParams);
    const float proxyLinear01EyeDepth = Linear01Depth(customDepth, _ZBufferParams);

    if (proxyLinear01EyeDepth < 1 &&
        (fragmentLinear01EyeDepth - proxyLinear01EyeDepth) >= 0)
    {
        clip(-1);
    }

    I'm unsure why this approach doesn't work all of the time. Is this simply not possible, or am I doing something really wrong with the depth comparison?
     
    Last edited: Aug 21, 2023
    dancliffAuroch likes this.
  2. Rukhanka

    Rukhanka

    Joined:
    Dec 14, 2022
    Posts:
    179
    Why don't you use the built-in depth testing functionality?
     
  3. c0d3_m0nk3y

    c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    581
    I'm assuming that the custom depth texture has been rendered with the same camera at the same position/view direction and camera settings (fov, near plane, far plane), otherwise the entire algorithm wouldn't work.

    As Rukhanka said, you can probably do the same thing more easily with just the built-in functionality. You can do all kinds of screen-space CSG with just the depth and stencil buffers: http://www.bluevoid.com/opengl/sig00/advanced00/notes/node23.html
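    The core of that stencil-based CSG technique is an even/odd (parity) test: a point is inside a closed solid if a ray from it crosses the solid's surface an odd number of times. A minimal sketch of just the parity logic, with hypothetical 1D depth intervals standing in for the cutout volumes along the view ray (not Unity code):

    ```python
    # Parity test used by stencil-based CSG: a fragment lies inside a closed
    # cutout volume if a ray from it toward the far plane crosses the volume's
    # boundaries an odd number of times.

    def crossings_behind(point_depth, solids):
        """Count boundary crossings between point_depth and the far plane.

        solids is a list of (near, far) depth intervals along the view ray.
        """
        count = 0
        for near, far in solids:
            for boundary in (near, far):
                if boundary > point_depth:
                    count += 1
        return count

    def inside_any(point_depth, solids):
        # Odd number of crossings => the point is inside a cutout volume.
        return crossings_behind(point_depth, solids) % 2 == 1

    cutouts = [(2.0, 3.0), (5.0, 7.0)]   # hypothetical (near, far) depths of cutout cubes
    print(inside_any(2.5, cutouts))      # True: between a front face and a back face
    print(inside_any(4.0, cutouts))      # False: in the gap between the two cubes
    ```

    The stencil buffer implements the same count in hardware: increment on front faces, decrement (or toggle) on back faces, then keep or discard fragments based on the resulting parity.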

    Having said that, here are my 2 cents regarding your code:
    Code (CSharp):
    //Vert
    ...
    //>>>>>>>>>>>>>>>>>>>>
    // Why not do it in NDC space? That way you don't have to transform the depth buffer values.
    // Also, doing two matrix multiplications is unnecessary. You can just multiply with UNITY_MATRIX_MVP (or UNITY_MATRIX_MV in your case).
    //<<<<<<<<<<<<<<<<<<
    o.positionVS = TransformWorldToView(TransformObjectToWorld(v.vertex.xyz));
    ...
    //Vert end

    //Frag
    //>>>>>>>>>>>>>>>>>>>>
    // You can just do a load at SV_Position.xy. No need to use u/v coordinates. Also, if you sample, make sure you use point sampling to avoid interpolation:
    // float customDepth = _CustomDepthTexture[pos.xy] where pos : SV_Position
    //<<<<<<<<<<<<<<<<<<
    const float2 uv = i.positionCS.xy / _ScaledScreenParams.xy;
    const float customDepth = SAMPLE_TEXTURE2D_X(_CustomDepthTexture,
                        sampler_CustomDepthTexture,
                        UnityStereoTransformScreenSpaceTex(uv)).r;

    // Gets the LinearEyeDepth (0-1) for the Fragment we're rendering.
    //>>>>>>>>>>>>>>>>>>>>
    // The comment above is wrong. This is not normalized eye depth; it is non-normalized eye (view-space) depth.
    // If you were to use NDC depth, you could simply check against pos.z where pos : SV_Position.
    //<<<<<<<<<<<<<<<<<<
    const float linearFragDepth = -i.positionVS.z;

    // Using LinearEyeDepth with a normalised Linear Depth Value appears to get the Non-Linear 0-1!
    //>>>>>>>>>>>>>>>>>>>>
    // Again, this is non-normalized eye depth. Linear01Depth would give you normalized depth.
    //<<<<<<<<<<<<<<<<<<
    const float fragmentDepth = LinearEyeDepth(linearFragDepth, _ZBufferParams);
    //Frag end

    //>>>>>>>>>>>>>>>>>>>>
    // This checks if the depths are equal, which is conceptually wrong. You'd have to check if one depth is closer than the other.
    // If you do it in NDC space, keep in mind that Unity uses reverse depth, so you'd have to flip the comparison operator.
    //<<<<<<<<<<<<<<<<<<
    const float depthTestTolerance = 0.01;
    if (customDepth > 0 &&
        (fragmentDepth - customDepth) < depthTestTolerance)
    {
        clip(-1);
    }
    I think the main issue here is that the algorithm simply doesn't work like that. You'd have to check if one depth is closer than the other but even that is not sufficient. You'd probably also have to render the backfaces of the cutout cubes. Check the CSG link above for details.
     
    Last edited: Aug 19, 2023
    DanWeston likes this.
  4. DanWeston

    DanWeston

    Joined:
    Sep 9, 2014
    Posts:
    32
    @c0d3_m0nk3y Thanks for taking the time to read and respond to my post. I appreciate the feedback on the code.

    To provide more context and confirm your assumption: yes, the Depth Texture is being generated with the same camera that renders the scene. I am creating the texture using a Scriptable Render Feature/Pass, which runs before the opaques. I wanted all of the cut-out information ready in a RenderTexture before rendering the objects which require cut-outs.

    I will check your linked resource, thank you for sharing. Maybe I should be using the built-in functionality? My desire was to have all the cut-out information before rendering the opaques and transparents.

    Sorry for the confusion here; those comments are indeed misleading, and I will update the OP with the correct information. To make the intention clear, I was using the Linear Eye Depth for comparison (camera to far plane).

    Regarding the comparison logic, my thinking was that if fragmentDepth - customDepth < 0, then the fragment in the customDepth texture is closer. However, since I also want the custom depth to take priority over the fragment depth when they are equal or very close, I used the tolerance to achieve the clip. The depth values being used at this point are the linear eye depths.

    Perhaps, as you've suggested, this algorithm won't work as intended. My main goal was to write depths from a group of objects A before rendering a group of objects B; when rendering the objects in B, I wanted to cut out the objects in A. Maybe I've overcomplicated the process?
     
    Last edited: Aug 21, 2023
  5. c0d3_m0nk3y

    c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    581
    My bad, I somehow read that as abs(fragmentDepth-customDepth) < 0.

    I don't think it matters whether you use linear or non-linear, normalized or non-normalized, view-space or NDC-space depths, as long as you are consistent. It might only matter for the tolerance: linear is better for getting a constant tolerance, no matter how far away the object is.
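    A numeric sketch of why the tolerance behaves differently in the two spaces, using Unity's reversed-Z `_ZBufferParams` convention (`LinearEyeDepth(raw) = 1 / (_ZBufferParams.z * raw + _ZBufferParams.w)`); the near/far plane values here are hypothetical:

    ```python
    # A fixed tolerance in raw (non-linear) depth covers very different
    # eye-space spans depending on distance; a tolerance in linear eye depth
    # is constant by construction.
    near, far = 0.3, 1000.0              # hypothetical camera planes

    # Reversed-Z _ZBufferParams components used by LinearEyeDepth:
    zbuf_z = (far / near - 1.0) / far
    zbuf_w = 1.0 / far

    def linear_eye_depth(raw):
        """Raw reversed-Z depth buffer value -> view-space eye depth."""
        return 1.0 / (zbuf_z * raw + zbuf_w)

    def raw_depth(eye):
        """Inverse mapping: eye depth -> raw reversed-Z depth buffer value."""
        return (far / eye - 1.0) / (far / near - 1.0)

    # The same +/- 1e-4 tolerance in raw depth, evaluated at several distances:
    for eye in (1.0, 10.0, 100.0):
        d = raw_depth(eye)
        span = linear_eye_depth(d - 1e-4) - linear_eye_depth(d + 1e-4)
        print(f"eye depth {eye:6.1f}: raw tolerance 2e-4 spans {span:.4f} eye units")
    ```

    The printed span grows roughly with the square of the distance, which is why a comparison with a fixed tolerance in non-linear depth breaks down far from the camera while the same tolerance in linear eye depth does not.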

    For this simple case, maybe something like this would work (no guarantee):
    - Clear the depth buffer
    - Render objects from group B into the depth buffer but without writing color (see Shaderlab ColorMask)
    - Reverse the culling order (GL.invertCulling or CommandBuffer.SetInvertCulling or ShaderLab Cull)
    - Reverse the depth test to keep the depth furthest away (ShaderLab ZTest)
    - Render objects from group A into the same depth buffer without writing color
    - Restore regular culling order and re-enable color writing
    - Render objects from group B into the same depth buffer with ZTest == Equal

    I think there is no need to do manual depth comparison in the shader or use multiple depth buffers.
     
    Last edited: Aug 21, 2023
  6. c0d3_m0nk3y

    c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    581
    Ignore my algorithm. It doesn't work when the view ray intersects objects from group A or B more than once. I don't think you can get around doing an even/odd test with the stencil buffer, like the CSG algorithm I pointed out.

    upload_2023-8-21_9-42-43.png