
Question Depth buffer sampling in fragment shader on mesh

Discussion in 'Shaders' started by Skulltager, Mar 30, 2023.

  1. Skulltager


    Aug 12, 2016

    I'm trying to create a fake skybox room effect that requires me to write to and sample a bunch of depth buffers, but I'm running into an issue with the accuracy of the depth buffer sample. The problem is also a lot more noticeable at smaller screen resolutions.

    This video shows the problem I'm having.

    According to this post [converting-depth-values-to-distances-from-z-buffer], you can accurately calculate the distance to the depth buffer by computing the view direction in the vertex shader and relying on interpolation to get the proper distance in the fragment shader.
    But because I'm not using a post-processing script, I cannot use that interpolation step.

    I have to do this custom depth check because I have to disable the ZWrite for this script and there are more clipping checks that I removed from the script to focus on the issue I'm having. Here's a video that shows what I'm trying to do.

    Code (CSharp):
    struct appdata {
        float4 vertex : POSITION;
    };

    struct v2f {
        float4 vertex : SV_POSITION;
        float3 worldPosition : TEXCOORD0;
        float4 screenPosition : TEXCOORD2;
    };

    v2f vert (appdata v) {
        v2f output;
        output.vertex = UnityObjectToClipPos(v.vertex);
        output.worldPosition = mul(unity_ObjectToWorld, v.vertex).xyz;
        output.screenPosition = ComputeScreenPos(output.vertex);
        return output;
    }

    float4 frag (v2f input) : SV_Target {
        float distanceToFragment = length(input.worldPosition - _WorldSpaceCameraPos);
        float2 screenUV = input.screenPosition.xy / input.screenPosition.w;

        //I think this is the problem.
        float4 direction = mul(unity_CameraInvProjection, float4(screenUV * 2 - 1, 1, 1));

        float roomDepthSample = Linear01Depth(SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV));
        float3 roomViewPos = ( / direction.w) * roomDepthSample;
        float distanceToRoomDepthBuffer = length(roomViewPos);

        bool isInitialWall = distanceToRoomDepthBuffer > distanceToFragment - 0.001;

        if(!isInitialWall)
            clip(-1);

        float4 color = texCUBE(_MainTex, normalize(input.worldPosition - _WorldSpaceCameraPos));
        return color;
    }
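    The far-plane-ray reconstruction this fragment shader attempts can be checked numerically outside Unity. Below is a minimal sketch in Python with NumPy, assuming an OpenGL-convention projection matrix (which is the convention unity_CameraInvProjection uses) and treating Linear01Depth as eyeDepth / far; the matrix and values are illustrative, not pulled from the project.

```python
import numpy as np

# OpenGL-style perspective projection: NDC z in [-1, 1], camera
# looks down -Z in view space (unity_CameraInvProjection convention).
def perspective(fov_deg, aspect, near, far):
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

near, far = 0.3, 100.0
P = perspective(60.0, 16.0 / 9.0, near, far)
inv_P = np.linalg.inv(P)

view_pos = np.array([1.0, 2.0, -10.0])   # a point 10 units in front of the camera
clip = P @ np.append(view_pos, 1.0)
ndc = clip[:3] / clip[3]                 # what screenUV * 2 - 1 reconstructs

# The shader's mul(unity_CameraInvProjection, float4(ndc.xy, 1, 1)):
# a homogeneous point on the far plane along this pixel's view ray.
direction = inv_P @ np.array([ndc[0], ndc[1], 1.0, 1.0])
far_point = direction[:3] / direction[3]

# Linear01Depth gives eyeDepth / far; scaling the far-plane point by it
# lands back exactly on the original view-space position.
linear01 = -view_pos[2] / far
reconstructed = far_point * linear01
print(np.allclose(reconstructed, view_pos))  # True
```

    So the math itself is sound when the NDC coordinates and depth sample come from the same projection; any mismatch has to come from the inputs.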
    These are the settings I'm using for the depth buffer.
    At one point I thought there might be an issue with anti-aliasing or something, so I disabled all the anti-aliasing on all cameras and render textures, but the issue remained. These textures are recreated whenever the screen resolution changes to match the new screen size.

    Code (CSharp):
    renderTexture = new RenderTexture(Screen.width, Screen.height, 32, RenderTextureFormat.Depth);
    renderTexture.depthStencilFormat = GraphicsFormat.D32_SFloat;
    renderTexture.filterMode = FilterMode.Point;
    renderTexture.wrapMode = TextureWrapMode.Clamp;
    renderTexture.Create();
    Shader.SetGlobalTexture("roomDepthTexture", renderTexture);
    I only ever have to do comparison checks with the depth buffer.

    My question is: how do I more accurately get the world distance from the depth buffer, or is there a different/better way to compare the current distance against a depth buffer in a mesh rendering shader?
    Last edited: Mar 30, 2023
  2. Skulltager


    Aug 12, 2016
    I also tried to compare the clip-space depth to the depth texture, but I can't get them to line up properly either. They look identical when visualizing their depth values as RGB colors, but when I try the following bit of code they never actually line up.

    Code (CSharp):
    float2 screenUV = input.screenPosition.xy / input.screenPosition.w;
    float depthTexture = Linear01Depth(SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV));
    float depthFragment = Linear01Depth(input.vertex.z / input.vertex.w);
    if(depthFragment + 0.001 > depthTexture && depthFragment - 0.001 < depthTexture)
        return float4(0, 1, 0, 1);

    return float4(depthTexture, depthTexture, depthTexture, 1);

    I can't seem to find out how the values in the depth texture are calculated either. The ideal fix would be finding a value in the fragment shader that I can directly compare with the result of SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV);
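    For reference, the mapping between the raw value stored in the depth texture and Linear01Depth is defined in UnityCG.cginc via _ZBufferParams; on a reversed-Z platform, x = far/near - 1 and y = 1, and Linear01Depth(z) = 1 / (x * z + y). A small numeric sketch in Python, assuming a reversed-Z D3D-style projection (the near/far values are illustrative):

```python
# How Unity's Linear01Depth maps a raw depth-buffer sample back to
# eyeDepth / far, assuming a reversed-Z D3D-style projection
# (depth = 1 at the near plane, 0 at the far plane).
near, far = 0.3, 100.0

def raw_depth(eye_depth):
    # Reversed-Z hardware depth for a point eye_depth units in front
    # of the camera.
    return (near * (far - eye_depth)) / (eye_depth * (far - near))

# _ZBufferParams for the reversed-Z case (from UnityCG.cginc):
# x = -1 + far/near, y = 1.
zbuffer_x = far / near - 1.0
zbuffer_y = 1.0

def linear01_depth(z):
    return 1.0 / (zbuffer_x * z + zbuffer_y)

eye = 10.0
z = raw_depth(eye)                  # nonlinear value as stored in the buffer
print(round(linear01_depth(z), 6))  # 0.1, i.e. eye / far
```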
    Last edited: Mar 30, 2023
  3. Skulltager


    Aug 12, 2016
    Basically what I wish I could do is look at all the code that was being used in this project

    He clips the mesh if it's behind a custom depth buffer, and it seems he can get very accurate results with only a 0.00001 adjustment to work around floating point error, but sadly he didn't post the entire script, only the part where he compares the two depth values. I just don't know how to properly get those two depth values in order to compare them with each other.
  4. bgolus


    Dec 7, 2012
    I'm not quite sure why you're doing what you're doing. What do you need the custom depth buffer for? How are you rendering it and what exactly is in it?

    It seems like you're rendering it by having a second camera with that render texture set as its render target? Presumably that second camera has an identical transform, FOV, and clipping planes as the main camera, and thus identical view and projection matrices? In which case both the mesh's depth and the custom depth texture are in the same depth "space". If they really are in the same depth space, you don't need to use Linear01Depth at all and you can compare those values against each other directly.

    However the fact they don't quite match up when using Linear01Depth makes me think one of the things I listed off above doesn't match, or you're using a replacement shader pass to generate the depth texture, and that shader is outputting something that isn't the correct value for what you're trying to do. Without knowing the rest of what you're doing, I can't say where the issue is. But that last snippet of code makes me think the issue isn't in this shader.
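    One property of raw depth worth keeping in mind when comparing values directly (a general consequence of the encoding, not something discussed above): the stored value is heavily nonlinear, so a fixed epsilon in raw-depth units covers very different world-space distances near the camera versus far away. A sketch in Python, assuming a standard (non-reversed) D3D projection with illustrative near/far values:

```python
# Raw depth values from the same projection can be compared directly,
# but the encoding is nonlinear: a fixed bias in raw-depth units spans
# very different world-space distances near versus far.
# Assumes a standard (non-reversed) D3D projection: depth 0 at near, 1 at far.
near, far = 0.3, 100.0

def raw_depth(eye):
    return (far / (far - near)) * (1.0 - near / eye)

def eye_depth(raw):
    # Inverse of raw_depth.
    return (far * near) / (far - raw * (far - near))

eps = 1e-4  # a fixed comparison bias in raw-depth units

thickness_near = eye_depth(raw_depth(1.0) + eps) - 1.0
thickness_far = eye_depth(raw_depth(50.0) + eps) - 50.0

# The same eps covers orders of magnitude more distance at eye depth 50
# than at eye depth 1.
print(thickness_far > 1000 * thickness_near)  # True
```

    This is why a single world-space bias like 0.001 behaves differently across the depth range, while comparing raw values with a tiny raw-space bias (as in the project linked earlier) can stay stable.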
  5. bgolus


    Dec 7, 2012
    I also can't help but wonder if a setup using stencils wouldn't be cleaner, and much more efficient to render for what you're trying to do. But I also don't quite understand your setup either.
  6. Skulltager


    Aug 12, 2016
    My experience with shaders is still rather basic so I don't doubt there might be a better way to do this. If there is I'd love to hear it.

    But let me explain how my idea was supposed to work.

    A couple of things that might be important to know.
    1: All the rooms are directly adjacent to one another. There is a minimap in the top-left corner of the screen where you can see how the rooms are positioned next to one another (in my first post). All of it is procedurally generated.
    2: There is no space between the walls of adjacent rooms.
    This is a top-down view from the spawn room.


    The first thing I do is assign each room a unique roomColor. All wall tiles have a Skybox Mesh Renderer and a Room Color Mesh Renderer. The Room Color Mesh Renderer is on a unique layer.


    I then use a second camera which shares the exact same properties as the main camera to render all the Room Colors to a custom renderTexture including a depthTexture.

    These are the camera settings for the different cameras I'm using. I copy-pasted the main camera, so I don't think they are any different from each other like you may have thought.


    Code (CSharp):
    float4 frag (v2f i) : SV_Target {
        return float4(roomColor, 0, 0, 1);
    }
    Then I have another camera repeat this process for door tiles.
    Door tiles are set up almost identically to wall tiles. But since I need to know which room is in front of the door and which room is behind it, I write both the fromRoomColor and the toRoomColor into a second custom renderTexture (red and green channels) and depthTexture. If the door is not behind any skybox wall then I can just ignore it, since the reason I'm doing all this is to draw things behind the skybox wall as if they were in front of it.

    Then I do the same thing a few times, clipping all doors in front of the last depthTexture, to get an array of layered door color and depth textures. I do this because sometimes you look through multiple doors into other rooms, and I need to know what room something is in when rendering to make sure it's visible or not.

    Code (CSharp):
    roomCamera.SetTargetBuffers(roomColorTexture.colorBuffer, roomDepthTexture.depthBuffer);
    roomCamera.Render();

    for (int i = 0; i < DOOR_DEPTH_COUNT; i++)
    {
        Shader.SetGlobalInt("doorDepth", i);
        doorCamera.SetTargetBuffers(doorColorTextures[i].colorBuffer, doorDepthTextures[i].depthBuffer);
        doorCamera.Render();
        Shader.SetGlobalTexture("doorDepthTexture", doorDepthTextures[i]);
    }

    This is the door color shader
    Code (CSharp):
    float4 frag (v2f input) : SV_Target {
        float2 screenUV = input.screenPosition.xy / input.screenPosition.w;
        float4 direction = mul(unity_CameraInvProjection, float4(screenUV * 2 - 1, 1, 1));

        float distanceToFragment = length(input.worldPosition - _WorldSpaceCameraPos);

        if(doorDepth == 0)
        {
            float roomDepthSample = Linear01Depth(SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV));
            float3 roomViewPos = ( / direction.w) * roomDepthSample;
            float distanceToRoomDepthBuffer = length(roomViewPos);

            if (distanceToRoomDepthBuffer > distanceToFragment + 0.001)
                clip(-1);
        }
        else
        {
            float doorDepthSample = Linear01Depth(SAMPLE_DEPTH_TEXTURE(doorDepthTexture, screenUV));

            if (doorDepthSample == 1)
                clip(-1);

            float3 doorViewPos = ( / direction.w) * doorDepthSample;
            float distanceToDoorDepthBuffer = length(doorViewPos);

            if (distanceToDoorDepthBuffer > distanceToFragment - 0.001)
                clip(-1);
        }

        return float4(fromRoomColor, toRoomColor, 0, 1);
    }

    Then when rendering the scene I draw all the skybox walls using "Queue" = "Geometry-50" and ZWrite Off, using the shader from my first post.

    And lastly I have an environment shader which figures out, based on all the depth textures and room/door color textures, whether it should be drawn or not. The ShouldDraw function is just a recursive function which finds out if the fragment is visible when looking through one or more doors.

    Code (CSharp):
    fixed4 frag(v2f input) : SV_Target
    {
        float2 screenUV = input.screenPosition.xy / input.screenPosition.w;
        float4 direction = mul(unity_CameraInvProjection, float4(screenUV * 2 - 1, 1, 1));

        float distanceToFragment = length(input.worldPosition - _WorldSpaceCameraPos);

        float roomDepthSample = Linear01Depth(SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV));
        float3 roomViewPos = ( / direction.w) * roomDepthSample;
        float distanceToRoomDepthBuffer = length(roomViewPos);

        float roomIndexColor = tex2D(roomColorTexture, screenUV).r;

        if (distanceToRoomDepthBuffer <= distanceToFragment + 0.001)
        {
            if(!ShouldDraw(roomIndexColor, distanceToFragment, screenUV, direction))
                clip(-1);
        }

        fixed4 color = tex2D(mainTex, input.uv);
        return color * tintColor.rgba;
    }
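    The body of ShouldDraw isn't posted, but the layered-door walk it describes can be sketched in plain Python with invented data structures (room colors as ints, one (fromRoom, toRoom, depth) record per door layer at this pixel). This is purely illustrative, not the actual shader code:

```python
# Illustrative sketch of the layered-door visibility walk described above.
# Invented data model: for one screen pixel, door_layers holds one
# (from_room, to_room, depth) tuple per door layer, nearest first;
# None means no door was rendered in that layer.

def should_draw(fragment_room, fragment_depth, camera_room, door_layers):
    # Start in the room the camera is in; every door layer that is in
    # front of the fragment and leads out of the currently visible room
    # moves the "visible room" one step further along the view ray.
    visible_room = camera_room
    for layer in door_layers:
        if layer is None:
            break
        from_room, to_room, depth = layer
        if depth >= fragment_depth:
            break
        if from_room == visible_room:
            visible_room = to_room
    # The fragment is drawn only if it belongs to the room the view ray
    # ends up in after passing through the doors.
    return fragment_room == visible_room

# Camera in room 1 looking through a door into room 2, then into room 3.
layers = [(1, 2, 5.0), (2, 3, 9.0)]
print(should_draw(3, 12.0, 1, layers))  # True: both doors are in front of the fragment
print(should_draw(2, 12.0, 1, layers))  # False: the ray has already passed into room 3
```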
    I hope that explains the process of how things work right now. Thanks again for trying to help me out with this issue.

    Last edited: Mar 31, 2023
  7. Skulltager


    Aug 12, 2016

    This is a quick programmer-art idea sketch. Each room is color-coded. When looking at a skybox wall, I sometimes want to make certain things behind it visible and others not. In this example I want to see the yellow, beige, and orange rooms, but not the red and green rooms.
    I also want to draw the last skybox hit when tracing the view ray. So I want to draw the skybox of the orange room here.

    Anyway, that's the idea I was trying to create, which I almost have working apart from that one depth buffer comparison bug. If there is a better approach I'd love to hear it though!
    Last edited: Mar 31, 2023
  8. Skulltager


    Aug 12, 2016
    Alright, I think I've figured it out.
    The first thing is that the issue in my original video only happens if you select any of Unity's scaling resolutions. If you manually put in some completely random fixed resolution, the issue is gone. I've tried it with ultrawide resolutions, very standard resolutions, and also some really weird resolutions like 1234 x 567. I also debugged to see what resolution the screen was when it was set to free aspect, copied that exact size, set it as the fixed resolution, and my issue disappeared.
    The second is that I changed my depth buffer sampling to use
    Code (CSharp):
    float roomDepthSample = Linear01Depth(SAMPLE_DEPTH_TEXTURE_PROJ(roomDepthTexture, UNITY_PROJ_COORD(input.screenUV)));
    instead of

    Code (CSharp):
    float roomDepthSample = Linear01Depth(SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV.xy));
  9. bgolus


    Dec 7, 2012
    Those are effectively the same btw. It should not matter which one you use.

    You're using SAMPLE_DEPTH_TEXTURE_PROJ instead of SAMPLE_DEPTH_TEXTURE, but on modern GPUs, SAMPLE_DEPTH_TEXTURE_PROJ is handled by doing uv.xy / uv.w in the shader and calling SAMPLE_DEPTH_TEXTURE. The UNITY_PROJ_COORD part exists to handle some weird funkiness with the PS Vita, which used 3-component projective UVs instead of 4-component, doing uv.xy / uv.z, and at this point only exists to maintain compatibility with old shaders.
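    That equivalence is easy to sketch outside a shader. In the Python sketch below, a depth texture is faked as a nested list sampled with nearest-neighbor lookup, and the "PROJ" variant just performs the perspective divide before calling the plain sampler (the function names are illustrative, not Unity API):

```python
# Sketch of why PROJ-style sampling and a manual uv.xy / uv.w divide
# followed by a plain sample give the same result on modern GPUs:
# the PROJ variant is just the divide done for you.

def sample_depth(tex, uv):
    # Nearest-neighbor lookup into a nested list, uv in [0, 1).
    w, h = len(tex[0]), len(tex)
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return tex[y][x]

def sample_depth_proj(tex, uv4):
    # Perspective divide, then the ordinary sample.
    return sample_depth(tex, (uv4[0] / uv4[3], uv4[1] / uv4[3]))

depth_tex = [[0.1, 0.2],
             [0.3, 0.4]]
uv4 = (1.2, 0.4, 0.0, 2.0)  # xy / w == (0.6, 0.2)

print(sample_depth_proj(depth_tex, uv4) ==
      sample_depth(depth_tex, (0.6, 0.2)))  # True
```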
  10. Skulltager


    Aug 12, 2016
    If that's the case, then my entire chase to figure out why my code wasn't working was caused by this very strange resolution issue, which I assume is some kind of Unity bug.
    Since I changed my shader code since my original post, the bug isn't as noticeable anymore, but in this video you can see that I have some artifacting when my resolution is set to free aspect, or to an aspect ratio where the game screen has padding on the top and bottom. But if I then resize the window to be wider, the artifacting disappears the moment it has padding on the left and right sides.
    I also log the resolutions in the console, and at the end of the video you can see me change from a 16:9 aspect ratio, where the artifacting is visible, to a fixed resolution with the EXACT same dimensions, and the artifacting disappears again.

    And since I was always testing in free aspect mode, the issue was pretty much always there. However, in fullscreen mode the issue is a lot less noticeable, and in my current version isn't noticeable at all anymore. At least I couldn't find any artifacting.
    I have a second monitor that I thought might cause problems, so I unhooked it, but that didn't change this strange behaviour either.

    I haven't tested it in a build yet though, since I haven't made it build-ready yet.
    Last edited: Apr 5, 2023