
Custom depth buffer using RWTexture2D

Discussion in 'Shaders' started by sewy, Jul 4, 2022.

  1. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    So due to the huge performance benefit of not using a separate pass for the depth texture (especially in VR), and only sampling it in late passes (transparent, post-process), I switched to a custom method using a RWTexture2D (the RGB channels are used for distortion etc.).
    For some reason I am getting noise shimmering (I suspect floating-point error or Z-fighting).
    [Screenshot: noise shimmering artifacts]

    For testing purposes I use this basic fragment shader, which checks the previously written value; if the current pixel is closer, it updates the buffer, then returns the color saved at the current pixel coordinates.
    Code (CSharp):
    RWTexture2D<float4> textureBuffer : register(u1);
    fixed4 frag (v2f i) : SV_Target
    {
        int2 pixelUV = UnityPixelSnap(i.pos);
        //pixelUV = floor(i.screenPos.xy/i.screenPos.w * _ScreenParams.xy); // Same result as above

        if (Linear01Depth(i.pos.z) - Linear01Depth(textureBuffer[pixelUV].a) < 0)
        //if (textureBuffer[pixelUV].a < i.pos.z) // Same result as above
            textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);

        return textureBuffer[pixelUV].a;
    }
    The buffer is cleared in a compute shader (dispatched in Camera.OnPreRender()) like so:
    Code (CSharp):
    #pragma kernel CSMain

    RWTexture2D<float4> textureBuffer : register(u1);

    [numthreads(8,8,1)]
    void CSMain (uint3 id : SV_DispatchThreadID)
    {
        textureBuffer[id.xy] = 0;
    }
    I've also tested it with a post-process shader which just returns the depth buffer, but the result is even worse. [Screenshot: artifacts in the returned depth]

    I also tried to get rid of the compute shader and clear the buffer in a ColorMask 0 post-process shader, but the result is always the same.

    Unity 2021.3, Forward rendering, Default pipeline, VR+nonVR
     
  2. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    725
    What are you actually trying to do? And why are you clearing it in a compute shader? That's very odd. You should be using CommandBuffer.ClearRenderTarget()
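    (For reference, a minimal sketch of that approach; the texture field and camera event below are placeholders rather than the OP's actual setup.)
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    [RequireComponent(typeof(Camera))]
    public class ClearCustomBuffer : MonoBehaviour
    {
        public RenderTexture customDepthTexture; // placeholder: the texture bound as the UAV
        CommandBuffer clearCmd;

        void OnEnable()
        {
            clearCmd = new CommandBuffer { name = "Clear custom depth buffer" };
            clearCmd.SetRenderTarget(customDepthTexture);
            // Clear the color channels (which hold the packed RGB + depth); there is no depth surface to clear here.
            clearCmd.ClearRenderTarget(false, true, Color.clear);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, clearCmd);
        }

        void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, clearCmd);
            clearCmd.Release();
        }
    }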
     
  3. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    A custom scene + depth texture while avoiding a separate depth pass (for performance), using a RWTexture2D.

    I need to clear its contents before rendering into it, because I need fresh data to do the z-test and store the correct value.

    I am not using Command Buffers.
     
  4. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    725
    You are overcomplicating this by a lot. Call SetTargetBuffers on your camera with both a color texture and a depth texture as inputs. You don't have to manually clear the textures, as they will be cleared automatically by the camera according to the camera clear flags. And this is a much more optimized way to do it as well, especially if you are on mobile.

    In OnPostRender, do camera.targetTexture = null. Then do Graphics.Blit(yourColorTexture, null).

    (null in this case means it will be written straight to the screen)
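    (A minimal sketch of that setup, with placeholder sizes and formats; the relevant calls are Camera.SetTargetBuffers, Camera.targetTexture and Graphics.Blit.)
    Code (CSharp):
    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class RenderToOwnBuffers : MonoBehaviour
    {
        Camera cam;
        RenderTexture colorTex;
        RenderTexture depthTex;

        void OnEnable()
        {
            cam = GetComponent<Camera>();
            colorTex = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGB32);
            depthTex = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.Depth);
            colorTex.Create();
            depthTex.Create();
        }

        void OnPreRender()
        {
            // The camera renders into these and clears them per its clear flags.
            // Re-bound every frame because OnPostRender resets the target below.
            cam.SetTargetBuffers(colorTex.colorBuffer, depthTex.depthBuffer);
        }

        void OnPostRender()
        {
            // Detach the custom targets and copy the color result to the screen.
            cam.targetTexture = null;
            Graphics.Blit(colorTex, (RenderTexture)null);
        }
    }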
     
    sewy likes this.
  5. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    SetTargetBuffers is a dream, but in VR there is an ongoing bug which renders gray to the screen, so I am stuck with my solution so far. Any idea what might cause the noise shimmering seen in my first post?
     
  6. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    Maybe guru @bgolus would know?
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Okay, so some random thoughts.

    You absolutely do not want to be converting the depth to linear to test the values against each other. Leave them exactly as they are and compare them directly; the conversion will cause precision issues.

    Depending on the platform, the depth values may be 1.0 near and 0.0 far, so you need to check #ifdef UNITY_REVERSED_Z to decide whether the comparison should be >= or <=. Similarly, this changes whether you should be clearing to 0.0 or 1.0.
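    (As a sketch, applied to the fragment shader from the first post and assuming the buffer is cleared to the platform's far value:)
    Code (CSharp):
    #ifdef UNITY_REVERSED_Z
        // 1.0 = near, 0.0 = far: a closer fragment has a larger raw depth value.
        if (i.pos.z >= textureBuffer[pixelUV].a)
            textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);
    #else
        // 0.0 = near, 1.0 = far: a closer fragment has a smaller raw depth value.
        if (i.pos.z <= textureBuffer[pixelUV].a)
            textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);
    #endif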

    Presumably you're still using a depth buffer, and you're adding that code to your opaque objects' shaders so it renders them and writes to this "depth buffer" in the same pass? If so, you shouldn't even need to do the test! It should already have been done by the depth test itself.

    The big caveat to all of this is MSAA and write order. Random write targets in the fragment shader are tricky, especially when MSAA is involved. There's no guarantee that the fragments within a single pixel will run in a nice orderly fashion; they may even be running in parallel, meaning different fragments may read, and then write to, the same "pixel" at the same time (because they all passed the test before other fragments in the same pixel wrote to it)! This can mean the "wrong" data gets written, or in the worst (and unlikely) case the data gets corrupted. I don't know of a way around this.

    Last thing... I believe that compute shader is only clearing the corner 8x8 pixels.
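    (A sketch of a dispatch sized to cover the whole texture with [numthreads(8,8,1)] groups; the shader and texture variables here are placeholders:)
    Code (CSharp):
    // Round the group counts up so the full texture is covered even when its
    // dimensions are not multiples of 8; Dispatch(0, 1, 1, 1) would only touch
    // the top-left 8x8 block.
    int groupsX = Mathf.CeilToInt(bufferTexture.width / 8f);
    int groupsY = Mathf.CeilToInt(bufferTexture.height / 8f);
    clearShader.Dispatch(0, groupsX, groupsY, 1);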
     
    sewy likes this.
  8. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    I am not using a depth prepass; at the end of each opaque shader's fragment pass (fwdBase only) there is the code from my first post. It indeed renders them and writes to my RW buffer in the same pass:
    Code (CSharp):
    int2 pixelUV = UnityPixelSnap(i.pos);
    //pixelUV = floor(i.screenPos.xy/i.screenPos.w * _ScreenParams.xy); // Same result as above
    if (textureBuffer[pixelUV].a < i.pos.z)
        textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);
    If I am not testing, it writes the full mesh, even the occluded parts.

    This could potentially be the source. I would personally prefer an unresolved multisampled RW buffer anyway; any thoughts on whether it is possible to access per-subpixel depth?

    It doesn't seem so to me, as it works as expected (except for the noise).
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    That's not what I was asking about. The depth buffer and depth texture (generated by a depth pre-pass) are entirely separate things when using the forward renderer ... which is specifically the issue you're trying to resolve. Forward rendering still requires a depth buffer to handle opaque z sorting, and ideally you'd just copy that to a texture once the forward opaques have finished rendering.

    Unity has functionality to copy a depth buffer to a texture (that's how the depth texture is created), but it's not exposed to C#, even after several of us early VR devs asked for it for exactly the reason you're trying to work around now. It was eventually exposed as something for the SRP, but not the built-in render pipeline. Part of the problem was that Unity was missing multi-sample texture sampling, and all multi-sample textures were auto-resolved by the GPU, which is bad for depth textures. That didn't get added until they were well into SRP development and BIRP dev had been functionally abandoned. I believe there are some URP or HDRP branches that have post-opaque-pass depth texture resolves working, though I don't think it's in the main branches yet.

    What happens if you add [earlydepthstencil] to your shaders, just above the frag function? I wonder if writing to the RW buffer disables early depth rejection.
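    (A sketch of where that attribute would go, using the fragment shader from the first post:)
    Code (CSharp):
    // Forces the depth/stencil test to run before the fragment shader, so
    // occluded fragments never execute the UAV write.
    [earlydepthstencil]
    fixed4 frag (v2f i) : SV_Target
    {
        int2 pixelUV = UnityPixelSnap(i.pos);
        if (textureBuffer[pixelUV].a < i.pos.z)
            textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);
        return textureBuffer[pixelUV].a;
    }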

    Nope. You'd need to resolve the depth buffer to a texture directly. The depth the fragment shader has will not even match any value the depth buffer has in its subsample depth values since they're not at the same subpixel position!

    I guess it depends on how you're calling Dispatch from C#.

    One thing that bugs me: you should be assigning a render texture to textureBuffer to be the object the RWTexture2D is reading/writing to. And you should be able to call clear on that.
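    (A sketch of one way to clear that render texture from C# in the built-in pipeline; yourRenderTexture stands in for whatever texture is bound as the UAV:)
    Code (CSharp):
    // Bind the texture as the active render target, clear its color channels
    // (which hold the packed RGB + depth data), then restore the previous target.
    var previous = RenderTexture.active;
    Graphics.SetRenderTarget(yourRenderTexture);
    GL.Clear(false, true, Color.clear);
    RenderTexture.active = previous;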
     
  10. sewy

    sewy

    Joined:
    Oct 11, 2015
    Posts:
    150
    Yet another reason to upgrade, because Camera.SetTargetBuffers() is not working in VR either.

    Seems like nothing happens, so I guess..?

    Here is the snippet assigning the RT to the RW buffer and to the compute shader:
    Code (CSharp):
    void OnEnable()
    {
        bufferUpdate();
    }

    void OnPreRender()
    {
        if (renderTexture != null && useClear)
            clearBufferShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1); // To clear depth buffer before writing into it
    }

    private void bufferUpdate()
    {
        if (renderTexture != null) clear();

        bool vrEnabled = GlobalDefaults.enableVR;

        if (vrEnabled && XRSettings.eyeTextureDesc.width != 0)
        {
            renderTexture = new RenderTexture(XRSettings.eyeTextureDesc.width, XRSettings.eyeTextureDesc.height, 32, RenderTextureFormat.ARGBFloat);
        }
        else
            renderTexture = new RenderTexture(Screen.currentResolution.width, Screen.currentResolution.height, 32, RenderTextureFormat.ARGBFloat);

        renderTexture.enableRandomWrite = true;
        renderTexture.Create();

        Graphics.ClearRandomWriteTargets();
        Graphics.SetRandomWriteTarget(1, renderTexture);

        clearBufferShader.SetTexture(0, "textureBuffer", renderTexture);
    }
    33.  
    Though clearing in the custom post-process shader (ColorMask 0) leads to the same result as using the compute shader.