
Question: Writing to the Z/Depth Buffer in URP

Discussion in 'General Graphics' started by Madalaski, May 24, 2020.

  1. Madalaski

    Madalaski

    Joined:
    Apr 16, 2013
    Posts:
    6
    After a few days of research on this forum, I've yet to find a definitive answer to this problem, though I'm certainly not the only person who's run into it. However, my case is slightly more specific, so hopefully it can garner an answer.

    The effect I'm currently working on renders the meshes on a specific layer at a lower resolution, pixelating them. It does this after Render Opaques but before Render Transparents. This works very well: I can use CommandBuffer.Blit to blend the new opaque texture onto the old one, and as a workaround for now I can sample the CameraDepthTexture and my own rendered depth texture to manually Z-clip.
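    For illustration, here is a minimal sketch of what that manual Z-clip can look like in the blit shader. Take it as a sketch rather than my exact code: _PixelColorTex and _PixelDepthTex are placeholder names for the low-res layer's color and depth render textures, and it assumes the URP Core.hlsl include.

    Code (HLSL):
    // Minimal sketch of the manual Z-clip described above. Assumes
    // #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    // _PixelColorTex / _PixelDepthTex are placeholder names for the low-res
    // layer's color and depth render textures.
    TEXTURE2D(_PixelColorTex);      SAMPLER(sampler_PixelColorTex);
    TEXTURE2D(_PixelDepthTex);      SAMPLER(sampler_PixelDepthTex);
    TEXTURE2D(_CameraDepthTexture); SAMPLER(sampler_CameraDepthTexture);

    struct Varyings
    {
        float4 positionCS : SV_POSITION;
        float2 uv         : TEXCOORD0;
    };

    half4 frag (Varyings input) : SV_Target
    {
        half4 col        = SAMPLE_TEXTURE2D(_PixelColorTex, sampler_PixelColorTex, input.uv);
        float pixelDepth = SAMPLE_TEXTURE2D(_PixelDepthTex, sampler_PixelDepthTex, input.uv).r;
        float sceneDepth = SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, input.uv).r;

    #if UNITY_REVERSED_Z
        clip(pixelDepth - sceneDepth); // reversed-Z: larger value = closer, so discard if behind the scene
    #else
        clip(sceneDepth - pixelDepth); // conventional Z: smaller value = closer
    #endif
        return col;
    }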

    However, this messes up transparents. Since I'm not writing to the camera's Z/depth buffer, the transparents always render on top of my pixelated objects. I can easily produce a render texture that holds the combined depth buffers of the camera and the pixelated object layer, but as far as I can find, there is no way for me to actually set the values of the camera's depth buffer without calling DrawRenderers or DrawMesh or some other method that isn't Blit. Even if I set the destination of Blit to the CameraDepthTexture, it won't change its values.
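    For reference, producing that combined depth render texture is just a per-pixel min/max of the two depth textures. A rough sketch, with the same placeholder names as above, writing into a single-channel render texture I'll call _CombinedDepthTex later on:

    Code (HLSL):
    // Rough sketch: combine the camera depth and the pixelated layer's depth
    // into one value per pixel, keeping whichever surface is closer.
    float frag (Varyings input) : SV_Target
    {
        float sceneDepth = SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, input.uv).r;
        float pixelDepth = SAMPLE_TEXTURE2D(_PixelDepthTex, sampler_PixelDepthTex, input.uv).r;
    #if UNITY_REVERSED_Z
        return max(sceneDepth, pixelDepth); // reversed-Z: larger value = closer
    #else
        return min(sceneDepth, pixelDepth);
    #endif
    }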

    The closest thing I can find to a solution is to use the ShadowCaster pass of a custom shader to write to the Z buffer, but the only examples I could find (and there were only two) only describe changing the vertices of an object in the Z buffer. Since I'm currently using Blit with a custom shader to apply image effects onto the pixelated image, changing the vertices of the quad it renders with doesn't seem like an option.

    I just want to do this on Windows PC, not mobile or anything else. If I can get it working in some form with Unity's SRP, I'll be happy.

    I know it's likely that no one knows the answer to this, or that I'm so green that this level of technical know-how is way above me and I shouldn't bother, or that I should just give up and use PPv2 or the Built-in Render Pipeline. From what I've read, Unity plans to add more functionality for generating render data and injection points, but that's still a long way off, so I would really appreciate any help or advice on the subject.

    At the very least, if this is just straight-up impossible for some reason I don't know about, I hope that asking this question shows the Unity devs that there's one more person interested in using and manipulating this data. I'm just trying to get it to work for this one effect, but I have a lot more creative ideas to try if I can.
     
    Alfonmc likes this.
  2. Alfonmc

    Alfonmc

    Joined:
    Dec 20, 2016
    Posts:
    4
    I am writing a thread about this as well; I guess we are both on the hunt for a solution to this in URP, and I don't like it. It was kind of solved for the standard pipeline, but I am still trying to figure out what is going on here: is it a bug? Still an open problem? Black magic only a few can solve?
     
  3. Madalaski

    Madalaski

    Joined:
    Apr 16, 2013
    Posts:
    6
    Hello! Today is your lucky day, because weeks after I made this post I sucked it up and decided to finish my tutorial video with the message that I couldn't do this final section and would need it to become a Unity feature sometime in the future. That's still true in general: SRP feels experimental at times and is missing a few key features that would make it simpler to use, but for this specific problem I found the solution while doing "fake research" for the end of my video. You can see more details of my journey here:


    I found this in HLSL, but from my research I'm pretty sure there is a way to do it in CG as well. You simply replace the frag function of the shader you're using to render the object (or, in my case, to blit) with something more akin to this.

    Code (HLSL):
    half4 frag (Varyings input, out float depth : SV_Depth) : SV_Target {
        half4 col = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uv);
        depth = 1; // put whatever you want here for the depth
        return col;
    }
    The key changes are the extra output variable tagged with the SV_Depth semantic in the function signature, and then actually assigning a value to depth. Oh, and make sure ZWrite is set to On in the shader's pass.
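    To be explicit about where those two pieces live, here's a stripped-down sketch of the pass (ShaderLab around the HLSL; everything not related to the depth trick is omitted):

    Code (HLSL):
    // Stripped-down pass sketch: only the state relevant to the depth trick.
    Pass
    {
        ZWrite On    // without this, the SV_Depth output never reaches the depth buffer
        ZTest Always // typical for a full-screen blit
        Cull Off

        HLSLPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        // vert, Varyings, _MainTex etc. as in the fragment function shown above
        ENDHLSL
    }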

    I've been told that this can give you some strange results, since you're potentially changing the depth of a pixel after it has already been Z-tested and possibly culled (writing to SV_Depth also disables early-Z optimizations), but because I was doing this for blitting and only needed to write to the depth buffer so that transparents would be culled correctly, this was fine and it works like a dream!
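    One extra note: if you need transparents to sort against the pixelated objects per pixel rather than against a constant, you could write a real depth value instead of 1, for example by sampling the combined depth texture from earlier. Again a sketch with the placeholder name _CombinedDepthTex, not my exact code:

    Code (HLSL):
    // Sketch: write the pre-combined camera + pixel-layer depth back out,
    // so pixels the layer doesn't cover keep the camera's original depth.
    TEXTURE2D(_CombinedDepthTex); SAMPLER(sampler_CombinedDepthTex);

    half4 frag (Varyings input, out float depth : SV_Depth) : SV_Target
    {
        half4 col = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uv);
        depth = SAMPLE_TEXTURE2D(_CombinedDepthTex, sampler_CombinedDepthTex, input.uv).r;
        return col;
    }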

    Hope this helps you out!
     
    morepixels, Zyblade, DrSpritz and 3 others like this.