Write an existing depth buffer into the ZBuffer for occlusion

Discussion in 'General Graphics' started by thibautvdumont, Oct 24, 2019.

  1. thibautvdumont

    Joined:
    Oct 6, 2015
    Posts:
    25
    Hello everyone,

    I'm working on people occlusion with ARFoundation, but this question is more generally about whether and how one can place a transparent, occluding 2.5D mesh in the scene through a shader.

    Basically, I have a float texture in hand: a projection of a body onto the main camera, filled with metric Z values in world space. I could generate a mesh from it and place it in the scene, but I'm thinking there must be a simpler way to place those 'occluding pixels' in the pipeline without that costly workaround.

    Most people work with _CameraDepthTexture and post-processing, but that doesn't work with transparent objects. Wouldn't it be possible to simply write those depth values into the Z buffer so that they are taken into account when sorting and drawing the objects? This is the shader I have written so far:

    Code (CSharp):

    Shader "Custom/HumanSegmentationZWrite" {
        Properties {
            _MainTex ("Texture", 2D) = "white" {}
            _StencilTex ("StencilTex", 2D) = "white" {}
        }
        SubShader {
            Tags { "Queue"="Geometry-10" }
            ColorMask 0
            Pass {
                ZWrite On
                CGPROGRAM

                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                uniform sampler2D _MainTex;
                uniform sampler2D _StencilTex;

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float2 uv : TEXCOORD0;
                    float4 vertex : SV_POSITION;
                };

                struct fragOut
                {
                    float4 color : SV_Target;
                    float depth : SV_Depth;
                };

                v2f vert(appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.uv;
                    return o;
                }

                fragOut frag(v2f i)
                {
                    fragOut o;
                    o.color = 0;

                    float2 uv = i.uv;

                    // Cut out everything that's not a human using the stencil
                    float stencil = tex2D(_StencilTex, uv).r;
                    if (stencil < 0.9)
                    {
                        // Push non-human pixels to the far plane
                    #if defined(UNITY_REVERSED_Z)
                        o.depth = 0;
                    #else
                        o.depth = 1;
                    #endif
                    }
                    else
                    {
                        // Remap the metric depth into the buffer's [0, 1] range
                        // (a linear remap; note the hardware Z buffer is
                        // non-linear for perspective projections)
                        float worldDepth = tex2D(_MainTex, uv).r;
                        float nearPlaneZ = _ProjectionParams.y;
                        float farPlaneZ = _ProjectionParams.z;
                        float frustumDepth = farPlaneZ - nearPlaneZ;
                        float frustumRelativeDepth = (worldDepth - nearPlaneZ) / frustumDepth;
                        frustumRelativeDepth = clamp(frustumRelativeDepth, 0, 1);

                    #if defined(UNITY_REVERSED_Z)
                        frustumRelativeDepth = 1.0f - frustumRelativeDepth;
                    #endif

                        o.depth = frustumRelativeDepth;
                    }

                    return o;
                }
                ENDCG
            }
        }
    }
    And this is how I'm using it on the main camera:

    Code (CSharp):

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class HumanDepthSegmentation : MonoBehaviour
    {
        [SerializeField] private Material _material;
        [SerializeField] private ARHumanBodyManager _humanBodyManager;

        private void OnPreRender()
        {
            var humanStencil = _humanBodyManager.humanStencilTexture;
            var humanDepth = _humanBodyManager.humanDepthTexture;

            _material.SetTexture("_StencilTex", humanStencil);
            Graphics.Blit(humanDepth, null, _material);
        }
    }
    Unfortunately, despite having suitable values in _MainTex (the depth texture), the virtual objects aren't occluded. Am I misunderstanding something about the rendering pipeline and its order?
     
    Last edited: Oct 25, 2019
  2. Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    158
    I would think you just want to run that in OnPreRender so that your stencil depth values are in the Z buffer for the virtual objects to test against when they are rendered. In OnPostRender everything has already finished rendering, so all the depth values you set will just sit there until they are discarded at the beginning of the next frame, when the camera is cleared.
     
  3. thibautvdumont

    Joined:
    Oct 6, 2015
    Posts:
    25
    Many thanks for your suggestion. Unfortunately, I did try OnPreRender, but it isn't working either. It's as if the Blit weren't writing to the Z buffer.
    I have a working solution at the moment by:
    - placing a quad in front of my main camera
    - applying a shader-based solution to make it full screen
    - applying the previous material
    - setting the material parameters on LateUpdate

    It's ok, but I get the feeling that I shouldn't need to put this 'mask' in my scene and could get it working with Blit.
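
    The quad workaround listed above could be sketched roughly as follows. This is a hypothetical reconstruction, not the poster's actual code: the component name, the assumption that the material's shader stretches the quad to cover the screen, and the use of ARHumanBodyManager's stencil/depth textures are all inferred from the description.

    Code (CSharp):

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class FullscreenOcclusionQuad : MonoBehaviour
    {
        [SerializeField] private Material _material;             // the ZWrite material above
        [SerializeField] private ARHumanBodyManager _humanBodyManager;
        [SerializeField] private Camera _camera;

        private void Start()
        {
            // Parent a quad to the camera. The shader is assumed to stretch it
            // to full screen, so the transform only has to keep it in the frustum.
            var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.transform.SetParent(_camera.transform, false);
            quad.transform.localPosition = Vector3.forward * (_camera.nearClipPlane + 0.1f);
            quad.GetComponent<MeshRenderer>().material = _material;
            Destroy(quad.GetComponent<Collider>());
        }

        private void LateUpdate()
        {
            // Refresh the segmentation textures before the camera renders
            _material.SetTexture("_MainTex", _humanBodyManager.humanDepthTexture);
            _material.SetTexture("_StencilTex", _humanBodyManager.humanStencilTexture);
        }
    }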
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    OnPreRender() runs before that camera's target is set up. Doing a Blit() to a null target at that point ... well, I have no idea what you're rendering to then. It might actually end up being the same buffer your camera will use, but it gets cleared immediately afterwards, or it's just rendering to the last thing that was rendered to.

    If you want to do this, you'd need to use a command buffer so you can render during the camera event BeforeOpaque, as that's when the camera has a valid target you can write depth to.
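
    A minimal, untested sketch of that command-buffer approach might look like this (assuming the built-in forward pipeline, where the corresponding event is named CameraEvent.BeforeForwardOpaque; the component and buffer names are made up):

    Code (CSharp):

    using UnityEngine;
    using UnityEngine.Rendering;

    [RequireComponent(typeof(Camera))]
    public class DepthBlitCommandBuffer : MonoBehaviour
    {
        [SerializeField] private Material _material;   // the ZWrite material above
        private CommandBuffer _commandBuffer;

        private void OnEnable()
        {
            _commandBuffer = new CommandBuffer { name = "Human depth occlusion" };
            // Fullscreen blit with the depth-writing material into the camera's
            // currently active target, executed before opaque geometry renders.
            _commandBuffer.Blit(null, BuiltinRenderTextureType.CurrentActive, _material);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, _commandBuffer);
        }

        private void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, _commandBuffer);
            _commandBuffer.Release();
        }
    }

    The segmentation textures would still have to be set on the material each frame (e.g. in LateUpdate, as in the quad workaround).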