
Question Reuse Depth

Discussion in 'High Definition Render Pipeline' started by Phantom_X, Jan 16, 2020.

  1. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    316
    Hi,

    Is there any way to reuse the scene depth to mask a render texture in a custom pass? I use a layer mask to render some geometry to a texture and I want the geometry that is in front of this one to mask my texture.




    I tried comparing the scene depth and the mesh depth like so, but with no luck
    Code (CSharp):
        float depth = LoadCameraDepth(varyings.positionCS.xy);
        float d = LoadCustomDepth(posInput.positionSS);

        float alphaFactor = (d < depth) ? 1 : 0;

    my custom volume settings:



    Thanks
     
  2. antoinel_unity

    antoinel_unity

    Unity Technologies

    Joined:
    Jan 7, 2019
    Posts:
    267
    Hello,

    By any chance, would you be able to bind the camera depth buffer as the target depth buffer when rendering your objects (and override the depth state so you don't write to the camera depth buffer)? With this, the objects you render into your mask would be depth-tested against the scene (but not against each other).

    Otherwise I don't see anything wrong with the code you posted (though I'm wondering why you are using two different coordinate types to load the custom and camera depth buffers?). I suggest you debug the depth you're sampling; you can use our conversion function `LinearEyeDepth` to convert the raw depth value to a depth in view space, like here: https://github.com/alelievr/HDRP-Cu.../CustomPasses/TIPS/Resources/TIPS.shader#L110 It will be easier to visualize in this format.
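    As an illustrative sketch (not from the thread), that debug step in a fullscreen custom pass shader could look like the following, assuming the usual HDRP includes so `LoadCameraDepth`, `LinearEyeDepth`, and `_ZBufferParams` are in scope; the 100.0 divisor is an arbitrary choice to make the gradient visible:

```hlsl
// Visualize the raw depth buffer as linear eye depth (debug sketch).
float rawDepth = LoadCameraDepth(varyings.positionCS.xy);
// Convert the non-linear hardware depth into a view-space distance.
float eyeDepth = LinearEyeDepth(rawDepth, _ZBufferParams);
// Remap by a hypothetical max distance of 100 units for display.
return float4(eyeDepth.xxx / 100.0, 1);
```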
     
  3. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    316
    I actually used the example you mentioned, and they use these different coordinates too; that's why mine looks like that.
    Using the LinearEyeDepth trick, I was able to see that the depth texture from the scene looks fine, but the depth from the mesh renders as fully white. So I guess that's why the depth testing is not working.

    I thought my depth target was already set to the camera's, no?

    About overriding the depth state, I'm not sure if I'm doing this correctly (probably not), but I create my depth buffer like so:
    Code (CSharp):
    _rtDepth = RTHandles.Alloc(
        Vector2.one, TextureXR.slices, dimension: TextureXR.dimension,
        colorFormat: GraphicsFormat.R16_UInt, useDynamicScale: true, isShadowMap: true,
        name: "Depth Mask", depthBufferBits: DepthBits.Depth16
    );

    then I render the objects like this:

    Code (CSharp):
    var result = new RendererListDesc(_shaderTags, cullingResult, camera.camera)
    {
        rendererConfiguration = PerObjectData.None | PerObjectData.LightProbe | PerObjectData.LightProbeProxyVolume | PerObjectData.Lightmaps,
        renderQueueRange = RenderQueueRange.all,
        sortingCriteria = SortingCriteria.BackToFront,
        excludeObjectMotionVectors = false,
        layerMask = maskLayer,
        stateBlock = new RenderStateBlock(RenderStateMask.Depth) { depthState = new DepthState(true, CompareFunction.LessEqual) },
    };

    CoreUtils.SetRenderTarget(cmd, _rt, _rtDepth, ClearFlag.All);
    HDUtils.DrawRendererList(renderContext, cmd, RendererList.Create(result));

     
  4. antoinel_unity

    antoinel_unity

    Unity Technologies

    Joined:
    Jan 7, 2019
    Posts:
    267
    What I meant is that when calling SetRenderTarget() you pass your custom color buffer and the camera depth buffer; because the camera depth buffer already contains the objects in your scene, the other objects you're drawing will be depth-tested against the scene. The problem is that your objects won't be rendered into your custom depth buffer, so if you need the custom depth buffer for other steps in your effect, this approach won't work.

    You can get the camera depth buffer using this function: https://docs.unity3d.com/Packages/c...tomPass_GetCameraBuffers_RTHandle__RTHandle__
     
  5. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    316
    Thank you so much for your help, but sorry, I still can't get it to work :(

    So I tried passing the camera depth like you said with the GetCameraBuffers() method, but I still get the same result, where my custom color buffer renders as if there were nothing in front.

    Code (CSharp):
        RTHandle source;
        RTHandle depth;

        // Retrieve the target buffers
        GetCameraBuffers(out source, out depth);

        // Render the objects in the layer mask into a mask buffer
        var result = new RendererListDesc(_shaderTags, cullingResult, camera.camera)
        {
            rendererConfiguration = PerObjectData.None | PerObjectData.LightProbe | PerObjectData.LightProbeProxyVolume | PerObjectData.Lightmaps,
            renderQueueRange = RenderQueueRange.all,
            sortingCriteria = SortingCriteria.BackToFront,
            excludeObjectMotionVectors = false,
            layerMask = maskLayer,
            stateBlock = new RenderStateBlock(RenderStateMask.Depth) { depthState = new DepthState(true, CompareFunction.LessEqual) },
        };

        CoreUtils.SetRenderTarget(cmd, _rt, depth, ClearFlag.All);
        HDUtils.DrawRendererList(renderContext, cmd, RendererList.Create(result));

        var compositingProperties = new MaterialPropertyBlock();
        compositingProperties.SetTexture("_Mask", _rt);
        HDUtils.DrawFullScreen(cmd, fullScreenMat, source, compositingProperties, shaderPassId: 0);

    Is that what you meant?
     
  6. antoinel_unity

    antoinel_unity

    Unity Technologies

    Joined:
    Jan 7, 2019
    Posts:
    267
    When calling CoreUtils.SetRenderTarget, you're clearing all of the targets you bind (including the camera depth buffer). That means there is no more depth information in the camera depth buffer. You can set the clear flags to ClearFlag.Color so it only clears the color.
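    Concretely, a sketch of the corrected call (reusing the variables from the earlier post) would be:

```csharp
// Keep the scene's depth information: clear only the color target,
// not the camera depth buffer that was just bound.
CoreUtils.SetRenderTarget(cmd, _rt, depth, ClearFlag.Color);
HDUtils.DrawRendererList(renderContext, cmd, RendererList.Create(result));
```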
     
  7. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    316
    Ahhh, that was it, it works now! Thanks a lot!
     
    antoinel_unity and meadjix like this.
  8. Phantom_X

    Phantom_X

    Joined:
    Jul 11, 2013
    Posts:
    316
    Hey @antoinel_unity ,
    I'm now rendering objects with a command buffer in the custom pass because I don't want the camera to render them as-is. It was working fine in Unity 2019.0.3f5, but now in f6 it broke.

    The UVs for the depth and the UVs for the color no longer seem to match. In the fullscreen pass I need to multiply the UV by _RTHandleScale.xy for the depth to work, but not for the color. Since they are in the same render target, I can't set one UV for the color and another for the depth, right?




    EDIT: I made it work using CoreUtils.SetRenderTarget instead of using the SetRenderTarget straight from the command buffer.
     
    Last edited: Jan 28, 2020
  9. antoinel_unity

    antoinel_unity

    Unity Technologies

    Joined:
    Jan 7, 2019
    Posts:
    267
    Hey,

    When sampling RTHandles in shaders, you must always multiply your raw UVs by _RTHandleScale.xy: when you have multiple cameras (scene view + game view at the same time, for example), this avoids sampling outside the current camera's viewport. Note that you can also use a Load operation with the screen coordinate; in that case you don't need to scale anything.

    And yes, CoreUtils.SetRenderTarget also sets the viewport, which avoids writing to the full render target and the scaling issues that can cause.
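    As a sketch, the two sampling options in a fullscreen shader might look like this (assuming `_Mask` is the custom RTHandle and the standard HDRP shader variables and macros, such as `_RTHandleScale`, `SAMPLE_TEXTURE2D_X`, and `LOAD_TEXTURE2D_X`, are available):

```hlsl
// Option 1: sample with UVs scaled into the current camera's viewport
// region of the shared RTHandle.
float2 scaledUV = uv * _RTHandleScale.xy;
float4 mask = SAMPLE_TEXTURE2D_X(_Mask, s_linear_clamp_sampler, scaledUV);

// Option 2: load with integer screen coordinates; no scaling needed.
float4 maskLoad = LOAD_TEXTURE2D_X(_Mask, positionSS.xy);
```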