
How can I write the camera depth into a RenderTexture when OnRenderImage is not getting called?

Discussion in 'Graphics Experimental Previews' started by Desoxi, Sep 7, 2018.

  1. Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Hey, the title pretty much says everything. I'm trying to use the camera depth texture inside another shader, which is why I'm trying to Blit the camera depth into a RenderTexture via OnRenderImage.
    The only problem I have is that it's not called once in the HDRenderPipeline project.
    I created a new default 3D project (no HD render stuff) and there the OnRenderImage method is called as always.
    Where can I hook up the blit method in the HDRender project?
     
  2. Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Alright, through a lot of searching I found out that I could use a custom Post-processing Stack effect to blit the depth into a RenderTexture like this:

    C# part:
    Code (CSharp):
    using System;
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Rendering.PostProcessing;

    [Serializable]
    [PostProcess(typeof(DepthExporterRenderer), PostProcessEvent.BeforeStack, "Custom/DepthExport")]
    public sealed class DepthExporter : PostProcessEffectSettings
    {
        //public RenderTextureParameter depthTexture;
    }

    public sealed class DepthExporterRenderer : PostProcessEffectRenderer<DepthExporter>
    {
        // Ask the camera to generate a depth texture for this effect
        public override DepthTextureMode GetCameraFlags()
        {
            return DepthTextureMode.Depth;
        }

        public override void Render(PostProcessRenderContext context)
        {
            // Blit through the depth-visualizing shader below
            var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/DepthShader"));
            //sheet.properties.SetFloat("_Blend", settings.blend);
            context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
        }
    }

    [Serializable]
    public sealed class RenderTextureParameter : ParameterOverride<RenderTexture> {}
    and the following shader:

    Code (CSharp):
    Shader "Hidden/Custom/DepthShader"
    {
        HLSLINCLUDE

        #include "PostProcessing/Shaders/StdLib.hlsl"

        TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);

        float4 Frag(VaryingsDefault i) : SV_Target
        {
            float depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoordStereo));
            return float4(depth, depth, depth, 0);
        }

        ENDHLSL

        SubShader
        {
            Cull Off ZWrite Off ZTest Always

            Pass
            {
                HLSLPROGRAM

                #pragma vertex VertDefault
                #pragma fragment Frag

                ENDHLSL
            }
        }
    }
    Though I do have another problem now: the displacement I'm using is based on this depth texture, and it seems to be changing every render although nothing in the scene is moving.
    I'm going to open another thread for this case.
     
  3. wyattt_

    Unity Technologies

    Joined:
    May 9, 2018
    Posts:
    424
    These are my guesses. Post a gif if it's not too much trouble and I can tell ya if that's the case or not.

    Typical Depth Buffer "Jitter" Scenario:
    Typically it will "jitter" because of when, during the render process, you are accessing the current Camera's depth texture. The depth buffer gets filled as opaque geometry is drawn to the screen, so if your shader is used as part of the opaque geometry queue, the depth buffer for that frame has not been filled yet and you are probably reading the depth buffer from the previous frame. Your depth buffer is basically lagging behind by one frame. To fix this, you generally set the queue of the shader to "Transparent", since transparent geometry gets rendered after opaque geometry, at which point the depth buffer will be filled (see the sketch below).
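    For illustration, a minimal sketch of that queue change (the shader name here is made up; only the tag placement matters):

    Code (CSharp):
    Shader "Custom/DepthReaderExample"
    {
        SubShader
        {
            // Rendered after all opaque geometry, so _CameraDepthTexture is fully populated
            Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

            // ... passes that sample _CameraDepthTexture go here ...
        }
    }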

    Your situation:
    I believe the situation is similar for you. You copy the depth buffer via post-processing to another buffer and use that in your shader, but post-processing runs after opaque/transparent geometry is drawn, which means your shader has already been used for rendering and you are copying what is now the previous frame's depth buffer into your intermediate buffer. By the time your shader runs again on the next frame, it is reading old depth data from that intermediate buffer.

    Side Note:
    It might also "jitter" because there is also the SceneView camera, which has its own depth pass, so you might at times be using the SceneView camera's depth buffer in addition to your main camera's. This should be rectified when entering Play Mode without the SceneView active.

    EDIT:
    Jitter issue was a little different from what I expected due to vertex displacement. You can see the "jitter" here: https://forum.unity.com/threads/doe...-although-the-whole-scene-stays-still.552787/
     
    Last edited: Sep 25, 2018
  4. MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,983
    @wyattt_ can you confirm whether the above is the correct current way to do depth for the new pipelines, e.g. if I wanted to do a standard colour-by-depth shader like the depth texture example in the documentation? Really struggling to recreate an underwater fog/coloration shader :(
     
  5. wyattt_

    Unity Technologies

    Joined:
    May 9, 2018
    Posts:
    424
    This is how you should currently do it (minus the post-processing and intermediate buffer part though).

    Off the top of my head, here are the steps you'd have to take:

    SRP and Depth in custom Shader (similar to non-SRP workflow):
    1. Enable depth on your camera or in the pipeline settings asset if there is one (specifically for Lightweight)
    2. Set the "Queue" Tag to "Transparent"
    3. Add sampler2D _CameraDepthTexture to your shader
    4. Sample the camera's depth buffer using the LinearEyeDepth function that Desoxi used above
    5. Use the linearized depth value to do your depth-based coloring (a minimal sketch follows these steps)
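
    For illustration, a minimal sketch of those five steps as a hand-written shader. This uses built-in-pipeline syntax (UnityCG.cginc); the shader name, the 50-unit fade distance, and the colors are made up for the example, and the exact include files and macros differ per pipeline:

    Code (CSharp):
    Shader "Custom/DepthTintExample"
    {
        SubShader
        {
            // Step 2: render after opaques so the depth buffer is already filled
            Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

            Pass
            {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                // Step 3: the depth texture the camera fills when depth is enabled (step 1)
                sampler2D _CameraDepthTexture;

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float4 screenPos : TEXCOORD0;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.screenPos = ComputeScreenPos(o.pos); // screen-space coords for the depth sample
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Step 4: sample the depth buffer at this fragment's screen position and linearize
                    float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos));
                    float sceneDepth = LinearEyeDepth(rawDepth);

                    // Step 5: hypothetical depth-based coloring, fading to a deep color over 50 units
                    float t = saturate(sceneDepth / 50.0);
                    return fixed4(lerp(float3(0.0, 0.5, 1.0), float3(0.0, 0.0, 0.1), t), 0.5);
                }
                ENDCG
            }
        }
    }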

    SRP and Depth in a Shader Graph:
    1. In your Shader Graph, add a Texture2D property via the Blackboard
    2. Set the reference to "_CameraDepthTexture"
    3. Set "Exposed" toggle to false
    4. Drag the property into your Shader Graph workspace
    5. Add a Sample Texture 2D Node and plug the _CameraDepthTexture node into the Texture2D input port
    6. You now need to sample the depth texture using screen space UVs, so you'll need to add a ScreenPosition node and plug that into the UV port of the Sample Texture2D Node. This will give you the screen position of the current mesh fragment
    7. The output of the Sample Texture2D Node will now give you the stored non-linear depth value for the screen pixel where the current mesh fragment is going to be drawn (you'll have to linearize this yourself until our Shader Graph Depth Node makes it into a package release; see the linearization sketch after these steps). You can look at the Unity shader source to get the code for that function
    8. Do your depth-based coloring with the linearized depth value
    9. Create a Material out of your Shader Graph
    10. Set the Render Queue to Transparent via the Material Inspector
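
    As a reference for the linearization in step 7, these are the helper functions as they appear in Unity's built-in shader includes (UnityCG.cginc). _ZBufferParams is a vector Unity provides to every shader; rawDepth is the value sampled from _CameraDepthTexture. How you get this HLSL into a graph depends on your Shader Graph version:

    Code (CSharp):
    // Z buffer value to linear 0..1 depth (0 near the camera, 1 at the far plane)
    float Linear01Depth(float rawDepth)
    {
        return 1.0 / (_ZBufferParams.x * rawDepth + _ZBufferParams.y);
    }

    // Z buffer value to linear eye-space depth, in world units from the camera
    float LinearEyeDepth(float rawDepth)
    {
        return 1.0 / (_ZBufferParams.z * rawDepth + _ZBufferParams.w);
    }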


    For both cases, you might also need to compute the depth for your mesh fragment and compare it to the depth value stored in the depth buffer to get the color comparisons you want (a brief sketch of that comparison follows).
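
    A sketch of that comparison, the classic "depth fade" pattern used for water and fog edges (assuming the built-in macros and a screenPos interpolant from ComputeScreenPos, as in the earlier sketch):

    Code (CSharp):
    // Scene depth behind this fragment, linearized to eye space
    float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));

    // Eye-space depth of the fragment itself (clip-space w from ComputeScreenPos)
    float fragmentDepth = i.screenPos.w;

    // 0 where the mesh touches other geometry, rising to 1 as the gap grows
    float fade = saturate(sceneDepth - fragmentDepth);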
     
  6. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,983
    Brilliant, thanks very much for detailing the steps clearly for me and anyone else who reads this!
     
  7. E2R_Ben

    Joined:
    Oct 30, 2009
    Posts:
    143
    Hey, I'm using LWRP 3.0, upgrading the project from a Shader Forge one. In Shader Forge I had "write to depth buffer" selected and the different alpha'd objects blended as expected. When I moved to Shader Graph I get this sorting issue. Can someone please offer a solution? I can't really modify the mesh to remove inside faces at this stage in the project. (FYI it's a hologram, so I kind of need it all to be transparent to add fade-in and fade-out effects)
    [attached screenshot: upload_2018-10-25_11-26-25.png, showing the sorting issue]