
Does the depth texture slightly change in each render, although the whole scene stays still?

Discussion in 'Graphics Experimental Previews' started by Desoxi, Sep 9, 2018.

  1. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Hey,
    in another thread I wrote about my approach to finally read the depth texture of a second cam through a custom post-process effect, and I can use it in all my other shaders through a render texture. The only issue that remains is that it seems to change every time it renders. I can tell because my displacements are "flickering" up and down depending on the change.

    I hope the compression is not too crappy:



    Is there a way to go around this and stop the flickering?
     
  2. julian-moschuering

    julian-moschuering

    Joined:
    Apr 15, 2014
    Posts:
    529
    Looks like TAA is active? If that's the reason, you can either check whether it is really required for this camera or filter the last two frames using the motion vector texture.

    Normally the jittered depth should be used for effects, but some effects, like edge detection, need a stable texture.
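    For illustration of why TAA would cause this (a simplified sketch of the mechanism, not Unity's actual code): temporal AA typically offsets the projection matrix by a sub-pixel amount each frame, commonly drawn from a low-discrepancy sequence such as Halton(2,3), so a depth buffer rendered under that jittered projection differs slightly every frame even for a perfectly static scene:

```python
def halton(index, base):
    """Radical-inverse low-discrepancy sequence in [0, 1)."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def frame_jitter(frame, width, height):
    """Sub-pixel projection offset for a given frame, as a Halton (2,3) pair,
    centered around zero and scaled to at most half a pixel."""
    i = frame % 16 + 1
    jx = (halton(i, 2) - 0.5) / width
    jy = (halton(i, 3) - 0.5) / height
    return jx, jy

# The offset changes every frame, so a depth sample at a fixed pixel sees
# slightly different geometry each frame -> "flickering" depth values.
offsets = [frame_jitter(f, 1920, 1080) for f in range(4)]
```

    Filtering the last two frames (as suggested above) works because averaging consecutive jittered samples cancels most of this per-frame offset.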
     
  3. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Thank you for the reply.
    No, I deactivated AA on that camera. I'm not sure how I could filter the last two frames with a motion texture. Do you have an example I could use to learn from and implement it?
     
    Last edited: Sep 10, 2018
  4. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    Do you have MSAA enabled? There's a chance the issue is related to your displacement effect. The way to debug this is to validate that the depth texture is correct by blitting it to the screen.
     
  5. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Does MSAA have to be enabled? I think it's not, but I'm going to check once I'm back home.
    The displacement works correctly when using other textures from disk.
    The depth texture looks fine when rendering it directly to the screen, though I don't know whether it's jittering and I'm simply not able to see the change.
     
  6. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
  7. wyattt_

    wyattt_

    Unity Technologies

    Joined:
    May 9, 2018
    Posts:
    424
    Hadn't noticed that you mentioned you were doing displacement based on the depth buffer. Initially thought it was just color so that looks a little different from what I was expecting haha.

    Can you share a little more info on the code/graph and camera/scene setup in a single post? Might be easier to get an idea of what you are trying to do and what might be going wrong. It's odd that it jitters while in Game View and not moving the camera.

    Thanks!

    Scene looks super cool btw!
     
    Last edited: Sep 19, 2018
    Desoxi likes this.
  8. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Thank you :)

    I'll try to write everything down here. It's actually pretty simple, but I'm not sure if it's the best way to approach this!

    I have a custom post-process script "exporting" the depth of a camera into a render texture.

    Code (CSharp):
    using System;
    using UnityEngine;
    using UnityEngine.Rendering.PostProcessing;

    [Serializable]
    [PostProcess(typeof(DepthExporterRenderer), PostProcessEvent.BeforeStack, "Custom/DepthExport")]
    public sealed class DepthExporter : PostProcessEffectSettings
    {
        //public RenderTextureParameter depthTexture; doesn't work!
    }

    public sealed class DepthExporterRenderer : PostProcessEffectRenderer<DepthExporter>
    {
        public override DepthTextureMode GetCameraFlags()
        {
            return DepthTextureMode.Depth;
        }

        public override void Render(PostProcessRenderContext context)
        {
            var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/DepthShader"));
            context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
        }
    }

    //TODO: doesn't work, any other way to do something like this?
    [Serializable]
    public sealed class RenderTextureParameter : ParameterOverride<RenderTexture>
    {
    }

    Because creating a RenderTextureParameter class inheriting from ParameterOverride<RenderTexture> didn't work, I instead chose to set a target texture inside the camera's output settings:

    capture111.PNG

    The shader I'm using in this line

    Code (CSharp):
    var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/DepthShader"));
    looks like this:

    Code (CSharp):
    // Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'

    Shader "Hidden/Custom/DepthShader"
    {
        HLSLINCLUDE

        #include "PostProcessing/Shaders/StdLib.hlsl"

        TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);

        float4 Frag(VaryingsDefault i) : SV_Target
        {
            float depth = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoordStereo));
            return float4(depth, depth, depth, 1);
        }

        ENDHLSL

        SubShader
        {
            Cull Off ZWrite Off ZTest Always

            Pass
            {
                HLSLPROGRAM

                #pragma vertex VertDefault
                #pragma fragment Frag

                ENDHLSL
            }
        }
    }
    I also tried using LinearEyeDepth instead of Linear01Depth, but because of the comment here which states that Linear01Depth handles orthographic projection correctly, and my second camera is orthographic, I chose this one (the jitter exists in both cases).
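    As context for the Linear01Depth call in the shader above: for a perspective camera with a conventional (non-reversed) depth buffer, Linear01Depth is equivalent to 1 / (_ZBufferParams.x * z + _ZBufferParams.y) with x = 1 - far/near and y = far/near, while for an orthographic camera the stored depth is already linear. A sketch of both cases (the near/far values here are illustrative, and reversed-Z platforms use different parameters):

```python
def linear01_depth_perspective(z, near, far):
    """Unity-style Linear01Depth for a conventional (non-reversed) depth
    buffer: maps raw depth z in [0, 1] to linear [near/far, 1]."""
    zx = 1.0 - far / near  # _ZBufferParams.x
    zy = far / near        # _ZBufferParams.y
    return 1.0 / (zx * z + zy)

def linear01_depth_ortho(z):
    """With an orthographic projection the depth buffer is already linear,
    so no remapping is needed."""
    return z

near, far = 0.3, 1000.0
# geometry at the near plane maps to near/far, at the far plane to 1.0
at_near = linear01_depth_perspective(0.0, near, far)
at_far = linear01_depth_perspective(1.0, near, far)
```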

    The cubes you see there are all drawn via Graphics.DrawMesh with a few matrix batches with a max size of 1023 (I was curious how it impacts performance compared to instantiated cubes). Because I wasn't able to activate GPU instancing on Shader Graph shaders, I couldn't reach the same performance as with the default material.

    And finally, the shader all the cubes are using (passed as the material, the 3rd parameter of the DrawMesh method) uses the depth render texture to do some color and displacement manipulations.
    When I first saw the jitter, I tried deactivating parts of the shader to see whether it affects only the displacement or the color as well, but it seems to affect both, though in the color it's a bit difficult to see.
    I also tried a perspective camera and played around with the orthographic size, but it didn't help much.

    The way I'm using this: my main camera is perspective and is the main view. The orthographic camera has a culling mask and renders only one layer, with its render target set to a custom render texture.

    This is a picture of how it looks in the scene view

    capture112.PNG

    and this is the equivalent in the gameview with the rendered cubes:

    capture113.PNG

    This is a pretty long post now, but I appreciate your help :)

    EDIT: Ah yes, of course there is no TAA activated, or any AA at all. One thing I could observe: as soon as I deactivate the Post-process Layer on the ortho cam, the jitter stops, because of course the post-process depth exporter doesn't update the texture anymore and it stands still.
     
    Last edited: Sep 21, 2018
  9. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    The depth render texture I'm using as the render target on the ortho cam looks like this:

    depth.PNG
     
  10. wyattt_

    wyattt_

    Unity Technologies

    Joined:
    May 9, 2018
    Posts:
    424
    Hmm. Ok. Try these things just to see what happens (suggestion 1 is a little too wild though for reasons noted at the bottom and probably won't work >.<):

    1. Set the Render Queue to "Transparent" on the Material that uses your Shader Graph and try setting the PostProcessEvent for your depth copy pass to BeforeTransparent
    2. Instead of using an intermediate depth buffer, calculate the depth of each fragment in your Shader Graph and use that instead. That way you have the current frame's depth value for that fragment. Take a look at UnityObjectToViewPos in UnityCG.cginc. The vector returned from that will have a "raw depth" (unprojected depth i think) value stored in the Z component. You negate the Z component and then divide that by your camera's far clip plane (accessed via the Camera Node) to get a fairly linear depth value. Use that as your depth value

    Note:
    The first one sounds kinda funky because you are using the fragment's depth value for your color and displacement, but if you set the Material to be Transparent, they won't be included in the depth buffer and therefore no values? Setting the stack event to BeforeTransparent would be in the hopes that you have actual data to use in the depth buffer and then transparent geometry should draw after that at which point you'd have relevant depth data. Would work if you did a depth prepass and then used that but that would require rendering everything twice.

    Stick with #2 haha
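    The math behind suggestion #2 is small enough to sketch outside of Unity (the values here are illustrative; UnityObjectToViewPos itself just transforms the object-space position into view space, where Unity's camera looks down negative Z):

```python
def view_space_linear_depth(view_pos_z, far_clip):
    """Depth as described above: negate the view-space Z component (the
    camera looks down -Z, so visible points have negative Z) and divide by
    the far clip plane to get a roughly linear value in [0, 1]."""
    return -view_pos_z / far_clip

# a fragment 50 units in front of the camera, far plane at 100 units:
d = view_space_linear_depth(-50.0, 100.0)  # -> 0.5
```

    Because this is computed per fragment in the current frame, it never goes through an intermediate (possibly jittered or dithered) depth texture.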
     
  11. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Thanks for the answer, and yes you are right, I tried #1 yesterday and it didn't have an effect :D
    After reading this I started implementing a custom node to use UnityObjectToViewPos with a custom view matrix instead of UNITY_MATRIX_V, which I'm assuming will give me the view matrix of the main camera.
    But then I was thinking about the usage of this, and I'm not quite sure how you meant it to be used.

    Because I have a two-camera setup, the second cam renders objects which the first cam does not. If I put this code into my displacement shader (the cubes rendered by the main cam) and provide the view matrix of the second cam via script, the other layer's objects whose depth I'm trying to get would have to be rendered by the main cam, and on top of that be inside the main camera's frustum, so I can use their fragment position in world space to convert it to view space etc.
    But I don't want to render the originals; that's why I used the render texture approach.

    Maybe I misunderstood your explanation?
     
    Last edited: Sep 21, 2018
  12. wyattt_

    wyattt_

    Unity Technologies

    Joined:
    May 9, 2018
    Posts:
    424
    @Desoxi Did you get this working? I think the Scene Depth Node is available now in ShaderGraph btw. Should be in 4.0.1-preview
     
  13. Desoxi

    Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    Unfortunately not, no. I just saw the depth node in Shader Graph, but I don't know yet what exactly its output gives me. And I need the depth of a second camera, which I guess is not covered, because there is no index input or anything similar, so I assume it gives back the scene depth as seen through the main camera.

    I couldn't get rid of the flickering, but I'm sure there is a way to do it without.
     
  14. hd5ai

    hd5ai

    Joined:
    Nov 2, 2018
    Posts:
    5
    In case it hasn't been answered elsewhere, and if anyone is interested - I think I have the reason why the flickering was happening.

    I ran into the same problem when abusing the Post Processing pipeline to extract depth and such from HDRP - rendering custom cameras out to EXR and not getting stable values between identical frames ended up breaking an external analysis tool.

    Thanks to the Frame Debugger, I could see that after my custom PP shader had run, a final pass kicked off which applied dithering. By default I couldn't find any way to disable this, so I ended up forking the latest Post Processing code and adding my own switch. You can see where it happens in PostProcessLayer.cs; search for dithering.Render.



    It would be great to either have a way to disable dithering for those of us that need "non-visual" data written out ... OR ... if there is an alternative, preferred way to grab stable depth / custom data (including gbuffers) out of the SRP, I would be keen to know.
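    To illustrate why a final dithering pass breaks frame-to-frame stability (a simplified model of the idea, not the stack's actual noise pattern): the pass adds a small noise offset that varies per frame before the value is quantized to the output format, so two renders of an identical scene can store different values at the same pixel:

```python
import random

def dither_quantize(value, frame, bits=8):
    """Quantize `value` in [0, 1] to `bits` levels after adding a per-frame
    noise offset of up to half an LSB (simplified stand-in for dithering)."""
    levels = (1 << bits) - 1
    rng = random.Random(frame)             # noise pattern changes each frame
    noise = (rng.random() - 0.5) / levels  # +/- half a quantization step
    return round((value + noise) * levels) / levels

# the same depth value can land on different 8-bit levels on different frames,
# which is exactly the instability an external analysis tool would see
samples = {dither_quantize(0.123456, frame) for frame in range(32)}
```

    For display output this trades invisible banding for invisible noise, which is why it is on by default; for data exports it is pure corruption.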
     
    nasos_333 and Desoxi like this.
  15. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,348
    Any news on this? Has this been implemented in the latest PP stack v2.0?

    Thanks