
_CameraDepthTexture is empty

Discussion in 'AR' started by tfisiche, Oct 28, 2019.

  1. tfisiche

    tfisiche

    Joined:
    Sep 30, 2019
    Posts:
    9
    Hi,

    I'm aware that my question has been answered many times, but none of the solutions I found work.
    I just want to retrieve the depth value of the camera (a HoloLens 1st gen in my case).
    I implemented the following shader to do that:

    Code (CSharp):
    Shader "Tutorial/Depth"{
        //show values to edit in inspector
        Properties{
            [HideInInspector] _MainTex("Texture", 2D) = "white" {}
        }

        SubShader{
            // markers that specify that we don't need culling
            // or comparing/writing to the depth buffer
            //Cull Off
            //ZWrite Off
            //ZTest Always

            Pass{
                CGPROGRAM
                //include useful shader functions
                #include "UnityCG.cginc"

                //define vertex and fragment shader
                #pragma vertex vert
                #pragma fragment frag

                //the rendered screen so far
                sampler2D _MainTex;

                //the depth texture
                sampler2D _CameraDepthTexture;

                //the object data that's put into the vertex shader
                struct appdata {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                //the data that's used to generate fragments and can be read by the fragment shader
                struct v2f {
                    float4 position : SV_POSITION;
                    float2 uv : TEXCOORD0;
                };

                //the vertex shader
                v2f vert(appdata v) {
                    v2f o;
                    //convert the vertex positions from object space to clip space so they can be rendered
                    o.position = UnityObjectToClipPos(v.vertex);
                    o.uv = ComputeScreenPos(o.position)
                    return o;
                }

                //the fragment shader
                float4 frag(v2f i) : SV_TARGET{
                    //get depth from depth texture
                    float2 uv = i.uv.xy / i.uv.w;
                    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
                    float linearDepth = Linear01Depth(depth);
                    return linearDepth;
                }
                ENDCG
            }
        }
    }
    In addition, I've enabled the depth texture in a script:

    Code (CSharp):
    public Camera cam;

    void Awake()
    {
        cam.depthTextureMode = DepthTextureMode.Depth;
    }
    But all the values I get are equal to 0:

    Code (CSharp):
    RenderTexture rt = new RenderTexture(resWidth, resHeight, resDepth, RenderTextureFormat.ARGBFloat);
    Graphics.Blit(depthMaterial.mainTexture, rt);
    RenderTexture.active = rt;
    depthTexture.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
    depthTexture.Apply();
    Can anybody help me figure out what my issue is, please?

    Thanks in advance
     
  2. Olmi

    Olmi

    Joined:
    Nov 29, 2012
    Posts:
    1,553
    Are you using the old standard render pipeline?
    Have you assigned the camera to the field you've exposed (cam)?
    Can you get the depth rendered to the screen with your shader?

    The depth texture mode assignment looks correct to my eye.

    EDIT1: Also, now that I've taken a closer look, is this actually functioning code (i.e. does it compile)? I can see typos in it. First remove every syntax error; then, if things still don't work, think about what's wrong.

    EDIT2: You also have code where you divide uv.xy by a non-existent w. Your input uv is only a two-dimensional float.

    So please first fix the code that won't compile.
     
    Last edited: Oct 28, 2019
  3. tfisiche

    tfisiche

    Joined:
    Sep 30, 2019
    Posts:
    9
    Hi, thank you for your answer.
    - I don't know which render pipeline I'm using (if you mean the rendering path, I'm currently using forward, but I also tried deferred).
    - I'm not sure I understand what you mean, so tell me if I don't answer correctly: I assigned the main camera of my scene to the "cam" field on the empty object containing my script.
    - No, I can't get anything from "_CameraDepthTexture"; the values are always 0.

    Yes, everything compiles without errors. I also removed the part where I divide uv.xy by uv.w, but I still get the same behaviour.

    During play mode I can see in the inspector that the camera is rendering depth, so I don't understand why the texture is empty.
    I know that objects with rendered depth must use an opaque shader with a render queue <= 2500, but it still doesn't work...
     
  4. Olmi

    Olmi

    Joined:
    Nov 29, 2012
    Posts:
    1,553
    Have you installed a scriptable render pipeline like LWRP/URP or HDRP, and which one are you actually using? Or are you just using the standard, "old" renderer? It matters a lot with these shaders and depth textures.

    And if you (for some reason) don't see errors in your shaders, try them in another project. I'm sure the shader will NOT compile, as it contains basic syntax errors, like a missing semicolon.

    Select your shader in the Project view, check the Inspector, and make sure the shader compiles there without errors.
     
  5. tfisiche

    tfisiche

    Joined:
    Sep 30, 2019
    Posts:
    9
    I'm using the default render pipeline. I did try LWRP, but the camera and the shaders behaved strangely and I don't know why, so I went back to the default render pipeline.
    Is that why I have these issues? If yes, could you advise me which pipeline to use and how to use it (a good documentation page or tutorial would help, otherwise I'll look myself)?

    Yeah, sorry, you were right about the errors; I fixed them, but still no improvement.
     
  6. Olmi

    Olmi

    Joined:
    Nov 29, 2012
    Posts:
    1,553
    Can you tell us a bit more about what you are trying to accomplish, so that it's easier to help you?

    I.e. where do you need that depth, where are you going to use it, and so on.
    Right now I'm not sure where you're trying to use your RenderTexture code, etc.

    If you need a camera depth, you could just render straight to a DepthTexture from a camera. You can do that by setting a RenderTexture as the Target Texture.

    Or are you looking to build some post-processing effect that utilizes depth? Just guessing here.
    If that's the case, and you are using the Post-Processing Stack v2, check the tutorial/info on how to create custom effects. It details pretty much every step needed to create Stack v2 effects.
    https://docs.unity3d.com/Packages/com.unity.postprocessing@2.1/manual/Writing-Custom-Effects.html

    HDRP/LWRP is a completely different story if you need post effects.
     
    Last edited: Oct 29, 2019
  7. tfisiche

    tfisiche

    Joined:
    Sep 30, 2019
    Posts:
    9
    I'm just trying to compute the distance of each projected pixel from the camera. I'm not using post-processing effects.

    Yes, I just want to render to a DepthTexture to get the value of each pixel and turn those values into real distances.
     
  8. tfisiche

    tfisiche

    Joined:
    Sep 30, 2019
    Posts:
    9
    What should I do to achieve that ?
     
  9. raggnic

    raggnic

    Joined:
    Sep 27, 2017
    Posts:
    13
    Did you get it to work in the end? I have a similar issue with a shader: it works in play mode in the editor but not on the device, as if there were no depth/normal values.
     
  10. IgorAherne

    IgorAherne

    Joined:
    May 15, 2013
    Posts:
    393
    Yes, see the answer by @bgolus from here:

    Your shader needs a shadowcaster pass. The easiest way to do that, as long as you're not modifying the vertex positions or adding alpha testing, is to add a Fallback shader. For most things you want this just before the last } in your shader:
    FallBack "Legacy Shaders/VertexLit"

    So you need to add it to the shader where you are trying to use _CameraDepthTexture.

    Also, make sure your camera is set up to use this mode:
    Code (CSharp):
    _myCamera.depthTextureMode = DepthTextureMode.Depth;
    If you are rendering into a render texture, make sure it has a depth buffer of 24 or 32 bits, not zero:
    _myRenderTexture = new RenderTexture(res, res, 24, RenderTextureFormat.Default);


    Also, don't forget to correctly sample the _CameraDepthTexture in the shader.
    For example, if you are computing o.screenPos, remember that in the fragment function you'll need to divide its xy by w when you sample the depth map. Or use tex2Dproj(), which does the divide for you.
    More on tex2Dproj here

    Code (CSharp):
    struct v2f{
        float4 pos: SV_POSITION;
        float4 screenPos : TEXCOORD1;
    };

    v2f vert (appdata v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.screenPos = ComputeScreenPos(o.pos);
        return o;
    }

    fixed4 frag(v2f i) : SV_Target{
        const float NEARPLANE = _ProjectionParams.y; //unity provides this constant. Camera's near plane.
        const float FARPLANE = _ProjectionParams.z;  //Not needed, but I'll use it for an artistic effect of heightmap.

        // Sample the depth texture via xy/w. Or use tex2Dproj(_CameraDepthTexture, i.screenPos.xyww).r;
        float depth = LinearEyeDepth(tex2D(_CameraDepthTexture, i.screenPos.xy/i.screenPos.w).r);
        float heightmap = (depth - NEARPLANE)/(FARPLANE - NEARPLANE);
        heightmap = 1-heightmap; //for heightmap (closer=whiter)
        return fixed4(heightmap.rrr, 1);
    }
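    To see why the xy/w divide is needed, it helps to look at the math outside the shader. Below is a minimal sketch in plain Python (illustrative only, not Unity code) of what ComputeScreenPos does, ignoring the platform-dependent y flip: it rescales clip-space xy from [-w, +w] into [0, w], so that the later divide by w in the fragment shader yields UVs in [0, 1].

    ```python
    def compute_screen_pos(clip):
        """Mimics UnityCG's ComputeScreenPos for a clip-space position
        (x, y, z, w), ignoring the platform-dependent y flip."""
        x, y, z, w = clip
        return (x * 0.5 + w * 0.5, y * 0.5 + w * 0.5, z, w)

    def screen_uv(screen_pos):
        """The perspective divide done in the fragment shader (xy / w)."""
        x, y, _, w = screen_pos
        return (x / w, y / w)

    # Left edge of clip space (x = -w) lands at u = 0, the center at
    # u = 0.5, the right edge (x = +w) at u = 1.
    print(screen_uv(compute_screen_pos((-2.0, 0.0, 0.5, 2.0))))  # (0.0, 0.5)
    print(screen_uv(compute_screen_pos((2.0, 2.0, 0.5, 2.0))))   # (1.0, 1.0)
    ```

    Skipping the divide (as in the original float2 uv version above) means sampling with coordinates still scaled by w, which is why the depth reads come back wrong everywhere except at w = 1.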

    Another important thing:
    "Linear01Depth()" and "LinearEyeDepth()" measure from the camera position, not from the near plane.

    And if you intend to calculate the depth of the current fragment (without the depth map), you need to divide its z coordinate by w:

    float thisFragDepth = LinearEyeDepth(i.screenPos.z/i.screenPos.w);
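    For reference, those two helpers are just reciprocals built from _ZBufferParams. Here is a hedged sketch in plain Python of the conventional (non-reversed-Z) formulas from UnityCG.cginc; the near/far values are made up for illustration:

    ```python
    def zbuffer_params(near, far):
        """_ZBufferParams for the conventional (non-reversed) depth buffer:
        x = 1 - far/near, y = far/near, z = x/far, w = y/far."""
        x = 1.0 - far / near
        y = far / near
        return (x, y, x / far, y / far)

    def linear01_depth(z, p):
        """Linear01Depth: raw depth buffer value -> linear depth in [near/far, 1]."""
        return 1.0 / (p[0] * z + p[1])

    def linear_eye_depth(z, p):
        """LinearEyeDepth: raw depth buffer value -> view-space distance from the camera."""
        return 1.0 / (p[2] * z + p[3])

    p = zbuffer_params(near=0.3, far=1000.0)
    # Raw depth 0 is the near plane: eye depth comes out as near (0.3),
    # measured from the camera - which is the point made above.
    print(linear_eye_depth(0.0, p))
    # Raw depth 1 is the far plane: eye depth comes out as far (1000).
    print(linear_eye_depth(1.0, p))
    ```

    On platforms with a reversed depth buffer the raw values run the other way (1 at near, 0 at far) and Unity swaps the parameters accordingly, which ties into the platform differences mentioned below.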


    Also, you can't use _CameraDepthTexture from the shader during Graphics.Blit(myTexA, myTexB, myMaterial), because that texture is only available while rendering through a camera.
    For using it during Blit(), your shader needs _LastCameraDepthTexture instead.

    Lastly, remember that DirectX differs from OpenGL in how it handles the projection matrix and in what reads as "white vs dark" in a depth texture (nearer vs further, or the other way around).
    So if your shader seems to ignore ZTest LEqual, or has a weird triangle sort order (or the screen is flipped upside down), chances are you need to check those platform differences: https://docs.unity3d.com/Manual/SL-PlatformDifferences.html

    And if you are doing something with your camera projection matrices yourself (instead of relying on Unity's shader macros/functions), check GL.GetGPUProjectionMatrix as well.
     
    Last edited: Mar 16, 2024