_CameraDepthTexture is empty

Discussion in 'AR' started by tfisiche, Oct 28, 2019.

  1. tfisiche


    Sep 30, 2019

    I'm aware that my question has been answered many times, but none of the solutions I found works.
    I just want to retrieve the depth value of the camera (which is a HoloLens 1st gen in my case).
    I implemented the following shader to do that:

    Code (CSharp):
    Shader "Tutorial/Depth" {
        //show values to edit in inspector
        Properties {
            [HideInInspector] _MainTex("Texture", 2D) = "white" {}
        }

        SubShader {
            // markers that specify that we don't need culling
            // or comparing/writing to the depth buffer
            //Cull Off
            //ZWrite Off
            //ZTest Always

            Pass {
                CGPROGRAM
                //include useful shader functions
                #include "UnityCG.cginc"

                //define vertex and fragment shader
                #pragma vertex vert
                #pragma fragment frag

                //the rendered screen so far
                sampler2D _MainTex;

                //the depth texture
                sampler2D _CameraDepthTexture;

                //the object data that's put into the vertex shader
                struct appdata {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                //the data that's used to generate fragments and can be read by the fragment shader
                struct v2f {
                    float4 position : SV_POSITION;
                    float2 uv : TEXCOORD0;
                };

                //the vertex shader
                v2f vert(appdata v) {
                    v2f o;
                    //convert the vertex positions from object space to clip space so they can be rendered
                    o.position = UnityObjectToClipPos(v.vertex);
                    o.uv = ComputeScreenPos(o.position)
                    return o;
                }

                //the fragment shader
                float4 frag(v2f i) : SV_TARGET {
                    //get depth from depth texture
                    float2 uv = i.uv.xy / i.uv.w;
                    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
                    float linearDepth = Linear01Depth(depth);
                    return linearDepth;
                }
                ENDCG
            }
        }
    }
    In addition, I've enabled the depth texture in a script:

    Code (CSharp):
    public Camera cam;

    void Awake()
    {
        cam.depthTextureMode = DepthTextureMode.Depth;
    }
    But all the values I get are equal to 0:

    Code (CSharp):
    RenderTexture rt = new RenderTexture(resWidth, resHeight, resDepth, RenderTextureFormat.ARGBFloat);
    Graphics.Blit(depthMaterial.mainTexture, rt);
    RenderTexture.active = rt;
    depthTexture.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
    depthTexture.Apply();
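    One possible sketch of how this readback is usually wired up (an assumption about the intent here, not a confirmed fix: the three-argument form of Graphics.Blit runs the material's shader, whereas the two-argument form only copies the source; depthMaterial, depthTexture, resWidth and resHeight are the names used above):

    ```csharp
    // Sketch: run depthMaterial's shader and read the result back to a Texture2D.
    RenderTexture rt = new RenderTexture(resWidth, resHeight, 0, RenderTextureFormat.ARGBFloat);
    // Passing the material as the third argument makes Blit render with its shader;
    // without it, Blit just copies pixels unchanged.
    Graphics.Blit(null, rt, depthMaterial);
    RenderTexture.active = rt;
    depthTexture.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
    depthTexture.Apply();
    RenderTexture.active = null;
    ```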
    Can anybody help me figure out what my issue is, please?

    Thanks in advance
  2. Olmi


    Nov 29, 2012
    Are you using the old standard render pipeline?
    Have you assigned the camera to the field you've exposed (cam)?
    Can you get the depth rendered to the screen with your shader?

    The depth texture mode assignment looks correct to my eye.

    EDIT1: Also, now that I took a closer look, is this actually functioning code (i.e. does it compile?), as I see typos there. First remove every syntax error; then, if things still don't work, try to figure out what's wrong.

    EDIT2: You also have code where you divide uv.xy by a non-existent w. Your input uv is only a two-dimensional float.

    So please first fix the code that won't compile.
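    A sketch of the two fixes above (assuming the built-in render pipeline): pass the screen position through as a float4 so the divide by w is valid, and terminate the ComputeScreenPos line with a semicolon:

    ```
    struct v2f {
        float4 position : SV_POSITION;
        float4 uv : TEXCOORD0;   // float4: ComputeScreenPos returns xyzw, and frag divides by w
    };

    v2f vert(appdata v) {
        v2f o;
        o.position = UnityObjectToClipPos(v.vertex);
        o.uv = ComputeScreenPos(o.position);   // note the semicolon
        return o;
    }

    float4 frag(v2f i) : SV_TARGET {
        float2 uv = i.uv.xy / i.uv.w;          // perspective divide, now valid on a float4
        float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
        return Linear01Depth(depth);
    }
    ```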
    Last edited: Oct 28, 2019
  3. tfisiche


    Sep 30, 2019
    Hi, thank you for your answer.
    - I don't know which render pipeline I'm using (if you mean the rendering path, I'm currently using forward, but I also tried deferred).
    - I'm not sure I understand what you mean, so tell me if I don't answer correctly: I assigned the main camera of my scene to the "cam" field on the empty object containing my script.
    - No, I can't get anything from "_CameraDepthTexture"; the values are always 0.

    Yes, everything compiles without errors. In addition, I removed the part where I divide uv.xy by uv.w, but I still get the same behaviour.

    During play mode, I can see in the inspector that the camera is rendering depth, so I don't understand why the texture is empty.
    I know that objects written to the depth texture must have an opaque shader with a render queue <= 2500, but it still doesn't work...
  4. Olmi


    Nov 29, 2012
    Have you installed a scriptable render pipeline like LWRP/URP or HDRP, and which one are you actually using? Or are you just using the standard, "old" renderer? It matters a lot with these shaders and depth textures.

    And if you (for some reason) don't see errors in your shader, try it in another project. I'm sure it will NOT compile, as there are basic syntax errors in it, like a missing semicolon.

    Select your shader in the Project view, then check the Inspector and confirm that the shader compiled correctly, without errors.
  5. tfisiche


    Sep 30, 2019
    I'm using the default render pipeline. I did try LWRP, but the camera and the shaders behaved strangely and I don't know why, so I went back to the default render pipeline.
    Is that why I have these issues? If so, could you advise me which pipeline to use and how to use it (if you know a good documentation page or tutorial; otherwise I'll look myself)?

    Yeah, sorry, you were right about the errors. I fixed them, but still no improvement.
  6. Olmi


    Nov 29, 2012
    Can you tell us a bit more about what you are trying to accomplish, so that it's easier to help you?

    I.e. where do you need that depth from, where are you going to use it, and so on.
    Right now I'm not sure where you're trying to use your RenderTexture code etc.

    If you need camera depth, you could just render straight to a depth texture from a camera. You can do that by setting a RenderTexture as the camera's Target Texture.
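    A minimal sketch of that approach (assuming the built-in render pipeline; the component and field names here are illustrative):

    ```csharp
    using UnityEngine;

    // Sketch: render a camera straight into a depth-format RenderTexture.
    public class DepthCapture : MonoBehaviour
    {
        public Camera cam;           // assumed to be assigned in the Inspector
        RenderTexture depthRT;

        void Awake()
        {
            // 24-bit depth buffer; RenderTextureFormat.Depth stores depth values in the texture
            depthRT = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.Depth);
            cam.targetTexture = depthRT;   // the camera now renders into depthRT instead of the screen
        }
    }
    ```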

    Or are you looking to build some post-processing effect that utilizes depth? Just guessing here.
    If that is the case, and you are using Post-Processing Stack v2, check the tutorial/info on how to create custom effects. It details pretty much every step needed to create Stack v2 effects.

    HDRP/LWRP is a completely different story if you need post effects.
    Last edited: Oct 29, 2019
  7. tfisiche


    Sep 30, 2019
    I'm just trying to compute the distance of each projected pixel from the camera. I'm not using post-processing effects.

    Yes, I just want to render to a depth texture to get the value of each pixel and turn it into a real distance.
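    For turning a depth sample into a distance, a sketch assuming the built-in pipeline (both helpers come from UnityCG.cginc): Linear01Depth gives a 0..1 value across the near/far range, while LinearEyeDepth gives eye-space depth in world units, i.e. the distance along the camera's view axis:

    ```
    // In a fragment shader, after sampling the raw depth:
    float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    float d01 = Linear01Depth(raw);       // 0 at the near plane, 1 at the far plane
    float eyeDepth = LinearEyeDepth(raw); // depth along the view axis, in world units
    ```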
  8. tfisiche


    Sep 30, 2019
    What should I do to achieve that?
  9. raggnic


    Sep 27, 2017
    Did you get it to work in the end? I have a similar issue with a shader: it works in play mode in the editor but not on the device, as if there were no depth/normals values.