Hi, I'm aware that my question has been answered many times, but none of the solutions I found works. I just want to retrieve the depth value of the camera (which is a HoloLens 1st gen in my case). I implemented the following shader to do that:

Code (CSharp):
```
Shader "Tutorial/Depth"
{
    //show values to edit in inspector
    Properties
    {
        [HideInInspector] _MainTex("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // markers that specify that we don't need culling
        // or comparing/writing to the depth buffer
        //Cull Off
        //ZWrite Off
        //ZTest Always

        Pass
        {
            CGPROGRAM
            //include useful shader functions
            #include "UnityCG.cginc"

            //define vertex and fragment shader
            #pragma vertex vert
            #pragma fragment frag

            //the rendered screen so far
            sampler2D _MainTex;
            //the depth texture
            sampler2D _CameraDepthTexture;

            //the object data that's put into the vertex shader
            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            //the data that's used to generate fragments and can be read by the fragment shader
            struct v2f
            {
                float4 position : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            //the vertex shader
            v2f vert(appdata v)
            {
                v2f o;
                //convert the vertex positions from object space to clip space so they can be rendered
                o.position = UnityObjectToClipPos(v.vertex);
                o.uv = ComputeScreenPos(o.position)
                return o;
            }

            //the fragment shader
            float4 frag(v2f i) : SV_TARGET
            {
                //get depth from depth texture
                float2 uv = i.uv.xy / i.uv.w;
                float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
                float linearDepth = Linear01Depth(depth);
                return linearDepth;
            }
            ENDCG
        }
    }
}
```

In addition, I've enabled the depth buffer in a script:

Code (CSharp):
```
public Camera cam;

void Awake()
{
    cam.depthTextureMode = DepthTextureMode.Depth;
}
```

But all the values I get are equal to 0:

Code (CSharp):
```
RenderTexture rt = new RenderTexture(resWidth, resHeight, resDepth, RenderTextureFormat.ARGBFloat);
Graphics.Blit(depthMaterial.mainTexture, rt);
RenderTexture.active = rt;
depthTexture.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
depthTexture.Apply();
```

Can anybody help me figure out what my issue is, please? Thanks in advance.
Are you using the old standard render pipeline? Have you assigned the camera to the field you've exposed (cam)? Can you get the depth rendered to the screen with your shader? The depth texture mode assignment looks correct to my eye.

EDIT 1: Also, now that I took a closer look, is this actually functioning code (i.e. does it compile)? I see typos there. First remove every syntax error before trying to proceed; then, if things still don't work, try to figure out what's wrong.

EDIT 2: You also have code where you divide uv.xy by a non-existent w component. Your input uv is only a two-dimensional float. So please first fix the parts that won't compile.
Hi, thank you for your answer.
- I don't know which render pipeline I'm using (if you mean the rendering path, I'm currently using forward, but I also tried deferred).
- I'm not sure I understand what you mean, so tell me if I don't answer correctly: I assigned the main camera of my scene to the "cam" field of my script, which sits on an empty object.
- No, I can't get anything from "_CameraDepthTexture"; all the values are 0.

Yes, everything compiles without errors. In addition, I removed the part where I divide uv.xy by uv.w, but I still see the same behaviour. During play mode, I can see in the inspector that the camera is rendering depth, so I don't understand why I get an empty texture. I know that objects must use an opaque shader with a render queue <= 2500 to have their depth rendered, but it still doesn't work...
Have you installed a scriptable render pipeline renderer like LWRP/URP or HDRP, and which one are you actually using? Or are you just using the standard, "old" renderer? It matters a lot with these shaders and depth textures. And if you (for some reason) don't see errors in your shaders, try them in another project. I'm sure they will NOT compile, as you have basic syntax errors there, like a missing semicolon. Select your shader in the Project view, then check the Inspector and verify that the shader compiles there without errors.
I'm using the default render pipeline. I did try the LWRP, but I got strange behaviour from the camera and the shaders and I don't know why, so I came back to the default render pipeline. Is that why I have these issues? If so, could you advise me which pipeline to use and how to use it (if you know a good documentation page or tutorial; otherwise I'll look myself)? Yeah, sorry, you were right about the errors; I fixed them, but still no improvement.
Can you tell us a bit more about what you are trying to accomplish, so that it would be easier to help you? I.e. where do you need that depth from, where are you going to use it, and so on. Right now I'm not sure where you are trying to use your RenderTexture code. If you need camera depth, you could just render straight to a depth texture from a camera: you can do that by setting a RenderTexture as the camera's Target Texture. Or are you looking to build some post-processing effect that utilizes depth? Just guessing here. If that is the case, and you are using Post-Processing Stack v2, check the tutorial on how to create custom effects; it details pretty much every step needed to create Stack v2 effects: https://docs.unity3d.com/Packages/com.unity.postprocessing@2.1/manual/Writing-Custom-Effects.html HDRP/LWRP is a completely different story if you need post effects.
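The Target Texture approach mentioned above can be sketched roughly like this for the built-in render pipeline. This is an untested sketch; `DepthCapture`, `captureCamera`, `resWidth` and `resHeight` are placeholder names, not from this thread:

Code (CSharp):
```
using UnityEngine;

public class DepthCapture : MonoBehaviour
{
    public Camera captureCamera;   // assign in the Inspector
    public int resWidth = 512;
    public int resHeight = 512;

    void Start()
    {
        // The RenderTexture's depth buffer must be 24 or 32 bits;
        // passing 0 here would give the camera no depth buffer at all.
        var rt = new RenderTexture(resWidth, resHeight, 24, RenderTextureFormat.ARGBFloat);

        captureCamera.depthTextureMode = DepthTextureMode.Depth;
        captureCamera.targetTexture = rt;   // camera now renders into rt instead of the screen
        captureCamera.Render();             // force one render

        // Read the pixels back to the CPU.
        var tex = new Texture2D(resWidth, resHeight, TextureFormat.RGBAFloat, false);
        RenderTexture.active = rt;
        tex.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
        tex.Apply();
        RenderTexture.active = null;
    }
}
```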
I'm just trying to compute the distance of each projected pixel from the camera. I'm not using post-processing effects. Yes, I just want to render to a depth texture to get the value of each pixel and turn those values into real distances.
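For the "turn them into real distances" step: in the built-in pipeline, the raw depth sample is non-linear, and Unity's Linear01Depth()/LinearEyeDepth() helpers invert it using _ZBufferParams. Here is a standalone sketch of that math, assuming the OpenGL-style convention where raw depth 0 is the near plane and 1 is the far plane (on reversed-Z platforms the parameters differ):

```python
# Reproduces Unity's Linear01Depth / LinearEyeDepth for the
# OpenGL-style (non-reversed) depth convention, where
# _ZBufferParams = (1 - far/near, far/near, x/far, y/far).

def zbuffer_params(near, far):
    x = 1.0 - far / near
    y = far / near
    return x, y, x / far, y / far

def linear01_depth(raw, near, far):
    """Raw [0,1] depth sample -> linear depth as a fraction of the far plane."""
    x, y, _, _ = zbuffer_params(near, far)
    return 1.0 / (x * raw + y)

def linear_eye_depth(raw, near, far):
    """Raw [0,1] depth sample -> distance from the camera in world units."""
    _, _, z, w = zbuffer_params(near, far)
    return 1.0 / (z * raw + w)

near, far = 0.3, 1000.0
print(linear_eye_depth(0.0, near, far))  # near plane -> 0.3
print(linear_eye_depth(1.0, near, far))  # far plane  -> 1000.0
print(linear01_depth(1.0, near, far))    # far plane  -> 1.0
```

So the "real distance" of a pixel is either LinearEyeDepth(raw) directly, or Linear01Depth(raw) multiplied by the far clip plane distance.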
Did you get it to work in the end? I have a similar issue with a shader: it works in play mode in the editor, but not on the device, as if there were no depth/normals values.
Yes, the answer by @bgolus from here:

"Your shader needs a shadowcaster pass. The easiest way to do that, as long as you're not modifying the vertex positions or adding alpha testing, is to add a Fallback shader. For most things you want this just before the last } in your shader: FallBack "Legacy Shaders/VertexLit""

So you need to add it to the shader where you are trying to use _CameraDepthTexture. Also, make sure your camera is set up to use this mode:

Code (CSharp):
```
_myCamera.depthTextureMode = DepthTextureMode.Depth;
```

If you are rendering into a render texture, make sure that its depth is 24 or 32, not zero:

Code (CSharp):
```
_myRenderTexture = new RenderTexture(res, res, 24, RenderTextureFormat.Default);
```

Also, don't forget to sample _CameraDepthTexture correctly in the shader. For example, if you compute o.screenPos, remember that in the fragment function you'll need to divide its xy by w when you sample the depth map, or use tex2Dproj(), which does the division for you. More on tex2Dproj here.

Code (CSharp):
```
struct v2f
{
    float4 pos : SV_POSITION;
    float4 screenPos : TEXCOORD1;
};

v2f vert(appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.screenPos = ComputeScreenPos(o.pos);
    return o;
}

fixed4 frag(v2f i) : SV_Target
{
    const float NEARPLANE = _ProjectionParams.y; //unity provides this constant. Camera's near plane.
    const float FARPLANE = _ProjectionParams.z;  //Not needed, but I'll use it for an artistic heightmap effect.

    //Sample the depth texture via xy/w. Or use tex2Dproj(_CameraDepthTexture, i.screenPos).r instead.
    float depth = LinearEyeDepth(tex2D(_CameraDepthTexture, i.screenPos.xy / i.screenPos.w).r);

    float heightmap = (depth - NEARPLANE) / (FARPLANE - NEARPLANE);
    heightmap = 1 - heightmap; //for a heightmap (closer = whiter)
    return fixed4(heightmap.rrr, 1);
}
```

Another important thing: "Linear01Depth()" and "LinearEyeDepth()" measure from the camera position instead of the near plane.
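Applied to a shader like the one in the original post, the Fallback line sits at the very end, just before the final closing brace. A sketch of the skeleton only, not the full shader:

Code (CSharp):
```
Shader "Tutorial/Depth"
{
    SubShader
    {
        Pass
        {
            // ... vertex/fragment program as before ...
        }
    }
    // The fallback's shadowcaster pass is what lets objects using this
    // shader be written into the camera depth texture.
    FallBack "Legacy Shaders/VertexLit"
}
```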
And if you intend to calculate the depth of the current fragment (without the depth map), you need to divide its z coordinate by w:

Code (CSharp):
```
float thisFragDepth = LinearEyeDepth(i.screenPos.z / i.screenPos.w);
```

Also, you can't use _CameraDepthTexture from the shader during Graphics.Blit(myTexA, myTexB, myMaterial);, because that texture is only available while rendering through a camera. To use it during Blit(), your shader needs _LastCameraDepthTexture instead.

Lastly, remember that DirectX differs from OpenGL in how it handles the projection matrix, and in which values look "white vs. dark" in a depth texture (nearer vs. further, or the other way around). So if your shader seems to ignore ZTest LEqual, or seems to have a weird triangle sort order (or maybe the screen is flipped upside down), chances are you need to check those platform differences: https://docs.unity3d.com/Manual/SL-PlatformDifferences.html And if you are doing something with your camera projection matrices yourself (instead of relying on Unity's shader macros/functions), check GL.GetGPUProjectionMatrix as well.