I have a camera that is configured to provide depth and motion vector textures. I'm trying to extend an Amplify Shader Editor template (essentially just a shader) to access these. I have the following code, where the comments // MY ADDITION indicate... uh... my additions to the template shader.

Code (CSharp):
// Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'
Shader /*ase_name*/ "ASETemplateShaders/PostProcess" /*end*/
{
    Properties
    {
        _MainTex ( "Screen", 2D ) = "black" {}
        _CameraMotionVectorsTexture("Motion Vectors", 2D) = "black" {} // MY ADDITION
        _CameraDepthTexture("Depth Texture", 2D) = "black" {} // MY ADDITION
        /*ase_props*/
    }

    SubShader
    {
        Tags{ "RenderType"="Opaque" }
        ZTest Always Cull Off ZWrite Off

        /*ase_pass*/
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img_custom
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"
            /*ase_pragma*/

            struct appdata_img_custom
            {
                float4 vertex : POSITION;
                half2 texcoord : TEXCOORD0;
                /*ase_vdata:p=p;uv0=tc0*/
            };

            struct v2f_img_custom
            {
                float4 pos : SV_POSITION;
                half2 uv : TEXCOORD0;
                half2 stereoUV : TEXCOORD2;
                #if UNITY_UV_STARTS_AT_TOP
                half4 uv2 : TEXCOORD1;
                half4 stereoUV2 : TEXCOORD3;
                #endif
                /*ase_interp(4,7):sp=sp.xyzw;uv0=tc0.xy;uv1=tc1;uv2=tc2;uv3=tc3*/
            };

            uniform sampler2D _MainTex;
            uniform half4 _MainTex_TexelSize;
            uniform half4 _MainTex_ST;
            uniform sampler2D _CameraMotionVectorsTexture; // MY ADDITION
            uniform sampler2D _CameraDepthTexture; // MY ADDITION
            /*ase_globals*/

            v2f_img_custom vert_img_custom ( appdata_img_custom v /*ase_vert_input*/ )
            {
                v2f_img_custom o;
                /*ase_vert_code:v=appdata_img_custom;o=v2f_img_custom*/

                o.pos = UnityObjectToClipPos ( v.vertex );
                o.uv = float4( v.texcoord.xy, 1, 1 );

                #if UNITY_UV_STARTS_AT_TOP
                o.uv2 = float4( v.texcoord.xy, 1, 1 );
                o.stereoUV2 = UnityStereoScreenSpaceUVAdjust ( o.uv2, _MainTex_ST );

                if ( _MainTex_TexelSize.y < 0.0 )
                    o.uv.y = 1.0 - o.uv.y;
                #endif

                o.stereoUV = UnityStereoScreenSpaceUVAdjust ( o.uv, _MainTex_ST );
                return o;
            }

            half4 frag ( v2f_img_custom i /*ase_frag_input*/ ) : SV_Target
            {
                #ifdef UNITY_UV_STARTS_AT_TOP
                half2 uv = i.uv2;
                half2 stereoUV = i.stereoUV2;
                #else
                half2 uv = i.uv;
                half2 stereoUV = i.stereoUV;
                #endif

                half4 finalColor;

                // ase common template code
                /*ase_frag_code:i=v2f_img_custom*/

                finalColor = /*ase_frag_out:Frag Color;Float4*/half4( 1, 1, 1, 1 )/*end*/;
                return finalColor;
            }
            ENDCG
        }
    }
    CustomEditor "ASEMaterialInspector"
}

Now, from what I've read over the past few hours, this should be assigning the depth and motion textures to the appropriately named properties so that I can access them in ASE's editor. Unfortunately, when I try to output them in any capacity, I'm left with a pure black image. This makes me think something must be going wrong with the following lines:

Code (CSharp):
uniform sampler2D _CameraMotionVectorsTexture; // MY ADDITION
uniform sampler2D _CameraDepthTexture; // MY ADDITION

I don't expect clean results like a perfect greyscale depth texture; I'd settle for the mess of colours a depth texture usually shows if you don't convert it. Does anyone know what I'm doing wrong here? I'm sure it's something simple.
Okay, I found the problem, kind of. After some more searching and banging my head against this, I discovered this thread: Graphics.Blit Does Not Copy RenderTexture Depth. It gives me a way to do this for the depth texture, but I can't for the life of me figure out how I'd do the same for the motion vectors. I feel like the fact that the motion vectors contain a range of -1 to 1 might be a problem here?
DepthTextureMode

There's no need to have the textures be properties of the shader. Those textures are set as global parameters accessible by all shaders as long as they exist. You just need these lines in your shader:

sampler2D_float _CameraDepthTexture;
sampler2D_half _CameraMotionVectorsTexture;

Then sample the textures like any other using the screen UVs. The depth texture is going to be a value from 0.0 to 1.0, and the velocity texture is going to be something like -1.0 to 1.0. Note that whether a depth of 0.0 is at the near or far plane depends on the platform being rendered on, which you can check for in the shader by testing whether UNITY_REVERSED_Z is defined. However, Unity has built-in functions for converting the depth texture values into world space distances or a linear 0.0 to 1.0 range.

That link isn't useful for you here. There's no need to make a copy of the texture; if it exists, it's already going to be passed to the shader. The real question is, do they exist? My guess is they don't, because you've not enabled them on the camera. To do that you need a script that enables both the depth texture and the motion vectors via the camera's depthTextureMode:

GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth | DepthTextureMode.MotionVectors;
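For context, that enabling line would typically live in a small component attached to the camera. A minimal sketch (the component name is mine, not from the thread):

Code (CSharp):
using UnityEngine;

// Attach to the camera that should generate the depth and motion vector textures.
[RequireComponent(typeof(Camera))]
public class EnableCameraTextures : MonoBehaviour
{
    void OnEnable()
    {
        // |= preserves any modes other scripts (e.g. post effects) already requested
        GetComponent<Camera>().depthTextureMode |=
            DepthTextureMode.Depth | DepthTextureMode.MotionVectors;
    }
}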
They are enabled on the camera, but...

Code (CSharp):
Graphics.Blit(cam.targetTexture, undevelopedPhotos[totalPhotos], noiseMaterial);

This is the code I'm using to pass things to the material. When you blit, as far as I understand, you lose the depth texture information because it isn't copied over.
When you blit you lose the depth buffer, not the depth texture. The depth texture remains a globally bound texture for the rest of the frame regardless of what you do. Unless you're using multiple cameras and need the depth texture from a specific one (and it's not the camera you're currently rendering to), there's no need to copy it.
Alright, with your help I managed to get a depth texture outputting, but it seems ASE doesn't recognise sampler2D_half, so I guess I'll have to go to them for support now. Thanks!
Actually, upon further testing, it seems I'm getting a depth texture, but not the correct one. This is the render texture output to an image: But this is the depth texture output to an image: For some reason, I'm getting the main camera's depth texture when I declare sampler2D_float _CameraDepthTexture;, instead of the one from the camera I want. If I disable the main camera, effectively rendering nothing to the screen, I get the correct depth texture.
Yes, I can access the depth texture so long as the main camera isn't active. Here's the code I'm using to send the RenderTexture around where it needs to be:

Code (CSharp):
IEnumerator TakePhoto()
{
    if (totalPhotos < photoRollSize)
    {
        Shutter();
        AdjustSettings();

        yield return new WaitForEndOfFrame();

        Graphics.Blit(cam.targetTexture, undevelopedPhotos[totalPhotos], noiseMaterial);
        totalPhotos++;
    }
    else
    {
        Debug.Log("Photo roll full!");
    }
}

This is called whenever the player presses the assigned camera shot button.
Ah. You're waiting for the end of the frame. The depth texture for that camera has been destroyed at that point if the main camera is active, so yeah, it's not accessible. If you want to access the depth texture for that camera you do need to make a copy while that camera is active. To do this you'll want to use a command buffer assigned to the camera instead of a coroutine.
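A sketch of the command buffer approach described above (the component name and the inspector-assigned RenderTexture are my assumptions, not the poster's actual setup):

Code (CSharp):
using UnityEngine;
using UnityEngine.Rendering;

// Copies _CameraDepthTexture into a RenderTexture we own while this camera is
// rendering, so the copy survives after other cameras rebind the global texture.
[RequireComponent(typeof(Camera))]
public class DepthTextureCopy : MonoBehaviour
{
    public RenderTexture depthCopy; // assign a RenderTexture in the inspector
    CommandBuffer cb;

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        cam.depthTextureMode |= DepthTextureMode.Depth;

        cb = new CommandBuffer { name = "Copy camera depth texture" };
        // At this camera event the depth texture is still bound for this camera
        cb.Blit(BuiltinRenderTextureType.Depth, depthCopy);
        cam.AddCommandBuffer(CameraEvent.AfterEverything, cb);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterEverything, cb);
        cb.Release();
    }
}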
Hello All, I have a related question so I will hijack the thread slightly. I am trying a very simple setup where I read the depth texture from one camera, use it in a shader, and visualise it in a UI image via a material using that shader. I have only one camera in the scene for this test. I have the following code on the camera to make sure the depth texture mode is set to depth.

Code (CSharp):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CameraDepthUtility : MonoBehaviour
{
    Camera cam;
    [SerializeField] Material mat;

    // Use this for initialization
    void Start ()
    {
        cam = GetComponent<Camera> ();
        cam.depthTextureMode = DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Graphics.Blit(src, dest, mat);
    }
}

The material has the following shader, where I sample the _CameraDepthTexture:

Code (CSharp):
Shader "DepthTest"
{
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"

            // vertex shader inputs
            struct appdata
            {
                float4 vertex : POSITION; // vertex position
                float2 uv : TEXCOORD0;    // texture coordinate
            };

            // vertex shader outputs ("vertex to fragment")
            struct v2f
            {
                float2 uv : TEXCOORD0;       // texture coordinate
                float4 vertex : SV_POSITION; // clip space position
                float4 scrPos : TEXCOORD1;
            };

            // vertex shader
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            sampler2D _CameraDepthTexture;

            // pixel shader; returns low precision ("fixed4" type)
            // color ("SV_Target" semantic)
            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 colCameraDepth = tex2D(_CameraDepthTexture, float2(i.uv.x, i.uv.y));
                float r = colCameraDepth.r;
                r = 1 - Linear01Depth(r);
                colCameraDepth = fixed4(r, 0, 0, 1);
                return colCameraDepth;
            }
            ENDCG
        }
    }
}

And the same material is assigned to a UI image for visualising purposes.
However, I can't get it to show the depth. I have the feeling that I am missing something very obvious about using the _CameraDepthTexture but can't figure it out. All help is very much appreciated. Cheers, Doruk
Like the situation for the previous post, if your UI is being rendered as a screen space overlay, the depth texture has already been “destroyed” by the time the UI renders. (Technically it likely still exists, it’s just not being passed to shaders anymore.) Solutions would be to use a camera or world space UI element, or to copy the depth texture using a command buffer, or maybe even just assign it as a global texture using a different name.
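The "different global name" option mentioned above can be as simple as rebinding the texture before the UI samples it. A hedged sketch (the global name _MyDepthCopy, the component name, and the depthCopy texture are placeholders I've invented for illustration):

Code (CSharp):
using UnityEngine;

// Runs on the depth-generating camera; publishes a copy of the depth texture
// under a name that later rendering won't overwrite this frame.
public class PublishDepthGlobal : MonoBehaviour
{
    public RenderTexture depthCopy; // filled elsewhere, e.g. by a command buffer blit

    void OnPostRender()
    {
        // Shaders can then declare and sample: sampler2D _MyDepthCopy;
        Shader.SetGlobalTexture("_MyDepthCopy", depthCopy);
    }
}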
@bgolus thank you for the help. The UI image was in world space indeed. I could not get this to work, but I found a workaround that solves my situation: I make the camera render the depth info to a render texture and then feed that into a shader to do the maths. Thanks again for the help!