All I am trying to do at the moment is mimic the functionality of the builtin depth texture with shader replacement. For the replacement shader I'm using the shader below (which is pretty much the same as the builtin shader):

Code (CSharp):
Shader "Custom/DepthTexture" {
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            Fog { Mode Off }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                float2 depth : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                UNITY_TRANSFER_DEPTH(o.depth);
                return o;
            }

            half4 frag (v2f i) : COLOR {
                UNITY_OUTPUT_DEPTH(i.depth);
            }
            ENDCG
        }
    }
}

The code below is attached to a secondary camera (with the camera component disabled) that is supposed to render the depth buffer. The depthTextureShader variable is set to the shader above.

Code (CSharp):
using UnityEngine;
using System.Collections;

public class GBufferCam : MonoBehaviour {

    public Shader depthTextureShader;
    public RenderTexture gBuffer;

    // Use this for initialization
    void Start () {
        gBuffer = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGBHalf);
        gBuffer.depth = 24;
    }

    public void GetBuffer() {
        camera.CopyFrom(Camera.main);
        camera.renderingPath = RenderingPath.Forward;
        camera.SetTargetBuffers(gBuffer.colorBuffer, gBuffer.depthBuffer);
        camera.clearFlags = CameraClearFlags.SolidColor;
        camera.RenderWithShader(depthTextureShader, "RenderType");
    }
}

When I read from the gBuffer texture in another shader using:

Code (CSharp):
float depth = Linear01Depth(UNITY_SAMPLE_DEPTH(tex2D(_GBuffer, i.uv_depth)));

and display the depth in the fragment shader using:

Code (CSharp):
return float4(depth, depth, depth, 1.0);

my result is incorrect. To my understanding this is the same process that is used for the builtin depth texture shader, so can anyone tell me what might be going wrong?
I can't see anywhere in that code snippet where you're actually passing the new depth texture to a material or setting it globally... or whether GetBuffer is even getting called :/ I normally use something like this for my replacement shaders; give it a whirl and see how you go.

Code (csharp):
private void Awake() {
    camera.CopyFrom(Camera.main);
    var target = new RenderTexture(Screen.width, Screen.height, 16, RenderTextureFormat.Depth);
    camera.targetTexture = target;
    camera.depthTextureMode = DepthTextureMode.None;
    camera.SetReplacementShader(Shader.Find("Hidden/Camera-CustomDepthTexture"), "RenderType");
    Shader.SetGlobalTexture("_GBuffer", target);
}
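On the consuming side, a fragment function reading that globally-set texture could look roughly like this (a sketch only — the _GBuffer name follows the snippet above, and I'm assuming a v2f with a plain uv coordinate):

Code (csharp):
// Declared without a material property; picked up from
// Shader.SetGlobalTexture("_GBuffer", target).
sampler2D _GBuffer;

half4 frag (v2f i) : COLOR {
    // Read the raw depth value from the Depth-format RT and
    // linearize it to the [0,1] near..far range for display.
    float rawDepth = UNITY_SAMPLE_DEPTH(tex2D(_GBuffer, i.uv));
    float depth01 = Linear01Depth(rawDepth);
    return half4(depth01, depth01, depth01, 1);
}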
Well, I simply trimmed out the part where I send the gBuffer to the shader where it's used. I tested the code above and nothing seems to be written into the depth buffer of the render texture, or at least it isn't producing the same results as using the builtin depth texture.

The reason I'm trying to make a replacement shader that produces a depth buffer is that the builtin depth texture's buffer is linear, whereas I need the depth buffer to be logarithmic (to preserve precision). Is there a way to store the depth in one of the color channels (or in all of them) instead of the depth buffer, since this texture is only being used to store depth and not any other information about what the camera is looking at?
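For the last question, one way to sketch that idea (untested — the shader name and the exact log mapping are my own choices, not something from the builtin shaders) is a replacement shader that computes linear eye-space depth in the vertex stage and writes a logarithmic encoding of it into the red channel, so the value lands in the color target (e.g. an ARGBHalf or RFloat RT) rather than the depth buffer:

Code (CSharp):
Shader "Custom/LogDepthToColor" {
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                float eyeDepth : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                // COMPUTE_EYEDEPTH writes linear view-space depth
                // (distance from the camera plane, in world units).
                COMPUTE_EYEDEPTH(o.eyeDepth);
                return o;
            }

            float4 frag (v2f i) : COLOR {
                // Hypothetical log remap to [0,1], denser near the camera;
                // _ProjectionParams.z is the camera's far plane distance.
                float logDepth = log2(1.0 + i.eyeDepth) / log2(1.0 + _ProjectionParams.z);
                return float4(logDepth, 0, 0, 1);
            }
            ENDCG
        }
    }
}

Reading it back in another shader is then just tex2D(_GBuffer, uv).r plus the inverse of whatever mapping you picked. Note the camera still wants a depth attachment while rendering so occlusion between objects resolves correctly; it's only the sampled result that comes from the color channel.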