You can render depth in Unity by using this depth shader as a replacement shader on the camera:

```hlsl
Shader "Hidden/Render Depth" {
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            Fog { Mode Off }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : POSITION;
                #ifdef UNITY_MIGHT_NOT_HAVE_DEPTH_TEXTURE
                float2 depth : TEXCOORD0;
                #endif
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                UNITY_TRANSFER_DEPTH(o.depth);
                return o;
            }

            half4 frag (v2f i) : COLOR {
                UNITY_OUTPUT_DEPTH(i.depth);
            }
            ENDCG
        }
    }
    Fallback Off
}
```

The fading distance is very short, though. Is there a way to increase it?
Only the near clipping plane seems to make any significant difference, but I need it to stay where it is. Changing the far clipping plane doesn't do anything; the whole scene is basically just white anyway.
If you mean o.depth or i.depth - I already tried that, and it had no effect (unless multiplied by 0). Which is to be expected, because all those wrappers do is set a variable and return a ratio:

```hlsl
#define UNITY_TRANSFER_DEPTH(oo) oo = o.pos.zw
#define UNITY_OUTPUT_DEPTH(i) return i.x/i.y
```

Or what do you mean by depth?
_whateverInputFloat * i.depth in the shader you listed. Try it in frag with a constant, e.g. UNITY_OUTPUT_DEPTH(i.depth * 2);
My guess is you want linear depth values instead of the ones used for the depth buffer. You could probably fairly easily modify this shader to do it: http://wiki.unity3d.com/index.php?title=AlphaClipsafe
That will not compile. And altering i.depth before it's used in UNITY_OUTPUT_DEPTH will not do anything anyway, since the macro returns the ratio i.x/i.y - scaling both components cancels out. Multiplying the result of UNITY_OUTPUT_DEPTH simply makes the output darker or lighter without actually changing the gradient distance. Linear would be nice, yes. Though I'm not sure about "fairly easy" - I hardly know any shader code at all.
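For reference, here is a minimal sketch of what a linear-depth variant of the shader from the first post might look like. It assumes Unity's built-in COMPUTE_EYEDEPTH macro (which stores the view-space distance in front of the camera) and _ProjectionParams (whose .z component is the far clip plane distance) - untested, so treat it as a starting point rather than a finished shader:

```hlsl
Shader "Hidden/Render Linear Depth" {
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            Fog { Mode Off }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : POSITION;
                float eyeDepth : TEXCOORD0; // linear view-space depth
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                // COMPUTE_EYEDEPTH stores -viewPos.z: linear distance in front of the camera
                COMPUTE_EYEDEPTH(o.eyeDepth);
                return o;
            }

            half4 frag (v2f i) : COLOR {
                // Remap linearly: 0 at the camera, 1 at the far clip plane
                half d = saturate(i.eyeDepth / _ProjectionParams.z);
                return half4(d, d, d, 1);
            }
            ENDCG
        }
    }
    Fallback Off
}
```

With a linear remap like this, the far clipping plane directly controls the gradient distance, unlike the non-linear z/w ratio the original macros output.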
```hlsl
float vz = mul(UNITY_MATRIX_MV, v.vertex).z;
float depth = _offset + abs((1 - clamp(-vz / _farDepth, 0, 2)) * _depthScale);
```

This needs `float _offset`, `float _farDepth`, and `float _depthScale` declared. I use this for my depth of field - it results in a pretty good range and a decent amount of control starting at the camera. But you probably can do better with daniel's advice.
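For anyone who wants to see that snippet in context, here is a sketch of a minimal full shader around it. The property names come from the snippet above; the default values and the shader name are guesses, and the result is output as a grayscale color rather than written to the depth buffer:

```hlsl
Shader "Hidden/DOF Depth Sketch" {
    Properties {
        _offset ("Depth Offset", Float) = 0
        _farDepth ("Far Depth", Float) = 100
        _depthScale ("Depth Scale", Float) = 1
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            Fog { Mode Off }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float _offset;
            float _farDepth;
            float _depthScale;

            struct v2f {
                float4 pos : POSITION;
                float depth : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                // View-space z is negative in front of the camera, hence the -vz
                float vz = mul(UNITY_MATRIX_MV, v.vertex).z;
                o.depth = _offset + abs((1 - clamp(-vz / _farDepth, 0, 2)) * _depthScale);
                return o;
            }

            half4 frag (v2f i) : COLOR {
                return half4(i.depth, i.depth, i.depth, 1);
            }
            ENDCG
        }
    }
    Fallback Off
}
```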
I am trying to calculate the distance between the camera and a vertex or pixel, so I can then retrieve the depth of that vertex/pixel. Here is a good solution: "Depth as distance to camera plane in GLSL":

```glsl
varying float distToCamera;

void main() {
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    distToCamera = -cs_position.z;
    gl_Position = gl_ProjectionMatrix * cs_position;
}
```

With this example the depth depends on the camera's distance from the object, which is logical and normal. But I would like to constrain this value: I want the same depth values whether I am near the object or far from it. That's why I am talking about a "normalized" depth. Here is an example of what I am trying to achieve. On the left you can see that the depth depends on the camera's distance from the object. And on the right, even if the camera moves back from the object, the depth remains the same. Is it possible? How?
Think of depth like fog: when something is far away you can't see anything (black), and when it's near you, you can sort of see it (white). Let me demonstrate.

Depth at normal.

Now, when you increase the far distance, you're increasing the range, but you're losing some of the detail, edges, and surfaces on the nearest side.

Depth increased.

But if you decrease the far distance, you're losing the depth far away, while edges, details, and surfaces become more defined.

Depth decreased.

So what the poster above me wants is basically unachievable with depth alone, but you can do it if you apply a texture to the object. I'm going to assume you're making a sub-surface scattering shader that needs a thickness calculation or texture: you can do that with a texture that represents the thickness of the object in certain areas, but it won't be possible with depth. Because that's like trying to keep springs compressed the farther you move away from them.
Doesn't this help?

```hlsl
Shader "Z" {
    Properties { }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200
        Pass {
            Lighting Off
            Fog { Mode Off }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma fragmentoption ARB_precision_hint_fastest

            struct a2v {
                float4 vertex : POSITION;
                fixed4 color : COLOR;
            };

            struct v2f {
                float4 pos : SV_POSITION;
                half dist : TEXCOORD0;
            };

            v2f vert (a2v v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                // View-space position comes from UNITY_MATRIX_MV (not IT_MV);
                // negate z so the distance in front of the camera is positive.
                o.dist = -mul(UNITY_MATRIX_MV, v.vertex).z;
                return o;
            }

            fixed4 frag (v2f i) : COLOR {
                return fixed4(i.dist, i.dist, i.dist, 1);
            }
            ENDCG
        }
    }
    FallBack Off
}
```
Yes, it is possible. If you pass uniforms that give the object's distance in the same space as the fragment distance, you can subtract the object distance to get a "relative depth". If you need it relative to the nearest point of the object, that can be approximated with the bounding sphere: the distance from the camera to the sphere's center, minus its radius.
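As a rough sketch of that idea in Cg: assume a script updates a hypothetical _ObjectDistance uniform each frame (camera-to-bounding-sphere-center distance minus the radius), plus a hypothetical _DepthRange for the gradient span. Both names are made up for illustration, and this is untested:

```hlsl
#include "UnityCG.cginc"

// Set from script each frame, e.g.:
//   material.SetFloat("_ObjectDistance",
//       Vector3.Distance(camPos, bounds.center) - radius);
float _ObjectDistance; // hypothetical: camera-to-object distance
float _DepthRange;     // hypothetical: gradient span in world units

struct v2f {
    float4 pos : POSITION;
    float dist : TEXCOORD0;
};

v2f vert (appdata_base v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.dist = -mul(UNITY_MATRIX_MV, v.vertex).z; // view-space distance to this vertex
    return o;
}

half4 frag (v2f i) : COLOR {
    // Subtracting the object's own distance anchors the gradient to the
    // object instead of the camera: moving the camera back no longer
    // shifts the values, which is the "normalized" depth asked about above.
    half d = saturate((i.dist - _ObjectDistance) / _DepthRange);
    return half4(d, d, d, 1);
}
```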