I'm in a situation where I need to know when a shadowcaster is being used to render the depth buffer (as opposed to the shadow buffer). Nothing seems evident at this time, so I wonder if there is some shader constant that can be tested in the vertex shader to implicitly tell me the render usage. I might be alone on this one, but I would like a shader keyword to differentiate the two usages, because I want to use a lower-resolution mesh for the shadow than for the depth. The depth buffer must be absolutely identical to the rendered geometry, but the shadow doesn't need to be.
Code (CSharp):
#ifdef SHADOWS_DEPTH
if (unity_LightShadowBias.z != 0.0)
{
    // shadow
}
else
{
    // camera depth
}
#endif
This is what Unity uses internally, but it only works if your lights have a non-zero bias setting. In Unity's internal case they don't actually care: when the bias is zero, both branches run the same code, so they never added another way to tell the passes apart. I started a thread on this a while ago to try to find a 100% consistent way to tell camera and light apart, but I never found one that worked in all cases. https://forum.unity3d.com/threads/d...dow-caster-and-shadow-receiver-in-5-2.362653/
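For context, here's a minimal sketch of how that check could sit inside a complete ShadowCaster pass. The macros (V2F_SHADOW_CASTER, TRANSFER_SHADOW_CASTER_NORMALOFFSET, SHADOW_CASTER_FRAGMENT) come from UnityCG.cginc; the branch placement and comments are my own suggestion, not Unity's exact internal code:
Code (CSharp):
Pass
{
    Name "ShadowCaster"
    Tags { "LightMode" = "ShadowCaster" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #pragma multi_compile_shadowcaster
    #include "UnityCG.cginc"

    struct v2f
    {
        V2F_SHADOW_CASTER;
    };

    v2f vert(appdata_base v)
    {
        v2f o;
        #if defined(SHADOWS_DEPTH)
        if (unity_LightShadowBias.z != 0.0)
        {
            // Rendering into a shadow map: any per-vertex tweaks for a
            // lower-detail representation could go here.
        }
        else
        {
            // Rendering the camera depth texture: must match the visible
            // geometry exactly, so leave the vertices untouched.
        }
        #endif
        TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
        return o;
    }

    float4 frag(v2f i) : SV_Target
    {
        SHADOW_CASTER_FRAGMENT(i)
    }
    ENDCG
}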
The frame debugger seems to indicate (at least in Unity 5.4.0f3) that unity_LightShadowBias is zeroed in the depth pass. In the shadowcaster I'm now using this:
Code (CSharp):
if (any(unity_LightShadowBias) == false)
{
    // depth pass
}
But I'm concerned that this could break at any time... Could someone from Unity confirm that this is the official/correct way to write depth vs. shadow shaders?
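For readability, that test can be wrapped in a small helper. This is only a sketch: IsDepthPass is my own name, not a Unity built-in, and it relies on the undocumented observation above that unity_LightShadowBias is all zeros during the camera depth pass:
Code (CSharp):
// Sketch only: relies on the undocumented behaviour that
// unity_LightShadowBias is zeroed during the camera depth pass
// (observed in the frame debugger in 5.4.0f3).
// Caveat: a light whose bias and normal bias are both 0 would also
// pass this test while rendering its shadow map.
inline bool IsDepthPass()
{
    return !any(unity_LightShadowBias);
}

// Usage inside the shadowcaster vertex shader:
// if (IsDepthPass()) { /* camera depth */ } else { /* shadow map */ }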