
Custom fragment depth in directional light shadowcaster shader

Discussion in 'Shaders' started by neginfinity, Apr 9, 2016.

  1. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Has anyone here ever written a custom fragment/vertex shader pair that changed per-pixel depth value AND supported shadows from directional lights? I could use a minimal example of this kind of shader.

    I have a custom shader of this kind, but the directional light is giving me trouble: the depth value appears to be invalid (it is stretched) and I am probably missing an adjustment somewhere.

    This:
    Code (csharp):
    float calculateShadowDepth(float3 worldPos){
        float4 projPos = mul(UNITY_MATRIX_VP, float4(worldPos, 1));
        projPos = UnityApplyLinearShadowBias(projPos);
        return projPos.z/projPos.w;
    }
    Works correctly for spot lights, but fails when the engine starts rendering directional shadows (apparently it renders those with an orthographic camera).
     
  2. Zicandar

    Zicandar

    Joined:
    Feb 10, 2014
    Posts:
    388
    I'm not 100% sure I did exactly this, but I have played with per-pixel depth offsets by modifying the depth output from the pixel shader. (WARNING: This can/will disable early-Z culling/optimization!)
    I'd suggest, if you can, doing the same in the pass where you're outputting your mesh as a shadow caster. But I'm not sure if they allow pixel shaders in the shadow passes?
    And you're completely correct that a directional light is handled as an ortho camera; that's the "idea" of it being so far away that it's "directional" and not from a point.
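    A minimal sketch of that kind of per-pixel depth write (the v2f field name here is made up for illustration; platform depth-range details such as reversed Z are ignored):
    Code (cg):

        // Fragment shader that overrides the rasterized depth per pixel.
        // Note: writing SV_Depth generally disables early-Z for the draw.
        struct v2f {
            float4 pos : SV_POSITION;
            float3 customWorldPos : TEXCOORD0;   // hypothetical: world position to take depth from
        };

        struct FragOutput {
            float4 color : SV_Target;
            float  depth : SV_Depth;
        };

        FragOutput frag(v2f i)
        {
            FragOutput o;
            o.color = float4(1, 1, 1, 1);
            // Project the custom world-space position (e.g. a ray hit) to clip space
            // and write its normalized depth instead of the interpolated one.
            float4 clipPos = mul(UNITY_MATRIX_VP, float4(i.customWorldPos, 1.0));
            o.depth = clipPos.z / clipPos.w;
            return o;
        }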
     
  3. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Fragment shaders are allowed in shadow passes.

    Alright. Here are some details.

    This is a cube made out of 12 triangles:
    1.jpg
    ^^^ This is a cube. Not a sphere. The depth calculation is correct for perspective/ortho cameras, I checked.

    This is what it looks like when it is lit by spot/point lights.
    2.jpg
    And this is what it looks like when it is lit by directional light.
    3.jpg
    As you can see, the shadow depth is stretched, even though the silhouette of the shadow is correct.

    In the frame debugger I can see that the shadow for the directional light is rendered in 3 passes by 3 fake spot lights which appear to have orthographic projection. Those 3 shadow buffers are then combined into one using some process I don't get to see.

    It is also quite annoying that I can't find a way to see the final shader with macro expansion applied to it.

    Does anyone have any idea what theoretically could be going wrong here?
     
    Last edited: Apr 11, 2016
  4. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    I think... I might've figured out what's wrong with it. I'll post an update later - once I test it.
     
  5. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    I've done this before. Tricky business, because it's also different for point lights. (The shadow map is rendered to a cubemap.) The advantage of the point light, though, is that it's always rendered to a "regular" texture. For directional and spot lights it can also be an actual depth texture, which requires outputting to SV_Depth. In case hardware shadow maps are not supported, the output needs to go to the color output instead.
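    A rough sketch of the two output routes being described (the semantics are standard HLSL; which one the shadow sampling actually consumes depends on whether hardware shadow maps are in use):
    Code (cg):

        // Shadowcaster fragment output, sketching both cases described above.
        struct ShadowFragOutput {
            float4 color : SV_Target;   // shadow map is a color target: depth gets encoded into color
            float  depth : SV_Depth;    // shadow map is a real depth texture: write depth directly
        };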

    I just took the entire shadow pass from the standard shader and adjusted that. The first directional light might be treated differently, because it's usually the most important. (Aka the sun.)

    @Zicandar: It will (not can) disable early z-rejection, so always render these surfaces first. (They will be shaded no matter what is in front.)

    If any GPU makers are listening: I had a small idea on the early z-rejection issue. What if you add a mode in which the vertex shader output is a best-case z-depth result? As in, the fragment might be visible, but the pixel shader might make it worse. Or, with typical z-testing, the pixel shader can only push pixels back, not move them forward compared to the vertex shader output. In that case, which is also neginfinity's case, early z-rejection can still be applied. (No GPU makers listening? Just my luck...)
     
  6. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    I've already dealt with this part, actually. Point lights were the easiest, because they have the least insane macros in AutoLight.cginc.

    Since the sphere-cube is pretty much rendered by raycasting, I think it is just a case of an incorrect incoming direction vector, which only shows up with the orthographic spot lights that seem to be used for directional light rendering. I should be able to test that tomorrow. I've already dealt with this problem for the orthographic camera, but it looks like I overlooked it again in the spotlight-related code.
     
  7. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    I figured it out.

    4.jpg

    The issue turned out to be quite... complicated.

    Basically... in most scenarios, when Unity is rendering something it is possible to determine whether the camera is orthographic using unity_OrthoParams.w: if it is > 0, the camera is orthographic. For a raycast object this determines how the direction to the camera is calculated, which is quite important.
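    For reference, the usual check (which works during regular camera rendering) could be sketched like this; getRayToCamera is the perspective-case helper used in the code below:
    Code (cg):

        // Sketch: the "normal" way to pick the ray direction, relying on unity_OrthoParams.
        bool isOrtho = (unity_OrthoParams.w > 0.0);   // w > 0 for an orthographic camera
        o.rayDir = isOrtho
            ? -UNITY_MATRIX_V[2].xyz        // ortho: all rays share the view forward direction
            : getRayToCamera(worldPos);     // perspective: ray from the camera to the fragment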

    However, this flag is not set correctly when Unity renders the scene to the depth texture (UpdateDepthTexture), and it is not set correctly when rendering the orthographic fake spot lights from which the directional light's shadow is constructed.

    So, I had to directly query projection matrix values to determine whether it is orthographic or not.
    Code (cg):
    // The last row of an orthographic projection matrix is (0, 0, 0, 1), so this detects ortho.
    if ((UNITY_MATRIX_P[3].x == 0.0) && (UNITY_MATRIX_P[3].y == 0.0) && (UNITY_MATRIX_P[3].z == 0.0)){
        o.rayDir = -UNITY_MATRIX_V[2].xyz;
    }
    else{
        o.rayDir = getRayToCamera(worldPos);
    }
    ^^^ Which is a messy way to go about it.
    Also, I had an incorrect multi-compile pragma in the shadowcaster pass (it needed #pragma multi_compile_shadowcaster).
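    For context, a stock shadowcaster pass (using the standard UnityCG macros rather than the custom depth code from this thread) shows where that pragma lives:
    Code (cg):

        // Standard-style shadowcaster pass; the multi_compile_shadowcaster pragma
        // generates the SHADOWS_DEPTH / SHADOWS_CUBE keyword variants.
        Pass {
            Tags { "LightMode" = "ShadowCaster" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_shadowcaster
            #include "UnityCG.cginc"

            struct v2f {
                V2F_SHADOW_CASTER;
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
                return o;
            }

            float4 frag(v2f i) : SV_Target
            {
                SHADOW_CASTER_FRAGMENT(i)
            }
            ENDCG
        }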

    I'm not sure if I should report this as a bug. It is an incredibly obscure issue, and I've never had anyone respond to any of my reports...

    ----

    Reported. Case 787801.
     
    Last edited: Apr 12, 2016
  8. Zicandar

    Zicandar

    Joined:
    Feb 10, 2014
    Posts:
    388
    I thought there was a way to tell it you would only move stuff further away? And that would allow early z-rejection.
    ALWAYS report stuff, if not reported, the issue does not exist. :)
     
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Reported it already. See the case number above.

    Z-rejection is irrelevant for me though; it is a test/prototype, so it can be slow. Also, I should probably use a compute shader for this.
     
  10. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    I can't find anything like that. Only references that early z-rejection is disabled when outputting depth. The most recent reference is from 2012 though, so things might have changed.
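    That said, newer D3D11-class hardware does expose a conservative depth output semantic that matches this description, though whether Unity's shadow passes can take advantage of it isn't something verified here. A sketch:
    Code (cg):

        // Conservative depth output (Shader Model 5 / D3D11+): promising that the written
        // depth only moves fragments further away lets the GPU keep coarse early-Z rejection.
        struct FragOutput {
            float4 color : SV_Target;
            // With a conventional LESS depth test, "further away" means a larger depth value:
            float  depth : SV_DepthGreaterEqual;
            // (With a reversed-Z setup the matching semantic would be SV_DepthLessEqual.)
        };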
     
  11. stevenc33

    stevenc33

    Joined:
    Apr 20, 2016
    Posts:
    18
    neginfinity,

    I'm also raycasting spheres from cubes, in hopes of using them for SVOs. So far normals and colors look correct and great. I've been struggling with the cryptic system of Unity's ShadowCaster (at least this is where I think I can solve the problem). Too bad good resources and documentation are nearly nonexistent.

    Mind if I ask that you share some source or create a tutorial?
     
  12. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    There's a "blog" linked in my signature with a whopping 5 articles total.

    Here's the code:
    http://neginfinity.bitbucket.org/shaders/2016/04/13/raytraced-primitives-in-unity-pt4_1.html



    ^^^ The earth here is a cube.

    To correctly reconstruct depth, calculate the fragment position in world space, multiply it by the view-projection matrix as a 4-component vector (with w == 1), then divide all components by w. That'll give you screen coordinates and a correct depth value regardless of the projection matrix you're using.
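    A sketch of that calculation (the articles refer to a calculateFragmentDepth helper; this is the general shape of it, ignoring platform depth-range differences):
    Code (cg):

        // Reconstruct normalized depth for a world-space hit point.
        float calculateFragmentDepth(float3 worldPos)
        {
            float4 clipPos = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
            return clipPos.z / clipPos.w;   // valid for both perspective and orthographic projections
        }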

    The example can easily be modified to handle distance fields too:
    distancefields.png
     
    Reanimate_L likes this.
  13. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Interesting, so this is basically a raymarched object, right, not a full-screen camera render?
    Sorry for going off the main topic.
     
  14. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Yes. It is a raymarched/raytraced object with correct depth, shadows and everything.
    The original "cube" serves as a sort of portal.
    You can see the boundary outline in the last screenshot. That's the object's actual geometry.

    The screen with the earth uses a mathematically defined sphere (meaning the ray tracing only takes one step); the second sample uses distance fields (meaning you could plug anything in there) and actual raymarching. The distance field shape on the last screen is "sphere minus torus minus cube".
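    For the curious, a distance field for a shape like "sphere minus torus minus cube" can be sketched with the standard SDF primitives (the formulas below are the usual ones; the sizes are made up):
    Code (cg):

        // Standard signed distance functions and a CSG-style subtraction.
        float sdSphere(float3 p, float r) { return length(p) - r; }
        float sdTorus(float3 p, float2 t)
        {
            float2 q = float2(length(p.xz) - t.x, p.y);
            return length(q) - t.y;
        }
        float sdBox(float3 p, float3 b)
        {
            float3 d = abs(p) - b;
            return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0);
        }

        // "Sphere minus torus minus cube": subtraction of SDFs is max(a, -b).
        float sceneDistance(float3 p)
        {
            float d = sdSphere(p, 0.5);
            d = max(d, -sdTorus(p, float2(0.45, 0.12)));
            d = max(d, -sdBox(p, float3(0.2, 0.2, 0.2)));
            return d;
        }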
     
  15. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Hmm even more interested. . . :D
     
  16. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Well, the articles are online, so feel free to read them.

    The main difficulty with unity lighting is that there are ifdefs involved and there are multiple projection modes being used.
    For example, the directional shadow is screenspace, but in order to construct it Unity creates 2 or 3 temporary spotlight-like shadowmaps... which IIRC may use orthographic projection.

    I think I outlined that too; there are 2 or 3 paragraphs of text outlining what kind of jumping through hoops is required to get all lighting models to work in a shader that overwrites the object's depth.
     
  17. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Checking it out, thanks for the article, man. Most articles about raymarching are full-screen camera renders; it's really great to find one about a raymarched object.
     
  18. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    It ended up being mostly about fighting the Unity lighting system and getting shadows right. The latest test with distance fields isn't on the "blog". I'll probably post it there eventually. Then again, distance fields are surprisingly easy to implement.
     
    Reanimate_L likes this.
  19. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Yes, same here. Adjusting the depth in the standard pass was not much of an issue. For the shadows there is a difference between point lights and the other types. And you have to account for the cases where the output is just a depth buffer, or a color buffer plus a depth buffer. I'm also interested to read your article. I got things to work most of the time, but I'd like to compare it with your solution. The shadows were absolutely the hard part.
     
  20. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    See "LightAndShadow.cginc" from here, then check how it is being used.
    I also added code for distance fields screenshot from earlier.
     
  21. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Interesting. I do see small differences. For one, I use a separate shader altogether for the shadowcaster pass, and when a color output is not needed I only output depth. For the rest the differences are small. Where you have:
    Code (csharp):
    float4 calculateCubeShadowDepth(float3 worldPos){
        float3 diff = worldPos - _LightPositionRange.xyz;
        float depth = (length(diff) + unity_LightShadowBias.x) * _LightPositionRange.w;
        return UnityEncodeCubeShadowDepth(depth);
    }
    I have:
    Code (csharp):
    float3 dist = pos_world.xyz - _LightPositionRange.xyz;
    return UnityEncodeCubeShadowDepth(length(dist) * _LightPositionRange.w);
    So I'm missing the bias here. And where you have:
    Code (csharp):
    float calculateShadowDepth(float3 worldPos) {
        float4 projPos = mul(UNITY_MATRIX_VP, float4(worldPos, 1));
        projPos = UnityApplyLinearShadowBias(projPos);
        return projPos.z/projPos.w;
    }
    I have:
    Code (csharp):
    if (unity_LightShadowBias.z != 0.0) {
        float shadowCos = dot(normal_world, view_world);
        float shadowSine = sqrt(-shadowCos * shadowCos + 1.0);
        float normalBias = unity_LightShadowBias.z * shadowSine;
        pos_world.xyz -= normal_world * normalBias;
    }
    float4 pos_clip = mul(UNITY_MATRIX_VP, pos_world);
    pos_clip = UnityApplyLinearShadowBias(pos_clip);
    return pos_clip.z / pos_clip.w;
    So here I do have a bias.
     
  22. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    One more thing.

    Raycasting.cginc:
    Code (csharp):
    // Detect the shadowcaster pass: shadow keywords are defined,
    // but none of the per-light keywords of a forward lighting pass are.
    #if !defined(RAYCAST_WITHIN_SHADOWCASTER_PASS) \
        && (defined(SHADOWS_DEPTH)||defined(SHADOWS_NATIVE)||defined(SHADOWS_CUBE)) \
        && !defined(SPOT) \
        && !defined(POINT) \
        && !defined(DIRECTIONAL) \
        && !defined(POINT_COOKIE) \
        && !defined(DIRECTIONAL_COOKIE)

    #define RAYCAST_WITHIN_SHADOWCASTER_PASS

    #endif

    // Point light shadows: distance to the light, encoded into the color output.
    float4 calculateCubeShadowDepth(float3 worldPos){
        float3 diff = worldPos - _LightPositionRange.xyz;
        float depth = (length(diff) + unity_LightShadowBias.x) * _LightPositionRange.w;
        return UnityEncodeCubeShadowDepth(depth);
    }

    // Spot/directional shadows: biased clip-space depth.
    float calculateShadowDepth(float3 worldPos){
        float4 projPos = mul(UNITY_MATRIX_VP, float4(worldPos, 1));
        projPos = UnityApplyLinearShadowBias(projPos);
        return projPos.z/projPos.w;
    }

    RaytracedFragOut computeOutputFragment(float3 worldPos, float4 col){
        RaytracedFragOut result;
    #if defined(RAYCAST_WITHIN_SHADOWCASTER_PASS)
        #ifdef SHADOWS_CUBE
            // Cube shadow map: regular fragment depth, encoded distance goes to color.
            float4 shadowDepth = calculateCubeShadowDepth(worldPos);
            result.depth = calculateFragmentDepth(worldPos);
            result.col = shadowDepth;
        #else
            // Depth-based shadow map: write the biased depth to both outputs.
            float shadowDepth = calculateShadowDepth(worldPos);
            result.depth = shadowDepth;
            result.col = shadowDepth;
        #endif
    #else
        // Regular rendering pass: custom depth plus the shaded color.
        result.depth = calculateFragmentDepth(worldPos);
        result.col = col;
    #endif
        return result;
    }
    I actually worked with something similar today, and forgot about the whole "calculateFragmentDepth" magic I wrote into that shader when I posted the article.

    Ended up with a nasty surprise when the scene started using color instead of depth on spot light shadows.
     
  23. Gravitymama

    Gravitymama

    Joined:
    Mar 31, 2018
    Posts:
    2
    What is _LightPositionRange.w for a directional light?
     
  24. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Directional lights use a completely different shadow technique and do not use this variable at all. _LightPositionRange is only meaningful for point lights: xyz is the light position and w is 1/range.