New trouble with screen space partial derivatives and tex2Dgrad()

Discussion in 'Shaders' started by Quatum1000, Mar 25, 2020.

  1. Quatum1000


    Oct 5, 2014
    Hi everyone,

    I'm using the deferred shader to pass some simple light maps onto the scene. When using
    Code (CSharp):
    half4 lmSample = tex2D(_LightmapTwoD, lightMapUV);
    I get these partial derivative artifacts for some reason.

    Then I tried the second and third options from a thread here.

    Code (CSharp):
            // second option
            float2 dd_x = ddx(lightMapUV);
            float2 dd_y = ddy(lightMapUV);
            half4 lmSample = tex2Dgrad(_LightmapTwoD, lightMapUV, dd_x, dd_y);

            // third option: take the derivatives before wrapping the UVs
            float2 dd_x = ddx(lightMapUV);
            float2 dd_y = ddy(lightMapUV);
            lightMapUV = frac(lightMapUV);
            half4 lmSample = tex2Dgrad(_LightmapTwoD, lightMapUV, dd_x, dd_y);
    Anyway... nothing helps get rid of the derivative artifacts.

    I also got this approach from an ebook by Wolfgang Engel:

    Code (CSharp):
            float2 dd_x = ddx(lightMapUV * 8192);
            float2 dd_y = ddy(lightMapUV * 8192);
            float delta_max_sqr = max(dot(dd_x, dd_x), dot(dd_y, dd_y));
            float mip = 0.5 * log2(delta_max_sqr);
            half4 lmSample = tex2Dlod(_LightmapTwoD, float4(lightMapUV, 0, mip));
    But it fails here as well.

    It's a real pain that tex2Dgrad doesn't work in this case. I also tried turning texture clamping off, enabling border mip maps, etc.

    Is there any other way to fix this?

    Thanks a lot..
    Last edited: Mar 25, 2020
  2. bgolus


    Dec 7, 2012
    Welcome to the problem every deferred UV system comes up against. The issue isn't in your light map texture's settings, or what you do with the derivatives of the light map UV, it is the light map UV itself.

    Derivatives are the difference between adjacent pixels' values within each 2x2 block of pixels (called a pixel quad), and you're getting the derivatives of a screen space texture's values. If an edge of the rendered geometry is in the middle of one of those blocks, the derivatives will potentially be quite high. This doesn't happen when rendering actual geometry, because the derivatives for UVs are those of each triangle's interpolated values, not of the screen space values. When rendering a triangle, each pixel in a pixel quad runs that surface's shader and gets interpolated UVs, even if the triangle only covers one pixel of the quad. This ensures that the derivatives of values on the edges of geometry are still correct for that geometry.

    But when getting derivatives from a screen space texture you don't get that benefit. You're just getting the derivatives of the texture values with no understanding of geometry edges. If there's a big change between two values in the texture mid pixel quad, you get a large derivative value.
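    As a rough sketch of what's happening (a simplification; v_at is just notation for "the value at that pixel", not a real function):

    Code (CSharp):
            // Conceptually, within one 2x2 pixel quad the hardware approximates:
            //   ddx(v) = v_at(x + 1, y) - v_at(x, y)
            //   ddy(v) = v_at(x, y + 1) - v_at(x, y)
            // If a geometry edge crosses the quad in a screen space UV texture,
            // v_at(x + 1, y) can come from a completely different surface, so the
            // difference (and therefore the chosen mip level) becomes huge.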

    If you search online for deferred decal mip maps, or raytracing mip maps, or deriving normals from depth buffers, you'll find plenty of discussions on the topic, but never a real solution ... because there isn't one.

    Here are a few of the common "solutions":

    The most common solution is to get the derivatives of the depth buffer at the same time, and if there's a large enough discontinuity, set the UV derivatives to zero on that axis. This is what gets implemented in probably every AAA game that uses deferred rendering and projected decals. You probably also want to clamp the UV derivatives to some max magnitude, effectively limiting the allowed mip range. This is relatively cheap and solves 90% of the issues at the cost of a little additional aliasing. These days TAA usually sweeps that issue away.
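    The clamping part might look something like this (a sketch; _MaxDerivMagnitude is a made-up tuning parameter, not a Unity built-in):

    Code (CSharp):
            float2 dd_x = ddx(lightMapUV);
            float2 dd_y = ddy(lightMapUV);
            // Limit the derivative magnitude, which caps how high a mip can be picked.
            // _MaxDerivMagnitude is a hypothetical material property you'd tune.
            dd_x *= min(1.0, _MaxDerivMagnitude / max(length(dd_x), 1e-8));
            dd_y *= min(1.0, _MaxDerivMagnitude / max(length(dd_y), 1e-8));
            half4 lmSample = tex2Dgrad(_LightmapTwoD, lightMapUV, dd_x, dd_y);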

    The second option is to reconstruct the derivatives manually using multiple samples of the UV texture at depth discontinuities. This is how more advanced normal reconstruction from a depth buffer works. Again, the above is usually good enough that for most people the additional cost isn't considered worth it.
    See slides 37 through 42, which talk a little about both, though in this case they're doing decals with a compute shader against a world position screen space texture, so that's what they have derivatives for.

    The third option is to calculate the mip level based purely on the distance from the camera. You need to know the intended texels-per-world-unit scale of the surface's lightmaps for this to work. It's more common for decals, since you often have a good idea of what the texel scale is. It also has aliasing issues at grazing angles, but doesn't have edge discontinuities since you're not using derivatives at all. It doesn't let you use anisotropic filtering, which might not really be an issue for lightmaps anyway. It can be combined with the above two techniques to potentially get something better than just zeroing out the derivatives on discontinuities.
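    A minimal sketch of the distance-based mip idea, assuming a hypothetical _TexelsPerMeter property you'd set from script to match your lightmap density (the projection-matrix math assumes a symmetric perspective projection):

    Code (CSharp):
            float eyeDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV));
            // World-space height covered by one screen pixel at this depth.
            // unity_CameraProjection._m11 is cot(fovY / 2) for a perspective camera.
            float pixelWorldSize = 2.0 * eyeDepth / (unity_CameraProjection._m11 * _ScreenParams.y);
            // How many lightmap texels that pixel footprint covers.
            // _TexelsPerMeter is a hypothetical property describing the lightmap density.
            float texelsPerPixel = pixelWorldSize * _TexelsPerMeter;
            float mip = max(0.0, log2(texelsPerPixel));
            half4 lmSample = tex2Dlod(_LightmapTwoD, float4(lightMapUV, 0, mip));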

    The fourth option is to render out the derivatives at the same time as you're rendering out the UVs and store those in the same texture. This gets you ground truth results, but usually makes the decision to only store UVs and not the lightmap's values less attractive. I know some people do something like this for their toy renderers, but I don't think any games actually do this.
    Olmi, Quatum1000 and Invertex like this.
  3. Quatum1000


    Oct 5, 2014
    Thanks for your detailed explanation. I understand the issue now, but my technical knowledge of derivatives and the deferred buffers is pretty low.

    I didn't understand the technical difference between solutions one and two,
    so I adapted the code from the presentation. But the first issue is getting the float4 depthQuad0.
    I have no idea what this means:
    float4 depthQuad0 = depthBufferTex.GatherRed(nearestClampSampler, sampleUv, [int2(-1,-1)]);


    I use the standard deferred shader and have sampler2D _LightmapTwoD; declared.

    Code (CSharp):
    half4 CalculateLight (unity_v2f_deferred i) {
        // ... unity stuff ...
        half4 res = UNITY_BRDF_PBS(data.diffuseColor, data.specularColor, oneMinusReflectivity, data.smoothness, data.normalWorld, -eyeVec, light, ind);

        // ===================
        // ... my lightmap stuff ...

        // The most common solution is to get the derivatives of the depth buffer at the same time,
        // and if there's a large enough discontinuity set the UV derivatives to zero on that axis.

        float2 lightMapUV = wpos.xz;

        float4 depthQuad0 = ????
        float4 depthQuad1 = ????
        float4 dxy = float4(depthQuad0.x, depthQuad0.z, depthQuad1.x, depthQuad1.z);
        float depthC = depthQuad0.w;
        float depthThres = 0.0001;
        bool sampleLod0 = any(abs(depthC - dxy) > depthThres);

        float4 lmSample;
        UNITY_BRANCH
        if (sampleLod0) {
            lmSample = tex2Dlod(_LightmapTwoD, float4(lightMapUV, 0, 0));
        } else {
            lmSample = tex2D(_LightmapTwoD, lightMapUV);
        }
        // ===================
    For sure I would test the first solution as well, but I don't know what it means to get the derivatives continuously.

    // The most common solution is to get the derivatives of the depth buffer at the same time,
    // and if there's a large enough discontinuity set the UV derivatives to zero on that axis.

    Thank you.
  4. bgolus


    Dec 7, 2012
    The Gather functions are for getting the red value of the 4 texels within what would otherwise be a bilinear sample. For example, if you had a 2x2 texture and called GatherRed at float2(0.5,0.5) you’d get back a float4 with the red of all 4 texels of the texture. This is useful for getting the value from multiple texels faster than sampling each individually using point sampling.

    However, this is also a Direct3D 10.1 / OpenGL 4.0 feature, and requires using D3D 10 style texture uniforms. Functionally, doing multiple tex2Dlod calls with the UV offset by 1 texel at a time gets you the same results.
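    For example, the same four depth texels could be fetched one at a time (a sketch; it assumes a screen-space uv and Unity's _CameraDepthTexture_TexelSize convention, where .xy is 1/width and 1/height; the exact component ordering Gather returns is not reproduced here):

    Code (CSharp):
            // Emulate GatherRed with four point samples around the bilinear footprint.
            float2 ts = _CameraDepthTexture_TexelSize.xy;
            float4 depthQuad;
            depthQuad.x = tex2Dlod(_CameraDepthTexture, float4(uv + float2(-0.5,  0.5) * ts, 0, 0)).r;
            depthQuad.y = tex2Dlod(_CameraDepthTexture, float4(uv + float2( 0.5,  0.5) * ts, 0, 0)).r;
            depthQuad.z = tex2Dlod(_CameraDepthTexture, float4(uv + float2( 0.5, -0.5) * ts, 0, 0)).r;
            depthQuad.w = tex2Dlod(_CameraDepthTexture, float4(uv + float2(-0.5, -0.5) * ts, 0, 0)).r;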

    See this example. Just pretend each of their texture sample calls is a tex2Dlod and it should make sense. (Their depth buffer texture has no mip maps, so an equivalent of the mip level argument isn't needed.)

    You’re not getting the derivatives continuously, you’re using derivatives to find out if there’s a large enough change in the depth buffer for you to suspect there is a geometry edge, aka a discontinuity.
    Something like this:
    Code (csharp):
    float depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV));
    float2 dd_x = ddx(lightMapUV);
    if (abs(ddx(depth)) > _DepthDiffThreshold)
      dd_x = float2(0,0);
    float2 dd_y = ddy(lightMapUV);
    if (abs(ddy(depth)) > _DepthDiffThreshold)
      dd_y = float2(0,0);
    half4 lmSample = tex2Dgrad(_LightmapTwoD, lightMapUV, dd_x, dd_y);
    _DepthDiffThreshold is a totally arbitrary value you'd have to play with depending on what looks good for your content. Basically it's the same idea as sampling the depth buffer multiple times and looking for big steps, but you're letting the hardware do the work for you.
    Quatum1000 likes this.
  5. Quatum1000


    Oct 5, 2014
    Thank you very much! You made my day!

    Your solution works. The algorithm works for a specified distance.
    In my case, a city scene, it's great at low distances with _DepthDiffThreshold = 10 and the camera about 2 m above the ground, but it causes a lot of LOD flickering beyond about 100 m.

    Then I came up with the idea of scaling _DepthDiffThreshold by the depth distance. That means at higher distances the derivative discontinuity check triggers less often, and the distance flickering stopped immediately, out to far distances > 6000 m.

    To fix further flickering issues at low surface angles against the camera, I took the height delta between the pixel and the camera position into account. The code below includes my personal best results.

    Code (CSharp):
           // ... heightDelta
           // 01.0m   _DepthDiffThresholdMul = 1.000
           // 02.5m   _DepthDiffThresholdMul = 0.350
           // 05.0m   _DepthDiffThresholdMul = 0.280
           // 10.0m   _DepthDiffThresholdMul = 0.200
           // 20.0m   _DepthDiffThresholdMul = 0.165
           // 30.0m   _DepthDiffThresholdMul = 0.100
           // 50.0m   _DepthDiffThresholdMul = 0.090
           // 100.0m  _DepthDiffThresholdMul = 0.075

           float heightDelta = abs(wpos.y - _WorldSpaceCameraPos.y);
           float depth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));

           // ... _DepthDiffThresholdMul = 0.30-0.35 seems to work well for a standard outdoor scene
           float _DepthDiffThreshold = depth * _DepthDiffThresholdMul;

           float2 dd_x = ddx(lightMapUV);
           if (abs(ddx(depth)) > _DepthDiffThreshold)
               dd_x = float2(0, 0);
           float2 dd_y = ddy(lightMapUV);
           if (abs(ddy(depth)) > _DepthDiffThreshold)
               dd_y = float2(0, 0);
           lmSample = tex2Dgrad(_LightmapTwoD, lightMapUV, dd_x, dd_y);
    Last edited: Mar 26, 2020