Depth and normal buffers use point filtering in deferred rendering?

Discussion in 'Shaders' started by Iron-Warrior, Jan 26, 2019.

  1. Iron-Warrior


    Joined:
    Nov 3, 2009
    Posts:
    838
Hi, I've discovered that the depth and normals buffers (_CameraDepthTexture and _CameraGBufferTexture2) seem to use point filtering, rather than bilinear/trilinear. This prevents sampling from pixels "in between" other pixels, which is useful for some effects, such as scaling the thickness of a line in edge detection. Here's a quick test shader I wrote demonstrating the effect:

    Code (csharp):
    Shader "Roystan/Post Debug"
    {
        Properties
        {
            [HideInInspector]
            _MainTex ("-", 2D) = "white" {}
            _Offset("Offset", Float) = 0
        }
        SubShader
        {
            // No culling or depth
            Cull Off ZWrite Off ZTest Always

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float2 uv : TEXCOORD0;
                    float4 vertex : SV_POSITION;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.uv;
                    return o;
                }

                sampler2D _MainTex;
                float4 _MainTex_TexelSize;
                sampler2D _CameraDepthTexture;
                float4 _CameraDepthTexture_TexelSize;
                float _Offset;

                fixed4 frag (v2f i) : SV_Target
                {
                    float2 deltaMain = _MainTex_TexelSize.xy * _Offset;
                    float2 deltaDepth = _CameraDepthTexture_TexelSize.xy * _Offset;

                    //return tex2D(_CameraDepthTexture, i.uv + deltaMain).r;
                    return tex2D(_MainTex, i.uv + deltaDepth);
                }
                ENDCG
            }
        }
    }
    Switching between the two return values at the end shows that while you can smoothly translate _MainTex using the offset, the depth texture will only sample from discrete pixel values.

    Is there any way to modify this behaviour, and does anyone have an explanation why it would be this way? I couldn't think of what purpose it would serve to have the buffers set up like this.

    Thanks,
    Erik
     
  2. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,338
    The why is because you generally never want to sample normals or depth with filtering in the usual use cases: lighting, fog, SSAO, soft particles, etc. All of these have problems at depth or normal discontinuities when using bilinear filtering that you want to avoid. The main color, on the other hand, benefits from bilinear filtering for several effects, like depth of field, bloom, post AA, and auto exposure.

    So that’s the why. But how do you work around it? The most portable option is to sample the 4 nearest pixels and do the interpolation yourself; that works on all platforms, but obviously isn’t the nicest or fastest solution. Alternatively, you could use inline sampler states.
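    The manual interpolation from the first option might be sketched like this (a hypothetical helper, not from the thread; it assumes tex is a screen-sized point-filtered texture and texelSize is its matching _TexelSize.xy uniform):

    Code (csharp):
    // Manual bilinear filtering of a point-filtered texture.
    // Hypothetical helper: tex is assumed to be screen-sized, and
    // texelSize its matching _TexelSize.xy value.
    float SampleDepthBilinear(sampler2D tex, float2 uv, float2 texelSize)
    {
        // Sample position in texel space, shifted so that texel
        // centers land on integer coordinates.
        float2 pixel = uv / texelSize - 0.5;
        float2 f = frac(pixel);
        float2 base = (floor(pixel) + 0.5) * texelSize;

        // Four point-filtered taps around the sample position.
        float d00 = tex2D(tex, base).r;
        float d10 = tex2D(tex, base + float2(texelSize.x, 0)).r;
        float d01 = tex2D(tex, base + float2(0, texelSize.y)).r;
        float d11 = tex2D(tex, base + texelSize).r;

        // Standard bilinear weights.
        return lerp(lerp(d00, d10, f.x), lerp(d01, d11, f.x), f.y);
    }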

    https://docs.unity3d.com/Manual/SL-SamplerStates.html
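    Applied to the depth texture, that would look something like this (a sketch; the my_linear_clamp_sampler name follows the naming convention from that page, and separate sampler states are only supported on some platforms):

    Code (csharp):
    // DX11-style declarations instead of sampler2D.
    Texture2D _CameraDepthTexture;
    float4 _CameraDepthTexture_TexelSize;
    // The "linear" and "clamp" parts of the name pick the filter and
    // wrap modes, independent of how the texture itself is set up.
    SamplerState my_linear_clamp_sampler;

    fixed4 frag (v2f i) : SV_Target
    {
        float2 delta = _CameraDepthTexture_TexelSize.xy * _Offset;
        return _CameraDepthTexture.Sample(my_linear_clamp_sampler, i.uv + delta).r;
    }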


    Another option, similar to the first, is to use the various Gather sample functions to get 4 point-filtered pixel values and still do the interpolation yourself. The difference is you can get 4 depth samples in a single Gather call, or 4 normals using three GatherRed/Green/Blue calls. This requires the D3D11-style HLSL shown in the link above, but you get a lot of the benefits of both bilinear sampling and point sampling without having to sample the texture as many times.
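    For the depth case, a Gather-based sketch might look like this (assumes shader model 4.1+ and the D3D11-style syntax; the component ordering in the comment is my reading of the documented Gather quad layout, so double-check it on your target platform):

    Code (csharp):
    Texture2D _CameraDepthTexture;
    float4 _CameraDepthTexture_TexelSize;
    SamplerState my_point_clamp_sampler;

    float SampleDepthGather(float2 uv)
    {
        float2 texelSize = _CameraDepthTexture_TexelSize.xy;
        // Fractional position inside the 2x2 quad bilinear would use.
        float2 f = frac(uv / texelSize - 0.5);

        // One call fetches the red channel of all four texels in the
        // quad, ordered w = (0,0), z = (1,0), x = (0,1), y = (1,1).
        float4 d = _CameraDepthTexture.GatherRed(my_point_clamp_sampler, uv);

        // Then interpolate between the four samples manually.
        return lerp(lerp(d.w, d.z, f.x), lerp(d.x, d.y, f.x), f.y);
    }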
     
    Iron-Warrior likes this.
  3. Iron-Warrior


    Joined:
    Nov 3, 2009
    Posts:
    838
    The why makes a ton of sense in retrospect, thanks for the explanation.

    Using inline sampler states was easy to set up and worked perfectly.

    https://i.imgur.com/gnC29rz.gif

    Nice thin lines sampling between texels.
     
    bgolus likes this.
  4. flogelz


    Joined:
    Aug 10, 2018
    Posts:
    142
    I just recently learned about linear inline sampling from your tutorial about outlines and wanted to try it too. For some reason it didn't work on the G-buffer textures? All the wrap options (mirror etc.) worked totally fine, but trilinear and linear have no effect and it always stays at point filtering.

    (I tried the same sampler on another texture in a simple vert/frag shader and there it worked. Only the G-buffer textures seem to cause problems.)