
How to change the size of the camera's depth buffer?

Discussion in 'General Graphics' started by HuangWM, Aug 19, 2019.

  1. HuangWM

    HuangWM

    Joined:
    Nov 3, 2015
    Posts:
    45
    The resolution of my game is 4K. Can I change the size of the camera's depth buffer to 1080p?
     
  2. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    No, sadly not in an easy way. The way depth buffering works, each color pixel has an associated depth value so that only the closest opaque surface is kept; if you had more color pixels per depth pixel you would get edge artifacts. What you can perhaps do is render your whole world at 1080p, upscale it to 4K (using some good approach that fits your use case), and then render the UI, or anything else that needs to be pixel perfect, on top of that at 4K without depth testing.
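    Roughly something like this on the UI camera, as a minimal sketch; the second world camera, the 1080p RenderTexture and the plain bilinear Blit upscale are all assumptions here, not a drop-in solution:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Attach to the native-resolution UI camera (higher depth, culling mask = UI only).
    // A separate world camera renders everything else into a 1080p RenderTexture
    // assigned to worldRT; this command buffer upscales that image behind the UI.
    [RequireComponent(typeof(Camera))]
    public class UpscaleWorldUnderUI : MonoBehaviour
    {
        public RenderTexture worldRT; // the world camera's 1920x1080 targetTexture

        private CommandBuffer m_Blit;

        void OnEnable()
        {
            m_Blit = new CommandBuffer { name = "Upscale world to native res" };
            // Plain bilinear upscale into this camera's full-resolution target.
            m_Blit.Blit(worldRT, BuiltinRenderTextureType.CameraTarget);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, m_Blit);
        }

        void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, m_Blit);
            m_Blit.Release();
        }
    }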
     
  3. HuangWM

    HuangWM

    Joined:
    Nov 3, 2015
    Posts:
    45
    I tried an approach, but I don't know if it works.

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class RendererCore : MonoBehaviour
    {
        public Material DownscaleDepthMaterial;

        private Camera m_Camera;
        private CommandBuffer m_CB_AfterForwardOpaque;

        protected void Start()
        {
            m_Camera = gameObject.GetComponent<Camera>();

            m_CB_AfterForwardOpaque = new CommandBuffer();
            m_Camera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, m_CB_AfterForwardOpaque);
        }

        protected void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            m_CB_AfterForwardOpaque.Clear();

            // Half-resolution single-channel float target for the downsampled depth.
            RenderTextureFormat formatRF32 = RenderTextureFormat.RFloat;
            int lowresDepthWidth = source.width / 2;
            int lowresDepthHeight = source.height / 2;
            RenderTexture lowresDepthRT = RenderTexture.GetTemporary(lowresDepthWidth, lowresDepthHeight, 0, formatRF32);

            // Downsample the depth, then override the global camera depth texture with it.
            m_CB_AfterForwardOpaque.Blit(source, lowresDepthRT, DownscaleDepthMaterial);
            m_CB_AfterForwardOpaque.SetGlobalTexture("_CameraDepthTexture", lowresDepthRT);

            Graphics.Blit(source, destination);
            RenderTexture.ReleaseTemporary(lowresDepthRT);
        }
    }
    Code (ShaderLab):
    Shader "Custom/RendererCore/DownsampleDepth"
    {
        CGINCLUDE

        #include "UnityCG.cginc"

        struct v2f
        {
            float4 pos : SV_POSITION;
            float2 uv : TEXCOORD0;
        };

        sampler2D _CameraDepthTexture;
        float4 _CameraDepthTexture_TexelSize; // (1.0/width, 1.0/height, width, height)

        v2f vert(appdata_img v)
        {
            v2f o = (v2f)0;
            o.pos = UnityObjectToClipPos(v.vertex);
            o.uv = v.texcoord;
            return o;
        }

        float frag(v2f input) : SV_Target
        {
            // Offsets to the four full-res texels covered by this low-res pixel.
            float2 texelSize = 0.5 * _CameraDepthTexture_TexelSize.xy;
            float2 taps[4] = {
                input.uv + float2(-1, -1) * texelSize,
                input.uv + float2(-1,  1) * texelSize,
                input.uv + float2( 1, -1) * texelSize,
                input.uv + float2( 1,  1) * texelSize
            };

            float depth1 = tex2D(_CameraDepthTexture, taps[0]).r;
            float depth2 = tex2D(_CameraDepthTexture, taps[1]).r;
            float depth3 = tex2D(_CameraDepthTexture, taps[2]).r;
            float depth4 = tex2D(_CameraDepthTexture, taps[3]).r;

            // Keep the minimum of the four depth samples.
            return min(depth1, min(depth2, min(depth3, depth4)));
        }

        ENDCG

        SubShader
        {
            Pass
            {
                ZTest Always Cull Off ZWrite Off

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                ENDCG
            }
        }
        Fallback Off
    }
     
  4. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    I think what you want is more akin to https://docs.unity3d.com/Manual/DynamicResolution.html
    You render your main view camera at a lower resolution and then upscale the color buffer to 4K using some algorithm. The depth buffer should probably be discarded, or, if you really need it, filtered with some edge-preserving filter (and after that you should assume the depth information is only approximate).
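    A bare-bones sketch of that route; it assumes a platform and graphics API that support dynamic resolution, and the fixed 0.5 scale is just an example value, a real implementation would drive it from frame timing data:

    Code (CSharp):
    using UnityEngine;

    // Minimal dynamic resolution sketch: the camera's render targets (color and
    // depth together) are scaled down, while the final output stays at the
    // display resolution (4K here).
    [RequireComponent(typeof(Camera))]
    public class DynamicResolutionSketch : MonoBehaviour
    {
        [Range(0.25f, 1.0f)]
        public float scale = 0.5f; // example value, roughly 1080p on a 4K target

        void Start()
        {
            // The camera must opt in for the scaling to apply to it.
            GetComponent<Camera>().allowDynamicResolution = true;
        }

        void Update()
        {
            // In a real game this would be adjusted based on GPU frame time.
            ScalableBufferManager.ResizeBuffers(scale, scale);
        }
    }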
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    What are you actually trying to accomplish? Are you looking to drop the resolution of the depth buffer for some specific visual effect you're going for, or as a rendering optimization?

    What you're doing above would indeed create a half resolution version of the camera depth texture, but that has nothing to do with the depth buffer. The depth texture is something Unity either generates as a separate pass, or copies from the g-buffer's depth buffer, depending on whether you're using forward or deferred rendering. What you're doing would only make the texture that some post processing effects sample from lower resolution. The actual depth buffer would remain 4K, and those post process effects would also still be running at 4K, giving only a negligible performance benefit, if any. That also assumes Unity even lets you override a built in texture like that; I think it usually ignores that and throws a warning. You're also setting it and then throwing it away (release) before it ever gets to the post processing, so really you're just wasting GPU time building a texture that never gets used.
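    For reference, the camera depth texture only exists because something requests it, for example:

    Code (CSharp):
    using UnityEngine;

    // The camera depth texture is generated on request: in forward rendering Unity
    // renders an extra depth-only pass, in deferred it copies the g-buffer depth.
    // The actual depth buffer the opaque geometry tests against is unaffected.
    [RequireComponent(typeof(Camera))]
    public class RequestDepthTexture : MonoBehaviour
    {
        void OnEnable()
        {
            GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
        }
    }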


    If your goal is to improve performance, then rendering the whole screen at a lower resolution is the only real option. You cannot render to a target that has a color and depth buffer of different resolutions. The closest you can get to that is actually MSAA, but in that case the color is the thing that ends up being lower resolution than the depth. There have been games that use MSAA to render at a lower resolution and reconstruct a “full resolution” image from that data. Many of the PS4 Pro games do something like this to be able to reach those higher resolutions.
     
  6. HuangWM

    HuangWM

    Joined:
    Nov 3, 2015
    Posts:
    45
    I can't use a lower resolution, because we signed some contracts.
    For more information, please see the other thread I posted.
    https://forum.unity.com/threads/how-to-optimize-graphics.724076/#post-4836605
    PIX Capture: https://drive.google.com/open?id=1KZc4BfU9We-6DwJL7uqzrm2qKiFhIKXC
    I guess the bottleneck is the frequent reads of the depth texture.
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    I'm not a Unity employee, so I cannot access what I assume is a private thread, and you probably shouldn't be linking Xbox One X PIX dumps on a public forum.

    If your bottleneck is reading frequently from the camera depth texture, presumably you're using some post processing that's sampling the camera depth texture, like depth of field or ambient occlusion. Downscaling the camera depth texture to a lower resolution version, and then modifying the ambient occlusion to run at a lower resolution should be possible without a significant quality hit. Depth of field you could do something similar, but might require some some more creative modifications as running everything at a lower res there will visibly making your entire scene lower resolution and blur the edges of objects that should not be.