RenderTexture with both RGB and depth

Discussion in 'General Graphics' started by GuitarBro, Mar 9, 2019.

  1. GuitarBro

    GuitarBro

    Joined:
    Oct 9, 2014
    Posts:
    180
    I wish to create a RenderTexture that captures both the RGB values of the camera being used as well as the depth information, preferably in the alpha channel of the same texture to avoid needing two samplers in the shader it will be used in. I don't care about having transparency in the alpha channel and the precision of the depth information isn't super important to me as long as it's enough to not have extreme banding.

    After looking around for quite a bit, the closest solution I could find would be to simply create two RenderTextures: one for the RGB with no depth buffer, and one for depth with its depth buffer set to 16 bits. Though this would require two samplers in the shader, if I'm not mistaken.

    Alternatively, I may just be approaching this from the wrong angle, in which case, what I really want to know is: what is the most efficient way to get both RGB and depth information into a shader? (Ideally in a way that the UVs could be animated within the shader)
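
    Something like this is what I had in mind for the two-texture version (just a rough sketch; I'm not sure whether the color target can really get away without its own depth buffer for z-testing):
    Code (csharp):
    using UnityEngine;
     
    public class TwoTextureSetup : MonoBehaviour
    {
        public Camera captureCamera; // placeholder for the capturing camera
     
        RenderTexture colorRT;
        RenderTexture depthRT;
     
        void Start()
        {
            // RGB target with no depth buffer of its own.
            colorRT = new RenderTexture(1024, 1024, 0, RenderTextureFormat.ARGB32);
            // Depth-only target with its depth buffer set to 16 bits.
            depthRT = new RenderTexture(1024, 1024, 16, RenderTextureFormat.Depth);
        }
    }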
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    You could either have separate RGB & depth textures, or a combined one like you said. To get enough accuracy in the depth, unless your depth range is very, very low, a single ARGBHalf texture might be the answer.
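
    Creating that combined target would look something like this (just a sketch; the resolution is a placeholder, and the camera still gets a real depth buffer):
    Code (csharp):
    using UnityEngine;
     
    public class CombinedTargetSetup : MonoBehaviour
    {
        public Camera captureCamera; // placeholder for the capturing camera
     
        void Start()
        {
            // RGB in the color channels, depth encoded into alpha by your shaders.
            // The 24 is the actual depth buffer the camera still needs for z-testing.
            var combinedRT = new RenderTexture(1024, 1024, 24, RenderTextureFormat.ARGBHalf);
            captureCamera.targetTexture = combinedRT;
        }
    }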

    Here's the thing: if you're going to encode depth information in the alpha, you'll need custom shaders for everything that camera renders. And you still want an actual depth buffer, even if the alpha is holding the depth too.
    Code (hlsl):
    struct v2f {
        float4 pos : SV_Position;
        float2 uv : TEXCOORD0;
    };
     
    sampler2D _MainTex;
     
    // Assumes UnityCG.cginc is included for appdata_full and UnityObjectToClipPos.
    v2f vert (appdata_full v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.uv = v.texcoord.xy;
        return o;
    }
     
    half4 frag (v2f i) : SV_Target
    {
        half4 col = tex2D(_MainTex, i.uv);
        // In the fragment stage, the SV_Position semantic's z contains the
        // same depth value as what goes into the depth buffer.
        col.a = i.pos.z;
        return col;
    }
    Alternatively, having two render textures, one ARGB32 and one RHalf, may end up being faster, both to render to and to read from. Even though it's two textures, they use less memory overall than a single ARGBHalf. Which is faster will depend on your hardware and what you're rendering. Basically, if you're memory bandwidth bound, two textures may be faster; otherwise it may not even be different.
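
    In practice that could be a color pass plus a depth pass using a replacement shader, something like this (a sketch; depthReplacement is a hypothetical shader you'd write that outputs depth as its color):
    Code (csharp):
    using UnityEngine;
     
    public class SplitColorDepth : MonoBehaviour
    {
        public Camera captureCamera;    // placeholder for the capturing camera
        public Shader depthReplacement; // hypothetical depth-as-color shader
     
        RenderTexture colorRT;
        RenderTexture depthRT;
     
        void Start()
        {
            colorRT = new RenderTexture(1024, 1024, 24, RenderTextureFormat.ARGB32);
            depthRT = new RenderTexture(1024, 1024, 16, RenderTextureFormat.RHalf);
        }
     
        void LateUpdate()
        {
            // Color pass.
            captureCamera.targetTexture = colorRT;
            captureCamera.Render();
     
            // Depth pass: everything is redrawn with the replacement shader,
            // which writes depth into the RHalf target.
            captureCamera.targetTexture = depthRT;
            captureCamera.RenderWithShader(depthReplacement, "RenderType");
            captureCamera.targetTexture = null;
        }
    }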
     
  3. GuitarBro

    GuitarBro

    Joined:
    Oct 9, 2014
    Posts:
    180
    Interesting. Modifying every shader to write its depth to alpha isn't going to scale very well, so it sounds like using two textures may actually be the best option here. It certainly sounds like the simplest option, so I may just go with that unless it becomes an issue down the road (though I imagine I'd have better areas to optimize first). Thanks for the information.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Still requires custom shaders either way. The two render texture method without custom shaders won't write anything to the second buffer.
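
    For instance, if you bind both textures at once as multiple render targets, the second one stays empty unless every shader the camera renders explicitly writes to SV_Target1 (sketch; assumes color and depth render textures like the ones above, at the same resolution):
    Code (csharp):
    using UnityEngine;
     
    public class MrtBinding : MonoBehaviour
    {
        public Camera captureCamera;  // placeholder for the capturing camera
        public RenderTexture colorRT; // ARGB32, same size as depthRT
        public RenderTexture depthRT; // RHalf, same size as colorRT
     
        void LateUpdate()
        {
            // Bind both color buffers at once (MRT). Standard shaders only
            // write to SV_Target0, so depthRT stays empty unless the shaders
            // also output depth to SV_Target1.
            var colorBuffers = new RenderBuffer[] { colorRT.colorBuffer, depthRT.colorBuffer };
            captureCamera.SetTargetBuffers(colorBuffers, colorRT.depthBuffer);
            captureCamera.Render();
        }
    }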
     
  5. GuitarBro

    GuitarBro

    Joined:
    Oct 9, 2014
    Posts:
    180
    Yeah, I already have a custom shader for the object I wish to pass the render textures into, so that shouldn't be a problem. I just didn't want to code myself into a corner by doing something relatively inefficient if there was a better way to do it.