
Native depth texture - Metal support

Discussion in 'General Graphics' started by amamoany, Jul 13, 2015.

  1. amamoany

    amamoany

    Joined:
    Jul 1, 2013
    Posts:
    3
    Hello,

I've been running a few tests with Cameras, specifically extracting depth info from them by setting the Camera.depthTextureMode property to DepthTextureMode.Depth.

It appears that on iOS devices supporting the Metal API, the native depth buffer isn't being used. Instead, the camera generates the depth texture in a separate pass. I've confirmed this by writing a post-effect that enables depth rendering and blits the contents of the depth texture to the screen.
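
    Roughly, the test component looks like this (a minimal sketch; DepthBlitTest and depthVisMaterial are just my names, and the material's shader simply visualizes _CameraDepthTexture):

    Code (csharp):
    using UnityEngine;

    // Force the camera to generate a depth texture, then blit through a
    // material whose shader reads _CameraDepthTexture.
    [RequireComponent(typeof(Camera))]
    public class DepthBlitTest : MonoBehaviour
    {
        public Material depthVisMaterial; // samples _CameraDepthTexture

        void OnEnable()
        {
            GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
        }

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            Graphics.Blit(src, dst, depthVisMaterial);
        }
    }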

I've run the effect on Metal and, by way of comparison, under OpenGL ES 2.0. There is a significant frame-rate reduction under Metal. Under OpenGL ES 2.0 it seems the native depth buffer is used and there is no such reduction, as mentioned in the manual:

    http://docs.unity3d.com/Manual/SL-DepthTextures.html


Can anyone at Unity confirm that this is indeed the case? And if so, is an update for this on Unity's roadmap?

    Many thanks,
    Adam


    Unity Version : 5.1.0f3 (pro).
     
  2. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,002
    Did you ever figure this out? Does it still happen? (Did you file a bug report?)

I'm writing a DOF filter for iOS and I'm using the camera depth buffer. If this is still true, maybe I should switch to using render texture depth. (I would check myself, but I don't have access to a Metal-enabled device these days.)
     
  3. amamoany

    amamoany

    Joined:
    Jul 1, 2013
    Posts:
    3
No, I haven't pursued this any further, so I don't know if there is a fix in later versions of Unity. A bug report may be the way to go if there have been no recent changes in this area.

Even using a RenderTexture might not help you, because although you can apply the RT to a material as a texture, I don't think there is a way of extracting/reading the depth info. It's just the color info that is readable, I believe.

There may be a way of getting at the depth info through RenderTexture.depthBuffer, but it looks like you would have to go down the native plugin route to do that.
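
    For what it's worth, getting a handle to the depth surface is easy enough; it's reading it back that's the problem. A rough sketch (untested):

    Code (csharp):
    using UnityEngine;

    public class DepthBufferHandle : MonoBehaviour
    {
        void Start()
        {
            // Render texture with a 24-bit depth buffer attached.
            var rt = new RenderTexture(Screen.width, Screen.height, 24);
            rt.Create();
            GetComponent<Camera>().targetTexture = rt;

            // A handle to the depth surface exists...
            RenderBuffer depth = rt.depthBuffer;
            Debug.Log(depth);

            // ...but as far as I can tell there is no managed API to read
            // its contents; a native plugin would have to wrap the
            // underlying resource (e.g. via the RT's GetNativeTexturePtr).
        }
    }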
     
  4. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,002
  5. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
Try reporting a bug; it could just be a define issue or something overlooked in the shader includes.
     
  6. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,002
Actually, I am not even sure what is supposed to happen, so I don't know if the behaviour is a bug or not.

I am using Camera.depthTextureMode = DepthTextureMode.Depth; on OpenGL ES 2.0 this causes UpdateDepthTexture to take something like 6+ ms on an iPad 4, which seems like a lot to me (it also generates a whole bunch of calls).

So I tried to find out what that command actually does, and opened the source of Camera-DepthTexture.shader:

    Code (csharp):
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                #ifdef UNITY_MIGHT_NOT_HAVE_DEPTH_TEXTURE
                float2 depth : TEXCOORD0;
                #endif
            };

            v2f vert( appdata_base v ) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                UNITY_TRANSFER_DEPTH(o.depth);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target {
                UNITY_OUTPUT_DEPTH(i.depth);
            }
            ENDCG
        }
    }
This is pretty much the example Unity has at http://docs.unity3d.com/Manual/SL-DepthTextures.html which, if I understand correctly, shows the proper way to use native depth on each device. So my understanding so far is that switching depthTextureMode to Depth causes the camera to render the scene again with the above replacement shader, which sounds like a correct way of doing things.

So I'm doing that, then reading depth from sampler2D _CameraDepthTexture in a shader and doing a simple lerp with it. But why is it so slow? Is it using the native z-buffer? (Maybe it is, and it's just slower than I would expect?)
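
    For reference, the fragment side of what I'm doing is roughly this (a sketch; _NearColor/_FarColor are my own placeholder properties, and i.uv comes from a standard image-effect vertex shader):

    Code (csharp):
    sampler2D _CameraDepthTexture;
    fixed4 _NearColor, _FarColor;

    fixed4 frag (v2f i) : SV_Target
    {
        // Raw hardware depth, linearized to 0 (near) .. 1 (far).
        float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        float d = Linear01Depth(raw);
        return lerp(_NearColor, _FarColor, d);
    }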

Is there a way I can be sure the device is using GL_OES_depth_texture, or whatever the native equivalent is?
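
    The closest thing I've found from script is the check below, though I don't think it proves which path is actually taken at runtime:

    Code (csharp):
    // Reports whether the platform supports a native depth texture format;
    // it doesn't confirm Unity uses it for _CameraDepthTexture, though.
    if (SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.Depth))
        Debug.Log("Native depth texture format supported");
    else
        Debug.Log("No native depth texture support");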

    Sorry for the amount of questions, but the manual page is a bit confusing to read and I'm slightly out of my depth (heh...) here.
     
    Last edited: Oct 2, 2015
  7. andrewgotow

    andrewgotow

    Joined:
    Dec 28, 2013
    Posts:
    18
Many mobile devices (including iOS devices) do not keep a depth buffer around by default, so when you ask for one, Unity has to go through and re-render the entire scene to a new texture.

iOS devices have extremely limited memory available for rendering, so they use a tile-based deferred rendering (TBDR) architecture. The frame is broken into small tiles (16x16 pixels or so), and each is rendered independently and composited into the final image. The memory for these tiles lives directly in GPU hardware, which makes it incredibly performant and energy efficient, and only a few kilobytes are needed instead of the several megabytes required for a fullscreen buffer.

The TBDR renderer essentially performs hidden-surface removal using these small tiles as temporary depth buffers, assuming (correctly, in the majority of cases) that the depth information is unnecessary once depth testing is done; after that it is discarded, and the same block of memory is reused for the next tile.

Both OpenGL ES (through the GL_OES_depth_texture extension) and Metal support native depth texture formats, but as far as I know there is no way to retrieve the depth data from the tiled renderer without rendering the scene a second time, or outputting the depth to a separate buffer using MRT (multiple render targets).
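
    To illustrate the MRT idea (just a sketch of the general technique, not something Unity sets up for you; ComputeShading and i.eyeDepth are placeholders):

    Code (csharp):
    // Write depth to a second color target alongside the normal shading,
    // assuming the vertex shader passed eye-space depth through in i.eyeDepth.
    struct FragOutput
    {
        fixed4 color : SV_Target0; // regular shaded result
        float4 depth : SV_Target1; // depth packed into a color target
    };

    FragOutput frag (v2f i)
    {
        FragOutput o;
        o.color = ComputeShading(i);           // placeholder shading function
        o.depth = float4(i.eyeDepth.xxx, 1.0); // assumed interpolated depth
        return o;
    }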
     
    Last edited: Dec 6, 2016
  8. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    12,898
So Metal is slower if you need the depth information?
     
  9. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    12,898
What is the way to get the depth buffer in Metal on iOS when it's needed for a transparent material?