How to render an object with just a depth texture?

Discussion in 'General Graphics' started by JonBFS, Jul 30, 2019.

  1. JonBFS

    Joined: Feb 25, 2019
    Posts: 39
    I've been struggling with this for days now and have realized I need direct help from people who are more knowledgeable about this.

    I am working with ARKit and want objects to occlude my hand if they are closer. I have at my disposal:

    From the ARKit SDK:
    1. a depth texture of my hand
    2. a stencil texture of my hand

    Solution A?
    1. Camera A renders the scene to a RenderTexture
    2. Camera B renders the scene to a depth texture?
    3. Camera C has a quad with ShaderX
    4. ShaderX combines texture A, texture B, and the AR depth texture, using a fragment shader to compare pixel values and determine what gets shown.

    I'm sure there are better ways to get to a solution, but I don't know my options. Also, I'm having trouble with step #2. Any help is greatly appreciated.
     
  2. bgolus

    Joined: Dec 7, 2012
    Posts: 12,352
    Other option:
    1. Render the ARKit depth texture's depth into the scene depth.
    2. Render scene.
    3. Fin.

    This requires using a shader that writes to SV_Depth. You can use ColorMask 0 to skip rendering to the color values. My understanding is that ARKit's depth texture is in linear meters, so you need to convert that into the non-linear depth of the current camera. Here's the function I use to do that.
    Code (CSharp):
        // Inverse of LinearEyeDepth: converts a linear eye depth in meters
        // into the non-linear value stored in the camera's depth buffer.
        float LinearToDepth(float linearDepth)
        {
            return (1.0 - _ZBufferParams.w * linearDepth) / (linearDepth * _ZBufferParams.z);
        }

    Another option:
    1. Render scene with shaders that sample the ARKit depth texture and clip() when further away.

    Have your shader pass the screen position and linear depth from the vertex to the fragment, and compare against the ARKit depth texture. Look at the built-in particle shaders for an example, though those do soft fading with the result, and need to convert the camera depth texture into linear depth, which you shouldn't need to do.
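
    A minimal sketch of that second approach, assuming the ARKit depth texture is bound to a hypothetical _ArDepthTex property and holds linear meters:
    Code (CSharp):
        // Sketch only: clip scene geometry against the ARKit depth texture.
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        sampler2D _ArDepthTex; // hypothetical property, linear meters

        struct v2f
        {
            float4 pos : SV_POSITION;
            float4 screenPos : TEXCOORD0;
            float eyeDepth : TEXCOORD1;
        };

        v2f vert(appdata_base v)
        {
            v2f o;
            o.pos = UnityObjectToClipPos(v.vertex);
            o.screenPos = ComputeScreenPos(o.pos);
            // Linear depth from the camera to this vertex, in meters.
            o.eyeDepth = -UnityObjectToViewPos(v.vertex).z;
            return o;
        }

        fixed4 frag(v2f i) : SV_Target
        {
            // Sample the ARKit depth at this pixel's screen position.
            float arDepth = tex2D(_ArDepthTex, i.screenPos.xy / i.screenPos.w).r;
            // Discard this fragment if the hand is closer to the camera.
            clip(arDepth - i.eyeDepth);
            return fixed4(1, 1, 1, 1); // normal surface shading would go here
        }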
     
  3. JonBFS

    Joined: Feb 25, 2019
    Posts: 39
    This sounds like a good route to take. I read the documentation about SV_Depth and making the frag shader output to the depth buffer for me, but how do I pass in the linearDepth parameter as in your example? I shouldn't use tex2D because that would unwrap the texture to conform to the mesh, correct? Or maybe I should use a quad covering the screen so I won't have that problem.
     
  4. bgolus

    Joined: Dec 7, 2012
    Posts: 12,352
    You absolutely should be using tex2D to sample the ARKit depth texture; that's the only (sane) way to get per-pixel values. You'll want to use a command buffer to call Blit() on CameraEvent.BeforeForwardOpaque. Blit draws a full-screen quad, with UVs matching the screen.

    The linear depth is the value in the depth texture you get from ARKit. You'll need to create a material in script that uses your custom shader, and assign the depth texture (and maybe the stencil texture to clip() against) to the material before using it in the Blit().

    You'll want your blit to look like this:
    Code (csharp):
        CommandBuffer myCommandBuffer = new CommandBuffer();
        myCommandBuffer.Blit(null, BuiltinRenderTextureType.CurrentActive, myDepthWriteMaterial);
        myCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, myCommandBuffer);
    You just need to do that once, in OnEnable.
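
    Put together, the setup might look something like this. This is just a sketch; the component, field, and shader property names are placeholders:
    Code (csharp):
        using UnityEngine;
        using UnityEngine.Rendering;

        public class ARKitDepthOcclusion : MonoBehaviour
        {
            public Shader depthWriteShader;   // the custom SV_Depth shader
            public Texture arDepthTexture;    // depth texture from ARKit
            public Texture arStencilTexture;  // stencil texture from ARKit

            Material myDepthWriteMaterial;
            CommandBuffer myCommandBuffer;
            Camera myCamera;

            void OnEnable()
            {
                myCamera = GetComponent<Camera>();

                // Material using the custom depth-writing shader, with the
                // ARKit textures assigned before the Blit() uses it.
                myDepthWriteMaterial = new Material(depthWriteShader);
                myDepthWriteMaterial.SetTexture("_ArDepthTex", arDepthTexture);
                myDepthWriteMaterial.SetTexture("_ArStencilTex", arStencilTexture);

                myCommandBuffer = new CommandBuffer();
                myCommandBuffer.Blit(null, BuiltinRenderTextureType.CurrentActive, myDepthWriteMaterial);
                myCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, myCommandBuffer);
            }

            void OnDisable()
            {
                if (myCommandBuffer != null)
                    myCamera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, myCommandBuffer);
            }
        }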
     
    Last edited: Jul 30, 2019
  5. JonBFS

    Joined: Feb 25, 2019
    Posts: 39
    Wonderful! I'm almost there. I'm using a dummy greyscale texture of a hand in the shader you instructed me to write (shown below). The hand silhouette blocks objects that are farther away from being rendered, and an object is properly rendered when it gets closer to the camera.

    The only problem that remains is that everything outside the silhouette is pitch black. Could it be because I'm writing directly to SV_Depth and meddling with the depth, and I need to add the depth of the rest of the scene?

    Code (CSharp):
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 200

            Pass
            {
                ColorMask 0
                ZWrite On
                Fog { Mode Off }

                CGPROGRAM

                #pragma vertex vert
                #pragma fragment frag
                #pragma target 3.0
                #include "UnityCG.cginc"

                sampler2D _ArDepthTex;

                struct v2f
                {
                    float2 uv : TEXCOORD0;
                    float4 vertex : SV_POSITION;
                };

                v2f vert(appdata_base v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.texcoord;
                    return o;
                }

                float frag(v2f i) : SV_DEPTH
                {
                    fixed4 col = tex2D(_ArDepthTex, i.uv);
                    float depth = col.r;
                    return (1.0 - _ZBufferParams.w * depth) / (depth * _ZBufferParams.z);
                }

                ENDCG
            }
        }
        FallBack "Diffuse"
     
  6. bgolus

    Joined: Dec 7, 2012
    Posts: 12,352
    You said you had a stencil texture, though I don’t entirely understand what form that stencil texture is passed in. Is it a b&w image of the hand outline? If so, sample that texture and use:
    clip(stencil.r - 0.5);
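
    Applied to the shader you posted, that could look something like this (a sketch; _ArStencilTex is a placeholder property name):
    Code (CSharp):
        sampler2D _ArStencilTex; // hypothetical property for the stencil texture

        float frag(v2f i) : SV_DEPTH
        {
            // Skip pixels outside the hand silhouette so the rest
            // of the scene keeps its normal depth.
            fixed4 stencil = tex2D(_ArStencilTex, i.uv);
            clip(stencil.r - 0.5);

            fixed4 col = tex2D(_ArDepthTex, i.uv);
            float depth = col.r;
            return (1.0 - _ZBufferParams.w * depth) / (depth * _ZBufferParams.z);
        }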
     
  7. bgolus

    Joined: Dec 7, 2012
    Posts: 12,352
    As for why: if you’re using a dummy texture rather than the real one, that means you have a black and white texture with a range of 0.0 to 1.0; presumably the hand is black and the background is white. Well, that’s going to write to the depth as if there’s a wall 1 meter away. The real ARKit depth texture should be a floating-point texture with a much larger range, so presumably the “white” areas are actually values much larger than 1.0, but honestly I have no idea since I’ve never used ARKit.

    The other problem I don’t have an answer to is that I don’t know how the ARKit camera view is rendered into the scene. Ideally it gets rendered into the camera first and ignores the depth buffer.
     
  8. JonBFS

    Joined: Feb 25, 2019
    Posts: 39
    Yes, you're right! I forgot about adding the stencil texture. The frag shader no longer sets the depth outside the hand, so everything else renders just fine.

    This was done in a test environment; the only thing left to do now is to try it in the real environment. *fingers crossed*
     
  9. yester30

    Joined: Mar 4, 2015
    Posts: 4
    Hi there, did you succeed in doing this in the end? I'm curious and trying to do the same thing.