
How to Setup People Occlusion

Discussion in 'AR' started by Cruxx, Jun 8, 2019.

  1. Cruxx

    Cruxx

    Joined:
    Nov 3, 2018
    Posts:
    6
    I have just updated my Unity AR Foundation packages and iOS to the latest preview versions. I tried to get the fancy new people occlusion feature working but have no idea where to start.
    I read the docs and looked through the sample repo and did not find anything helpful (maybe it's just me). Is there an option I should turn on in the Unity editor, or is this feature similar to light estimation, requiring external scripts?
    Thank you
     
    pwshaw likes this.
  2. tdmowrer

    tdmowrer

    Joined:
    Apr 21, 2017
    Posts:
    605
    Check out the "HumanSegmentationImages" sample scene. Right now, it just surfaces the stencil and depth buffers as raw images. We're working on a better sample, but that should get you started.
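    Roughly, that sample boils down to reading two textures off the ARHumanBodyManager each frame. A minimal sketch (the class name and RawImage fields here are placeholders you'd wire up in the Inspector, not the sample's actual code):

    ```csharp
    using UnityEngine;
    using UnityEngine.UI;
    using UnityEngine.XR.ARFoundation;

    public class HumanSegmentationViewer : MonoBehaviour
    {
        [SerializeField] ARHumanBodyManager _humanBodyManager;
        [SerializeField] RawImage _stencilImage; // UI target for the stencil
        [SerializeField] RawImage _depthImage;   // UI target for the depth

        void Update()
        {
            // Both textures are produced each frame when human segmentation
            // is enabled and supported on the device.
            _stencilImage.texture = _humanBodyManager.humanStencilTexture;
            _depthImage.texture = _humanBodyManager.humanDepthTexture;
        }
    }
    ```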
     
    pwshaw and ina like this.
  3. virtualHCIT

    virtualHCIT

    Joined:
    Aug 20, 2018
    Posts:
    15
    I was wondering if you could suggest how to go about using the depth and stencil raw buffers to perform occlusion. Would these have to be used in a custom postprocess effect? Can they be applied directly or do they have to be cropped/rotated to match the screen resolution/orientation?

    Thanks!
     
    Last edited: Jun 10, 2019
  4. virtualHCIT

    virtualHCIT

    Joined:
    Aug 20, 2018
    Posts:
    15
    Some other questions: do UnityARKit_HumanBodyProvider_TryGetHumanDepth and UnityARKit_HumanBodyProvider_TryGetHumanStencil use Apple's generateDilatedDepthFromFrame and generateMatteFromFrame to acquire the buffers? (i.e. are the buffers already scaled and refined using ARKit's API?)
     
  5. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    305
    Is there any chance there will be a way to use the depth buffer generated by human segmentation as the base depth buffer, or will it be necessary to sample it separately inside a shader and then perform some kind of check against the fragment depth?
     
  6. todds_unity

    todds_unity

    Joined:
    Aug 1, 2018
    Posts:
    324
    You will need to write your own shader that uses both the stencil and depth image.

    The values in the depth buffer are in meters, in the range [0, infinity), and need to be converted into view space with the depth value in [0, 1] mapped between the near and far clip planes. Additionally, because non-human pixels in the human depth image are 0, you will need to use the stencil buffer to know whether that 0 value means a human pixel occluding at the near clip plane or a non-human pixel, which should get the value of the far clip plane.
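    As a rough, untested sketch of that conversion, assuming a full-screen depth-writing pass (_StencilTex and _DepthTex are placeholder names for the human stencil/depth textures):

    ```hlsl
    // Convert the human depth image (eye-space meters) to a non-linear
    // device depth value, using the stencil to disambiguate the 0 values.
    float frag (v2f i) : SV_Depth
    {
        float stencil = tex2D(_StencilTex, i.uv).r;
        if (stencil < 0.5)
            return UNITY_RAW_FAR_CLIP_VALUE; // non-human: push to the far plane

        // Human pixel: clamp to the near plane (_ProjectionParams.y), then
        // invert Unity's LinearEyeDepth mapping. _ZBufferParams encodes the
        // platform convention, so this should also cover reversed-Z.
        float eyeDepth = max(tex2D(_DepthTex, i.uv).r, _ProjectionParams.y);
        return (1.0 / eyeDepth - _ZBufferParams.w) / _ZBufferParams.z;
    }
    ```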

    Todd
     
    jiangaqin and Colin_MacLeod like this.
  7. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    305
    Thanks for the extra info! Sorry if this is a dumb question, but would there potentially be some way of blitting the depth texture onto the frame buffer's depth buffer before the whole scene draws (like right after the Camera clears it)? For scenarios where we don't need the stencil buffer functionality and only want to use the depth buffer, this seems like it would really simplify the workflow. I have an AR project that I'm working on adding people occlusion to, but it uses some rather complex shaders from the asset store that are really tough to modify because they use all kinds of #include .cginc files and compiler preprocessors...

    Also, any estimate on when we might see a sample demonstrating the technique you outlined in your previous post?

    Thanks!
     
  8. Keion92

    Keion92

    Joined:
    Sep 21, 2015
    Posts:
    6
    The sample provides us with the stencil and depth data for people occlusion, but how can we get the original depth map? Is it possible to control how far away people occlusion will detect people, or is it a fixed value?
     
    cjensen1 and ina like this.
  9. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    305
    I feel like I'm a little out of my depth (no pun intended) here, but it seems like drawing a full-screen quad with a shader that only outputs to SV_Depth, before the rest of the scene draws, would be the simplest implementation of basic occlusion.

    I'm a bit of a neophyte when it comes to View space... I don't really understand the range of values derived from something like
    Code (CSharp):
    UnityObjectToViewPos( float4( v.vertex.xyz, 1.0 ) )
    Clip space I understand a bit better because the x and y components are in (-1, 1), but I don't really understand the z and w components.

    Anyway, I hope Unity will include an example that demonstrates just writing the depth stuff to SV_Depth because that seems like a much more user friendly way of adding this to any project.
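    For what it's worth, the "blit depth before the scene draws" idea might look something like this on the C# side. This is only a sketch: the class name is made up, and the material's shader would need ZWrite On, ZTest Always, and an SV_Depth output for the depth write to take effect.

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Hypothetical: inject a depth-only full-screen draw before opaque
    // geometry so the rest of the scene is depth-tested against it.
    public class HumanDepthPrePass : MonoBehaviour
    {
        [SerializeField] Material _depthOnlyMaterial; // writes SV_Depth, ZWrite On

        void OnEnable()
        {
            var cb = new CommandBuffer { name = "Human depth pre-pass" };
            cb.Blit(null, BuiltinRenderTextureType.CurrentActive, _depthOnlyMaterial);
            GetComponent<Camera>().AddCommandBuffer(
                CameraEvent.BeforeForwardOpaque, cb);
        }
    }
    ```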
     
    ina likes this.
  10. eco_bach

    eco_bach

    Joined:
    Jul 8, 2013
    Posts:
    1,601
    @todds_unity So if I understand correctly, this custom shader will only be concerned with returning numerical depth data, not necessarily displaying anything?
     
  11. Keion92

    Keion92

    Joined:
    Sep 21, 2015
    Posts:
    6
    @todds_unity Is there a reason why the image in the HumanSegmentationImages scene is inverted? We are attempting to un-invert it so we can apply the shader. If you know how to un-invert the output from the HumanSegmentationImages scene, please let us know.

    Thank you.
     
  12. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    305
    In whatever shader you're using to sample from the depth/stencil textures, just invert the y coordinate:
    Code (CSharp):
    uv.y = 1.0 - uv.y;
     
  13. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    waiting also for a working example
     
  14. Keion92

    Keion92

    Joined:
    Sep 21, 2015
    Posts:
    6
    What variable tells us whether the depth or stencil value is 0 or 1? I can only find the humanStencil and humanDepth textures. How can I get the depth data from what we already have?
     
  15. Keion92

    Keion92

    Joined:
    Sep 21, 2015
    Posts:
    6
    Is there a reason why the provided sample's camera output is very zoomed in compared to the depth/stencil output? How can we change the output camera so that it sees what the original camera and the stencil/depth output see? Currently they are not the same.

    @todds_unity @tdmowrer
     
  16. eco_bach

    eco_bach

    Joined:
    Jul 8, 2013
    Posts:
    1,601
    Yes, agreed. Getting anything working took a lot of trial-and-error hackery using blitting, and even then the results are barely acceptable.
     
    Cruxx, FlyingRio and ina like this.
  17. Keion92

    Keion92

    Joined:
    Sep 21, 2015
    Posts:
    6
    For the people depth value, is there a limit on the distance between the camera and the detected person? Beyond 1 meter, the people depth value is a pure color, meaning there is no more change in value.
     
  18. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    305
    It goes beyond 1 meter, but in a fragment shader, 1.0 in the red channel is fully red. If you divide the depth by 10, you won't reach full red until 10 meters away. Make sense? It's a floating-point texture, so it can store values beyond 1.0.
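    e.g. a debug fragment along these lines (texture name assumed):

    ```hlsl
    // Visualize the human depth texture: 0 m = black, 10 m = full red.
    fixed4 frag (v2f i) : SV_Target
    {
        float meters = tex2D(_DepthTex, i.uv).r; // humanDepthTexture, in meters
        return fixed4(meters / 10.0, 0.0, 0.0, 1.0);
    }
    ```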
     
    JonBFS and FlyingRio like this.
  19. HypeVR

    HypeVR

    Joined:
    Aug 28, 2018
    Posts:
    11
    IMG_0065.jpg IMG_0066.jpg

    Why, in landscape mode, is the depth/stencil flipped horizontally (which can be fixed in the shader with uv.x = 1.0 - uv.x;), and why, when holding my phone in portrait mode, does the depth/stencil not change accordingly? Does anyone know how to fix this?
     
  20. edo_m18

    edo_m18

    Joined:
    Nov 17, 2014
    Posts:
    12
    I have the same issue.

    Can anyone tell me what the problem is?
    I'm working on masking the AR scene with the stencil texture.

    I tried using it as a plain mask texture first, but it didn't display correctly. Next, I tried flipping and scaling the texture, but that didn't work either. Please see the attached image; it's a little bit off. I've also attached my shader code. Can anyone tell me how to fix the problem?

    This shader code is used as a post effect.

    Code (CSharp):
    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = v.uv;
        return o;
    }

    sampler2D _MainTex;
    sampler2D _DepthTex;
    sampler2D _StencilTex;

    fixed4 frag (v2f i) : SV_Target
    {
        fixed4 col = tex2D(_MainTex, i.uv);

        float2 uv = i.uv;
        uv.x = 1.0 - uv.x;

        // "1.62" corrects the aspect ratio; the stencil texture's
        // aspect ratio does not match the display's.
        uv.y = (uv.y + 0.5) / 1.62;

        float stencil = tex2D(_StencilTex, uv).r;

        return lerp(col, float4(1, 0, 0, 1), stencil);
    }
     

    Attached Files:

    Last edited: Aug 8, 2019
  21. edo_m18

    edo_m18

    Joined:
    Nov 17, 2014
    Posts:
    12
    I found the correct code. (But it only works in portrait left.)

    Code (CSharp):
    fixed4 frag (v2f i) : SV_Target
    {
        fixed4 col = tex2D(_MainTex, i.uv);

        // Correcting ratio from 2688x1242 to 1920x1440
        float ratio = 1.62;

        float2 uv = i.uv;
        uv.x = 1.0 - uv.x;
        uv.y /= ratio;
        uv.y += 1.0 - (ratio * 0.5);

        float stencil = tex2D(_StencilTex, uv).r;
        if (stencil >= 0.9)
        {
            return col * float4(stencil, 0.0, 0.0, 1.0);
        }
        else
        {
            return col;
        }
    }
     

    Attached Files:

  22. dilmerval

    dilmerval

    Joined:
    Jun 15, 2013
    Posts:
    232
    That looks great. I have the same issue where it's a bit off; can you post the entire shader and possibly your setup?
     
  23. edo_m18

    edo_m18

    Joined:
    Nov 17, 2014
    Posts:
    12
    Thank you for replying.

    Here is my code. It could be optimized a bit more, I think.
    I've attached an example of the result (IMG_0114.PNG).

    First, this is the shader code.

    Code (CSharp):
    Shader "Hidden/PeopleOcclusion"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Cull Off ZWrite Off ZTest Always

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float2 uv : TEXCOORD0;
                    float4 vertex : SV_POSITION;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.uv;
                    return o;
                }

                sampler2D _MainTex;
                sampler2D _BackgroundTex;
                sampler2D _DepthTex;
                sampler2D _StencilTex;

                UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

                fixed4 frag (v2f i) : SV_Target
                {
                    fixed4 col = tex2D(_MainTex, i.uv);

                    float2 uv = i.uv;

                    // Flip the x axis.
                    uv.x = 1.0 - uv.x;

                    // Correct the aspect ratio of the textures obtained from
                    // ARHumanBodyManager to the screen's aspect ratio.
                    float ratio = 1.62;
                    uv.y /= ratio;
                    uv.y += 1.0 - (ratio * 0.5);

                    float stencil = tex2D(_StencilTex, uv).r;
                    if (stencil < 0.9)
                    {
                        return col;
                    }

                    // Compare depths. If the delta is greater than zero, the
                    // pixel estimated to be human is in front of the AR objects.
                    float depth = tex2D(_DepthTex, uv).r;
                    float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
                    float delta = saturate(sceneZ - depth);
                    if (delta > 0.0)
                    {
                        return tex2D(_BackgroundTex, i.uv);
                    }
                    else
                    {
                        return col;
                    }
                }
                ENDCG
            }
        }
    }

    Next, here is the C# code.
    I cloned it from the AR Foundation example code and customized it a little.

    My approach is to render it as a post effect.

    Code (CSharp):
    using System.Text;
    using UnityEngine;
    using UnityEngine.UI;
    using UnityEngine.XR.ARFoundation;

    public class PeopleOcclusion : MonoBehaviour
    {
        [SerializeField, Tooltip("The ARHumanBodyManager which will produce frame events.")]
        private ARHumanBodyManager _humanBodyManager;

        [SerializeField]
        private Material _material = null;

        [SerializeField]
        private ARCameraBackground _arCameraBackground = null;

        [SerializeField]
        private RawImage _captureImage = null;

        private RenderTexture _captureTexture = null;

        public ARHumanBodyManager HumanBodyManager
        {
            get { return _humanBodyManager; }
            set { _humanBodyManager = value; }
        }

        [SerializeField]
        private RawImage _rawImage;

        /// <summary>
        /// The UI RawImage used to display the image on screen.
        /// </summary>
        public RawImage RawImage
        {
            get { return _rawImage; }
            set { _rawImage = value; }
        }

        [SerializeField]
        private Text _imageInfo;

        /// <summary>
        /// The UI Text used to display information about the image on screen.
        /// </summary>
        public Text ImageInfo
        {
            get { return _imageInfo; }
            set { _imageInfo = value; }
        }

        #region ### MonoBehaviour ###
        private void Awake()
        {
            Camera camera = GetComponent<Camera>();
            camera.depthTextureMode |= DepthTextureMode.Depth;

            _rawImage.texture = _humanBodyManager.humanDepthTexture;

            _captureTexture = new RenderTexture(Screen.width, Screen.height, 0);
            _captureImage.texture = _captureTexture;
        }
        #endregion ### MonoBehaviour ###

        private void LogTextureInfo(StringBuilder stringBuilder, string textureName, Texture2D texture)
        {
            stringBuilder.AppendFormat("texture : {0}\n", textureName);
            if (texture == null)
            {
                stringBuilder.AppendFormat("   <null>\n");
            }
            else
            {
                stringBuilder.AppendFormat("   format : {0}\n", texture.format.ToString());
                stringBuilder.AppendFormat("   width  : {0}\n", texture.width);
                stringBuilder.AppendFormat("   height : {0}\n", texture.height);
                stringBuilder.AppendFormat("   mipmap : {0}\n", texture.mipmapCount);
            }
        }

        private void Update()
        {
            var subsystem = _humanBodyManager.subsystem;

            if (subsystem == null)
            {
                if (_imageInfo != null)
                {
                    _imageInfo.text = "Human Segmentation not supported.";
                }
                return;
            }

            StringBuilder sb = new StringBuilder();
            Texture2D humanStencil = _humanBodyManager.humanStencilTexture;
            Texture2D humanDepth = _humanBodyManager.humanDepthTexture;
            LogTextureInfo(sb, "stencil", humanStencil);
            LogTextureInfo(sb, "depth", humanDepth);

            if (_imageInfo != null)
            {
                _imageInfo.text = sb.ToString();
            }

            _material.SetTexture("_StencilTex", humanStencil);
            _material.SetTexture("_DepthTex", humanDepth);
            _material.SetTexture("_BackgroundTex", _captureTexture);
        }

        private void LateUpdate()
        {
            if (_arCameraBackground.material != null)
            {
                Graphics.Blit(null, _captureTexture, _arCameraBackground.material);
            }
        }

        private void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            Graphics.Blit(src, dest, _material);
        }
    }
     
    thorikawa and JurreFonk like this.
  24. knotttrodt

    knotttrodt

    Joined:
    Nov 27, 2017
    Posts:
    1
    Have you fixed the rotation problem?
     
  25. edo_m18

    edo_m18

    Joined:
    Nov 17, 2014
    Posts:
    12
    I'm sorry, I haven't fixed it. It only works in landscape right.
     
  26. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Hey, any update on this? This seems like boilerplate stuff to include in AR Foundation!
     
  27. Development-Boldly-XR

    Development-Boldly-XR

    Joined:
    Mar 8, 2019
    Posts:
    12
    I'm also curious if there are any more resources as of now?
     
  28. Development-Boldly-XR

    Development-Boldly-XR

    Joined:
    Mar 8, 2019
    Posts:
    12
    @edo_m18 Could you elaborate a bit more on how you have set up your project? I'm trying to replicate your scene, but I'm having a hard time, i.e. where is the hidden shader used, and what material is needed in the inspector? Really great work thus far!
     
  29. perchangpete

    perchangpete

    Joined:
    Oct 14, 2016
    Posts:
    13
    I've managed to make some progress on this stuff. I started with the awesome example that @edo_m18 shared (thanks very much for that!) and added a couple of extra things. The issues I was trying to fix:

    - make it work for all device orientations
    - make it work for all device aspect ratios
    - improve the depth comparisons, which seemed to break in some situations (especially when you scale your content)

    I think I've managed to sort out the first two issues, and I've made a little bit of progress on the third. The depth comparison kind of works, but I'm not at all convinced by my implementation.

    I had two issues with the depth comparison. The first was scaling. If you want to make your content appear smaller in the real world, you need to increase the scale of the AROrigin, which has the effect of moving your content further from the camera in the game world. So the same content, in the same location in the real world, will have different depth values depending on what scale you apply to it. My fix for this issue was to pass the AROrigin scale to the shader and use it to unscale the values from the depth texture. The second issue was the nature of the values in the occlusion depth texture: I'm not convinced that they are in metres (which they should be, afaik). I hacked my shader to output different colours depending on the raw values in the occlusion depth texture, and it looks like 1 unit of occlusion depth is about 5/8 of a metre. I 'fixed' this problem by multiplying the occlusion depth by 0.625 before using it in the shader.
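    The scale fix could be as simple as something like this on the script side (a sketch only; the class name and the "_SessionOriginScale" property name are made up, and the shader would multiply the sampled human depth by this value before the comparison):

    ```csharp
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Sketch: feed the AR Session Origin's scale to the occlusion material
    // so the shader can rescale the human depth (real-world meters) before
    // comparing it against the scaled scene depth.
    public class OcclusionScaleFeeder : MonoBehaviour
    {
        [SerializeField] ARSessionOrigin _sessionOrigin;
        [SerializeField] Material _occlusionMaterial;

        void Update()
        {
            _occlusionMaterial.SetFloat("_SessionOriginScale",
                _sessionOrigin.transform.localScale.x);
        }
    }
    ```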

    As I said, I'm not convinced by what I've done, and it definitely doesn't work perfectly; I feel like I'm missing some essential piece of knowledge to figure this out properly. So, if anyone has ideas or feedback, I'd really appreciate the help!

    ...

    You should be able to get my code working by just adding the files to your project, adding PeopleOcclusionPostEffect.cs to your AR camera object, and filling in its editor properties. I recommend using the AR Foundation samples and adding the effect to the SimpleAR scene.
     

    Attached Files:

  30. bbird_unity

    bbird_unity

    Joined:
    Apr 16, 2019
    Posts:
    1
    Thanks for posting that @perchangpete! I was able to build successfully after following your instructions. It looks great in landscape mode, but in portrait on my XR the occlusion plane seemed to be rotated to the wrong side. I haven't had the opportunity to look at the code yet, but hopefully I can spend some time debugging that in the coming days.

    I'm curious: did you have any success running the stencil and depth modes at full-screen resolution (set via the AR Human Body Manager)? It seems to work fine in Standard mode, but I was hoping to experiment with the hi-res version, which didn't work.

    Keep up the good work!
     
    ina likes this.
  31. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Has anyone tried getting the position of the hand so that the hand can hold a 3D object? Not just occlusion, but actually mixed occlusion: part of the hand occluded by the object it holds, and part of the hand shown where the object doesn't occlude it.
     
    Last edited: Aug 28, 2019
  32. perchangpete

    perchangpete

    Joined:
    Oct 14, 2016
    Posts:
    13
    Glad you managed to get it up and running! I think you should be able to set the stencil to full resolution, but if you set the depth to full resolution the system will crash with some kind of texture format problem. I believe this is a known issue that the Unity guys are working on.
    As far as portrait mode goes... I'm sorry, but I've not tested it :-/ I would guess that my code could be extended to cope with it, though. The main thing you'll need to consider is that the camera feed and occlusion textures are all 4:3 on all devices, and they are cropped to fill the screen; I've modified the UVs to cope with that in landscape mode, so you might need to do something similar for portrait...
     
  33. Development-Boldly-XR

    Development-Boldly-XR

    Joined:
    Mar 8, 2019
    Posts:
    12
    @perchangpete This looks very promising. My build was also successful, and I have added the results below. I'm using an iPhone XS. I was also encountering issues with the depth being inconsistent.
     

    Attached Files:

    • ar1.jpg
    • ar2.jpg
  34. perchangpete

    perchangpete

    Joined:
    Oct 14, 2016
    Posts:
    13
    Yeah, that really doesn't work in portrait does it :-/ I'm glad you managed to get it working. Let me know if you make any progress on the depth issues!
     
  35. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    305
    @tdmowrer @todds_unity Can we expect to see a working example of people occlusion from Unity anytime soon? Will the AR Foundation module handle different device orientations? An update would be much appreciated!
     
  36. MassiveTchnologies

    MassiveTchnologies

    Joined:
    Jul 5, 2016
    Posts:
    87
    Is there any news on people occlusion samples? iOS 13 is going to be released soon, and we need to compile and submit binaries ahead of time.
     
    ina, vovo801 and sahArt like this.
  37. Numa

    Numa

    Joined:
    Oct 7, 2014
    Posts:
    100
    Hi there, it wasn't easy, but I've managed to get a pretty cool result, so I thought I'd share it here:


    It needs more work, but I'm pretty happy with the outcome. It supports both landscape orientations as well as various aspect ratios.
    This is how I handle the left/right orientations:
    Shader code:
    Code (CSharp):
    // Orientation is set in a script, -1 for landscape left, 1 for landscape right
    if (_Orientation > 0)
    {
        // Flip x axis
        uv.x = 1.0 - uv.x;
    }
    else
    {
        // Flip y axis
        uv.y = 1.0 - uv.y;
    }
    And this is how I deal with iPhone / iPad ratios:
    Script:
    Code (CSharp):
    // Correcting ratio from 1920x1440 (hd camera output) to device screen resolution
    float ratio = ((float)Screen.width / (float)Screen.height) / (1920f / 1440f);
    Shader:
    Code (CSharp):
    uv.y /= _Ratio;
    uv.y += 1.0 - (_Ratio * 0.5);
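    As a sanity check, plugging real device numbers into that formula reproduces the 1.62 constant hard-coded earlier in the thread (plain C#; the values are for an iPhone XS Max screen, 2688x1242, in landscape):

    ```csharp
    // The camera/segmentation textures are 4:3 (e.g. 1920x1440); the crop
    // ratio is the screen aspect divided by the texture aspect.
    float screenAspect  = 2688f / 1242f;         // ~2.164
    float textureAspect = 1920f / 1440f;         // ~1.333
    float ratio = screenAspect / textureAspect;  // ~1.62
    ```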
    Thanks everyone for the help on this thread!
     
    IsDon, efge, DrSharky and 1 other person like this.
  38. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    305
    Looks pretty excellent to me. Any chance you could share any more of the code? How it's set up in a scene and the shader where you're reading from the stencil and depth textures?
     
  39. Numa

    Numa

    Joined:
    Oct 7, 2014
    Posts:
    100
    I based it off what other people have been sharing.
    For _DepthScale and _Dilation, I suggest you make yourself two sliders and plug them into your shader to find good values at runtime. I found it needed different values for iPhone and iPad.

    Controller:
    Code (CSharp):
    private void Start()
    {
        // Don't do anything if segmentation isn't supported on this device
        if (!SupportsSegmentation())
            return;

        // Turn on camera depth
        Camera cam = GetComponent<Camera>();
        cam.depthTextureMode |= DepthTextureMode.Depth;
        _renderTexture = new RenderTexture(Screen.width, Screen.height, 0);

        // Correcting ratio from 1920x1440 (hd camera output) to device screen resolution
        float ratio = ((float)Screen.width / (float)Screen.height) / (1920f / 1440f);
        _material.SetFloat("_Ratio", ratio);
    }

    private void Update()
    {
        if (!SupportsSegmentation())
            return;

        humanStencil = _humanBodyManager.humanStencilTexture;
        humanDepth = _humanBodyManager.humanDepthTexture;

        _material.SetTexture("_StencilTex", humanStencil);
        _material.SetTexture("_DepthTex", humanDepth);
        _material.SetTexture("_BackgroundTex", _renderTexture);
    }

    private void LateUpdate()
    {
        if (!SupportsSegmentation())
            return;

        Graphics.Blit(null, _renderTexture, _arCameraBackground.material);
    }

    private void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (!SupportsSegmentation())
            return;

        Graphics.Blit(src, dest, _material);
    }

    private bool SupportsSegmentation()
    {
        var subsystem = _humanBodyManager.subsystem;
        return subsystem != null && subsystem.SubsystemDescriptor.supportsHumanDepthImage;
    }
    Shader:
    Code (CSharp):
    fixed4 frag (v2f i) : SV_Target
    {
        fixed4 color = tex2D(_MainTex, i.uv);
        float2 uv = i.uv;

        float4 backgroundTex = tex2D(_BackgroundTex, i.uv);

        // Landscape right
        if (_Orientation > 0)
        {
            // Flip x axis
            uv.x = 1.0 - uv.x;
        }
        // Landscape left
        else
        {
            // Flip y axis
            uv.y = 1.0 - uv.y;
        }

        // Might be doing something wrong here as this formula
        // doesn't work at all for ratios close to 1
        if (_Ratio > 1.1)
        {
            uv.y /= _Ratio;
            uv.y += 1.0 - (_Ratio * 0.5);
        }

        // Dilate the depth map to avoid contour artefacts around the
        // higher-res stencil map when the subject is BEHIND the 3D object.
        // Pick a value that works for you for _Dilation; we use ~0.007 but
        // yours might be different. Find a balance between having artefacts
        // when the person is behind the 3D model and having a contour that's
        // wider than the person when they're in front.
        float2 _min = float2(0, 0);
        float2 _max = float2(1, 1);

        // Sample the depth of the 8 neighbouring pixels
        float U  = tex2D(_DepthTex, clamp(uv + float2(0, _Dilation), _min, _max)).r;
        float UR = tex2D(_DepthTex, clamp(uv + float2(_Dilation, _Dilation), _min, _max)).r;
        float R  = tex2D(_DepthTex, clamp(uv + float2(_Dilation, 0), _min, _max)).r;
        float DR = tex2D(_DepthTex, clamp(uv + float2(_Dilation, -_Dilation), _min, _max)).r;
        float D  = tex2D(_DepthTex, clamp(uv + float2(0, -_Dilation), _min, _max)).r;
        float DL = tex2D(_DepthTex, clamp(uv + float2(-_Dilation, -_Dilation), _min, _max)).r;
        float L  = tex2D(_DepthTex, clamp(uv + float2(-_Dilation, 0), _min, _max)).r;
        float UL = tex2D(_DepthTex, clamp(uv + float2(-_Dilation, _Dilation), _min, _max)).r;

        float dilatedDepth = max(max(max(UR, DR), max(DL, UL)), max(max(L, R), max(U, D)));

        // Scale the depth values to work better in real life.
        // Find a value that works for your app. You'll see different results
        // depending on how far your device is from the 3D objects.
        dilatedDepth *= _DepthScale;

        // Cut out everything that's not a human using the stencil
        float stencil = tex2D(_StencilTex, uv).r;
        if (stencil < 0.9)
        {
            return color;
        }

        // Get Unity scene depth and make it linear
        float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));

        // Compare scene depth and person depth
        float delta = sceneZ - dilatedDepth;

        // Display the scene or the person based on the depth difference
        if (delta > 0.0)
        {
            // Smooth transition between body and scene if they're colliding.
            // Again, find a value that looks good to you :)
            fixed saturation = saturate((delta - 0.025) / 0.05);
            return lerp(color, backgroundTex, saturation);
        }
        else
        {
            return color;
        }
    }
    Hope that helps! Also if anyone has any ideas on how to optimise this further I'm all ears.
     
    Last edited: Sep 21, 2019
  40. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Checking whether this works with the latest ARFoundation preview.2 release: I keep getting the pink screen effect?
     
  41. Numa

    Numa

    Joined:
    Oct 7, 2014
    Posts:
    100
    Yes, this works with AR Foundation 3 preview 2. A pink screen sounds like you have a missing shader in a material somewhere. In Unity, make sure your segmentation material can locate its shader, and the same for the camera background material if you have a custom one.
     
    danbfx likes this.
  42. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Which shader should be used here?
     
  43. vovo801

    vovo801

    Joined:
    Feb 3, 2015
    Posts:
    18
    First of all, huge thanks to everyone for posting their script and shader examples.
    We are currently using the example by @perchangpete, which works really well in landscape orientation, but one of our major goals is to get this functioning in portrait orientation.
    We have tried flipping the UVs in every possible way in the shader; it seems like applying some rotation to the texture is also required. We tried adding rotation to the shader based on this thread:
    https://forum.unity.com/threads/rotation-of-texture-uvs-directly-from-a-shader.150482/
    but we can't seem to find the correct combination of orientation + UVs.
    If anyone can point me in the right direction, it would be really appreciated.
     
  44. vovo801

    vovo801

    Joined:
    Feb 3, 2015
    Posts:
    18
    To provide some context, I'm attaching the shader that we currently use (based on the solution by @perchangpete).

    Code (CSharp):
Shader "Perchang/PeopleOcclusion"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _CameraFeed ("Texture", 2D) = "white" {}
        _OcclusionDepth ("Texture", 2D) = "white" {}
        _OcclusionStencil ("Texture", 2D) = "white" {}
        _UVMultiplier ("UV Multiplier", Float) = 1.0
        _UVFlip ("Flip UVX", Float) = 0.0
        _UVRotation ("Rotate UV", Float) = 0.0
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float2 uv1 : TEXCOORD1;
                float2 uv2 : TEXCOORD2;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            sampler2D_float _OcclusionDepth;
            sampler2D _OcclusionStencil;
            sampler2D _CameraFeed;
            sampler2D_float _CameraDepthTexture;
            float _UVMultiplier;
            float _UVFlip;
            float _UVRotation;

            v2f vert (appdata v)
            {
                float sinX = sin(_UVRotation);
                float cosX = cos(_UVRotation);
                float2x2 rotationMatrix = float2x2(cosX, -sinX, sinX, cosX);

                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;

                o.uv1 = float2(v.uv.x, ((1.0 - (1.0 / _UVMultiplier)) * 0.5) + (v.uv.y / _UVMultiplier)); // UVs corrected to map the camera image to screen space
                o.uv2 = float2(lerp(1.0 - o.uv1.x, o.uv1.x, _UVFlip), lerp(o.uv1.y, 1.0 - o.uv1.y, _UVFlip)); // flipped UVs for the occlusion textures
                o.uv1.xy = mul(o.uv1.xy, rotationMatrix);
                o.uv2.xy = mul(o.uv2.xy, rotationMatrix);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                fixed4 cameraFeedCol = tex2D(_CameraFeed, i.uv1);
                float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));

                float4 stencilCol = tex2D(_OcclusionStencil, i.uv2);
                float occlusionDepth = tex2D(_OcclusionDepth, i.uv2) * 0.625; // 0.625: occlusion depth hack based on real-world observation

                float showOccluder = step(occlusionDepth, sceneDepth) * stencilCol.r; // 1 if (sceneDepth >= occluderDepth && stencil)

                return lerp(col, cameraFeedCol, showOccluder);
            }
            ENDCG
        }
    }
}
    The changes:
    1) Added _UVRotation("Rotate UV", Float) = 0.0 to properties.
    2) Added float _UVRotation; to the Pass block.
    3) Changed this bit of the shader, based on the example from this thread: https://forum.unity.com/threads/rotation-of-texture-uvs-directly-from-a-shader.150482/
    Code (CSharp):
            v2f vert (appdata v)
            {
                float sinX = sin(_UVRotation);
                float cosX = cos(_UVRotation);
                float2x2 rotationMatrix = float2x2(cosX, -sinX, sinX, cosX);

                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;

                o.uv1 = float2(v.uv.x, ((1.0 - (1.0 / _UVMultiplier)) * 0.5) + (v.uv.y / _UVMultiplier)); // UVs corrected to map the camera image to screen space
                o.uv2 = float2(lerp(1.0 - o.uv1.x, o.uv1.x, _UVFlip), lerp(o.uv1.y, 1.0 - o.uv1.y, _UVFlip)); // flipped UVs for the occlusion textures
                o.uv1.xy = mul(o.uv1.xy, rotationMatrix);
                o.uv2.xy = mul(o.uv2.xy, rotationMatrix);
                return o;
            }
    4) In code, if the device orientation is portrait, I pass this into the shader:
    var rotation = -Mathf.PI / 2f;
    m_material.SetFloat("_UVRotation", rotation);
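As a side note, the rotation matrix the vert function builds can be checked numerically. HLSL's mul(vector, matrix) treats the vector as a row vector, so with _UVRotation = -π/2 the mapping works out to (u, v) → (-v, u). A quick Python sketch (illustration only, not Unity code):

```python
import math

def rotate_uv(u, v, angle):
    """Row-vector multiply uv * [[c, -s], [s, c]], matching HLSL's
    mul(o.uv1.xy, rotationMatrix) in the shader above."""
    c, s = math.cos(angle), math.sin(angle)
    return (u * c + v * s, -u * s + v * c)

# With rotation = -pi/2, as passed for portrait: (u, v) -> (-v, u)
print(rotate_uv(1.0, 0.0, -math.pi / 2))  # approx (0.0, 1.0)
print(rotate_uv(0.0, 1.0, -math.pi / 2))  # approx (-1.0, 0.0)
```

Note that half of the rotated UVs land outside [0, 1], which may be why rotation alone doesn't line things up: an offset back into the unit square (or rotating about the (0.5, 0.5) centre) would also be needed.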

    And the result that I get is:

    The hand in the lower part of the screen is squished, but it masks the 3D objects in the scene properly.
    It seems like stretching the image to fill the entire screen would resolve this, but I have not found the right settings for that so far. Using different values for _UVMultiplier in the shader does not seem to solve it. It seems really close, but not quite there yet.
     
    DrSharky likes this.
  45. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Can someone post a definitive solution here and/or a GitHub/GitLab repo?

    I feel like this is one of those foundational/fundamental things that could help a lot of people - especially those who are more focused on application development than on fiddling with getting ARKit 3 features to work in Unity.
     
  46. Tanktop_in_Feb

    Tanktop_in_Feb

    Joined:
    Oct 24, 2018
    Posts:
    1
    I based it on what @perchangpete has been sharing.

    Unity 2019.2.3.f1
    Version 11.0 beta
    iPhone Xs
    iOS 13.0
     

    Attached Files:

    Rainie_Sun, ipo100186 and vovo801 like this.
  47. vovo801

    vovo801

    Joined:
    Feb 3, 2015
    Posts:
    18
    @Tanktop_in_Feb Huge thanks for posting the shader + code.
    I have found that in this line:
    fixed4 cameraFeedCol = tex2D(_CameraFeed, i.uv1 * _Time);
    if I remove the * _Time, the tiling effect goes away.
    But then it reveals that the flip is not correct in portrait orientation (it might require rotating the texture, which I tried in my previous post). You'll see what I mean when you try this. But it is pretty close. Awesome!

    Edit:
    Did some experimenting and here's a version of the shader that worked for us in portrait mode (we are using this shader + the code posted by @Tanktop_in_Feb):

    Code (CSharp):
Shader "Perchang/PeopleOcclusion"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _CameraFeed ("Texture", 2D) = "white" {}
        _OcclusionDepth ("Texture", 2D) = "white" {}
        _OcclusionStencil ("Texture", 2D) = "white" {}
        _UVMultiplierLandScape ("UV Multiplier Landscape", Float) = 0.0
        _UVMultiplierPortrait ("UV Multiplier Portrait", Float) = 0.0
        _UVFlip ("Flip UV", Float) = 0.0
        _ONWIDE ("Onwide", Int) = 0
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float2 uv1 : TEXCOORD1;
                float2 uv2 : TEXCOORD2;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            sampler2D_float _OcclusionDepth;
            sampler2D _OcclusionStencil;
            sampler2D _CameraFeed;
            sampler2D_float _CameraDepthTexture;
            float _UVMultiplierLandScape;
            float _UVMultiplierPortrait;
            float _UVFlip;
            int _ONWIDE;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                if (_ONWIDE == 1)
                {
                    o.uv1 = float2(v.uv.x, (1.0 - (_UVMultiplierLandScape * 0.5f)) + (v.uv.y / _UVMultiplierLandScape));
                    o.uv2 = float2(lerp(1.0 - o.uv1.x, o.uv1.x, _UVFlip), lerp(o.uv1.y, 1.0 - o.uv1.y, _UVFlip));
                }
                else
                {
                    o.uv1 = float2(1.0 - v.uv.y, 1.0 - (_UVMultiplierPortrait * 0.5f) + (v.uv.x / _UVMultiplierPortrait));
                    float2 forMask = float2((1.0 - (_UVMultiplierPortrait * 0.5f)) + (v.uv.x / _UVMultiplierPortrait), v.uv.y);
                    o.uv2 = float2(lerp(1.0 - forMask.y, forMask.y, 0), lerp(forMask.x, 1.0 - forMask.x, 1));
                }
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                fixed4 cameraFeedCol = tex2D(_CameraFeed, i.uv1);
                float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
                float4 stencilCol = tex2D(_OcclusionStencil, i.uv2);
                float occlusionDepth = tex2D(_OcclusionDepth, i.uv2) * 0.625; // 0.625: occlusion depth hack based on real-world observation

                float showOccluder = step(occlusionDepth, sceneDepth) * stencilCol.r; // 1 if (sceneDepth >= occluderDepth && stencil)

                return lerp(col, cameraFeedCol, showOccluder);
            }

            ENDCG
        }
    }
}
     
    Last edited: Sep 26, 2019
  48. danbfx

    danbfx

    Joined:
    Feb 22, 2011
    Posts:
    40
    Hi, is there an official Unity people occlusion script available yet?
     
  49. Mobgen-Lab

    Mobgen-Lab

    Joined:
    May 23, 2017
    Posts:
    15
    You should upload a working example, because the depth texture apparently is not working well. Even one example working on different devices (iPad and iPhone) and in different orientations would help.

    Thanks
     
    Last edited: Oct 3, 2019
    pwshaw and Buzzrick_Runaway like this.
  50. GSO_GT

    GSO_GT

    Joined:
    Aug 30, 2019
    Posts:
    17
    Hi all, I'm having a problem with the texture field in the PeopleOcclusionPostEffect. What should I put inside? Right now it's showing my hands as white, but occluding somewhat okay.

    Thanks in advance for the help!