Resolved Occlusion on Face points

Discussion in 'Unity MARS' started by IF_test, Oct 8, 2020.

  1. IF_test

    IF_test

    Joined:
    Feb 27, 2017
    Posts:
    10
    Hello,
    I've been testing face tracking with MARS. Can someone guide me on how to apply occlusion to some of the points, for example the ear points, so that the ear points only show up when the head is rotated to the side (right or left)?

    Also, how can I improve stability? Face tracking is stable on iOS devices, but on Android devices it is not as stable.
     
  2. jmunozarUTech

    jmunozarUTech

    Unity Technologies

    Joined:
    Jun 1, 2020
    Posts:
    246
    Hey there!

    When you create the face mask (from the MARS Panel), you get a child GameObject of that Face Mask proxy called "Depth Mask". That GameObject is the one that occludes your content; if you have an artist or can do 3D modeling yourself, you can modify the depth mask's mesh filter to occlude the ears more if necessary.

    With regards to face tracking on Android and iOS: unfortunately, there is not much that can be done. Face tracking on iOS is much better than on Android, and this comes from the platforms themselves.
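    If you want to swap in a custom occluder at runtime rather than editing the template asset, a minimal sketch might look like the following (this is an assumption-laden example, not MARS API: it assumes the child is named "Depth Mask" as the MARS Panel creates it, and `m_CustomOccluderMesh` is a hypothetical mesh asset you author yourself):

    ```csharp
    using UnityEngine;

    // Hypothetical helper: replaces the Depth Mask occluder mesh with a custom one.
    // Assumes a child GameObject named "Depth Mask" (as created by the MARS Panel)
    // that carries a MeshFilter. Attach to the Face Mask proxy.
    public class DepthMaskSwapper : MonoBehaviour
    {
        [SerializeField]
        Mesh m_CustomOccluderMesh; // assign your re-modeled occluder in the Inspector

        void Start()
        {
            var depthMask = transform.Find("Depth Mask");
            if (depthMask == null)
                return;

            var meshFilter = depthMask.GetComponent<MeshFilter>();
            if (meshFilter != null)
                meshFilter.sharedMesh = m_CustomOccluderMesh;
        }
    }
    ```

    This only changes which mesh occludes; the mesh itself still has to be modeled to fit.
    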
     
  3. IF_test

    IF_test

    Joined:
    Feb 27, 2017
    Posts:
    10
    Hello,
    The Depth Mask mesh does not adjust based on the face. Every person has a different face, and the content gets occluded by the Depth Mask mesh. The default mesh in MARS fits an ideal face, I guess, but for a person with a big face, the depth mask will cover only 60% to 80% of it. We can modify the mesh in 3D modeling software to fit a big face, but then the problem occurs for smaller faces. Can we use the face mesh returned by ARCore or ARKit? If yes, can you guide me on how to achieve this?
     
    Last edited: Nov 30, 2020
  4. unity_andrewm

    unity_andrewm

    Unity Technologies

    Joined:
    Jun 25, 2015
    Posts:
    71
    Take a look at the 'Face Mask' template.
    The FaceMask proxy has a 'face action' - I've included a screenshot.
    [Screenshot: the 'face action' on the FaceMask proxy]

    The 'Face Mesh' mesh will be replaced by the face mesh provided by ARFoundation when running on device. This will give you the native ARKit face mesh, for example.

    Android face tracking naturally tends to be shakier; we're working on some good smoothing heuristics for that case.
     
  5. jaydipsinh

    jaydipsinh

    Joined:
    Mar 31, 2017
    Posts:
    2
    Hello sir,
    I have used MARS in my project, and I currently need help: I want to stop the AR camera's rendering in MARS without disabling the AR camera. I just need to stop showing the outside camera feed in Unity.
     
  6. IF_test

    IF_test

    Joined:
    Feb 27, 2017
    Posts:
    10
    Thank you, it's working well.
     
  7. IF_test

    IF_test

    Joined:
    Feb 27, 2017
    Posts:
    10
    One more issue I'm facing: when reloading the scene, face tracking stops working. This only happens on iOS; on Android it works well.

    Consider this scenario:
    I have two scenes: 1) a 3D scene and 2) a face tracking scene (MARS).

    When I change from the 1st scene to the 2nd scene, face tracking works well. But after that, reloading the 2nd scene again, face tracking does not work. It also does not work if I load the 1st scene and then load the 2nd scene.
     
    Last edited: Jan 29, 2021
  8. IF_test

    IF_test

    Joined:
    Feb 27, 2017
    Posts:
    10
  9. mtschoen

    mtschoen

    Unity Technologies

    Joined:
    Aug 16, 2016
    Posts:
    139
    Hi there! We have confirmed the issue with switching scenes and face tracking on iOS. We are working to fix this in a future update of MARS.
     
  10. IF_test

    IF_test

    Joined:
    Feb 27, 2017
    Posts:
    10
    Hi, thanks for the reply.

    I'm using version 1.1.1 and will check again after upgrading to version 1.2.0. Can you suggest a workaround for this issue so we don't have to wait for future updates?
     
  11. unity_andrewm

    unity_andrewm

    Unity Technologies

    Joined:
    Jun 25, 2015
    Posts:
    71
    Here's a custom action I've used in the past to auto-fit head content to heads of different sizes:

    Code (CSharp):
    /// <summary>
    /// Stretches a transform to match the expected head bounds, using common landmarks as a basis
    /// </summary>
    [Unity.MARS.Attributes.MonoBehaviourComponentMenu(typeof(FitToHeadAction), "Action/Fit to Head")]
    public class FitToHeadAction : Actions.TransformAction, IUsesCameraOffset, IUsesMARSTrackableData<IMRFace>, ISpawnable, IRequiresTraits
    {
        // How much to expand / offset the mesh by to make it fit the head properly
        // Use content/transform parenting to refine this further
        static readonly Vector3 k_Padding = new Vector3(0.01f, 0.0f, 0.02f);
        const float k_FaceConfidenceDecay = 0.05f;

    #if !FI_AUTOFILL
        IProvidesCameraOffset IFunctionalitySubscriber<IProvidesCameraOffset>.provider { get; set; }
    #endif

        static readonly TraitRequirement[] k_RequiredTraits = { TraitDefinitions.Face };

        Vector3 m_DefaultScale = Vector3.one;
        Vector3 m_DefaultCenter = Vector3.zero;

        Vector3 m_CurrentScale = Vector3.one;
        Vector3 m_CurrentCenter = Vector3.zero;

        Transform m_CameraTransform;
        float m_LastFaceConfidence = 0.0f;

        public void OnMatchAcquire(QueryResult queryResult)
        {
            InitializeLandmarkDefaults();
            UpdateScale(queryResult);
        }

        public void OnMatchUpdate(QueryResult queryResult)
        {
            UpdateScale(queryResult);
        }

        void InitializeLandmarkDefaults()
        {
            m_CameraTransform = MARSUtils.MarsRuntimeUtils.GetActiveCamera(true).transform;

            // Ensure the minimum landmarks are present to use this function effectively
            var fallbackFaceLandmarks = Landmarks.MARSFallbackFaceLandmarks.instance.GetFallbackFaceLandmarkPoses();

            var fallbackMissing = false;
            if (!fallbackFaceLandmarks.ContainsKey(MRFaceLandmark.LeftEar))
            {
                Debug.LogError("Missing the ear fallback landmark!");
                fallbackMissing = true;
            }
            if (!fallbackFaceLandmarks.ContainsKey(MRFaceLandmark.NoseTip))
            {
                Debug.LogError("Missing the nose tip fallback landmark!");
                fallbackMissing = true;
            }
            if (!fallbackFaceLandmarks.ContainsKey(MRFaceLandmark.LeftEye))
            {
                Debug.LogError("Missing the eye fallback landmark!");
                fallbackMissing = true;
            }

            if (fallbackMissing)
            {
                m_DefaultScale = transform.localScale;
                m_DefaultCenter = transform.localPosition;
                return;
            }

            var earLandmark = fallbackFaceLandmarks[MRFaceLandmark.LeftEar];
            var noseLandmark = fallbackFaceLandmarks[MRFaceLandmark.NoseTip];
            var eyeLandmark = fallbackFaceLandmarks[MRFaceLandmark.LeftEye];

            // Based on the fallback values, we get what the 'standard' scale and center are
            m_DefaultScale = CalculateScale(earLandmark.position, eyeLandmark.position, noseLandmark.position);
            m_DefaultCenter = CalculateCenter(earLandmark.position, eyeLandmark.position);
            m_CurrentScale = m_DefaultScale;
            m_CurrentCenter = m_DefaultCenter;
        }

        Vector3 CalculateScale(Vector3 earPosition, Vector3 eyePosition, Vector3 nosePosition)
        {
            // Ears are on opposite sides of the head, so we can use them to get general head width
            // The eyes and nose tend to be at set ratios of head size, so using the distance between them we determine a general head height multiple
            // We assume the head is roughly square shaped on the X-Z plane
            var calculatedScale = new Vector3(Mathf.Abs(earPosition.x) * 2.0f,
                                              Mathf.Abs(eyePosition.y - nosePosition.y) * 8.0f,
                                              Mathf.Abs(earPosition.x) * 2.0f);
            calculatedScale += k_Padding * 2.0f;
            return calculatedScale;
        }

        Vector3 CalculateCenter(Vector3 earPosition, Vector3 eyePosition)
        {
            // The head is centered at eye height. Ears tend to be in the middle of the head, while eyes are up front, so we can use that for depth adjustment
            return new Vector3(0.0f, eyePosition.y, Mathf.Abs(earPosition.x - eyePosition.z) * 0.5f - (k_Padding.x - k_Padding.z));
        }

        void UpdateScale(QueryResult queryResult)
        {
            var assignedFace = queryResult.ResolveValue(this);

            if (assignedFace == null)
                return;

            var headPose = assignedFace.pose;

            var newScale = m_CurrentScale;
            var newCenter = m_CurrentCenter;

            // Pull up landmark poses and figure out new transformation values from them
            if (assignedFace.LandmarkPoses != null)
            {
                var resultLandmarks = assignedFace.LandmarkPoses;
                if (resultLandmarks.ContainsKey(MRFaceLandmark.LeftEar) && resultLandmarks.ContainsKey(MRFaceLandmark.NoseTip) && resultLandmarks.ContainsKey(MRFaceLandmark.LeftEye))
                {
                    var earLandmark = resultLandmarks[MRFaceLandmark.LeftEar];
                    var noseLandmark = resultLandmarks[MRFaceLandmark.NoseTip];
                    var eyeLandmark = resultLandmarks[MRFaceLandmark.LeftEye];

                    var earPosition = headPose.ApplyInverseOffsetTo(earLandmark.position);
                    var eyePosition = headPose.ApplyInverseOffsetTo(eyeLandmark.position);
                    var nosePosition = headPose.ApplyInverseOffsetTo(noseLandmark.position);

                    newScale = CalculateScale(earPosition, eyePosition, nosePosition);
                    newCenter = CalculateCenter(earPosition, eyePosition);
                }
            }

            // Landmark estimation tends to be very inaccurate when the head is viewed sideways. Instead, keep track of historical 'best' values and use those
            // Use the most confident value for the head pose w/ the MARS session camera
            // Confident: width/height ratio close to average
            // Confident: face is facing the camera (dot product)
            // Confident: decay over time
            var angleConfidence = 1.0f - (Mathf.Clamp(Vector3.Angle(headPose.forward, m_CameraTransform.forward), 0.0f, 45.0f) / 45.0f);

            if (m_LastFaceConfidence > 0.825f)
                m_LastFaceConfidence = Mathf.Max(m_LastFaceConfidence - Time.deltaTime * k_FaceConfidenceDecay, 0.75f);

            if (angleConfidence > m_LastFaceConfidence)
            {
                var lerpPercent = angleConfidence / (angleConfidence + m_LastFaceConfidence);

                m_CurrentScale = Vector3.Lerp(m_CurrentScale, newScale, lerpPercent);
                m_CurrentCenter = Vector3.Lerp(m_CurrentCenter, newCenter, lerpPercent);
                m_LastFaceConfidence = angleConfidence;
            }

            headPose = headPose.TranslateLocal(m_CurrentCenter);
            transform.SetWorldPose(this.ApplyOffsetToPose(headPose));
            transform.localScale = m_CurrentScale;
        }

        public TraitRequirement[] GetRequiredTraits()
        {
            return k_RequiredTraits;
        }
    }
     
  12. mtschoen

    mtschoen

    Unity Technologies

    Joined:
    Aug 16, 2016
    Posts:
    139
    Hi there, @IF_test. I just did a test with the older versions of AR Foundation and the ARKit plugin and found that the issues we were seeing with switching scenes and starting up face tracking weren't present. These were the versions we originally used to verify MARS face tracking features. We're still working on updates to fix issues with later versions, but if you roll back to the 2.x version of AR Foundation, you may be able to work around the issue on your end.

    These are the versions I used to test:

    "com.unity.xr.arfoundation": "2.1.8",
    "com.unity.xr.arcore": "2.1.8",
    "com.unity.xr.arkit": "2.1.9",
    "com.unity.xr.arkit-face-tracking": "1.0.7",
    "com.unity.xr.management": "3.2.15"
     
  13. IF_test

    IF_test

    Joined:
    Feb 27, 2017
    Posts:
    10
    Hello,
    Thanks for the reply. I've rolled back to the same versions mentioned above, but I'm facing the same issue.
     
  14. mtschoen

    mtschoen

    Unity Technologies

    Joined:
    Aug 16, 2016
    Posts:
    139
    Hi there! Thanks for the reminder to follow up on this. We recently found the issue, and it might be quite a simple fix! The default face template has a head bust for the occlusion mask, which has a transform offset. If you zero out that offset, the head bust will not line up anymore in Scene View but in a device build the mesh should show up in the right spot. We will be fixing the template for our next release, but you should be able to work around the issue by zeroing out the transform in your scene. Just bear in mind that future face mask templates you make with MARS 1.2.0 will have the same issue.
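    If you'd rather apply the workaround from script than in the Inspector, a minimal sketch could look like this (an assumption, not the official fix: `m_OcclusionBust` is a hypothetical reference you wire up to the head bust GameObject from the face template):

    ```csharp
    using UnityEngine;

    // Hypothetical workaround: zero out the occlusion head bust's local offset
    // so the mask lines up in device builds (per the explanation above).
    // Note: the bust will then appear misaligned in Scene View.
    public class ZeroOcclusionOffset : MonoBehaviour
    {
        [SerializeField]
        GameObject m_OcclusionBust; // assign the head bust from the face template

        void Awake()
        {
            if (m_OcclusionBust == null)
                return;

            m_OcclusionBust.transform.localPosition = Vector3.zero;
            m_OcclusionBust.transform.localRotation = Quaternion.identity;
        }
    }
    ```

    Resetting the transform directly in the Inspector has the same effect; the script just keeps the workaround visible and easy to remove once the template is fixed.
    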
     