
ARFoundation Face Tracking step-by-step

Discussion in 'Handheld AR' started by jimmya, Dec 18, 2018.

  1. jimmya

    jimmya

    Unity Technologies

    Joined:
    Nov 15, 2016
    Posts:
    792
    We have just announced the release of the latest version of ARFoundation that works with Unity 2018.3.

    For this release, we have also updated the ARFoundation samples project with examples that show off some of the new features. This post explains more about the Face Tracking feature demonstrated in the ARFoundation samples.

    The Face Tracking example scenes will currently only work on iOS devices that have a TrueDepth camera: iPhone X, XR, XS, XS Max, or iPad Pro (2018).

    Face Tracking is exposed through the Face Subsystem, similar to other subsystems: you can subscribe to an event that informs you when a new face has been detected. That event provides basic information about the face, including its pose (position and rotation) and its TrackableId (a session-unique id for any tracked object in the system). Using the TrackableId, you can then query the subsystem for more information about that particular face, including a description of its mesh and the blendshape coefficients that describe the expression on that face. There are also events you can subscribe to that tell you when a face has been updated or removed.
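    In rough terms, subscribing to these face events from script via the ARFaceManager described below looks something like the sketch here. Note that the exact event and property names have changed between ARFoundation versions (facesChanged and trackableId as used below come from later releases), so treat this as an outline rather than code for this specific preview:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Rough sketch: log basic information whenever faces are added, updated or
    // removed. Attach this next to the ARFaceManager on the ARSessionOrigin.
    [RequireComponent(typeof(ARFaceManager))]
    public class FaceEventLogger : MonoBehaviour
    {
        ARFaceManager m_FaceManager;

        void OnEnable()
        {
            m_FaceManager = GetComponent<ARFaceManager>();
            m_FaceManager.facesChanged += OnFacesChanged;
        }

        void OnDisable()
        {
            m_FaceManager.facesChanged -= OnFacesChanged;
        }

        void OnFacesChanged(ARFacesChangedEventArgs eventArgs)
        {
            foreach (var face in eventArgs.added)
                Debug.Log($"Face added: {face.trackableId} at {face.transform.position}");

            foreach (var face in eventArgs.updated)
                Debug.Log($"Face updated: {face.trackableId}");

            foreach (var face in eventArgs.removed)
                Debug.Log($"Face removed: {face.trackableId}");
        }
    }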

    As usual, ARFoundation provides some abstraction via the ARFaceManager component. You can add this component to an ARSessionOrigin in the scene; when it detects a face, it creates a copy of the "FacePrefab" prefab and adds it to the scene as a child of the ARSessionOrigin. It also updates and removes the generated "Face" GameObjects as needed when the corresponding face is updated or removed.

    You usually place an ARFace component on the root of the "FacePrefab" prefab above so that the generated GameObject can automatically update its position and orientation according to the data that is provided by the underlying subsystem.
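    As a hypothetical illustration, a small component like the one below could sit next to the ARFace on that prefab and inspect the data it receives. The updated event and vertices property used here are the names from later ARFoundation releases, so check them against the version you have installed:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Hypothetical helper that sits on the face prefab next to ARFace. The
    // prefab's transform is already driven by ARFace; this just reads the data.
    [RequireComponent(typeof(ARFace))]
    public class FaceInfoLogger : MonoBehaviour
    {
        ARFace m_Face;

        void OnEnable()
        {
            m_Face = GetComponent<ARFace>();
            m_Face.updated += OnFaceUpdated;
        }

        void OnDisable()
        {
            m_Face.updated -= OnFaceUpdated;
        }

        void OnFaceUpdated(ARFaceUpdatedEventArgs eventArgs)
        {
            Debug.Log($"Face {m_Face.trackableId}: {m_Face.vertices.Length} vertices, position {transform.position}");
        }
    }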

    You can see how this works in the FaceScene in ARFoundation-samples. You first create a prefab named FacePrefab that has an ARFace component on the root of the GameObject hierarchy:



    You then plug this prefab reference into the ARFaceManager component that you have added to the ARSessionOrigin:



    You will also need to check the "ARKit Face Tracking" checkbox in the ARKitSettings asset as explained here.

    When you build this to a device that supports ARKit Face Tracking and run it, you should see the three colored axes from FacePrefab appear on your face, following its position and orientation as you move.

    You can also make use of the face mesh in your AR app by getting the information from the ARFace component. Open up FaceMeshScene and look at the ARSessionOrigin. It is set up in the same way as before, just with a different prefab referenced by the ARFaceManager. You can take a look at this FaceMeshPrefab:



    You can see that the MeshFilter on the prefab is actually empty. This is because the ARFaceMeshVisualizer script component will create a mesh and fill in the vertices and texture coordinates based on the data it gets from the ARFace component. You will also notice that the MeshRenderer contains the material this mesh will be rendered with, in this case a texture with three colored vertical bands. You can change this material to create masks, face paint, etc. If you build this scene to a device, you should see your face rendered with the specified material.
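    To give a rough idea of what the visualizer does internally, a heavily simplified version might look like the sketch below. This is not the sample's ARFaceMeshVisualizer: the vertices/uvs/indices properties are the names from later ARFoundation releases, and a real implementation reuses buffers instead of allocating with ToArray() on every update:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Simplified sketch of a face mesh visualizer: copy the ARFace geometry into
    // a Mesh each time the face updates and hand it to the MeshFilter.
    [RequireComponent(typeof(ARFace), typeof(MeshFilter))]
    public class SimpleFaceMeshVisualizer : MonoBehaviour
    {
        ARFace m_Face;
        Mesh m_Mesh;

        void Awake()
        {
            m_Face = GetComponent<ARFace>();
            m_Mesh = new Mesh();
            GetComponent<MeshFilter>().sharedMesh = m_Mesh;
        }

        void OnEnable()
        {
            m_Face.updated += OnFaceUpdated;
        }

        void OnDisable()
        {
            m_Face.updated -= OnFaceUpdated;
        }

        void OnFaceUpdated(ARFaceUpdatedEventArgs eventArgs)
        {
            // ToArray() allocates; fine for a sketch, not for production code.
            m_Mesh.Clear();
            m_Mesh.vertices = m_Face.vertices.ToArray();
            m_Mesh.uv = m_Face.uvs.ToArray();
            m_Mesh.triangles = m_Face.indices.ToArray();
            m_Mesh.RecalculateNormals();
        }
    }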


    You can also make use of the blendshape coefficients that are output by ARKit Face Tracking. Open up the FaceBlendsShapeScene and check out the ARSessionOrigin. It looks the same as before, except that the prefab it references is now the FaceBlendShapes prefab. You can take a look at that prefab:



    In this case, there is an ARFaceARKitBlendShapeVisualizer which references the SkinnedMeshRenderer GameObject of this prefab so that it can manipulate the blendshapes that exist on that component. It gets the blendshape coefficients from the ARKit SDK and drives the blendshapes on our sloth asset so that it replicates your expression. Build it out to a supported device and try it out!
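    To sketch what driving blendshapes from script involves: the ARKitFaceSubsystem.GetBlendShapeCoefficients call below is the API from later versions of the ARKit face tracking package, and the mapping from ARKit blendshape locations to the blendshape indices on your own rig (MapLocationToBlendShapeIndex) is hypothetical, so treat this as an outline of the approach rather than the sample's exact implementation:

    Code (CSharp):
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARKit;

    // Rough sketch: read ARKit blendshape coefficients for this face each update
    // and apply them as blendshape weights on a SkinnedMeshRenderer.
    [RequireComponent(typeof(ARFace))]
    public class SimpleBlendShapeDriver : MonoBehaviour
    {
        public SkinnedMeshRenderer skinnedMesh;   // e.g. the sloth's renderer

        ARFace m_Face;
        ARFaceManager m_FaceManager;

        void Awake()
        {
            m_Face = GetComponent<ARFace>();
            // Find the ARFaceManager in the scene; a prefab can't reference it directly.
            m_FaceManager = FindObjectOfType<ARFaceManager>();
        }

        void OnEnable()
        {
            m_Face.updated += OnFaceUpdated;
        }

        void OnDisable()
        {
            m_Face.updated -= OnFaceUpdated;
        }

        void OnFaceUpdated(ARFaceUpdatedEventArgs eventArgs)
        {
            var subsystem = m_FaceManager.subsystem as ARKitFaceSubsystem;
            if (subsystem == null)
                return;

            using (var coefficients = subsystem.GetBlendShapeCoefficients(m_Face.trackableId, Allocator.Temp))
            {
                foreach (var c in coefficients)
                {
                    // ARKit coefficients are 0..1; Unity blendshape weights are 0..100.
                    int index = MapLocationToBlendShapeIndex(c.blendShapeLocation);
                    if (index >= 0)
                        skinnedMesh.SetBlendShapeWeight(index, c.coefficient * 100f);
                }
            }
        }

        // Hypothetical mapping: look up a blendshape on the rig by the ARKit
        // location name. The real sample uses a lookup table tuned to the sloth.
        int MapLocationToBlendShapeIndex(ARKitBlendShapeLocation location)
        {
            return skinnedMesh.sharedMesh.GetBlendShapeIndex(location.ToString());
        }
    }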

    For more detailed coverage of the face tracking support, have a look at the docs.

    This was a summary of how to create your own face tracking experiences with the new ARFoundation release. Please take some time to try it out and let us know how it works out! As always, I'm happy to post any cool videos, demos or apps you create on https://twitter.com/jimmy_jam_jam
     


    Jonas_Hartmann and tdmowrer like this.
  2. Staus

    Staus

    Joined:
    Jul 7, 2014
    Posts:
    8
    Awesome! Is there any way of exposing the raw TrueDepth video feed from iOS devices?
    I know that ARFoundation aims to abstract device-specific features away, but it seems like such a pity not to have access to this data when it's just sitting there, and face tracking, for example, likely makes use of it anyway.
    Thanks for the good work!
     
  3. jimmya

    jimmya

    Unity Technologies

    Joined:
    Nov 15, 2016
    Posts:
    792
    ARKit Face Tracking does not make this available to end users; it abstracts it away. There are other APIs that will expose it, but it is out of scope right now. The depth map may not be as useful as you think it is - they have to use a lot of filtering and smoothing in their SDK to enable these features.
     
  4. nbkhanhdhgd

    nbkhanhdhgd

    Joined:
    Apr 19, 2016
    Posts:
    2
    It worked perfectly on iPhone X.
    But the camera defaults to the front camera. Can I switch to the back camera?
     
    ramphulneillpartners likes this.
  5. m3rt32

    m3rt32

    Joined:
    Mar 5, 2013
    Posts:
    54
    ARCore introduced their face tracking support. Do you guys have any plans for this?
     
  6. xiaocheny

    xiaocheny

    Joined:
    Feb 20, 2018
    Posts:
    1
    What about Android devices? Do the face tracking samples not work on them so far?
     
    ZenUnity likes this.
  7. lincode

    lincode

    Joined:
    May 2, 2019
    Posts:
    1
    No, we can't currently. It requires the TrueDepth camera, which for now is a feature only of the front camera on the iPhone X.
     
  8. WR-reddevil

    WR-reddevil

    Joined:
    Apr 16, 2019
    Posts:
    33
    Does anyone know how to identify face parts like the eyes, nose, etc.?
     
  9. Pawnkiller

    Pawnkiller

    Joined:
    Apr 7, 2016
    Posts:
    4
    How can I switch between the assets that appear on the face? I'm able to switch between different textures or different assets, but not between assets and textures.
     
  10. JBMS

    JBMS

    Joined:
    Sep 30, 2017
    Posts:
    6
    I am also trying to determine if eye transforms are supported in ARFoundation or its subsystems.

    With the (now deprecated) Unity ARKit Plugin, we could use leftEyePose and rightEyePose on the ARFaceAnchor.

    Is there an equivalent with ARFoundation, etc?
     
    AdrienMgm, ambergarage and efge like this.
  11. AdrienMgm

    AdrienMgm

    Joined:
    Oct 10, 2017
    Posts:
    1
    Same question here, is it possible to use eyes pose with ARFoundation?
     
  12. marcinpolakowski15

    marcinpolakowski15

    Joined:
    Aug 7, 2019
    Posts:
    1
    I find it completely irresponsible for Unity to deprecate a plugin before its successor has caught up. Eye Gaze tracking is an essential part of the AR experience, and the old ARKit Plugin did this perfectly well.

    Not only is eye gaze apparently unsupported by AR Foundation, but we also have total radio silence from Unity on the subject. It's a disgraceful way to treat devs who rely on their technology; the least they could do is reply.
     
  13. davidmo_unity

    davidmo_unity

    Unity Technologies

    Joined:
    Jun 18, 2019
    Posts:
    12
    We understand your frustration and want to address your concerns. I am happy to inform everyone that eye transforms and fixation points will be available in the next preview release of ARFoundation and related packages (ARKit-Face-Tracking).

    A short preview of how it will work: each ARFace has three new nullable properties added to it: leftEyeTransform, rightEyeTransform, and fixationPoint. Each of these is exposed as a Unity Transform, so to utilize them you simply need to add your content as a child of the transform. We have also made a few samples to assist with utilizing the "new" feature.
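    As a rough sketch of the kind of usage we have in mind (the property names here follow the description above and could still differ slightly in the shipped packages, so check the final API before copying):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Rough sketch only: parent some content under the eye transforms once they
    // become available. The transforms are null on devices without eye tracking,
    // and the property names may differ in the released packages.
    [RequireComponent(typeof(ARFace))]
    public class EyeContentAttacher : MonoBehaviour
    {
        public GameObject eyePrefab;   // your own content to place on each eye

        ARFace m_Face;
        GameObject m_LeftEyeContent;
        GameObject m_RightEyeContent;

        void Awake()
        {
            m_Face = GetComponent<ARFace>();
        }

        void OnEnable()
        {
            m_Face.updated += OnFaceUpdated;
        }

        void OnDisable()
        {
            m_Face.updated -= OnFaceUpdated;
        }

        void OnFaceUpdated(ARFaceUpdatedEventArgs eventArgs)
        {
            if (m_Face.leftEyeTransform != null && m_LeftEyeContent == null)
                m_LeftEyeContent = Instantiate(eyePrefab, m_Face.leftEyeTransform);

            if (m_Face.rightEyeTransform != null && m_RightEyeContent == null)
                m_RightEyeContent = Instantiate(eyePrefab, m_Face.rightEyeTransform);
        }
    }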

    Unfortunately, I can't speak to the timeframe for the release, but I did want to at least address the concerns surrounding this feature, as it is crucial for many AR applications.
     
    BuoDev likes this.