
Feature Request: ARKit face tracking with TrueDepth?

Discussion in 'Unity MARS' started by timbokoppers, Jun 2, 2020.

  1. timbokoppers

    Hi,

    I've seen in the documentation that MARS supports face tracking based on a pre-recorded video as input. Is it also possible to use it with face tracking from ARKit, with TrueDepth?

    If not, is this something MARS will support in the near future?
     
  2. jmunozarUTech

    Unity Technologies

    Last edited: Jun 2, 2020
  3. timbokoppers

    Great, so to recap: it is possible to use pre-recorded video or data to simulate TrueDepth face tracking in the Editor, without using a TrueDepth phone?
     
  4. jmunozarUTech

    Unity Technologies

    Hello @timbokoppers,

    If you are talking about face tracking in the Editor generally, then yes, MARS provides that; if you are talking specifically about face mesh simulation in the Editor, then no, MARS doesn't have that.
     
  5. timbokoppers

  6. jmunozarUTech

    Unity Technologies

  7. timbokoppers

    Thanks, will look into that.
     
  8. mtschoen

    Unity Technologies

    Hi there!

    I wanted to build on @jmunozarUTech's response (thanks!). Blendshapes in particular are a tricky subject in today's AR landscape. As far as I know, ARKit is the only platform to use that particular set of blendshapes. As a result, if you author content to use them, it will likely only work on Apple devices with TrueDepth cameras. It won't, for example, work on Android devices or other potential platforms which support face tracking with head pose estimation but do not provide the same blendshapes as ARKit. It's feasible to try to "translate" blendshape values if they are similar enough, but things break down quickly as the different representations of facial expressions diverge.

    Furthermore, as a way of responding to facial expressions, blendshapes can be a bit limiting. They are not all directly mapped to expressions (like smile, frown, wink, etc.), although they can be used this way. They are very good for deforming a mesh that has been authored specifically with the platform's configuration in mind, but not very good at providing general-purpose anchor points for locations on the face. For example, blendshapes do not help me place a mustache on the user's upper lip, because I still need the face mesh to locate the "starting point" for the mouth open/close expression.
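
    To make that concrete, here is a minimal sketch of the "deforming an authored mesh" case: feeding an ARKit-style coefficient (0-1) into a SkinnedMeshRenderer whose mesh was authored with matching blendshape names. The component and blendshape names here are assumptions for illustration, not a MARS API.

    Code (CSharp):
    using UnityEngine;

    // Drives blendshapes on a character mesh authored against ARKit's
    // blendshape set. ARKit coefficients are 0-1; Unity weights are 0-100.
    public class BlendShapeDriver : MonoBehaviour
    {
        [SerializeField]
        SkinnedMeshRenderer m_Renderer;

        // e.g. Apply("jawOpen", 0.7f) opens the jaw most of the way,
        // assuming the mesh has a blendshape literally named "jawOpen"
        public void Apply(string blendShapeName, float coefficient)
        {
            var index = m_Renderer.sharedMesh.GetBlendShapeIndex(blendShapeName);
            if (index >= 0)
                m_Renderer.SetBlendShapeWeight(index, coefficient * 100f);
        }
    }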

    In MARS, we expose a set of landmarks and expressions which we intend to be a "minimum subset" that all platforms should support. In our ARFoundationFaceTrackingProvider, we utilize the platform-provided mesh to produce these landmarks and expression values based on the position of specific triangles that we know to correspond to facial features. If you want to make googly eyes or fake eyebrows, landmarks are a great way to drag and drop content onto specific facial features. You can read more about this in our documentation:

    https://docs.unity3d.com/Packages/c...acking.html#placing-digital-content-on-a-face
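
    To give a rough idea of how that works internally, here is an illustrative sketch of deriving a landmark from the platform mesh; the vertex indices are invented placeholders, not the provider's real topology data.

    Code (CSharp):
    using UnityEngine;

    // Illustrative only: estimate a landmark position by averaging face
    // mesh vertices known (for a given platform's mesh topology) to sit
    // on a facial feature.
    public static class LandmarkEstimation
    {
        static readonly int[] k_NoseTipVertices = { 8, 9, 10 }; // hypothetical indices

        public static Vector3 EstimateNoseTip(Vector3[] faceVertices)
        {
            var sum = Vector3.zero;
            foreach (var index in k_NoseTipVertices)
                sum += faceVertices[index];

            return sum / k_NoseTipVertices.Length;
        }
    }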

    As a final note, these landmarks are included in the face recordings that come with MARS, and can therefore be simulated in the Editor.

    For expression data on Android, we use a set of position ranges for these landmarks to do a basic estimation; however, this approach does not work for all faces, and we do not recommend its use for production apps. On iOS with TrueDepth, we actually use blendshape data to calculate the expressions, which is a bit more reliable. The expression system is best suited to future integrations with SDKs that are specifically intended to provide expression/emotion information.
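
    As a sketch of that position-range idea, something like the following maps the distance between two landmarks to a 0-1 value; the range constants are invented for the example, not our actual calibration.

    Code (CSharp):
    using UnityEngine;

    // Illustrative sketch: turn the distance between two landmarks into
    // a normalized 0-1 expression value.
    public static class ExpressionEstimation
    {
        const float k_MouthClosedDistance = 0.005f; // meters, assumed
        const float k_MouthOpenDistance = 0.045f;   // meters, assumed

        public static float EstimateMouthOpen(Vector3 upperLip, Vector3 lowerLip)
        {
            var distance = Vector3.Distance(upperLip, lowerLip);
            return Mathf.InverseLerp(k_MouthClosedDistance, k_MouthOpenDistance, distance);
        }
    }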

    If your intent is to create an "animoji" type character based on ARKit blendshapes, this is unfortunately not a workflow that we support in MARS. You are able to access the blendshape data through the AR Foundation session that we use to implement face tracking in MARS, but we do not expose blendshapes through our MARS data types at this time.
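
    For example, something along these lines reads the raw coefficients via AR Foundation on an iOS build (a sketch assuming AR Foundation's ARKit Face Tracking package is installed; the AR Foundation samples include a similar visualizer):

    Code (CSharp):
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    #if UNITY_IOS && !UNITY_EDITOR
    using UnityEngine.XR.ARKit;
    #endif

    // Logs ARKit blendshape coefficients for each updated face. Requires
    // an ARFaceManager on the same GameObject in a face tracking scene.
    [RequireComponent(typeof(ARFaceManager))]
    public class BlendShapeReader : MonoBehaviour
    {
        ARFaceManager m_FaceManager;

        void Awake()
        {
            m_FaceManager = GetComponent<ARFaceManager>();
        }

        void OnEnable()
        {
            m_FaceManager.facesChanged += OnFacesChanged;
        }

        void OnDisable()
        {
            m_FaceManager.facesChanged -= OnFacesChanged;
        }

        void OnFacesChanged(ARFacesChangedEventArgs args)
        {
    #if UNITY_IOS && !UNITY_EDITOR
            var subsystem = (ARKitFaceSubsystem)m_FaceManager.subsystem;
            foreach (var face in args.updated)
            {
                using (var coefficients = subsystem.GetBlendShapeCoefficients(face.trackableId, Allocator.Temp))
                {
                    foreach (var c in coefficients)
                        Debug.Log($"{c.blendShapeLocation}: {c.coefficient}");
                }
            }
    #endif
        }
    }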

    We released a code sample back in 2018 which may help you iterate on characters using ARKit blend shapes. https://github.com/Unity-Technologies/facial-ar-remote

    Hope this helps!
     
  9. timbokoppers

    Thanks for the clear and extended reply; this makes sense to me. We are indeed working on an app that mimics the animoji functionality, so it is not a problem for us to rely only on TrueDepth camera devices. It's something our app requires.

    The only downside of developing such an app right now is the testing iteration: you can only test on the device itself, not in the Unity Editor.

    As far as I can see, the code sample from 2018 doesn't support the latest ARKit / AR Foundation?
     
  10. mtschoen

    Unity Technologies

    Correct. It is based on the old ARKit plugin. It shouldn't be too hard to update to the latest AR Foundation version, but I have not tried this. I will try to find some time soon to take a stab at it and report back.
     
  11. mtschoen

    Unity Technologies

  12. fengkan

    I don't own an iPhone X, but I want to try my mesh. Can anyone provide a data stream that I can play back to verify the blendshapes of my mesh?

    Thank you.
     
  13. leweyg_unity

    Unity Technologies

    MARS has built-in face recordings (with video and tracking); hope that is helpful.
     
  14. fengkan

    I finally got an iPhone X and recorded some data with the sloth demo. I hope this may help someone some day.
     

    Attached Files:

  15. fengkan

    Is there any information about how I should set the eye bones? I have the rotations all at zero, but the eyes just don't work. I have tried the Neg Z check.
     
  16. GeniusKoala

    Thanks for the updates!

    I had built the previous Client version of the repository on my iPhone 11. Sometimes the app was freezing; I don't think it was because of the communication with the Server scene. Do you think your update may fix or improve this?

    Could you maybe explain what really changed from the previous version? I don't really understand how Unity manages to communicate with the iPhone. There was an ARKit plugin, but now it's completely on AR Foundation? Is the communication with the iPhone different now?

    I really want to thank your team working on augmented reality, and I hope you continue in this way!
     
    Last edited: Nov 14, 2020
  17. mtschoen

    Unity Technologies

    > Could you maybe explain what really changed from the previous version?

    All that changed in this new version was upgrading to a newer Unity version (2019.4) and moving to AR Foundation. The networking code is mostly unchanged, but I fixed a bug where you could accidentally leave out certain blendshapes if the settings weren't configured correctly.
     