ARKit Face Tracking Implementation details

Discussion in 'AR' started by hardianlawi, Jan 5, 2019.

  1. hardianlawi

    Joined:
    Dec 16, 2018
    Posts:
    2
    Hi,

    I am currently working on a school project that involves using the blendshapes coefficients from the ARKit Face Trackers and I would like to know how the coefficients are calculated.

    Can anybody point me to any references (articles, documentation, papers, or codes) that are the closest to how the coefficients are calculated under the plugin?

    Any comments are really appreciated. Thank you.
     
  2. jimmya

    Joined:
    Nov 15, 2016
    Posts:
    793
    All the plugin does is pull across the values generated by the ARKit SDK so that you can use them on the C# side. It also makes them easier to work with by providing Unity components, so you can avoid writing any code at all if you want to do something standard.
    Here are some starting points:
    https://developer.apple.com/documentation/arkit/arfaceanchor/2928251-blendshapes
    (see the entry for each blend shape location for a diagram of what it looks like, e.g. https://developer.apple.com/documentation/arkit/arfaceanchor/blendshapelocation/2928261-eyeblinkleft)

    See the Sloth face tracking example in the Unity ARKit Plugin for how to use them from Unity; a rough sketch of reading the coefficients follows below.
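
    Roughly, reading the coefficients on the C# side looks like the sketch below. This is a minimal sketch from memory rather than copied from the plugin, so treat the exact names (UnityARSessionNativeInterface, ARFaceAnchor.blendShapes) as assumptions and check the face tracking example scripts for the real API:

        using System.Collections.Generic;
        using UnityEngine;
        using UnityEngine.XR.iOS;

        // Logs the raw blend shape coefficients that ARKit reports for the
        // tracked face. The plugin just marshals these values across from the
        // native ARKit SDK; it does not compute them itself.
        public class BlendShapeLogger : MonoBehaviour
        {
            Dictionary<string, float> currentBlendShapes;

            void Start()
            {
                UnityARSessionNativeInterface.ARFaceAnchorAddedEvent += FaceUpdated;
                UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent += FaceUpdated;
            }

            void FaceUpdated(ARFaceAnchor anchorData)
            {
                // A dictionary of blend shape location name -> coefficient,
                // each in [0, 1], straight from ARFaceAnchor.blendShapes.
                currentBlendShapes = anchorData.blendShapes;
            }

            void Update()
            {
                if (currentBlendShapes == null)
                    return;

                foreach (KeyValuePair<string, float> coefficient in currentBlendShapes)
                {
                    Debug.Log(coefficient.Key + ": " + coefficient.Value);
                }
            }

            void OnDestroy()
            {
                UnityARSessionNativeInterface.ARFaceAnchorAddedEvent -= FaceUpdated;
                UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent -= FaceUpdated;
            }
        }

    To drive a character, you would map each coefficient onto the matching blend shape weight of a SkinnedMeshRenderer (remember Unity weights run 0-100 while ARKit coefficients run 0-1), which is essentially what the Sloth example does.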
     
  3. hardianlawi

    Joined:
    Dec 16, 2018
    Posts:
    2
    @jimmya Hi, thanks a lot for your reply. I am looking for references on how the ARKit SDK actually comes up with the coefficients that drive the animation. One similar work is this paper, https://lgg.epfl.ch/publications/2013/OnlineLearning/index.php, written by Faceshift, which was later acquired by Apple. However, since the paper does not mention ARKit, I can't really use it as a reference for how the SDK works.