How are the coefficients of blendshapes in the ARKit Unity SDK set during face tracking?

Discussion in 'AR' started by ShoHyun, Mar 19, 2019.

  1. ShoHyun

    ShoHyun

    Joined:
    Jan 10, 2019
    Posts:
    5
In the ARKit Unity SDK, there are blendshape coefficients you can read while face tracking; they describe how much each part of your face has moved.

However, the code doesn't seem to show how the resulting blendshape coefficient data is computed.

What I'd like to know is how 'ARFaceAnchor.blendShapes' is set, so that I can add new blendshapes that do not exist in the current BlendShapesLocation enum.

I think the algorithm that maps facial expressions to blendshape coefficients is hidden.

Does anyone have any idea about this?

    Thank you in advance.

    Blendshapes - ARFaceAnchor https://developer.apple.com/documentation/arkit/arfaceanchor/2928251-blendshapes?language=objc
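For context, in the Unity ARKit plugin the coefficients arrive on the face anchor as a read-only string-to-float dictionary; you can read them but not extend them. A minimal sketch, assuming the plugin's `UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent` callback and `blendShapes` dictionary field (names may differ by plugin version):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit plugin namespace (assumed)

public class BlendShapeReader : MonoBehaviour
{
    void Start()
    {
        // Subscribe to face-anchor updates coming from the native ARKit session.
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent += OnFaceUpdated;
    }

    void OnFaceUpdated(ARFaceAnchor anchor)
    {
        // blendShapes is a Dictionary<string, float>; each value is a
        // coefficient in [0, 1] produced by Apple's (hidden) tracking
        // model. You can only read the keys Apple defines.
        foreach (KeyValuePair<string, float> kv in anchor.blendShapes)
        {
            Debug.Log(kv.Key + " = " + kv.Value);
        }
    }

    void OnDestroy()
    {
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent -= OnFaceUpdated;
    }
}
```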


  2. DigitalBeach

    DigitalBeach

    Joined:
    Jan 17, 2015
    Posts:
    37
You can't generate new blend shapes. They come directly from the Apple ARKit system and are determined by Apple's parametric face model. You are correct: the algorithm for this is hidden.
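That said, while you can't make ARKit emit new coefficients, you can derive your own pseudo-coefficients on the Unity side by combining the ones Apple already provides. A hedged sketch (the key strings and the `Dictionary<string, float>` shape are assumptions based on the plugin's `blendShapes` field; check the actual key names your plugin version reports):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class DerivedBlendShapes
{
    // Derive a custom "smile" coefficient by averaging the left/right
    // mouth-smile coefficients that ARKit already provides. This does not
    // add a new blendshape to ARKit -- it only post-processes its output.
    public static float Smile(Dictionary<string, float> blendShapes)
    {
        float left, right;
        blendShapes.TryGetValue("mouthSmile_L", out left);  // key name assumed
        blendShapes.TryGetValue("mouthSmile_R", out right); // key name assumed
        return Mathf.Clamp01(0.5f * (left + right));
    }
}
```

You would call this from the same face-anchor update callback where you read `anchor.blendShapes`, and feed the result to your own rig.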
     
  3. ShoHyun

    ShoHyun

    Joined:
    Jan 10, 2019
    Posts:
    5
Oh, I see...

Thank you for your answer!