Hi guys! I'm currently working on my Bachelor's thesis, and part of it involves face tracking with Apple's ARKit. I'm going to use the Face Cap app to record my face and connect it to Unity to drive a character. For the character, I'm thinking about using Unity's Digital Human. Unfortunately, its blendshapes are extremely different from the ARKit blendshapes. So my question is: is there any way to map the ARKit blendshapes onto the Digital Human character, which uses a Snappers facial rig? I think a normal retargeting asset isn't suitable because the blendshapes are just too different overall, but maybe I'm wrong about that. If there's actually no way to do it, I'd just use a Fuse/Mixamo character instead, which would be way easier. I'd appreciate some solutions! Thanks!
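To make the mapping problem concrete: since there's no 1:1 correspondence between the ARKit blendshapes and the rig's controls, one possible approach is a hand-authored many-to-many weight table, where each target control is driven by a weighted sum of ARKit blendshape values. Here's a minimal Python sketch of that idea (in Unity this would be C#); the target control names and the specific weights are purely illustrative assumptions, not the real Snappers rig names.

```python
# Hypothetical sketch: remap ARKit blendshape weights (0..1) onto a
# different rig's controls via a hand-authored weighted mapping.
# The target control names and weights below are made up for illustration;
# a real mapping would have to be tuned against the Snappers rig by hand.

ARKIT_TO_RIG = {
    # target rig control: [(ARKit blendshape, weight), ...]
    "mouth_smile_L": [("mouthSmileLeft", 1.0), ("cheekSquintLeft", 0.2)],
    "mouth_smile_R": [("mouthSmileRight", 1.0), ("cheekSquintRight", 0.2)],
    "brow_up": [("browInnerUp", 0.5), ("browOuterUpLeft", 0.25),
                ("browOuterUpRight", 0.25)],
}

def remap(arkit_weights: dict) -> dict:
    """Combine ARKit weights into target-rig control values, clamped to [0, 1]."""
    out = {}
    for control, terms in ARKIT_TO_RIG.items():
        value = sum(arkit_weights.get(name, 0.0) * w for name, w in terms)
        out[control] = max(0.0, min(1.0, value))
    return out

# Example frame from the tracker (only a few of ARKit's 52 shapes shown):
frame = {"mouthSmileLeft": 0.8, "cheekSquintLeft": 0.5, "browInnerUp": 1.0}
print(remap(frame))
```

The table would need an entry for each of the roughly 52 ARKit blendshapes, so authoring it is tedious but mechanical; whether the visual result is acceptable on a Snappers-style rig is exactly the open question above.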