Hi all,

Is it possible to get the AR camera video texture and apply it to the face mesh using either ARFoundation or ARKit? I'm looking for the functionality described in Apple's SDK documentation under "Map Camera Video onto 3D Face Geometry": https://developer.apple.com/documentation/arkit/creating_face-based_ar_experiences

I'd like to map the live camera video onto the face mesh so that the user's face occludes 3D models in the scene. For example, if the user is wearing a virtual helmet and turns their head, their face should appear in front of the far side of the helmet and block it from view.

Thanks!
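For what it's worth, here is a rough, untested sketch of one native ARKit/SceneKit approach to the occlusion part of this. Rather than mapping the camera video onto the mesh (which Apple's sample does with a shader modifier), this sketch renders the ARKit face mesh with a depth-only material: the mesh writes depth but no color, so it hides the far side of the helmet while the live camera feed shows through. The class name `FaceOcclusionDelegate` is my own; everything else is standard ARKit/SceneKit API, but treat this as a starting point, not a verified implementation.

```swift
import ARKit
import SceneKit

// Hypothetical delegate name; attach an instance as the ARSCNView's delegate
// and run the session with ARFaceTrackingConfiguration.
class FaceOcclusionDelegate: NSObject, ARSCNViewDelegate {
    let sceneView: ARSCNView

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
    }

    // Called when ARKit detects the face; return a node carrying the face mesh.
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = sceneView.device,
              let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }

        // Depth-only material: writes to the depth buffer but not the color
        // buffer, so the face mesh occludes virtual content behind it without
        // drawing over the camera image.
        faceGeometry.firstMaterial?.colorBufferWriteMask = []

        let node = SCNNode(geometry: faceGeometry)
        node.renderingOrder = -1  // render before other content so depth is written first
        return node
    }

    // Keep the mesh in sync with the tracked face every frame.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        faceGeometry.update(from: faceAnchor.geometry)
    }
}
```

The depth-only trick gives the occlusion behavior you describe (face blocking the far side of the helmet) without needing the video texture at all; actually texturing the mesh with the camera image, as in Apple's sample, additionally requires remapping the mesh's texture coordinates into camera-image space.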