As I understand it, the official way to do computer vision with ARFoundation is to use the mechanisms described in "Accessing the Camera Image on the CPU", but when using CoreML it's much more convenient to pass the ARFrame's captured image to a Vision request. I used to do this with the ARKit plug-in by passing UnityARSessionNativeInterface.GetNativeFramePtr() to my Objective-C plug-in, which worked great.

With ARFoundation there is no obvious way to get at this ARFrame pointer, but it's still defined in the plug-in, so I tried accessing it via reflection like this:

Code (CSharp):

var arKitApiClass = typeof(UnityEngine.XR.ARKit.ARWorldMap).Assembly.GetType("UnityEngine.XR.ARKit.Api");
var getNativeFramePtrMethod = arKitApiClass.GetMethod("UnityARKit_getNativeFramePtr", BindingFlags.NonPublic | BindingFlags.Static);

This works, and I can invoke the method statically to get the pointer, but when I try to use it from Objective-C (by making a bridged cast back to an ARFrame*), it dies with a BAD_ACCESS exception. I assume this is because the ARFrame was created from protected memory and I don't have access to it... Is there a way around this? It would be hugely convenient to have access to the ARKit session objects from Objective-C or Swift plug-ins. Thanks!
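For reference, here's a sketch of what I'm doing on the native side, ported to Swift. The entry point name and calling convention are my own; the key assumptions are that the IntPtr really is a bridged ARFrame* and that the frame is still alive when the plug-in receives it. If the pointer is actually some other object (e.g. the session), or ARKit has already released/recycled the frame by the time the cast happens, a crash like the one I'm seeing would be expected:

```swift
import ARKit
import Vision

// Hypothetical entry point called from C# via DllImport("__Internal").
// The name and signature are illustrative, not from the actual plug-in.
@_cdecl("RunVisionOnARFrame")
public func runVisionOnARFrame(_ framePtr: UnsafeRawPointer) {
    // Reconstitute the ARFrame without transferring ownership —
    // the Swift equivalent of Objective-C's (__bridge ARFrame *)framePtr.
    let frame = Unmanaged<ARFrame>.fromOpaque(framePtr).takeUnretainedValue()

    // ARFrame.capturedImage is a CVPixelBuffer, which Vision accepts directly.
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        options: [:])
    let request = VNDetectRectanglesRequest { request, _ in
        // Handle request.results here.
    }
    try? handler.perform([request])
}
```

If the lifetime of the frame is the problem, retaining it immediately on the C# side (or copying out the CVPixelBuffer before returning) might be necessary, but I haven't verified that.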