(I'm using the REAR-facing camera on an iPhone XS.) I'm trying to pull AVDepthData to analyze particular depth points while ARKit is running, so that I can run CoreML object/pose recognition and then spatially place a 3D object where the pose/object was detected. I end up crashing after the first frame, and I'm hearing from a lot of people that you can't pull depth data while ARKit is running?

Related documentation: https://developer.apple.com/documen...and_media_capture/capturing_photos_with_depth
Related sample project (without ARKit): https://github.com/AtsushiSuzuki/unity-depthcapture-ios
Related blog: https://richardstechnotes.com/2019/...dinates-for-openpose-or-anything-else-really/
(Putting all of this in one spot: random techniques and whatnot.)

Related Shader Graph:
https://answers.unity.com/questions/1587077/how-to-use-scene-depth-node-in-shadergraph.html
https://answers.unity.com/questions/1619688/scene-depth-node-find-depth-value-from-xy-position.html
Related render texture:
https://forum.unity.com/threads/how...ge-is-not-getting-called.552091/#post-3692761
Related camera buffer:
https://forum.unity.com/threads/mtl...to-access-buffer-of-depth-information.633853/
https://forum.unity.com/threads/commandbuffer-blit-isnt-stencil-buffer-friendly.432776/
Related `AVCaptureSession` and `ARSession`:
https://forums.developer.apple.com/thread/81971
https://stackoverflow.com/questions...session-and-avcapturesession-at-the-same-time
@jimmya @tdmowrer Starting to come to the conclusion that I can't spatially place a 3D object on top of a CoreML-detected pose or object using depth buffers. I'm just taking a screen-point XY and then finding the depth value at that point. Pretty much assuming this is an impossible task unless you raycast from the screen-space XY into world space until you hit an AR plane, which isn't ideal (see the sketch below).

Related raycast: https://blog.usejournal.com/object-detection-and-arfoundation-in-unity-8782b1ee6ea3
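For reference, the plane-raycast fallback looks roughly like this. A minimal sketch assuming ARFoundation 1.0's `ARSessionOrigin.Raycast` (per the docs linked further down); `PlacementFromDetection` and `markerPrefab` are my own placeholder names, not anything from a sample:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Experimental.XR;   // TrackableType lives here in ARFoundation 1.0
using UnityEngine.XR.ARFoundation;

public class PlacementFromDetection : MonoBehaviour
{
    [SerializeField] ARSessionOrigin sessionOrigin; // assign in the Inspector
    [SerializeField] GameObject markerPrefab;       // object to place at the detection

    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

    // screenPoint is the CoreML detection's XY in screen pixels
    public void PlaceAtDetection(Vector2 screenPoint)
    {
        // Raycast from the screen-space point into world space until an AR plane is hit
        if (sessionOrigin.Raycast(screenPoint, s_Hits, TrackableType.PlaneWithinPolygon))
        {
            var hitPose = s_Hits[0].pose; // closest hit comes first
            Instantiate(markerPrefab, hitPose.position, hitPose.rotation);
        }
    }
}
```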
You could try raycasting against feature points, and you can probably do a cone cast to be more accurate and get better results. You can also store the feature points you detected in previous frames to build up a depth cloud of your environment (rough sketch below).
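A rough sketch of that accumulation idea. How you obtain the per-frame points depends on your ARFoundation version (e.g. the positions exposed by the point-cloud trackable); `DepthCloudAccumulator` and the 2 cm merge distance are illustrative choices, not a real API:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Accumulates feature points seen in previous frames into a persistent "depth cloud".
public class DepthCloudAccumulator : MonoBehaviour
{
    const float k_MergeDistance = 0.02f; // points closer than 2 cm count as duplicates

    readonly List<Vector3> m_Cloud = new List<Vector3>();

    public IReadOnlyList<Vector3> Cloud => m_Cloud;

    // Call once per frame with whatever feature points your ARFoundation version exposes.
    public void AddFramePoints(IEnumerable<Vector3> currentFramePoints)
    {
        foreach (var p in currentFramePoints)
        {
            // Linear duplicate check; a spatial hash would scale better in practice.
            bool duplicate = false;
            for (int i = 0; i < m_Cloud.Count && !duplicate; i++)
                duplicate = (m_Cloud[i] - p).sqrMagnitude < k_MergeDistance * k_MergeDistance;

            if (!duplicate)
                m_Cloud.Add(p);
        }
    }
}
```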
@jimmya Thanks Jim, the cone cast really brought it together for me. I'm using the technique below, with colliders applied to the point-cloud points. The cone cast shoots behind the screen-space detection point, and I then filter one point out of those hits to use for placement (sketch below). https://github.com/walterellisfun/ConeCast
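Roughly what I'm doing, re-sketched rather than copied from the repo (the technique there is a wide SphereCastAll filtered by angle off the ray). `ConeCastPlacement`, the field names, and the closest-hit filter are my placeholders; it assumes small SphereColliders are already attached to the accumulated point-cloud points:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ConeCastPlacement : MonoBehaviour
{
    [SerializeField] Camera arCamera;          // the AR camera
    [SerializeField] GameObject markerPrefab;  // object to place at the detection

    public void PlaceAtDetection(Vector2 detectionScreenPoint)
    {
        Ray ray = arCamera.ScreenPointToRay(detectionScreenPoint);
        RaycastHit[] hits = ConeCastAll(ray.origin, ray.direction,
            maxRadius: 0.2f, maxDistance: 5f, coneAngleDeg: 10f);
        if (hits.Length == 0)
            return;

        // Filter one point out of the hits for placement; here: closest to the camera.
        RaycastHit best = hits[0];
        foreach (var h in hits)
            if (h.distance < best.distance)
                best = h;

        Instantiate(markerPrefab, best.point, Quaternion.identity);
    }

    // Cone cast = wide sphere cast, then keep only hits within coneAngleDeg of the ray.
    static RaycastHit[] ConeCastAll(Vector3 origin, Vector3 direction,
        float maxRadius, float maxDistance, float coneAngleDeg)
    {
        var inCone = new List<RaycastHit>();
        foreach (var hit in Physics.SphereCastAll(origin, maxRadius, direction, maxDistance))
            if (Vector3.Angle(direction, hit.point - origin) <= coneAngleDeg)
                inCone.Add(hit);
        return inCone.ToArray();
    }
}
```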
Ran into ARFoundation's built-in cone cast against feature points, dropping this in here as a reference: `public bool Raycast(Ray ray, List<ARRaycastHit> hitResults, TrackableType trackableTypeMask = TrackableType.All, float pointCloudRaycastAngleInDegrees = 5f);` https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@1.0/manual/index.html#raycasting
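A minimal usage sketch of that overload, assuming ARFoundation 1.0; widening `pointCloudRaycastAngleInDegrees` is what makes it behave like a cone cast against feature points. `FeaturePointRaycaster` and the 10-degree angle are my own choices:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Experimental.XR;   // TrackableType lives here in ARFoundation 1.0
using UnityEngine.XR.ARFoundation;

public class FeaturePointRaycaster : MonoBehaviour
{
    [SerializeField] ARSessionOrigin sessionOrigin;
    [SerializeField] Camera arCamera;

    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

    // Returns the world position of the closest feature point within the cone, if any.
    public bool TryGetWorldPoint(Vector2 screenPoint, out Vector3 worldPoint)
    {
        Ray ray = arCamera.ScreenPointToRay(screenPoint);

        if (sessionOrigin.Raycast(ray, s_Hits, TrackableType.FeaturePoint,
                pointCloudRaycastAngleInDegrees: 10f))
        {
            worldPoint = s_Hits[0].pose.position;
            return true;
        }

        worldPoint = default(Vector3);
        return false;
    }
}
```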