
AR Foundation - Occlusion

Discussion in 'Handheld AR' started by johnymetalheadx, Aug 21, 2019.

  1. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
    Hi,

    So is there a way to occlude virtual objects with real world objects?

    For example, could we trace a pen using image processing and create an invisible virtual object (in the shape of the pen) on top of the real-world pen, so that it occludes any virtual object passing behind it?
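
    For reference, the usual way to build such an invisible occluder in Unity is a material that writes to the depth buffer but not the color buffer, rendered before regular geometry. A minimal sketch, assuming your project contains a depth-mask shader (hypothetically named "Custom/DepthMask", i.e. a shader with ColorMask 0) and a mesh roughly matching the pen:

    ```csharp
    using UnityEngine;

    public class PenOccluder : MonoBehaviour
    {
        void Start()
        {
            // Hypothetical shader: writes depth but nothing to the color buffer
            // (ColorMask 0), so virtual objects behind this mesh are hidden
            // while the camera feed shows through where the mesh is.
            var shader = Shader.Find("Custom/DepthMask");
            var meshRenderer = GetComponent<MeshRenderer>();
            meshRenderer.material = new Material(shader);
            // Draw before regular geometry so its depth is in place first.
            meshRenderer.material.renderQueue =
                (int)UnityEngine.Rendering.RenderQueue.Geometry - 10;
        }
    }
    ```

    The hard part is not the rendering but the tracking: keeping the occluder mesh aligned with the real pen as it moves, which is exactly what the replies below say ARKit/ARCore could not do at the time.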
     
  2. Aaron-Meyers

    Aaron-Meyers

    Joined:
    Dec 8, 2009
    Posts:
    185
    No... nothing like this is currently possible with ARKit/ARCore. If the pen were static (never moved) and we had an accurate way of getting the depth of all pixels in the scene (via a depth sensor on the back of the device), then hypothetically we could build a very rough occlusion mesh that would work from the angle we are viewing from.

    It does seem like robust occlusion is a big priority in AR at the moment so I wouldn't be surprised if we see a move in that direction in 2020 when ARKit 4 is on the radar.
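
    If per-pixel depth were available, as the post above hypothesizes, turning a depth sample into a world-space point is straightforward with Unity's camera API. A sketch, assuming a hypothetical depth source that returns metric distance from the camera plane for a given pixel (no such API existed for arbitrary scene pixels at the time of this thread):

    ```csharp
    using UnityEngine;

    public static class DepthUnprojection
    {
        // Maps a screen pixel plus a (hypothetical) metric depth sample to a
        // world-space point. ScreenToWorldPoint interprets z as the distance
        // from the camera plane, so each depth pixel becomes a 3D point.
        public static Vector3 PixelToWorld(Camera cam, int x, int y, float depthMeters)
        {
            return cam.ScreenToWorldPoint(new Vector3(x, y, depthMeters));
        }
    }
    ```

    Unprojecting a grid of such points and triangulating them would give the kind of rough, view-dependent occlusion mesh described above.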
     
    unnanego likes this.
  3. soorya696

    soorya696

    Joined:
    Dec 13, 2018
    Posts:
    24
    Hey, is there a way to find the exact depth of a real-world object from the camera?
    In my project, I want to place a virtual object just above the real-world object when the user taps on it.
     
  4. Fl0oW

    Fl0oW

    Joined:
    Apr 24, 2017
    Posts:
    19
    No, as stated above ARKit only allows depth estimation for human bodies (or body parts) in the scene, not for arbitrary objects. There are third-party plugins on the Asset Store which allow meshing of the 3D environment, but they are pretty inaccurate: think room scale rather than object scale.

    As an idea: You could ask your users to bump their phones into the objects they want to put a marker on, then press a button. That way you can use the phone position to determine the marker position. Not ideal but might work depending on what you are trying to achieve. Good luck!
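
    The bump-and-tap idea above can be sketched in a few lines: when the user has touched the phone to the object, a button press records the device (camera) pose and drops a marker there. All names here are illustrative; markerPrefab and the offset are assumptions you would tune for your use case.

    ```csharp
    using UnityEngine;

    public class BumpPlacer : MonoBehaviour
    {
        public GameObject markerPrefab;           // assumed: assigned in the Inspector
        public float phoneToObjectOffset = 0.05f; // rough lens-to-surface distance in meters

        // Hook this to a UI button: user touches the phone to the object, then taps.
        public void PlaceMarker()
        {
            Transform cam = Camera.main.transform;
            // The device pose stands in for the object's position.
            Vector3 pos = cam.position + cam.forward * phoneToObjectOffset;
            Instantiate(markerPrefab, pos, Quaternion.LookRotation(cam.forward));
        }
    }
    ```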
     
    soorya696 likes this.
  5. soorya696

    soorya696

    Joined:
    Dec 13, 2018
    Posts:
    24
    Thanks for your suggestion.
    I'm trying to raycast against the point cloud in AR Foundation so that the object is placed on the tapped feature point, but it gives the wrong result. I can't find the exact position of the feature point.
    Result: https://drive.google.com/file/d/1wnTT7qN7KI7K35oeh6OYjeu4b0rhLxQ-/view

    My code:
    Code (CSharp):
        if (Input.touchCount > 0)
        {
            Touch touch = Input.GetTouch(0);

            var hits = new List<ARRaycastHit>();
            Ray ray = Camera.main.ScreenPointToRay(touch.position);
            raycastManager.Raycast(ray, hits, TrackableType.FeaturePoint);
            if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began && PrefabToPlace != null)
            {
                Pose p = hits[0].pose;
                _ShowAndroidToastMessage("Hit at: " + p.position.z + " count: " + hits.Count);
                var obj = Instantiate(PrefabToPlace, p.position, p.rotation);
                obj.name = PrefabToPlace.name;
                obj.transform.parent = ParentAR.transform;
                PrefabToPlace = null;
            }
        }
    Where am I wrong?
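
    For comparison, a commonly suggested reordering of code like the above: check the touch phase before raycasting (so the ray is only cast on the initial tap), and check that the hit list is non-empty before indexing it, since hits[0] throws when the ray misses every feature point. A sketch under those assumptions, using the same raycastManager, PrefabToPlace, and ParentAR names:

    ```csharp
    if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began && PrefabToPlace != null)
    {
        Touch touch = Input.GetTouch(0);
        var hits = new List<ARRaycastHit>();
        Ray ray = Camera.main.ScreenPointToRay(touch.position);

        // Only use the result if the ray actually hit a feature point;
        // results are sorted by distance, so hits[0] is the closest.
        if (raycastManager.Raycast(ray, hits, TrackableType.FeaturePoint) && hits.Count > 0)
        {
            Pose p = hits[0].pose;
            var obj = Instantiate(PrefabToPlace, p.position, p.rotation);
            obj.name = PrefabToPlace.name;
            obj.transform.parent = ParentAR.transform;
            PrefabToPlace = null;
        }
    }
    ```

    Note that feature points are inherently noisy and sparse, so even a correct raycast against TrackableType.FeaturePoint can land some distance from the intended surface; raycasting against detected planes is generally more stable when one is available.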