
Perform a single raycast to only detect the "gazing" distance to the wall before me

Discussion in 'VR' started by LR-Developer, Jun 13, 2017.

  1. LR-Developer


    May 5, 2017

    When I perform a gesture I need to know the hit point of the nearest "real world" object I am directly staring at.

    I do not want to have spatial mapping running all the time; I think it costs too much performance and blocks input to my holograms.

    I only need the distance, or better the exact point, to the nearest wall or other "real world" object right in front of me.

    My idea was to turn spatial mapping on, wait until a raycast straight out of the camera hits something, and check the distance.

    It always returned "1" for the distance, even when I stand right in front of a wall.
    So I set the max distance to 0.95:

    Code (CSharp):

        IEnumerator _wait(float time, Action callback)
        {
            yield return new WaitForSeconds(time);
            callback();
        }

        public void Wait(float seconds, Action action)
        {
            StartCoroutine(_wait(seconds, action));
        }

        void GestureRecognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
        {
            if (!audioSource.isPlaying)
            {
                StatusText.text = "Status: Scanning ...";
                audioSource.clip = ScanningAudioClip;
                audioSource.loop = true;
                audioSource.Play();
            }

            HoloSpatialMapping.GetComponent<SpatialMappingManager>().CleanupObserver();
            HoloSpatialMapping.GetComponent<SpatialMappingManager>().StartObserver();

            float distance;
            RaycastHit hitInfo;

            // Wait for Raycast
            bool isRaycasted = false;
            int waitedSeconds = 0;
            while (!isRaycasted && waitedSeconds < 5)
            {
                waitedSeconds++;
                if (Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward, out hitInfo, 0.9f))
                {
                    distance = Vector3.Distance(Camera.main.transform.position, hitInfo.point);
                    Debug.LogWarning("Scan gaze distance: " + distance);
                    isRaycasted = true;
                }
                Wait(1, () => {
                    Debug.Log("1 seconds is lost forever");
                });
            }

            photoInput.CapturePhotoAsync(onPhotoCaptured);
        }
    Now I do not get a single raycast hit at all. What am I doing wrong? Or is there a better or easier way just to get the hit point I am staring at, or the distance?

    Or is it possible to query the HoloLens depth camera one time to get the distance?

    I had a look at some examples doing something like this with the cursor, but I would like to get it working without having spatial mapping on all the time, if that is possible...

    thanks a lot!
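    One detail worth noting about the loop above: Wait() only starts a coroutine and returns immediately, so the while loop runs through all five iterations within a single frame, before the spatial mapping observer has had time to produce any mesh colliders. A sketch of the same scan-then-raycast flow that actually yields between attempts (reusing the post's HoloSpatialMapping / SpatialMappingManager references; names are the poster's, not a definitive API):

    ```csharp
    // Sketch: run the raycast attempts inside a coroutine that yields,
    // so the spatial mapping observer gets time to build mesh colliders
    // between attempts. Start it with StartCoroutine(ScanForGazeHit()).
    IEnumerator ScanForGazeHit()
    {
        HoloSpatialMapping.GetComponent<SpatialMappingManager>().StartObserver();

        RaycastHit hitInfo;
        for (int attempt = 0; attempt < 5; attempt++)
        {
            if (Physics.Raycast(Camera.main.transform.position,
                                Camera.main.transform.forward,
                                out hitInfo))
            {
                Debug.LogWarning("Scan gaze distance: " + hitInfo.distance);
                yield break;  // got a hit; stop scanning
            }
            yield return new WaitForSeconds(1f);  // let the observer mesh more of the room
        }
        Debug.LogWarning("No surface hit after 5 seconds");
    }
    ```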
  2. Unity_Wesley


    Unity Technologies

    Sep 17, 2015
    I think there might be an issue with your raycast; I would try this:
    if (Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward * 10, out hitInfo, 0.9f))

    This is how I get the raycast to work with the HoloLens when not using gestures.

    Also, the spatial components can be controlled via script, which would allow you to run spatial mapping and then stop scanning by freezing the component when you are done scanning.
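    The start/freeze idea can be sketched roughly as follows (assuming the HoloToolkit SpatialMappingManager component used earlier in the thread; StopObserver is that component's method for halting scanning while keeping the meshes already built):

    ```csharp
    // Sketch: run the spatial mapping observer only while scanning,
    // then freeze the result so it stops costing performance.
    var mapping = HoloSpatialMapping.GetComponent<SpatialMappingManager>();
    mapping.StartObserver();   // begin building the spatial mesh

    // ... later, once enough of the room has been meshed:
    mapping.StopObserver();    // scanning stops; existing meshes remain usable
    ```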
  3. LR-Developer


    May 5, 2017
    Thanks for the reply.

    But it is the same:

    if (Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward * 10f, out hitInfo, 0.9f))

    is always false, and

    if (Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward * 10f, out hitInfo, 2f))

    always shows a hitInfo.distance of 1.

    And if I put a 3D marker (primitive) there, it is always in or behind the wall etc... I never get the correct hit point.

    I also turned the gesture recognizer off before scanning, but that does not help either:

    gestureRecognizer.TappedEvent -= GestureRecognizer_TappedEvent;

    I will try with spatial mapping again. But I already did that before; I think it took too long until I got the mesh for the point I am looking at.
  4. LR-Developer


    May 5, 2017
    When I use spatial mapping on the table I am looking at, it takes half a minute or so until it is detected.

    And if I scan my barcode and use the raycast to get the barcode position, it is still under the table's surface, although I enabled spatial mapping and captured the table more or less.

    I am trying to find a way to have scanned barcodes detected and give them a "hard" location/position, even in rooms that are not known to the HoloLens. I want to have a kind of "world anchor" or known positions, but beyond 10x10 meters or so you cannot do this with spatial mapping anymore, and in large halls/rooms spatial mapping won't help at all...

    How can I have known positions and QR codes in my whole house?

    I thought if I scan all QR positions and add them to a list, and next time load that list and scan at least two of the known QR codes, I should have a new orientation point (or two, to get the direction) and know where all the others are.

    But because the same scanning position comes out very different each time I scan, this is hard to do...

    Or is there a better way, if I want to put real-world markers over a large area / a whole house with a large hall or something like this?

    Thanks a lot!
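    The two-marker re-alignment idea above can be sketched as a yaw-plus-translation fit: given the stored positions of two QR codes and their freshly scanned positions in the new session, compute the rotation about the vertical axis and the offset that map the stored layout into the current coordinates. (This assumes the markers sit at roughly the same height; all variable names here are illustrative, not from any API.)

    ```csharp
    // Sketch: align a stored map of marker positions to the current session
    // using two re-scanned markers A and B.
    // storedA/storedB: positions saved in a previous session.
    // scannedA/scannedB: the same markers' positions as seen right now.
    Quaternion yaw = Quaternion.FromToRotation(
        Vector3.ProjectOnPlane(storedB - storedA, Vector3.up),
        Vector3.ProjectOnPlane(scannedB - scannedA, Vector3.up));
    Vector3 translation = scannedA - yaw * storedA;

    // Any stored point p then maps into the current session as:
    // Vector3 current = yaw * p + translation;
    ```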
  5. unity_andrewc


    Unity Technologies

    Dec 14, 2015
    That sounds like a bug. Looking at the backing code for Physics.Raycast, I don't think scaling the direction vector has any effect, since we normalize it anyway; all that matters is the max distance. The max distance defaults to infinity, though, so you might try removing that parameter. If that doesn't work, please file a bug for us so we can get our hands on your repro project and debug what's going on. The HoloLens does have issues with detecting dark surfaces (it can't tell whether there's something there or whether what it's looking at is shadow or something), so that might explain how long it takes to detect your table... hard to say without seeing it myself. Either way, that sounds strange.
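    In other words, the simplest form of the call, with no max-distance argument, is worth trying first (a minimal sketch of the suggestion above):

    ```csharp
    // Direction is normalized internally, so scaling it changes nothing;
    // only the maxDistance argument limits the ray, and omitting it
    // defaults to infinity.
    RaycastHit hitInfo;
    if (Physics.Raycast(Camera.main.transform.position,
                        Camera.main.transform.forward,
                        out hitInfo))
    {
        Debug.Log("Hit at distance " + hitInfo.distance);
    }
    ```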

    QR code stuff:
    Are you saving and loading spatial anchors? If you parent your QR code GameObjects to WorldAnchor objects, then once the device recognizes the area you're in, the GameObjects for your QR codes should appear in the correct positions.

    However, there's a huge caveat to using just one WorldAnchor this way: the further away you get from a WorldAnchor, the less accurate the relative transforms become, and since the device can lose tracking (in, say, a dark hallway), it won't know where the one WorldAnchor you parented everything to is, and none of their transforms will be valid.

    I would try making a WorldAnchor for each QR code. However, this isn't necessarily the best approach either, since it can be wasteful if you have multiple QR codes in the same small room, so you'd probably want to experiment with placing only one WorldAnchor in each room and parenting that room's QR code objects to it. That said, dealing with dynamic objects when you have multiple WorldAnchor objects can be tricky, so hopefully these QR codes are supposed to be static.
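    A minimal sketch of the anchor-per-QR-code idea, using the WorldAnchor and WorldAnchorStore APIs of the Unity version current in this thread (the namespace was UnityEngine.VR.WSA in 5.6 and moved to UnityEngine.XR.WSA later; the method names below are from that era and may differ in newer versions):

    ```csharp
    using UnityEngine;
    using UnityEngine.VR.WSA;              // WorldAnchor (UnityEngine.XR.WSA in later versions)
    using UnityEngine.VR.WSA.Persistence;  // WorldAnchorStore

    // Sketch: pin a QR code marker at its current pose with its own
    // WorldAnchor, and persist it so it can be restored next session.
    void AnchorQrCode(GameObject qrObject, string anchorId, WorldAnchorStore store)
    {
        // Attaching a WorldAnchor locks the object to the real-world pose...
        WorldAnchor anchor = qrObject.AddComponent<WorldAnchor>();

        // ...and saving it lets the device relocate it in a later session.
        store.Save(anchorId, anchor);
    }
    ```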
    LR-Developer likes this.