
How does NavMesh system determine which horizontal surfaces to include?

Discussion in 'Navigation' started by trzy, Jun 19, 2017.

  1. trzy

    trzy

    Joined:
    Jul 2, 2016
    Posts:
    54
    Hi,

    I'm building nav meshes using the scripting API (needs to be done this way on HoloLens because the spatial meshes are not available until the user scans his/her room). It's unclear to me which surfaces the NavMesh selects. The default rotation for NavMesh data is Quaternion.identity and I know it uses the up vector but given a mesh with lots of raised platforms, as in the image below, how does it determine what to include and exclude?

The picture below shows a captured scan of my room. You can see the NavMesh system has processed the floor, but why not the bed, for example?

    Also, is it possible to have completely discontinuous regions? I would be quite happy to have raised platforms (e.g., the bed or the couch in this image) constitute NavMesh surfaces of their own with no links to the floor. Agents placed on the bed would be confined to the bed, etc.

    The way I presently build my NavMesh is to include all meshes and a gigantic bounding box that is guaranteed to enclose them all (where the origin at Vector3.zero is chosen arbitrarily by the HoloLens).
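For reference, here is a minimal sketch of that runtime build flow, assuming Unity 2017's `NavMeshBuilder` API; the oversized bounds mirror the "gigantic bounding box" approach above, and the size value is illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public class RuntimeNavMeshBuild : MonoBehaviour
{
    NavMeshData m_NavMeshData;
    NavMeshDataInstance m_Instance;

    void Start()
    {
        // Gather render geometry in the world as build sources.
        var sources = new List<NavMeshBuildSource>();
        var bounds = new Bounds(Vector3.zero, 1000.0f * Vector3.one); // oversized box enclosing all scanned meshes
        NavMeshBuilder.CollectSources(bounds, ~0, NavMeshCollectGeometry.RenderMeshes,
                                      0, new List<NavMeshBuildMarkup>(), sources);

        // Build with the default (Humanoid) agent settings. The position/rotation
        // define the navmesh frame; Quaternion.identity means world +Y is "up".
        var settings = NavMesh.GetSettingsByID(0);
        m_NavMeshData = NavMeshBuilder.BuildNavMeshData(settings, sources, bounds,
                                                        Vector3.zero, Quaternion.identity);
        m_Instance = NavMesh.AddNavMeshData(m_NavMeshData);
    }

    void OnDestroy()
    {
        m_Instance.Remove();
    }
}
```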

    Thank you!

Bart

    [Attached image: navmesh.jpg]
     
  2. Jakob_Unity

    Jakob_Unity

    Unity Technologies

    Joined:
    Dec 25, 2011
    Posts:
    269
  3. trzy

    trzy

    Joined:
    Jul 2, 2016
    Posts:
    54
Thank you for the reply! I thought I had followed up, but I guess my post got swallowed. It turns out the problem was the height of my nav mesh agent! It was very large, and I guess the voxelization is proportional to the agent dimensions.

    By the way, how can multiple nav mesh agent settings be defined in the scripting API? Given n input meshes and k agent types, is the idea to generate k NavMeshData objects by using UpdateNavMeshData() multiple times, once for each agent type?
     
  4. Jakob_Unity

    Jakob_Unity

    Unity Technologies

    Joined:
    Dec 25, 2011
    Posts:
    269
Correct – each agent type can only 'see' a navmesh built for its own type.

In the editor you can use the 'Agents' tab to add/remove/modify agent settings.
At runtime you can create and use navmesh settings for building as you please - the important bit is that the agentTypeID you build for must match that of the agent.
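A sketch of that per-agent-type flow, assuming the 2017 runtime API: create settings per type, build one NavMeshData with each, and assign the matching ID to the agent. The radius/height values and helper name are illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

// One NavMeshData per agent type: register settings, build with them,
// then point agents at the matching agentTypeID.
public static class PerAgentTypeBuild
{
    public static NavMeshDataInstance BuildFor(NavMeshAgent agent,
                                               List<NavMeshBuildSource> sources,
                                               Bounds bounds)
    {
        // Register a new agent type at runtime (editor-defined types
        // come from the Navigation window's Agents tab instead).
        var settings = NavMesh.CreateSettings();
        settings.agentRadius = 0.3f;  // illustrative values
        settings.agentHeight = 1.8f;

        var data = NavMeshBuilder.BuildNavMeshData(settings, sources, bounds,
                                                   Vector3.zero, Quaternion.identity);
        var instance = NavMesh.AddNavMeshData(data);

        // The agent only 'sees' navmesh built with its own type ID.
        agent.agentTypeID = settings.agentTypeID;
        return instance;
    }
}
```

Repeating this for each of the k agent types yields k independent NavMeshData objects, which matches the scheme described in the question above.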