
NativeMultiHashMap vs DynamicBuffers for spatial partitioning?

Discussion in 'Entity Component System' started by MintTree117, Apr 7, 2020.

  1. LudiKha

    LudiKha

    Joined:
    Feb 15, 2014
    Posts:
    138
    I managed a rather fast approach using a NativeMultiHashMap and spatial hashing. It supports up to approximately 32,000 entities at 60 FPS, including steering behaviours (local avoidance), collision, NavMesh pathfinding, and a custom physics-based character controller.

    I haven't tested the partitioning system in isolation, but it's almost certainly the fastest and most optimized for lots of dynamic entities.

    Both examples execute every frame, and could easily be optimized to run every n frames, with minimal loss of fidelity.
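
    For reference, the core of a spatial hash like this is just quantizing each position into a grid cell and using the packed cell coordinate as the multi-hash-map key. A minimal sketch of the idea (my own illustration, not LudiKha's actual code; the cell size and the int key packing are assumptions):

    Code (CSharp):

        // Quantize a world position into a grid cell and pack it into an int key.
        static int HashCell(float3 position, float cellSize)
        {
            int2 cell = (int2)math.floor(position.xz / cellSize);
            // Multiplying by large primes before XOR reduces key collisions.
            return (cell.x * 73856093) ^ (cell.y * 19349663);
        }

        // Build phase: each entity adds itself under its cell's key, e.g. from a
        // parallel job via a NativeMultiHashMap<int, Entity>.ParallelWriter:
        //     hashMap.Add(HashCell(translation.Value, cellSize), entity);
        // Query phase: neighbours are gathered by iterating the keys of the 3x3
        // cells around a position with TryGetFirstValue / TryGetNextValue.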



    Another example https://twitter.com/LudiKha/status/1230146055158849539
     
    Last edited: Apr 24, 2020
    DrBoum, mikaelK, lclemens and 4 others like this.
  2. MintTree117

    MintTree117

    Joined:
    Dec 2, 2018
    Posts:
    340
    Very nice job!
     
    LudiKha likes this.
  3. bobbaluba

    bobbaluba

    Joined:
    Feb 27, 2013
    Posts:
    81
    @Ragoo, would you be willing to share the source (or parts of it)?
     
  4. sngdan

    sngdan

    Joined:
    Feb 7, 2014
    Posts:
    1,131
    We had this discussion more than a year back, and I do not know what has changed since then. At the time, rendering was a bottleneck (I used a non-optimized custom system to allow per-instance color for debugging).

    I think I never stress tested the NMHM and buffer standalone, but I guess the reasons why I preferred buffers still hold:
    - Fast clear
    - Access to keys/values as a native array
    - Single-threaded write is not bad for performance (unless you test in an isolated environment, where no other independent systems run in parallel)
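
    A buffer-based grid along those lines might look like this (a sketch under assumed names, not sngdan's code): one cell entity per grid cell, each holding a DynamicBuffer of occupants. Clearing is a cheap length reset, and reads go through a native array view:

    Code (CSharp):

        // One element per entity registered in a cell.
        public struct CellEntry : IBufferElementData
        {
            public Entity Value;
        }

        // Clear phase: resetting a buffer's length does no per-element work.
        //     buffer.Clear();
        // Write phase (single-threaded, as noted in the list above):
        //     EntityManager.GetBuffer<CellEntry>(cellEntity)
        //         .Add(new CellEntry { Value = e });
        // Read phase: iterate the occupants as a NativeArray.
        //     NativeArray<CellEntry> occupants = buffer.AsNativeArray();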
     
  5. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,626
    I should point out that NMHM is significantly faster at clearing now than it was a year ago, and isn't as much of a bottleneck most of the time.

    It now uses memset for the two big internal arrays instead of iterating over them and setting each element to -1.

    Code (CSharp):

        internal static unsafe void Clear(UnsafeHashMapData* data)
        {
            UnsafeUtility.MemSet(data->buckets, 0xff, (data->bucketCapacityMask + 1) * 4);
            UnsafeUtility.MemSet(data->next, 0xff, data->keyCapacity * 4);

            for (int tls = 0; tls < JobsUtility.MaxJobThreadCount; ++tls)
            {
                data->firstFreeTLS[tls * UnsafeHashMapData.IntsPerCacheLine] = -1;
            }

            data->allocatedIndexLength = 0;
        }
    If you used it a year ago, it did something along the lines of

    Code (CSharp):

        public static unsafe void Clear(NativeHashMapData* data)
        {
            int* buckets = (int*)data->buckets;
            for (int i = 0; i <= data->bucketCapacityMask; ++i)
                buckets[i] = -1;

            int* nextPtrs = (int*)data->next;
            for (int i = 0; i < data->keyCapacity; ++i)
                nextPtrs[i] = -1;

            for (int tls = 0; tls < JobsUtility.MaxJobThreadCount; ++tls)
                data->firstFreeTLS[tls * NativeHashMapData.IntsPerCacheLine] = -1;

            data->allocatedIndexLength = 0;
        }
    which is why it was terribly slow.

    I'm not certain in which version it was updated, but I suggested a change like that on Aug 23rd, so it was sometime after that (not claiming any credit). If you haven't benchmarked since then, it's probably worth comparing again.
     
    Last edited: Apr 25, 2020
    lclemens, Kmsxkuse, sngdan and 3 others like this.
  6. sngdan

    sngdan

    Joined:
    Feb 7, 2014
    Posts:
    1,131
    @tertle thanks for pointing that out - good to know. I have not really been active other than following the forums.

    I meant to link the thread where we discussed it but forgot (and I'm on mobile now). You were part of the discussion and had developed an optimized version - happy they picked it up!
     
  7. MintTree117

    MintTree117

    Joined:
    Dec 2, 2018
    Posts:
    340
    When pushing near 100k units, the array approach begins to outperform the hashmap, unless you have something like 8 CPU cores.
     
  8. mikaelK

    mikaelK

    Joined:
    Oct 2, 2013
    Posts:
    281
    For me, shared component data (SCD) is really bad. If you use SCD to move entities from one chunk to another, it pretty much killed performance for me completely. If I have tons of individual entities and they are moving, the SCD changes end up taking 85% of all frame time. For me the system ended up completely unusable.
     
    Last edited: Jul 13, 2021
  9. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    714
    This is probably a stupid question... but I'm just getting into this. All of the spatial partitioning systems I've seen so far operate on a 2D plane, but my game has a lot of mountains. I noticed that LudiKha's demo has some small hills. Is using a spatial partitioning cube instead of a plane something that people do, or is a plane good enough?
     
  10. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,983
    It doesn't make sense to go 3D unless you have the possibility for multiple entities that differ in that third axis only. Multiple floors in a building or multiple altitudes of interacting projectiles are two examples I can think of where this makes sense.
     
    lclemens likes this.
  11. DV_Gen

    DV_Gen

    Joined:
    May 22, 2021
    Posts:
    14
    3D partitioning is going to be more useful when things are stacked on the third axis. So no, not for entities just standing on the surface of a hill. But if there are also lots of caves in the hill, trees to climb, and possibilities for entities to be at different heights, then sure, 3D partitioning might be useful. My system runs in 3D for now just because I wanted that detail worked out, and I'm not sure of my final goal. But it's a very simple switch back to 2D if I'm not stacking much in 3D space.
     
    lclemens likes this.
  12. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,574
    Most games operate on a 2D plane, even though the games themselves are 3D, with hills and mountains.

    Imagine a projection from the top onto the map: it is simply 2D. So yes, unless you need some form of stacking, or perhaps a city with multi-storey buildings you can walk into, you don't need 3D spatial mapping. In most cases, and for simplicity, you can probably fake it using 2D spatial mapping.

    If you have a space shooter, however, or a game like Minecraft, you will need 3D spatial mapping. The same goes for some FPS games. Just examples.
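
    Concretely, the only difference between the two variants is whether the vertical axis participates in the cell coordinate. A small illustration of that (my own sketch; the cell size and method names are assumptions):

    Code (CSharp):

        // 2D: project onto the ground plane; height is ignored, so entities on a
        // hill still fall into the same flat grid.
        static int2 Cell2D(float3 p, float cellSize)
            => (int2)math.floor(p.xz / cellSize);

        // 3D: stackable spaces (multi-storey buildings, space shooters,
        // Minecraft-like worlds) also quantize the vertical axis.
        static int3 Cell3D(float3 p, float cellSize)
            => (int3)math.floor(p / cellSize);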
     
    lclemens likes this.
  13. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    714
    DV_Gen likes this.