How to efficiently check distances of neighbors?

Discussion in 'Entity Component System' started by kiaat, May 29, 2021.

  1. kiaat (Joined: Apr 11, 2018 · Posts: 7)
    Here are my version numbers:
    - Unity 2020.3.7f1
    - Entities 0.17.0-preview.41

    What my program should do:
    For every particle (spheres in 3D), find its neighbors within a given radius and calculate the differences.

    My naive code:
    for every particle i
        for every particle j
            d = length(i.pos - j.pos)
            if d <= radius
                # do funny things
            end if
        end for
    end for

    Of course, that's horrible code, but it gets the job done. Usually, in object-oriented programming, I would try to use a tree data structure or some kind of spatial mapping. But after converting this to DOTS, I realized that my tools for spatial mapping are quite constrained (or maybe I just don't know the tools I have yet). DOTS does a beautiful job, btw. Using ECS + Jobs + Burst increased the fps enormously, despite the O(n^2).
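    For context, the jobified brute force looks roughly like this (a sketch with illustrative names, not my exact code):

    Code (CSharp):
    ```csharp
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using Unity.Mathematics;

    // Sketch: the O(n^2) brute force as a Burst job, one thread per particle i.
    [BurstCompile]
    public struct NaiveNeighborJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<float3> Positions;
        public float Radius;

        public void Execute(int i)
        {
            for (int j = 0; j < Positions.Length; j++)
            {
                if (i == j) continue;
                // Compare squared distances to avoid the sqrt.
                if (math.distancesq(Positions[i], Positions[j]) <= Radius * Radius)
                {
                    // do funny things
                }
            }
        }
    }
    ```
    Burst and parallel scheduling make this fast, but it is still n^2 distance checks per frame.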


    Here is my question: how can I efficiently get the neighbors that are within my radius? I looked into Unity's Boids example and noticed that there is some hash mapping going on. I also watched the YouTube presentation, but I can't see how it's done, because I started with DOTS just one week ago.

    In short, my goal is to minimize the number of checks. If I could run through all n particles just once and get all in-radius neighbors m for each one, that would be amazing. But reducing the search to O(n log n) would be enough for me, too.

    Edit (29 May 2021): The search domain has no boundaries. So I tried the uniform grid approach, but every particle that falls out of the grid is lost to the funny things.
     
    Last edited: May 29, 2021
  2. mikaelK (Joined: Oct 2, 2013 · Posts: 281)
    Here I get targets and then just run a query on the explosions to take down targets based on radius. It pretty much does the same thing as your code:

    Code (CSharp):
    var targetMarkedForDestructionQuery = CreateTargetMarkedForDestructionQuery();
    var targetsToDestroy = targetMarkedForDestructionQuery.ToComponentDataArray<Target>(Allocator.TempJob);
    int count = targetsToDestroy.Length;
    Entities.WithReadOnly(targetsToDestroy)
        .ForEach((Entity entity, int entityInQueryIndex,
            ref Explosion explosionComponent,
            in MarkedForDestruction markedForDestruction) =>
        {
            if (count > 0 && markedForDestruction.TakeTargetDownRadius.x > 0)
            {
                for (int i = 0; i < count; i++)
                {
                    // Compare unsquare-rooted distances against the radius.
                    if (GeometryVisionUtilities.Float3DistanceUnSquared(targetsToDestroy[i].position,
                            GetComponent<Translation>(entity).Value).x <
                        markedForDestruction.TakeTargetDownRadius.x)
                    {
                        // Do funny stuff
                        if (HasComponent<MarkedForDestruction>(targetsToDestroy[i].entity) == false)
                        {
                            commandBuffer.AddComponent<MarkedForDestruction>(entityInQueryIndex,
                                targetsToDestroy[i].entity);
                        }

                        commandBuffer.SetComponent(entityInQueryIndex, targetsToDestroy[i].entity,
                            new MarkedForDestruction
                            {
                                AmountOfLifeTimeLeft = 0.05f
                            });
                    }
                }
            }
        }).WithBurst().ScheduleParallel(this.Dependency).Complete();
    targetsToDestroy.Dispose();
     
    Last edited: May 29, 2021
  3. kiaat (Joined: Apr 11, 2018 · Posts: 7)
    Thanks for your input, but if I see correctly, your code also has O(n^2) complexity. I don't want to implement the pseudo-code I mentioned in my first post, because I already implemented it. What I'm looking for is an optimized way to do it.
     
  4. craftsmanbeck (Joined: Nov 20, 2010 · Posts: 40)
    A k-d tree is one of the accepted ways of doing this. There are a few resources with Unity code you can find, and I think the Heretic demo might have a Burst-compatible one. The general idea is to sort all particles by their x, y and z positions and search from there. Imagine I asked you to find a number between 1 and 100: you'd guess 50, I'd say higher; you'd guess 75, I'd say lower; and so forth, without having to go through every possibility. Same deal, but in higher dimensions.
    With small data sets of around 100 or fewer, though, brute force may be preferable, as setting up the tree takes some time.
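    That guessing game is just binary search; here is a toy sketch of the idea (a k-d tree does the same thing, but alternates the splitting axis at each level):

    Code (CSharp):
    ```csharp
    // Toy binary search: find 'target' in 1..100 in at most ~7 guesses
    // instead of checking all 100 possibilities.
    static int CountGuesses(int target)
    {
        int low = 1, high = 100, guesses = 0;
        while (low <= high)
        {
            guesses++;
            int mid = (low + high) / 2;
            if (mid == target) return guesses;
            if (mid < target) low = mid + 1;  // "higher"
            else high = mid - 1;              // "lower"
        }
        return guesses;
    }
    ```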
     
  5. kiaat (Joined: Apr 11, 2018 · Posts: 7)
    A k-d tree was something I considered, but I stopped thinking about it, because I can't ... well, create a tree within the job system, can I? Wouldn't that need a large number of arrays? But before even thinking about building the tree, I came across the problem that I can't write to NativeArrays in a parallelized job. Or am I missing a trick or implementation scheme for using NativeArrays in a parallel job?
     
  6. DV_Gen (Joined: May 22, 2021 · Posts: 14)
    For a first step, if you have colliders, the point distance query is actually really effective and easy to implement.
    https://docs.unity3d.com/Packages/com.unity.physics@0.0/manual/collision_queries.html

    If you want something better that doesn't require colliders, a spatial partitioning system would be very helpful. This video could get you started in that direction. It is at least more direct than the boids sample, though you would want to make optimizations and use-case adjustments later:
     
    Last edited: May 29, 2021
    kiaat likes this.
  7. MintTree117 (Joined: Dec 2, 2018 · Posts: 340)
    You can write to NativeArrays in parallel if you add the [NativeDisableParallelForRestriction] attribute to them in the job definition. This applies to all native containers.

    You can build trees in jobs; the Unity Physics BVH does this, and I think it's even multithreaded.

    The easiest approach imo is to use a MultiHashMap. Every frame, loop over all entities, get each one's implicit "grid position", use that as a key into the map, and insert the entity (or its index) as the value. Then loop over all entities again, check their cell and any cells they overlap, and compare collisions.
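    A minimal sketch of that "implicit grid position" key (the names SpatialHash and cellSize are illustrative, not from any Unity sample):

    Code (CSharp):
    ```csharp
    using Unity.Mathematics;

    public static class SpatialHash
    {
        // Quantize a world position to an integer grid cell, then hash the cell.
        // With cellSize equal to the search radius, all neighbours of a particle
        // lie in its own cell plus the 26 surrounding cells.
        public static int GetCellKey(float3 position, float cellSize)
        {
            int3 cell = (int3)math.floor(position / cellSize);
            return (int)math.hash(cell);
        }
    }
    ```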

    As an added performance boost, pass a NativeBitArray into the job, with one bit for each entity. When you check an entity for neighbours, set its bit to true so no other entity has to check against it.

    To queue collisions for resolving, I use a NativeQueue.ParallelWriter.

    I do something similar and easily get 100k collisions in under 16 ms, with no other systems running, on a 4-core laptop.

    Another forum user by the name of @DreamingImLatios made his own physics framework public, and used Sweep and Prune for collision detection. It's very well done.
     
    kiaat and DreamingImLatios like this.
  8. craftsmanbeck (Joined: Nov 20, 2010 · Posts: 40)
    Definitely take a look at the Heretic kd tree. They use unsafe code, but it's jobified and burst-compiled.

    Edit: It looks like only some parts are burst/job-compatible
     
    Last edited: Sep 22, 2022
  9. DreamingImLatios (Joined: Jun 3, 2017 · Posts: 3,983)
    What you are looking for is a broadphase algorithm. Fixed grids are one such algorithm. Tree structures are another. Unity.Physics has a BVH implementation you may be able to use. As @martygrof3708 mentioned, I also wrote a multibox single-axis pruner which you can find documentation on how to use here: https://github.com/Dreaming381/Lati...ntation~/Psyshock Physics/Collision Layers.md
    The multibox is "soft" so even if you have it not quite correct, as long as it is a valid grid, it will still give correct results, just at reduced performance. If you want to give it a shot but struggle to integrate it, feel free to reach out to me!
     
    kiaat and craftsmanbeck like this.
  10. snacktime (Joined: Apr 15, 2013 · Posts: 3,356)
    Here is a core hashing implementation. It does some things to avoid division, which I'm not sure is much of a gain, but that's why there are offsets to keep everything in positive ranges. The hashing itself performs better than anything you will find in Unity's demos.

    https://gist.github.com/gamemachine/71cd0d16f7503f3971d8d78faf3a8d1f

    We have an extension method for CellsWithinBounds on the client that takes a NativeArray, as we use this outside of Unity also.

    Now that just gives you cell ids and iteration over the cell you are on plus its neighbors. NativeMultiHashMap is the simplest container for the objects. Normally you then rebuild the entire thing every frame; clear the container or just reallocate, depending on which performs best.

    NMHM is a starting point, context can provide better containers.

    Spatial hashing is very efficient for some scenarios but how you access the data is very important. Like you can copy all the data in the cells to a temporary container which is bad. Or return references to the cells which is better. Or better yet return a wrapper over all 9 cells with a custom iterator to make it look like a single container.
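    For illustration, iterating the cell you are on plus its neighbors could look something like this (a sketch; the cell-key scheme and the names map, cellSize and radius are assumptions, not taken from the gist):

    Code (CSharp):
    ```csharp
    // Sketch: visit every stored position in the 27 cells around 'position'.
    // Assumes a NativeMultiHashMap<int, float3> 'map' keyed by hashed cell coordinates.
    int3 baseCell = (int3)math.floor(position / cellSize);
    for (int x = -1; x <= 1; x++)
    for (int y = -1; y <= 1; y++)
    for (int z = -1; z <= 1; z++)
    {
        int key = (int)math.hash(baseCell + new int3(x, y, z));
        if (map.TryGetFirstValue(key, out float3 neighborPos, out var it))
        {
            do
            {
                // Cells are approximate, so filter by the actual distance.
                if (math.distancesq(position, neighborPos) <= radius * radius)
                {
                    // do funny things
                }
            } while (map.TryGetNextValue(out neighborPos, ref it));
        }
    }
    ```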

    A well designed spatial hash should win here every time against a physics query. The physics BVH is just not optimized for points; a custom BVH designed for points would win, but only significantly so at larger scale. The factor here is that, whether it's a tree or a hash, multiple points are stored per node/cell, so it's inherently approximate: the node/cell can contain points outside the specific range you want. So at really large scale, a tree capable of more granular node sizes reduces the number of distance checks needed to filter out objects beyond that exact range. But it takes some scale for that to be significant.
     
    kiaat likes this.
  11. kiaat (Joined: Apr 11, 2018 · Posts: 7)
    I'll describe my use case here, because it seems the more constraints/non-constraints I can give, the more clearly a solution can emerge, for me at least. With the answers so far, a NativeMultiHashMap seems fitting for me.

    Some might have smelled it from afar already, but I want to simulate a fluid system (the method is SPH, for the curious people). That also means that I only "fill" and "iterate" and then throw the list to the dumpster, because in the next frame neighbors might change, since my fluid particles are moving. If my dream of a spatial-bucket hash map in DOTS came true, I would implement it like this:
    Code (CSharp):
    // Worst case: every particle's position lands in its own "bucket" cell
    HashMap hm = new HashMap(size = number of particles)
    for every particle i
        hashNumber hNum = hash(Floor(i.position / radius))
        hm.Add(hNum, i.position)
    end for
    That would solve my problem. With that, I could just iterate through all particles plus their neighbors. If no neighbors were found, the hash doesn't give anything back. If another position lands on the same hash, then the map appends it behind the first one (like the Boids demo of Unity, no?). Also, I'm not restricted by a fixed grid, which I want, because water can flow out of a fixed grid. But what I also discovered is that the NativeMultiHashMap doesn't append similar keys ...?

    I didn't know I can use the Physics-API itself to help me. Thanks for the hint!

    The MultiHashMap approach is something I really want to do, because it fits my use case. But the parallel restriction still comes up, or I'm doing something wrong. When I declare/initialize a NativeMultiHashMap outside the Entities.ForEach and then add a key-value pair inside the ForEach, I still get the error message: "Entities.ForEach Lambda expression uses field 'hashMap' in a reference type. Either assign the field to a local outside of the lambda expression and use that instead, or use .WithoutBurst() and .Run()".

    I'll definitely keep that in mind, if I opt-in your approach. Thanks!

    Have you ever implemented an NMHM? I would be interested in that. And thanks for your code example; it does clear up my understanding of hashing.
     
  12. DV_Gen (Joined: May 22, 2021 · Posts: 14)
    My NativeMultiHashMap is somewhat similar to what you see in the video I posted. I'm starting to make a lot of changes though as I narrow in on my use case. I haven't tried to do it in parallel yet. It is already very fast.
     
  13. MintTree117 (Joined: Dec 2, 2018 · Posts: 340)
    You need to pass a NativeMultiHashMap.ParallelWriter into the job. You get this by calling map.AsParallelWriter(), and pass that instance into the job.
     
  14. kiaat (Joined: Apr 11, 2018 · Posts: 7)
    I get: "has not been assigned or constructed. All containers must be valid when scheduling a job."

    Welp, seems that DOTS doesn't want what I do. Still, thank you for your input. I'll probably try the Use-Physics-RayCast-Route.
     
  15. DV_Gen (Joined: May 22, 2021 · Posts: 14)
    If you want to try unity dots physics, do the point distance query instead of ray casting. It will be worlds faster for what you are doing without being any more complex. You can take another pass at some form of spatial partitioning later. At some point, it will very obviously beat point distance queries, but the distance query should be good enough to get things running rather smoothly.
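    For what it's worth, a hedged sketch of such a point distance query (assuming you already have a CollisionWorld from the physics world; exact types depend on your Unity Physics version):

    Code (CSharp):
    ```csharp
    using Unity.Collections;
    using Unity.Mathematics;
    using Unity.Physics;

    // Sketch: collect everything with a collider within 'radius' of 'position'.
    public static void CollectNeighbors(in CollisionWorld collisionWorld,
        float3 position, float radius, ref NativeList<DistanceHit> hits)
    {
        var input = new PointDistanceInput
        {
            Position = position,
            MaxDistance = radius,
            Filter = CollisionFilter.Default
        };
        // Fills 'hits' with one DistanceHit (body index, position, distance) per neighbor.
        collisionWorld.CalculateDistance(input, ref hits);
    }
    ```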
     
    kiaat likes this.
  16. MintTree117 (Joined: Dec 2, 2018 · Posts: 340)
    Well I know it works because I do it. Post code so we can debug. You must not be instantiating it.
     
    DV_Gen likes this.
  17. kiaat (Joined: Apr 11, 2018 · Posts: 7)
    Code (CSharp):
    public class TestSystem : SystemBase
    {
        [NativeDisableParallelForRestriction]
        NativeMultiHashMap<int, int> hashMap;

        protected override void OnUpdate()
        {
            var hashy = hashMap.AsParallelWriter();
            Entities.ForEach((ref MapTag tag, in Entity entity, in Translation translation) =>
            {
                hashy.Add(10, 20); // Test
            }).ScheduleParallel();
            hashMap.Dispose();
        }
    }
     
  18. tertle (Joined: Jan 25, 2011 · Posts: 3,626)
    Well yeah, that error is telling you exactly what you did wrong: you've never created the hashmap.

    Also, that attribute does nothing inside a system.
     
  19. kiaat (Joined: Apr 11, 2018 · Posts: 7)
    I'm getting different signals.
     
  20. tertle (Joined: Jan 25, 2011 · Posts: 3,626)
    Both those statements mean the same thing.
     
    MintTree117 likes this.
  21. Fribur (Joined: Jan 5, 2019 · Posts: 127)
    Code (CSharp):
    public class TestSystem : SystemBase
    {
        NativeMultiHashMap<int, int> hashMap = new NativeMultiHashMap<int, int>(1000, Allocator.Persistent);

        protected override void OnUpdate()
        {
            var hashy = hashMap.AsParallelWriter();
            Entities.ForEach((ref MapTag tag, in Entity entity, in Translation translation) =>
            {
                hashy.Add(10, 20); // Test
            }).ScheduleParallel();
            hashMap.Dispose();
        }
    }
    For parallel use, it is important to make the hashmap bigger than the worst-case scenario.
     
    varnon likes this.
  22. mikaelK (Joined: Oct 2, 2013 · Posts: 281)
    That's quite a common error when creating a job and forgetting to give it an allocated container.
     
  23. MintTree117 (Joined: Dec 2, 2018 · Posts: 340)
    You are not instantiating the map, you are only declaring it, so when you try to use the parallel writer it is trying to access an object which does not exist.

    Also, you do not need [NativeDisableParallelForRestriction] if you are using a ParallelWriter. You mostly use that for arrays, or single variables which can be written to from multiple threads. Most containers other than arrays have a ParallelWriter version.

    You must do:

    Code (CSharp):
    NativeMultiHashMap<int, int> map;
    map = new NativeMultiHashMap<int, int>(capacity, Allocator.Persistent);
    Then every frame:
    Code (CSharp):
    var writer = map.AsParallelWriter();
    Pass writer into job.

    Then on application or system close you must call map.Dispose(). If you dispose it every frame, you must reinstantiate it every frame; instead, call .Clear(). You do not need to dispose the ParallelWriter, it is a struct instance.

    Finally, when you create the hashmap, I suggest making it at least twice as large as the max number of entities, because otherwise it gets crowded and you get hash collisions, which kills performance.
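    Putting those steps together, a minimal lifecycle sketch (the capacity, key/value types and the Translation query are illustrative):

    Code (CSharp):
    ```csharp
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Transforms;

    public class MapSystem : SystemBase
    {
        NativeMultiHashMap<int, int> map;

        protected override void OnCreate()
        {
            // Oversized capacity reduces hash collisions under parallel writes.
            map = new NativeMultiHashMap<int, int>(1000, Allocator.Persistent);
        }

        protected override void OnUpdate()
        {
            map.Clear(); // reuse the allocation every frame instead of disposing
            var writer = map.AsParallelWriter();
            Entities.ForEach((Entity entity, in Translation translation) =>
            {
                writer.Add(10, 20); // test key/value
            }).ScheduleParallel();
        }

        protected override void OnDestroy()
        {
            map.Dispose();
        }
    }
    ```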
     
    Last edited: May 30, 2021
    kiaat likes this.
  24. mikaelK (Joined: Oct 2, 2013 · Posts: 281)
    I was googling for the Heretic k-d tree out of curiosity and came across this. Don't know if it helps.
     
    bb8_1 likes this.