Question Parallel reading and writing to a hashmap

Discussion in 'Entity Component System' started by Zundrium, Jan 25, 2022.

  1. Zundrium

    Zundrium

    Joined:
    Aug 12, 2013
    Posts:
    5
    Here's a little test for index based counting without parallel scheduling:

    Code (CSharp):

    using Unity.Collections;
    using Unity.Entities;

    public class CounterSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            NativeHashMap<uint, int> counters = new NativeHashMap<uint, int>(0, Allocator.Persistent);

            Entities
                .WithDisposeOnCompletion(counters)
                .ForEach((in CellIndexData cellIndex) =>
                {
                    // The map is keyed by uint, so the component can't be used as the key
                    // directly; assuming CellIndexData exposes its index as .Value.
                    if (!counters.ContainsKey(cellIndex.Value))
                    {
                        counters.Add(cellIndex.Value, 1);
                    }
                    else
                    {
                        int counter = counters[cellIndex.Value];
                        counter++;
                        counters[cellIndex.Value] = counter;
                    }
                })
                .Schedule();
        }
    }
    It reads to check if the index was already added; if not, it adds the counter, otherwise it increments it. So in this situation I need to both read and write. I've seen something about NativeStream, but that does not support indexing. Is there any way to make this `.ScheduleParallel()` compatible?
     
  2. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    You can't make the adding and checking parallel, at least not if you need to see entries that other threads have just added to the NativeHashMap.
     
  3. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,264
    There are two techniques for this, and they depend on which is smaller: the range of keys or number of elements.

    If your range of keys is small, then you simply allocate an array sized to that range per thread, clear all elements to zero, and increment the matching index as you iterate through the entities. Then you sum the arrays in a separate job.
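
    For the first approach, roughly something like this (the CellCount bound, the .Value field on CellIndexData, and the use of the nativeThreadIndex lambda parameter are my own placeholders, so adjust them to your data):

    Code (CSharp):

    using Unity.Collections;
    using Unity.Entities;
    using Unity.Jobs.LowLevel.Unsafe;

    public class CellCounterPerThreadArraySystem : SystemBase
    {
        const int CellCount = 256; // assumed upper bound on cell indices

        protected override void OnUpdate()
        {
            int threadCount = JobsUtility.MaxJobThreadCount;

            // One slice of CellCount counters per potential worker thread,
            // cleared to zero on allocation.
            var perThread = new NativeArray<int>(CellCount * threadCount,
                Allocator.TempJob, NativeArrayOptions.ClearMemory);
            var totals = new NativeArray<int>(CellCount,
                Allocator.TempJob, NativeArrayOptions.ClearMemory);

            Entities
                .WithNativeDisableParallelForRestriction(perThread)
                .ForEach((int nativeThreadIndex, in CellIndexData cellIndex) =>
                {
                    // Each thread only touches its own slice, so no write collisions.
                    perThread[nativeThreadIndex * CellCount + (int)cellIndex.Value]++;
                })
                .ScheduleParallel();

            // Separate single-threaded job that sums the per-thread slices.
            Job
                .WithCode(() =>
                {
                    for (int t = 0; t < threadCount; t++)
                    {
                        for (int i = 0; i < CellCount; i++)
                        {
                            totals[i] += perThread[t * CellCount + i];
                        }
                    }
                })
                .Schedule();

            // A real system would hand 'totals' off to whoever needs it before this point.
            Dependency = perThread.Dispose(Dependency);
            Dependency = totals.Dispose(Dependency);
        }
    }

    Since no two threads ever write to the same element, the first job is safe to ScheduleParallel without any atomics.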

    If you have very dynamic keys, then you make a hashmap per thread (using UnsafeHashMap). Then, in a second single-threaded job, you merge those hashmaps together. If that's too slow, there are additional optimizations, but they really depend on your data counts and spreads, so I won't detail them now.
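
    And roughly what the per-thread hashmap version could look like (again, the names and the .Value field are placeholders; the per-thread maps can live inside a NativeArray because UnsafeHashMap is not a [NativeContainer], which is why the safety system tolerates nesting it):

    Code (CSharp):

    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Entities;
    using Unity.Jobs.LowLevel.Unsafe;

    public class CellCounterPerThreadMapSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            int threadCount = JobsUtility.MaxJobThreadCount;

            // One map per potential worker thread; only the thread that owns
            // slot i ever writes into maps[i].
            var maps = new NativeArray<UnsafeHashMap<uint, int>>(threadCount, Allocator.TempJob);
            for (int i = 0; i < threadCount; i++)
            {
                maps[i] = new UnsafeHashMap<uint, int>(64, Allocator.TempJob);
            }

            var merged = new NativeHashMap<uint, int>(64, Allocator.TempJob);

            Entities
                .WithNativeDisableParallelForRestriction(maps)
                .ForEach((int nativeThreadIndex, in CellIndexData cellIndex) =>
                {
                    // The struct read from the array is a copy, but it points at the
                    // same allocation, so the count lands in the shared per-thread map.
                    var map = maps[nativeThreadIndex];
                    map.TryGetValue(cellIndex.Value, out int count);
                    map[cellIndex.Value] = count + 1;
                })
                .ScheduleParallel();

            // Second, single-threaded job that merges the per-thread maps.
            Job
                .WithCode(() =>
                {
                    for (int t = 0; t < threadCount; t++)
                    {
                        var map = maps[t];
                        // Temp allocations inside a job are freed automatically when it ends.
                        var keys = map.GetKeyArray(Allocator.Temp);
                        for (int k = 0; k < keys.Length; k++)
                        {
                            merged.TryGetValue(keys[k], out int total);
                            merged[keys[k]] = total + map[keys[k]];
                        }
                        map.Dispose();
                    }
                })
                .Schedule();

            // A real system would consume 'merged' before disposing it.
            Dependency = maps.Dispose(Dependency);
            Dependency = merged.Dispose(Dependency);
        }
    }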
     
    Krajca and apkdev like this.