
Best way to add elements to the same collection in parallel jobs

Discussion in 'Entity Component System' started by mxmcharbonneau, Aug 23, 2019.

  1. mxmcharbonneau

    mxmcharbonneau

    Joined:
    Jul 27, 2012
    Posts:
    10
    So, a simple, general question that I can't find an exact answer to. Let's say I need to loop over entities in a parallel job (or maybe over a NativeArray in an IJobParallelFor), and I need to add some elements to a collection that will eventually end up in a NativeList or NativeArray. The order is not important.

    Code (CSharp):
    [BurstCompile]
    private struct PopulateCollectionJob : IJobForEachWithEntity<Component>
    {
        public NativeCollection<Whatever> Collection;

        public void Execute(Entity entity, int index, ref Component component)
        {
            if (shouldAddToCollection)
            {
                // Add something to collection
            }
        }
    }
    For now, I use a NativeQueue, since it's the only thing I know I can write to in parallel. Then I have to dequeue everything into an array. I feel like that's probably not the most efficient way, since every thread reads and writes to the same memory.
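
    For reference, the pattern described above might look roughly like this (Whatever and SomeFlag are placeholders for my actual types; recent Unity.Collections versions get the concurrent writer via AsParallelWriter(), older ones used ToConcurrent()):

    Code (CSharp):
    [BurstCompile]
    private struct PopulateQueueJob : IJobForEachWithEntity<Component>
    {
        public NativeQueue<Whatever>.ParallelWriter Queue;

        public void Execute(Entity entity, int index, ref Component component)
        {
            if (component.SomeFlag) // stand-in for the filtering condition
            {
                Queue.Enqueue(new Whatever());
            }
        }
    }

    // After the job completes, drain the queue into an array on one thread:
    var results = new NativeArray<Whatever>(queue.Count, Allocator.Temp);
    int i = 0;
    while (queue.TryDequeue(out Whatever item))
    {
        results[i++] = item;
    }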

    I could probably schedule a bunch of jobs, each with its own list, and then take all those lists and assemble them into one big array.

    Maybe I could write into a NativeMultiHashMap with a key per thread, and then assemble those into one big array. I'm not even sure that would be possible.

    Thing is, I'm not sure either of those would be more efficient than my NativeQueue solution. Maybe there's another solution I'm missing?
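
    The per-job lists idea could be sketched like this (the element type and filter condition are placeholders; each NativeList is written by only one single-threaded job, so no concurrent writer is needed):

    Code (CSharp):
    [BurstCompile]
    private struct FilterRangeJob : IJob
    {
        [ReadOnly] public NativeSlice<float> Input;
        public NativeList<float> Output; // written from this job only

        public void Execute()
        {
            for (int i = 0; i < Input.Length; i++)
            {
                if (Input[i] > 0f) // stand-in filter
                {
                    Output.Add(Input[i]);
                }
            }
        }
    }

    // Schedule one job per slice of the input, combine the handles with
    // JobHandle.CombineDependencies, Complete(), then sum the list lengths,
    // allocate one NativeArray, and copy each list into its own region of it.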
     
  2. Soaryn

    Soaryn

    Joined:
    Apr 17, 2015
    Posts:
    328
    NativeStream is also a rather new, interesting collection; however, I don't really have a good sample for you.
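
    A minimal sketch of the intended usage, as far as I understand it (the foreach count passed to the constructor and the float payload are assumptions; check the Unity.Collections docs for your version):

    Code (CSharp):
    var stream = new NativeStream(itemCount, Allocator.TempJob);

    [BurstCompile]
    struct WriteStreamJob : IJobParallelFor
    {
        public NativeStream.Writer Writer;

        public void Execute(int index)
        {
            Writer.BeginForEachIndex(index);
            Writer.Write(42f); // zero or more writes per index
            Writer.EndForEachIndex();
        }
    }

    // Afterwards, stream.ToNativeArray<float>(Allocator.Temp) flattens every
    // element into one array, or stream.AsReader() can iterate per index.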
     
  3. Razmot

    Razmot

    Joined:
    Apr 27, 2013
    Posts:
    346
    If you can segment your data, you can write in parallel to slices of the same array.

    In the job you need:
    [NativeDisableContainerSafetyRestriction] [WriteOnly] public NativeSlice<float> writeNoise

    And in Update you need to use:
    JobHandle.CombineDependencies

    Concrete example: I'm writing concurrently to 6 slices of an array of floats that represent the 6 faces of a cube-sphere planet.

    Code (CSharp):
    for (int i = 0; i < 6; ++i)
    {
        dynamicNoiseJobs[i] = new DynamicNoiseJob()
        {
            writeNoise = _noise.Slice((int)Faces[i] * TexSizeSQ, TexSizeSQ),
            Face = Faces[i],
            [...]
        };
        _noiseJobHandles[i] = dynamicNoiseJobs[i].Schedule(TexSizeSQ, _batchSize, inputDeps);
    }
    var combinedHandle = JobHandle.CombineDependencies(_noiseJobHandles);
     
  4. mxmcharbonneau

    mxmcharbonneau

    Joined:
    Jul 27, 2012
    Posts:
    10
    Soaryn, I tried to toy with NativeStream with something like the code below. It seems to work, but the ToNativeArray<>() call I must do afterwards is really heavy. The thing is, I don't know what I'm doing with the NativeStream's count value, or with the BeginForEachIndex and EndForEachIndex calls in the job, so I'm probably not understanding its purpose well.

    Code (CSharp):
    [BurstCompile]
    private struct PopulateCollectionJob : IJobForEachWithEntity<Component>
    {
        public NativeStream.Writer Stream;

        public void Execute(Entity entity, int index, ref Component component)
        {
            if (shouldAddToCollection)
            {
                Stream.BeginForEachIndex(index);
                Stream.Write(Interactors[index]);
                Stream.EndForEachIndex();
            }
        }
    }

    Razmot, I don't know how many values will be in there, since I'm filtering with an if in the job. I guess I could do something like your example but with lists instead of slices, then combine them in another job, like I wrote about in my post. I should probably try that, in fact; it might just be better.

    Also, I tried to work with IJobParallelForFilter, but it only populates a list of indices that I'll have to loop through to populate the actual NativeArray I want, and for some reason it isn't parallel at all; it all runs on a single thread.
     
  5. NanushTol

    NanushTol

    Joined:
    Jan 9, 2018
    Posts:
    131
    Just the other day I got help here and learned about dynamic buffers; I think that could help you too. Try reading about it in the documentation.

    It goes something like this:
    Code (CSharp):
    using UnityEngine;
    using Unity.Entities;
    using Unity.Mathematics;
    using Unity.Jobs;
    using Unity.Collections;
    using Unity.Burst;

    public class DynamicBufferExample : JobComponentSystem
    {
        BufferFromEntity<SomeDynamicBufferComponent> BufferComponentLookup;

        public struct PopulateHash : IJobForEachWithEntity<SomeComponent>
        {
            [NativeDisableParallelForRestriction]
            public NativeHashMap<int, float> SomeValue;

            public void Execute(Entity entity, int index, ref SomeComponent someComponent)
            {
                SomeValue.TryAdd(index, someComponent.Value);
            }
        }

        public struct TransferSomeContent : IJobForEachWithEntity<SomeComponent>
        {
            [NativeDisableParallelForRestriction]
            public BufferFromEntity<SomeDynamicBufferComponent> ReceivedBufferComponentLookup;

            [ReadOnly]
            public NativeHashMap<int, float> SomeValue;

            public void Execute(Entity entity, int index, ref SomeComponent someComponent)
            {
                var buffer = ReceivedBufferComponentLookup[entity];
                SomeDynamicBufferComponent someDynamic = new SomeDynamicBufferComponent();
                someDynamic.Value = SomeValue[index];
                buffer.Add(someDynamic);
            }
        }

        public struct PullContent : IJobForEachWithEntity<SomeComponent>
        {
            [ReadOnly]
            public BufferFromEntity<SomeDynamicBufferComponent> ReceivedBufferComponentLookup;

            public void Execute(Entity entity, int index, ref SomeComponent someComponent)
            {
                DynamicBuffer<SomeDynamicBufferComponent> buffer = ReceivedBufferComponentLookup[entity];

                SomeDynamicBufferComponent content = buffer[index];
                someComponent.Value = content.Value;
                buffer.Clear();
            }
        }
    }
    This is not working code; I didn't test this one specifically.
     
  6. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,266
    If you just want to write data into a NativeArray similar to what EntityQuery.ToComponentDataArray does, then you can use EntityQuery.CalculateEntityCount to allocate a NativeArray of the right size, and then use the index in IJobForEachWithEntity to write to the array at that index.
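
    A sketch of that approach (Component, Value, and query are placeholders; [NativeDisableParallelForRestriction] is needed because the safety system can't prove each thread writes a distinct index, even though it does here):

    Code (CSharp):
    int count = query.CalculateEntityCount();
    var output = new NativeArray<float>(count, Allocator.TempJob);

    [BurstCompile]
    struct CopyToArrayJob : IJobForEachWithEntity<Component>
    {
        [NativeDisableParallelForRestriction]
        public NativeArray<float> Output;

        public void Execute(Entity entity, int index, ref Component component)
        {
            Output[index] = component.Value; // one unique slot per entity
        }
    }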
     
  7. mxmcharbonneau

    mxmcharbonneau

    Joined:
    Jul 27, 2012
    Posts:
    10
    No, it's not a simple array-copy type of thing. I'm filtering, so the length will be lower and the indices won't be the same.

    The DynamicBuffer method may be a better way than what I already have, I'd have to try it out to know for sure.
     
  8. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    Were you able to find a solution that met your needs? I am trying to do something similar, and appreciate you posting the topic.
     
  9. Razmot

    Razmot

    Joined:
    Apr 27, 2013
    Posts:
    346
    If you can estimate the maximum size of the parallel sets of data, then you can use the slicing technique I mentioned earlier. You will have an oversized array with oversized slices, so you will need to keep track of the real usage count.

    Same principle as "old school" procedural meshes writing into a fixed-size vertex buffer.
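
    The oversized-slice idea could be sketched like this (the filter and float payload are placeholders; each job gets its own output slice plus a one-element count slice, so nothing overlaps):

    Code (CSharp):
    [BurstCompile]
    struct FillSliceJob : IJob
    {
        [ReadOnly] public NativeSlice<float> Input;

        [NativeDisableContainerSafetyRestriction]
        [WriteOnly] public NativeSlice<float> Output; // sized for the worst case

        [NativeDisableContainerSafetyRestriction]
        public NativeSlice<int> Count; // one int per job

        public void Execute()
        {
            int n = 0;
            for (int i = 0; i < Input.Length; i++)
            {
                if (Input[i] > 0f) // stand-in filter
                {
                    Output[n++] = Input[i];
                }
            }
            Count[0] = n; // real usage count for this slice
        }
    }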