How to run IJobs in parallel that each hold a DynamicBuffer?

Discussion in 'Data Oriented Technology Stack' started by davenirline, May 15, 2019.

  1. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    415
    Say I have this entity setup:
    Code (CSharp):
    Entity entity = this.entityManager.CreateEntity();
    this.entityManager.AddComponentData(entity, new Request());
    this.entityManager.AddBuffer<IntBufferElement>(entity);
    For each of these entities, I want to run a single IJob:
    Code (CSharp):
    struct Job : IJob {
        public int index;
        public BufferAccessor<IntBufferElement> buffers;

        public void Execute() {
            DynamicBuffer<IntBufferElement> list = this.buffers[this.index];

            // Do stuff with list
        }
    }
    Here's my code so far (using chunk iteration):
    Code (CSharp):
    private JobHandle Process(ArchetypeChunk chunk, JobHandle inputDeps) {
        BufferAccessor<IntBufferElement> buffers = chunk.GetBufferAccessor(this.bufferType);
        JobHandle handle = inputDeps;

        for (int i = 0; i < chunk.Count; ++i) {
            Job job = new Job {
                index = i,
                buffers = buffers
            };

            handle = job.Schedule(handle);
        }

        return handle;
    }
    This doesn't even run the IJobs in parallel; they run one after the other. On top of that, I'm getting this error:

    InvalidOperationException: The previously scheduled job Job writes to the NativeArray Job.buffers. You must call JobHandle.Complete() on the job Job, before you can read from the NativeArray safely.

    I tried each of the NativeDisable* attributes, but they don't work. How would you do this?
     
  2. psuong

    psuong

    Joined:
    Jun 11, 2014
    Posts:
    32
    From looking at your code, was inputDeps.Complete() called before you scheduled your job? I haven't done much pure chunk iteration on my own, since the majority of my stuff can be handled through IJobForEach<>.

    As an alternative, how about an IJobForEachWithEntity instead?

    You can structure your job like so:

    Code (CSharp):
    struct Job : IJobForEachWithEntity<ComponentData> {
        public BufferFromEntity<IntBufferElement> buffers;

        public void Execute(Entity e, int index, ref ComponentData data) {
            var list = buffers[e];
            // Do stuff with list
        }
    }
    Then in your actual JobComponentSystem:

    Code (CSharp):
    protected override JobHandle OnUpdate(JobHandle inputDeps) {
        return new Job {
            buffers = GetBufferFromEntity<IntBufferElement>()
        }.Schedule(this, inputDeps);
    }
     
  3. Brendon_Smuts

    Brendon_Smuts

    Joined:
    Jun 12, 2017
    Posts:
    31
    The reason your jobs are not running in parallel is that each job you are scheduling is being declared with a dependency on the previous job. You're creating this dependency by passing the handle created by the previous job to the next. A job will only start performing its work once all handles up its chain are marked as complete.
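    For reference, breaking that chain would mean scheduling every job against the same inputDeps and combining the resulting handles, roughly like this (a sketch only; because every job still aliases the same BufferAccessor, the safety system will raise the same exception unless the buffers field is marked with [NativeDisableContainerSafetyRestriction]):

    Code (CSharp):
    NativeArray<JobHandle> handles = new NativeArray<JobHandle>(chunk.Count, Allocator.Temp);

    for (int i = 0; i < chunk.Count; ++i) {
        // Each job depends only on inputDeps, not on its predecessor
        handles[i] = new Job {
            index = i,
            buffers = buffers
        }.Schedule(inputDeps);
    }

    JobHandle combined = JobHandle.CombineDependencies(handles);
    handles.Dispose();
    return combined;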

    I'm not sure why you are getting that exception, as your job dependencies should prevent multiple writes from happening at the same time. My guess is this is happening somewhere else in your system, but it's difficult to tell from the example.

    Anyway, with all that being said, your approach to chunk iteration is a little off. Simple chunk iteration is best performed using the IJobChunk interface. A chunk-based job and schedule would look something like this:

    Code (CSharp):
    public sealed class ChunkIterationSystem : JobComponentSystem
    {
        private EntityQuery _chunkQuery;

        private ArchetypeChunkBufferType<IntBufferElement> _buffersTypeRW;

        protected override void OnCreate()
        {
            _chunkQuery = GetEntityQuery(new EntityQueryDesc
            {
                All = new ComponentType[] { typeof(IntBufferElement) }
            });
        }

        private void GatherTypes()
        {
            _buffersTypeRW = GetArchetypeChunkBufferType<IntBufferElement>(false);
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            GatherTypes();

            return new ChunkIterationJob
            {
                BuffersTypeRW = _buffersTypeRW
            }.Schedule(_chunkQuery, inputDeps);
        }

        private struct ChunkIterationJob : IJobChunk
        {
            public ArchetypeChunkBufferType<IntBufferElement> BuffersTypeRW;

            public void Execute(ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex)
            {
                var buffers = chunk.GetBufferAccessor(BuffersTypeRW);

                for (int i = 0; i < chunk.Count; i++)
                {
                    var buffer = buffers[i];
                    // Do stuff with list
                }
            }
        }
    }
    The job will automatically use one thread per chunk contained in the query and, because each thread is only working on the buffers within its own chunk, requires no NativeDisable* attributes to allow parallel access.
     
  4. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    415
    My example is simplified from the actual code. The actual job is more complex and needs more containers per run. I want to run each request in its own job so they will be distributed among the threads. There's no point running an IJobChunk, since the request entities probably won't span more than one chunk. The processing per request is heavy, so the single thread running the IJobChunk might run too long as each request is processed one after the other.

    I'm trying to use IJobParallelFor now since I've discovered that I can allocate native containers inside jobs.
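    For what it's worth, that direction would look roughly like this (a sketch; RequestData is a hypothetical struct holding per-request input copied into an array beforehand, and Allocator.Temp containers are legal inside a job as long as they are disposed before Execute returns):

    Code (CSharp):
    struct ProcessRequestsJob : IJobParallelFor {
        [ReadOnly]
        public NativeArray<RequestData> requests;

        public void Execute(int index) {
            // A scratch container allocated per invocation; must be disposed here
            NativeList<int> scratch = new NativeList<int>(Allocator.Temp);

            // Heavy per-request processing using requests[index] and scratch

            scratch.Dispose();
        }
    }

    Each index then runs as its own work item, so the requests spread across worker threads regardless of how many chunks they occupy.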
     
  5. Brendon_Smuts

    Brendon_Smuts

    Joined:
    Jun 12, 2017
    Posts:
    31
    You can use the method above with an ISharedComponentData to split your entities across multiple chunks. This is one of the intended uses for shared components, i.e. a shared component with an integer value from 0 to n-1, where n is the number of threads you want to split the workload across. This way you're still playing nicely with the ECS system. If you are worried about unused chunk space, which I wouldn't be at the scale you're describing, Unity will be adding functionality to customize chunk capacity in the future.
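    A sketch of that idea (Bucket is a hypothetical shared component; entities with different Bucket values are stored in separate chunks, so an IJobChunk gets one work item per bucket):

    Code (CSharp):
    struct Bucket : ISharedComponentData {
        public int Value;
    }

    // When creating each request, spread them across n buckets:
    this.entityManager.AddSharedComponentData(entity, new Bucket {
        Value = requestIndex % n
    });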
     