
Parallel jobs

Discussion in 'Entity Component System' started by Radu392, Oct 20, 2019.

  1. Radu392

    Radu392

    Joined:
    Jan 6, 2016
    Posts:
    210
    I want to execute 2 jobs in parallel on 2 different sets of entities that can never overlap, but Unity throws an exception:

    InvalidOperationException: The previously scheduled job FindTargetSystem:FindTargetUnitJob writes to the NativeArray FindTargetUnitJob.Iterator. You are trying to schedule a new job FindTargetSystem:FindTargetUnitJob, which writes to the same NativeArray (via FindTargetUnitJob.Iterator). To guarantee safety, you must include FindTargetSystem:FindTargetUnitJob as a dependency of the newly scheduled job.

    Code (CSharp):
    friendlyUnitsQuery = GetEntityQuery(ComponentType.ReadOnly<Translation>(), typeof(QuadrantEntity), typeof(Stats), typeof(HasTarget), typeof(PathStatus), typeof(Unit), ComponentType.Exclude<Enemy>(), ComponentType.ReadOnly<Friendly>());
    enemyUnitsQuery = GetEntityQuery(ComponentType.ReadOnly<Translation>(), typeof(QuadrantEntity), typeof(Stats), typeof(HasTarget), typeof(PathStatus), typeof(Unit), ComponentType.Exclude<Friendly>(), ComponentType.ReadOnly<Enemy>());

    NativeArray<int> keysFriendlies = QuadrantFriendlySystem.quadrant.GetKeyArray(Allocator.TempJob);

    FindTargetUnitJob findTargetJobForEnemies = new FindTargetUnitJob {
        quadrant = QuadrantFriendlySystem.quadrant,
        quadrantType = LogicHandler.Quadrant.Friendly,
        pathMapRequesterJob = PathfindingSystem.pathMapRequester.AsParallelWriter(),
        deltaTime = Time.deltaTime,
        keys = keysFriendlies,
        unrecheablesBuffer = GetBufferFromEntity<UnrecheableTarget>(true),
        random = LogicHandler.instance.random
    };

    NativeArray<int> keysEnemies = QuadrantEnemySystem.quadrant.GetKeyArray(Allocator.TempJob);

    FindTargetUnitJob findTargetJobForFriendlies = new FindTargetUnitJob {
        quadrant = QuadrantEnemySystem.quadrant,
        quadrantType = LogicHandler.Quadrant.Enemy,
        pathMapRequesterJob = PathfindingSystem.pathMapRequester.AsParallelWriter(),
        deltaTime = Time.deltaTime,
        keys = keysEnemies,
        unrecheablesBuffer = GetBufferFromEntity<UnrecheableTarget>(true),
        random = LogicHandler.instance.random
    };

    JobHandle jh1 = findTargetJobForFriendlies.Schedule(friendlyUnitsQuery, inputDeps);
    JobHandle jh2 = findTargetJobForEnemies.Schedule(enemyUnitsQuery, inputDeps);
    jh1 = JobHandle.CombineDependencies(jh1, jh2);

    As entities can never have both 'Enemy' and 'Friendly' components at the same time, why is the safety system complaining even though I specified the entity queries as 2 non-overlapping queries? How can I let the system know that it's safe to run both jobs at the same time? I would prefer to have them run in parallel instead of serially.
     
  2. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,993
    Unfortunately you can't. The safety system looks at types, not queries. The best you can do is use an IJobChunk to combine your jobs and decide per chunk whether to do friendly logic or enemy logic.
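
    A minimal sketch of that per-chunk branching idea, assuming the Entities 0.x API this thread is using and the thread's own Friendly, HasTarget and Translation components; the job and field names are illustrative and the actual quadrant lookup is elided:

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Transforms;

    [BurstCompile]
    struct FindTargetChunkJob : IJobChunk
    {
        // Tag type handle lets the job detect which side a chunk belongs to.
        [ReadOnly] public ArchetypeChunkComponentType<Friendly> friendlyType;
        [ReadOnly] public ArchetypeChunkComponentType<Translation> translationType;
        public ArchetypeChunkComponentType<HasTarget> hasTargetType;

        public void Execute(ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex)
        {
            // All entities in a chunk share the same archetype, so one check per
            // chunk decides whether this chunk holds friendlies or enemies.
            bool isFriendlyChunk = chunk.Has(friendlyType);

            var translations = chunk.GetNativeArray(translationType);
            var targets = chunk.GetNativeArray(hasTargetType);

            for (int i = 0; i < chunk.Count; i++)
            {
                if (isFriendlyChunk)
                {
                    // friendly logic: search the enemy quadrant for translations[i], write targets[i]
                }
                else
                {
                    // enemy logic: search the friendly quadrant for translations[i], write targets[i]
                }
            }
        }
    }

    The system would then schedule this once over a single query that matches both groups (for example an EntityQueryDesc with both tags in Any), so only one scheduled job writes to HasTarget per frame.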
     
  3. Radu392

    Radu392

    Joined:
    Jan 6, 2016
    Posts:
    210
    Well that's unfortunate. I think I might actually change the system to a generic one instead and work with that, if this ever becomes a performance issue. Thanks for the answer!
     
  4. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,594
    I would run it in any parallel job, i.e. IJobChunk as suggested. Once one set is done, the next will be picked up as soon as possible. Just remember to chain the dependencies.

    It doesn't really matter whether 10k enemies and 10k friendlies are searched all at once or in two stages; that is still 20k entities in total.
    Either way, I expect the CPU usage to be approximately the same.
     
  5. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,993
    It does matter if you have fewer chunks than threads. I could definitely see it being annoying on a mobile game trying to do an O(n^2) search, or with lots of little jobs all writing to the same component on different queries where there ends up being one chunk per query. In practice it is pretty rare, and IJobChunk (and maybe even some clever use of Burst function pointers) can solve it.

    I don't see how that would solve your issue. Also note that Burst does not really support generic systems at the moment.
     
  6. Radu392

    Radu392

    Joined:
    Jan 6, 2016
    Posts:
    210
    You’re right, it wouldn’t solve it. Only realized that after I gave some thought to what I posted.

    Burst does work with generic systems and jobs btw, as long as the components are known at compile time, which is fine by me.

    Anyway, luckily I’m not doing an n^2 search, but merely doing spatial targeting with simple quadrants. For now, running those 2 jobs serially is no problem. I’ll just have to judge performance after I’m done adding more logic to the game. It sounds like IJobChunk is always the go-to answer whenever someone posts a question asking for fine control of data/logic, so I’ll definitely be giving it a chance when I enter the optimization stage. For now, a Bursted IJobParallelFor is already doing miracles.
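
    For what it's worth, a rough illustration of the "components known at compile time" point: a generic job can be given a concrete type argument at the call site so the compiler (and Burst) sees a closed type. QuadrantData and the tag components are from the thread; the job name and fields below are placeholders:

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Transforms;

    // Generic over the tag that marks the opposing side.
    [BurstCompile]
    struct FindTargetGenericJob<TTargetTag> : IJobForEach<Translation, HasTarget>
        where TTargetTag : struct, IComponentData
    {
        [ReadOnly] public NativeMultiHashMap<int, QuadrantData> quadrant;

        public void Execute([ReadOnly] ref Translation translation, ref HasTarget hasTarget)
        {
            // ... look up the quadrant map belonging to the side marked by TTargetTag ...
        }
    }

    // Concrete instantiations, closed at compile time:
    // new FindTargetGenericJob<Enemy>    { quadrant = enemyQuadrant    }.Schedule(friendlyUnitsQuery, inputDeps);
    // new FindTargetGenericJob<Friendly> { quadrant = friendlyQuadrant }.Schedule(enemyUnitsQuery, inputDeps);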
     
  7. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,594
    I would ask: how many entities do you expect to run in an average game, or on mobile? How many chunks will be created as a result?

    Also, an important question: is that the only job running in a frame? If not, there will probably be other jobs that fill the spare CPU logical cores.

    Profile it.
     
  8. Radu392

    Radu392

    Joined:
    Jan 6, 2016
    Posts:
    210
    No, they don't use a Complete() call, so the job system is most likely scheduling them to run at the same time as other jobs, like you said.

    I guess my original post wasn't about saving an extra 0.1ms as much as it was about just knowing why Unity wouldn't let me schedule them in parallel. I think I have another case or two where I know for sure there are no logical dependencies between jobs, yet the safety system requires them anyway. It would be nice to have an attribute, similar to [NativeDisableParallelForRestriction], but for entire jobs instead of individual arrays, though I don't know if that would be possible.
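
    For reference, the escape hatches that do exist are per-container attributes rather than per-job ones. A hedged sketch of how they are typically applied (sngdan's example further down uses the same attributes); the job and field names are illustrative:

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe; // NativeDisableContainerSafetyRestriction
    using Unity.Jobs;

    [BurstCompile]
    struct ExampleWriteJob : IJobParallelFor
    {
        // Allows writing outside the index range the scheduler assigned to this worker;
        // assumes results is sized to 2x the scheduled length.
        [NativeDisableParallelForRestriction]
        public NativeArray<float> results;

        // Opts this field out of the safety system's aliasing/dependency tracking entirely;
        // the caller becomes responsible for avoiding real races.
        [NativeDisableContainerSafetyRestriction]
        public NativeArray<float> scratch;

        public void Execute(int index)
        {
            results[index * 2] = index;    // would normally trip the parallel-for restriction
            scratch[index] = index * 0.5f; // unchecked by the safety system; unique index, so no actual race
        }
    }

    Both apply per field, though; there is currently no attribute that exempts a whole job from the safety system, which is what the post above is asking for.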
     
  9. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,594
    Could you not just give them a common component tag and execute everything in the same job, with a query for that component?

    If you need to identify whether an entity is friendly or enemy, you can do that in the job.

    Would that fit your requirements?
     
  10. Radu392

    Radu392

    Joined:
    Jan 6, 2016
    Posts:
    210
    No, because the job takes a NativeMultiHashMap<int, QuadrantData>. Basically, all entities with the Enemy component are added to an 'Enemy' quadrant hashmap and, similarly, all entities with a Friendly component are added to a 'Friendly' quadrant. As you can see from the code, each quadrant has its own system.

    I used to have only one quadrant for all entities, but that was a poor implementation. The 'Enemy' entities don't need to 'find target' on themselves, only on entities in the 'Friendly' quadrant, and vice versa. So if you have 20k enemies and 200 friendlies, the single job used to take 20200*20200 calculations, which is roughly 400 million hashmap gets (in practice it's not that much, since I only check neighbor cells, but it was still quite a lot). Now, with 2 different hashmaps, I only get 20000*200 + 200*20000 gets, which is about 8 million checks (again, in practice it's not that much, but it's still much better than the other implementation).

    I guess what I could try is to explicitly pass both quadrants into the same job and, inside of it, check which unit is friendly and which is enemy and look up the correct quadrant. However, that requires more boilerplate code, AND if I ever decide to have multiple quadrants based on different types of units, maybe buildings (I already have another one for resources), then I'd have to come back to this job and modify it accordingly, which isn't cool.
     
  11. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,594
    This was my thought, especially when I mentioned it above.
    I don't expect any more boilerplate than you have already. You just pass the extra quadrant data to the existing job.

    I suspect most of the job logic is the same anyway?

    If you want to expand later into buildings or whatnot, I would worry about that later. Check whether your concept is actually functional and whether it causes a bottleneck. Or maybe you will need to redesign how the system works.

    At least you will have a reference to compare against.
     
    Radu392 likes this.
  12. Attatekjir

    Attatekjir

    Joined:
    Sep 17, 2018
    Posts:
    23
    I think your problem is in setting up the dependencies:
    JobHandle jh1 = findTargetJobForFriendlies.Schedule(friendlyUnitsQuery, inputDeps);
    JobHandle jh2 = findTargetJobForEnemies.Schedule(enemyUnitsQuery, inputDeps);
    jh1 = JobHandle.CombineDependencies(jh1, jh2);
    should be:
    JobHandle jh1 = findTargetJobForFriendlies.Schedule(friendlyUnitsQuery, inputDeps);
    JobHandle jh2 = findTargetJobForEnemies.Schedule(enemyUnitsQuery, jh1);
    Because even if the entity queries are completely separate, you are still writing to and reading from data present in both job 1 and job 2 (like pathMapRequesterJob and unrecheablesBuffer). I believe that if you make the two jobs dependent on each other, the system will figure out on its own whether they are allowed to run in parallel, depending on the read/write status of the data.

    Job 2 depending on job 1 through job 1's JobHandle does not always mean that job 2 will only start once job 1 is complete.
     
  13. eizenhorn

    eizenhorn

    Joined:
    Oct 17, 2016
    Posts:
    2,655
    No. They never overlap, and will always run one after another.
     
    Attatekjir likes this.
  14. sngdan

    sngdan

    Joined:
    Feb 7, 2014
    Posts:
    1,131
    Not sure if this is relevant to you, but just in case (somewhat old API; this was an example I posted for someone a long time ago).

    A targeting system between "players" and "enemies", each looking for the closest target. This runs in parallel.

    Code (CSharp):
    public class SphereTargetSystem : JobComponentSystem
    {
        public Vector3[] Vertices;

        private EntityQuery prefabAGroup;
        private EntityQuery prefabBGroup;

        [BurstCompile]
        struct SphereTargetJob : IJobParallelFor
        {
            [ReadOnly] public NativeArray<Translation> TranslationsStart;
            [ReadOnly] public NativeArray<Translation> TranslationsEnd;
            [WriteOnly, NativeDisableParallelForRestriction, NativeDisableContainerSafetyRestriction] public NativeSlice<Vector3> Results;

            public void Execute(int i)
            {
                var minDistance = float.MaxValue;
                int minPos = 0;
                for (int j = 0; j < TranslationsEnd.Length; j++)
                {
                    var distance = math.distancesq(TranslationsStart[i].Value, TranslationsEnd[j].Value);
                    // compiles to same machine code as math.select
                    if (distance < minDistance)
                    {
                        minPos = j;
                        minDistance = distance;
                    }
                }
                Results[i * 2] = TranslationsStart[i].Value;
                Results[i * 2 + 1] = TranslationsEnd[minPos].Value;
            }
        }

        protected override void OnCreateManager()
        {
            prefabAGroup = GetEntityQuery(ComponentType.ReadOnly<PrefabATag>(), ComponentType.ReadOnly<Translation>());
            prefabBGroup = GetEntityQuery(ComponentType.ReadOnly<PrefabBTag>(), ComponentType.ReadOnly<Translation>());
        }

        protected override JobHandle OnUpdate(JobHandle inputDependencies)
        {
            // prepare 2 parallel job handles
            JobHandle handleA, handleB;

            // setup results array
            var lengthA = prefabAGroup.CalculateEntityCount();
            var lengthB = prefabBGroup.CalculateEntityCount();

            var vertices = new NativeArray<Vector3>(lengthA * 2 + lengthB * 2, Allocator.TempJob);
            NativeSlice<Vector3> verticesA = new NativeSlice<Vector3>(vertices, 0, lengthA * 2);
            NativeSlice<Vector3> verticesB = new NativeSlice<Vector3>(vertices, lengthA * 2, lengthB * 2);

            // get translation array of prefab A and prefab B
            var translationsA = prefabAGroup.ToComponentDataArray<Translation>(Allocator.TempJob, out handleA);
            var translationsB = prefabBGroup.ToComponentDataArray<Translation>(Allocator.TempJob, out handleB);
            inputDependencies = JobHandle.CombineDependencies(handleA, handleB, inputDependencies);

            // schedule jobs
            handleA = new SphereTargetJob
            {
                TranslationsStart = translationsA,
                TranslationsEnd = translationsB,
                Results = verticesA
            }.Schedule(translationsA.Length, 1, inputDependencies);

            handleB = new SphereTargetJob
            {
                TranslationsStart = translationsB,
                TranslationsEnd = translationsA,
                Results = verticesB
            }.Schedule(translationsB.Length, 1, inputDependencies);

            // combine the parallel dependencies
            inputDependencies = JobHandle.CombineDependencies(handleA, handleB);

            // for convenience here, to simplify debug line drawing
            inputDependencies.Complete();

            // copy results to managed array - this is used to draw debug lines between targets
            Vertices = vertices.ToArray();

            // cleanup native containers
            translationsA.Dispose();
            translationsB.Dispose();
            vertices.Dispose();

            return inputDependencies;
        }
    }