about job dependencies

Discussion in 'Entity Component System' started by laurentlavigne, Jan 21, 2018.

  1. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    the documentation says "Dependencies are used to ensure that a job executes on worker threads after the dependency has completed execution, making sure that two jobs reading or writing to the same data do not run in parallel."

    What's the usage?

    My understanding of this WAS that handle2.Schedule(job, handle1).Complete() first completes handle1 by itself, but that would spit out errors.
    Which is weird, because "The JobSystem automatically prioritizes the job and any of its dependencies to run first in the queue, then attempts to execute the job itself on the thread which calls the Complete function."

    Can we get an example of how dependencies should be used (in a coroutine)?

    Code (CSharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    using Unity.Jobs;
    using Unity.Collections;

    public class JobSimple : MonoBehaviour {

        struct FillTheArray : IJobParallelFor
        {
            public NativeArray<float> output;

            public void Execute(int i)
            {
                output[i] = Mathf.Log10(i);
            }
        }

        struct CalculateThings : IJobParallelFor
        {
            [ReadOnly]
            public NativeArray<float> input;

            public NativeArray<float> output;

            public void Execute(int i)
            {
                output[i] = Mathf.Sin(input[i]);
            }
        }

        void OnEnable()
        {
            input = new NativeArray<float>(computeSize, Allocator.Persistent, NativeArrayOptions.UninitializedMemory);
            output = new NativeArray<float>(computeSize, Allocator.Persistent, NativeArrayOptions.UninitializedMemory);
            StartCoroutine(JobCompute());
        }

        public int computeSize = 1000000, batchSize = 100;
        NativeArray<float> input, output;
        JobHandle handleCalculate;

        IEnumerator JobCompute()
        {
            while (true)
            {
                // input = new NativeArray<float>(computeSize, Allocator.Persistent);
                // output = new NativeArray<float>(computeSize, Allocator.Persistent);

                var time = Time.time;

                var jobFiller = new FillTheArray()
                {
                    output = input
                };
                var handleFiller = jobFiller.Schedule(computeSize, batchSize);

                handleFiller.Complete();

                var job = new CalculateThings()
                {
                    input = input,
                    output = output
                };
                handleCalculate = job.Schedule(input.Length, batchSize, handleFiller);

                yield return new WaitUntil(() => handleCalculate.IsCompleted);

                handleCalculate.Complete();

                // Debug.Log(Time.time - time);
                // input.Dispose();
                // output.Dispose();
            }
        }

        void OnDisable()
        {
            handleCalculate.Complete();
            input.Dispose();
            output.Dispose();
        }
    }
     
  2. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    In short: you can and should remove handleFiller.Complete(). It will make your code faster.

    The long version:
    You could definitely schedule a job, wait for it, schedule another, wait on that. However, that introduces sync points. Sync points are the enemy of multithreaded performance, because they lead to a pattern of going wide with jobs, then waiting, with usually no jobs running in parallel while you wait and execute main thread code.

    So instead you can schedule a second job and tell the job system that, on the worker threads, one should run after the other. Now you have removed a sync point. And waiting on the second job will also ensure the first has completed. Good times.
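    Applied to the code in the first post, the chain looks like this (a sketch reusing that post's job structs and fields; the only change is dropping the intermediate Complete()):

    ```csharp
    // Schedule the filler job first.
    var jobFiller = new FillTheArray { output = input };
    var handleFiller = jobFiller.Schedule(computeSize, batchSize);

    // No handleFiller.Complete() here: passing the handle as a dependency
    // lets the job system order the two jobs on the worker threads.
    var jobCalculate = new CalculateThings { input = input, output = output };
    handleCalculate = jobCalculate.Schedule(input.Length, batchSize, handleFiller);

    // Completing the second job also guarantees the first has finished.
    handleCalculate.Complete();
    ```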

    What you would optimally like to do is express the whole game as a chain of jobs and have a single sync point at the end of the frame. This is in fact the only realistic way of getting 100% multicore utilization. The ECS approach to writing code lets you do this in a simple way. (Open source preview project & demos will be released soon.)

    Using MonoBehaviours, you can do some optimization here and there with C# jobs, but it's practically impossible to get to 100% multicore utilization. That's why we are proposing ECS to get Unity truly to the next level of performance. All combined (Burst compiler, C# jobs, ECS data layout & iteration, no job sync points), this generally results in a more than 100x speedup compared to the old-school MonoBehaviour.Update approach.
     
  3. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    Yes, that is how I thought it worked, but the console error asked me to complete job1 before the dependent one...

    Could you give a code sample of how the job chain is done now?

    PS: as I am thinking more in terms of data instead of objects, I am starting to see the underlying fabric of the universe, we are one, life is good
     
    LeonhardP likes this.
  4. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    I deleted this line handleFiller.Complete(); from your code and it runs without errors for me.
     
  5. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    Same here ... another case of ¯\_(ツ)_/¯
     
  6. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    I think the greatest issue we're going to have with this (and the ECS) is a lot of people trying to wrap their heads around writing code this way and all of the brand new gotchas that entails. I think that once we have SOLID examples of using both the new job system, the ECS, and the new execution loop control setups, it'll click into place for more people.

    @Joachim_Ante - if docs/examples are on the way, could there also be a quick list of Dos/Don'ts when it comes to launching and awaiting jobs since that seems to be a hefty source of confusion?
     
    Enrico-Monese likes this.
  7. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    But doing things in parallel doesn't always make sense, right?
    I mean, if you have a game that has "heavy" GameObjects with up to 10 components of all sorts (imagine characters in an RPG, with special effects, IK scripts, scripts for special physical movement, inventory, ...), then it wouldn't make any sense to turn all of them into a system like that, right?

    The only thing I can imagine that could be done in my game would be replacing all projectiles with an entity component system because they are created and destroyed very often, and sometimes there can be up to 200 effects on screen at the same time.

    But even that number sounds so ridiculously low now that I hear what ECS is normally used for...

    Can you tell me your opinion on what systems you personally would make into some sort of ECS in a game like Skyrim or World of Warcraft, or similar?

    I can only maybe think of two things that could be at least job-ified.
    - The characters that make use of the "kinematic character controller" (tl;dr: a character controller asset based on Physics.ComputeDepenetration that does tons of stuff for you) one could gather all the controllers when the scene starts and update them together in a job because their updates can sometimes take somewhat long.
    - All NPC navmesh pathfinding.

    I don't think I have anything where I deal with literally tens of thousands of very similar and lightweight objects; and that's what ECS is for, or am I missing some major possibilities here?
     
  8. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    I believe there is a misconception that "some code can't be multithreaded". I don't think this is true. What is true is that today (across any game engine that exists), writing all game code multithreaded is too hard and requires too much skill.

    Is it desirable? Hell yeah. CPUs are stalled in terms of individual core performance. Broadly oversimplified, there are only two things that Intel & ARM can really do to make things faster:
    1. more cores
    2. specialized instructions (SIMD, more and more weird instructions to make code faster)

    Our focus with Burst, C# jobs, and ECS is about making sure that Unity is the best solution, bar none, for working in that world.
    I'll just state that compared to the current MonoBehaviour.Update way of writing code, a jobified ECS-style way of writing code using Burst, C# jobs, and ECS can give you a 100x speedup.

    Even if you have a game with just 8 characters that have complex AI, you can definitely take the same approach and get the same speedup as the "RTS style" demo with 100k units on screen we showed at Unite Austin.

    So I am going with the assumption that we all want to write code that gets 100x faster and we want to be ready for the future where more cores and specialized SIMD instructions are happening.

    So what it all comes down to is: can we make it so easy that writing this style of performant code is something Unity developers do by default? Because if it's so easy that you can just write fast multithreaded code by default, why wouldn't you always do it? Make it second nature.

    Clearly, if it's not simple enough to do, it will fail. It's why no one really writes this kind of code today. I think we can generally say that today, in C++ / C# / across all game engines, all tools are stacked against the developer who wants to write truly efficient code. It's hard to write super efficient code. Unity's responsibility is to change the odds. Make it simple, just like 10 years ago we made component-based design simple, and look, today the whole game industry uses it.


    There is a lot Unity has to prove here, obviously. We haven't yet shipped ECS in experimental builds, and when we do there is still a long path to "making it as easy to write the highest-performance code by default as the MonoBehaviour approach"... but ultimately that's our goal...

    It is good times indeed. I can definitely say it's the most exciting thing I have ever done.
     
    JohngUK, eterlan, GarthSmith and 10 others like this.
  9. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    What you've done so far is very easy to use. It took me days to jobify old messy OOP code, and after understanding a few things I never bothered learning, like memory or ref types (!), it's been error-free.
    Also, I have to add that I can't stand OOP. I've always been bad at it, turning any code into spaghetti, and since I've started writing job code it's been simpler and smaller code, much easier to read, with no tripping over spaghetti.

    One thing to remember when you guys write the job & ECS documentation is that most people have been doing OOP all their programming lives. I have seen horrors worse than mine committed by experienced programmers.
    So you'll need to do some hand-holding to help folks give up old habits, and one thing that works is to always show the light at the end of the tunnel. The light is what? Clear thinking, simplicity, code that feels natural and is easier to talk about; all this brings a sense of freedom. Really. How to show that? Give examples of legacy vs. job/ECS code patterns.

    Now, regarding the goal of making ECS and jobs the default: for any sim-type game this choice is a no-brainer. It is perfectly suited, because such games are based on a small set of rules applied millions of times. Fewer rules = more time to figure out how the data moves along the job chain and to massage each job.
    But would I tackle game GUI, or more nebulous games filled with custom events? I wouldn't know where to start. Maybe you have something in mind.
     
  10. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    I agree with a lot you've said, but:

    There are problems that are simply inherently serial in nature.
    But I guess that's not what you meant, right?
    You were talking about executing all sorts of systems in parallel (and each system can be a thing that has to run serially).

    And as for the 8 characters, I don't think there's much you can do (at least you didn't mention any example systems or components that could be parallelized).

    You always have to have some kind of integration phase where all the results come together again, or not?


    You can do pathfinding stuff for each character in parallel, and the part of the AI that doesn't call into components of other characters (but I don't think that is much).
    All sorts of visual effects (particles and even projectiles) sure.

    But if we stay with those 8 characters for now, isn't the overhead of scheduling higher than just running everything serially?

    And what about things that need to call out of the job? (by accessing some static property)
     
  11. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    For 8 characters, off the top of my head, you could parallelize the following:
    * AI Sensor systems
    * AI decision making / state processing
    * AI Group behavior / pathfinding
    * Locomotion processing
    * Locomotion collisions
    * Locomotion constraints
    * Writing the states to a network stream without having to resort to high-level voodoo. It's a network stream: you add data to it, you flush it, you do it again.

    Yeah, you have to integrate back with your main driver logic, but you have to do that now with coroutines or manager classes, controlling the order things execute in roundabout ways. The actual integration part is simpler with serial execution, I'll give you that, but we don't exactly have a lot of examples in Unity-land (whereas AAA engines have been doing jobified systems like this for years now, albeit mostly in C/C++).

    The thing about jobified systems (and related to the concept of thread pooling) is that the worker threads are already scheduled, and are just waiting for something to do anyway (thus avoiding one task swap issue which is setting up the threads with the OS).

    The other thing to consider with jobified code is you're often working with sets of related data and not randomly fetching as often. Part of the issue with task switching is swapping what's live on the CPU caches. If you have a bunch of processes operating on related data, you don't have to fetch as arbitrarily.

    Calling out of the job is an area of concern, but maybe it'll be possible to call certain managed functions if they follow some rules (like they're static with no side effects and only operate on structs), or jobs could launch and wait for... other jobs! We'll need to see some more examples I think.
     
    Krajca likes this.
  12. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    Exactly. Some things can be parallelized by subdividing across the number of entities, data, or arrays. This is true for a lot of code/data, but not all, of course. Sometimes you just have one job doing all the work in one system, and that's totally fine. Often that code can run in parallel with other systems.

    Most things you can express via dependencies instead of sync points. The optimal setup is one where you have a boatload of dependencies and then a single sync point, or potentially even the sync point of one component system just syncing the next frame's component system.

    Jobified code can't use static variables. Static variables aren't inherently necessary to write game code; they are a choice that comes at a cost.

    So a big part of it here is that I am assuming an ECS, where you can talk to other components safely from jobs and which is laid out from the start to enable that. I know the wait is killing everyone; some patience, it's coming soon.
     
    Last edited: Jan 25, 2018
    starikcetin likes this.
  13. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    Last edited: Jan 24, 2018
  14. mike_acton

    mike_acton

    Unity Technologies

    Joined:
    Nov 21, 2017
    Posts:
    110
    GUI is actually a pretty good case for an ECS and job system like this in general.

    With a data-oriented approach, you want to pay a cost that's proportional to the amount of change or "surprise" in the data. A GUI, broadly, is a very stable system (any given frame is almost certain to be very much like the previous frame, with small spikes on events). So frame-to-frame, you want the cost to be very low. And that's not usually what you see in a conventional object-oriented GUI.

    In terms of design, you can conceive of the GUI elements as components in exactly the same way you would game objects. They have positions and relative transforms and particular display properties, etc. And you have a bunch of them. Which you need to update and cull based on some state and rules, similar to game objects.

    If you think of a component as one of many similar pieces of data (instead of an "attribute" of an "object"), then events aren't any different really. They can be components in the same way. i.e. "Give me all Events X so I can make a change to some other data" is the same thing as "Give me all Components X so I can make a change to some other data." Events are just a type of ephemeral component.
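    A rough sketch of "events as ephemeral components" (the type and field names here are hypothetical, not a shipped API):

    ```csharp
    using Unity.Entities;
    using Unity.Mathematics;

    // Hypothetical sketch: a mouse click modeled as an ephemeral component.
    // Systems query for it exactly as they would any other component type;
    // a cleanup system destroys the event entities at the end of the frame.
    public struct MouseClickEvent : IComponentData
    {
        public float2 Position; // screen position of the click
    }
    ```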

    In principle, you might do something like:
    1. Give me all the mouse-click components (Probably only one match)
    2. Give me all the bounding group components that overlap with the position of that. (Similar to a broad-phase collision in game code.)
    3. Walk through all the group components and filter which specific component bounding areas definitely bound the position (Similar to a narrow-phase collision in game code.)
    4. Query for the specific "mouse-click" response components on that narrow list and do whatever the thing is. (You might be doing something complex like CSS matching rules where the click could fall through to a "parent" element, so this might not just be one result.)

    These are data-dependent steps, so like Joachim said above, you probably wouldn't have an "integration" sync as such, but just make sure the jobs are run in the right order through data dependencies.

    I know that description is a little rough and hand-wavy - and we'll need to develop good examples over time which demonstrate exactly this sort of thing so we can get folks developing good habits, as you pointed out.
     
  15. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    I think what you are going to find is people will end up creating higher level abstractions to work with the ECS system. At least those of us who have used ECS before. Almost universally when I talk to people who have used it in real games, I get mixed reactions that mirror my own experience. It's a great abstraction for the hardware and data driven design is always good, but the 'normal' ECS design is not a good high level abstraction to work with. It needs another layer.

    Invariably, when you look at projects or games that have used ECS, 90% of the time goes to figuring out what that additional layer needs to look like. Most try to just piecemeal in the parts that cause the most pain, like dependencies. My gut feeling is it really needs a complete layer designed from the ground up with an API to the core ECS system. Kind of how actor models solved it: they keep the batching aspect that the hardware likes, but it's entirely abstracted away, so they are free to design the high-level abstractions in a way that works best for the end user. Not that actor systems were designed like that intentionally, but it's how they ended up, and it works really well. Not saying the actor model is a good fit for games...
     
  16. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    Is it just me, or is ECS more or less the Model from the MVC design? So just write good Controllers (mostly with C# jobs), and at the end of the frame render it to the screen.

    I have a question though: how do you make a job dependent on two other jobs?
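    The thread doesn't answer this inline, but for reference the job API has JobHandle.CombineDependencies for exactly this case (JobA/JobB/JobC are hypothetical placeholder jobs):

    ```csharp
    using Unity.Jobs;

    struct JobA : IJob { public void Execute() { /* ... */ } }
    struct JobB : IJob { public void Execute() { /* ... */ } }
    struct JobC : IJob { public void Execute() { /* ... */ } }

    // Schedule A and B independently; they may run in parallel.
    JobHandle handleA = new JobA().Schedule();
    JobHandle handleB = new JobB().Schedule();

    // Fold both handles into one, then schedule C against the combined handle.
    JobHandle combined = JobHandle.CombineDependencies(handleA, handleB);
    JobHandle handleC = new JobC().Schedule(combined);
    ```

    There are also overloads taking three handles or a NativeArray&lt;JobHandle&gt; for combining more.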
     
  17. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    Krajca likes this.
  18. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    966
    I'm using BrokenBricksECS right now for a hack&slay RPG with 3d world and 2d interface.

    Currently I have a single ECS composition with 18 systems where most is data crunching and some calls to trigger events or methods from UI monobehaviours. I tried to reduce "going outside" as much as possible. With some events like UI it's not possible, but if it happens, it's not a big deal if you have the reference in a component.

    Then I have Factories where different types of entities with their correct components and data are made. Big helper classes you could say. The UI layer, where I use NGUI, has classic monobehaviour and subscribes to events for their respective data.

    I have tried many different styles over the years to program systems/managers, UI data binding and event management, and all were OOP-based and sooner or later a complete mess to maintain or too complicated to use. I really was never happy with my code, and ECS has solved so many issues I can't believe it took us so long.

    It's a speed up, it's easier to maintain and re-use and if it clicks, it's easier to make features that are bug-free.

    The ECS provides you with a set of features based on the logic of components. It's a direct extension of the engine and unlike scripts works with the engine and not against it. Like a symbiosis.

    Now that I have it running, it's like the old days of Quake modding. You get a limited API (the factories) and you have to build a game with scripts. Those factories use prefabs, so you can make a ton of variations with a good amount of components and running systems.

    I also have a lot of variations of a complex spell system inspired by WoW and those games, and I could never find a good implementation. With my ECS spell system now, some mechanics were so easy to implement it was absurd. So much code wasn't necessary, because the ECS handles all of it, mainly getting data.

    It's also easier to maintain states. States are suddenly defined by the components present and their data, so they're easier to debug. My first AI wasn't even using a state machine; it was spaghetti code, and the bugs were hilarious. Before reading up, I was stumped: I need more control! This step up is exactly how I feel going from OOP to ECS.

    Joachim and all those who invest into ECS are on to something. Speedup is great but what it does for coding standards impresses me even more.
    Assets from the store will also improve and may work better together.

    Sorry this got long and derailed the thread I guess. :3
     
    starikcetin likes this.
  19. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    ECS is really just a combination of known patterns that have been used in various places, but it's nothing like MVC. There are two core concepts. One is how the data flows: it's a batching system, intentionally designed that way to be efficient in some areas. The other is that it's composition-based: you build entities out of components, and the entity is just the sum of those parts.

    ECS has some good core concepts, it's the implementations I have a problem with. I've written half dozen implementations over the last decade, I still use some form of ECS quite a bit. I like the core concepts.

    My issue is how the systems are designed in all of the implementations I've seen.

    Most ECS implementations are abstracted to favor batching of operations and data for the hardware. The problem is that this is often at odds with good abstractions that are easy to work with. It's also not necessary; it can and should be decoupled from your higher-level domain logic.

    So what I do is decouple things a bit more. Instead of all logic in systems, systems only handle stuff that actually benefits from the batching paradigm. That is their domain, that's all they do. Some systems might indeed have a lot of the logic, but there is no requirement that all the logic be in systems.

    That simple decoupling solves most of the known issues with ECS, or rather frees you to solve them without systems forcing their concerns where they don't belong.
     
    davidfrk and Krajca like this.
  20. Eleana_G

    Eleana_G

    Joined:
    Apr 16, 2019
    Posts:
    1

    Hi, I am having a very similar problem to the one described in this thread. I am chaining 3 separate jobs in a loop and calling handle.Complete() at the end of the loop. Each job reads data that were produced in the previous job(s), so it is crucial that previous jobs have completed. However, I keep getting the error that while one job writes to a native array, another job is trying to read from it, and that I should call handle.Complete() for the first job. Specifically (see the code below), I am getting this error for job_candidates: job0 writes to this NativeArray and job1 has to read from it (I have used the [ReadOnly] attribute).

    Does it make a difference that job0 and job2 implement the IJob interface while job1 implements IJobParallelFor? I did a test with chaining parallel and non-parallel jobs prior to this, however, and it seemed to work just fine. I am wondering what I am doing wrong; any help would be greatly appreciated!!

    here is a code sample:
    Code (CSharp):
    JobHandle handle = default(JobHandle);

    for (int iii = known_counter; iii < cellRes; iii++)
    {
        var job0 = new AddCandidateJob
        {
            groupCells_READ = job_groupCells,
            known_READ = knownBOOLS,
            candidate_READ_WRITE = candBOOLS,
            batch_Candidates = job_candidates,
            testCell = job_best_cand,
            cand_counter = job_cand_counter
        };

        handle = job0.Schedule(handle);

        var job1 = new PotentialJob
        {
            groupCells = job_groupCells,
            new_groupCells = job_groupCells_Updated,
            candidates = job_candidates
        };

        handle = job1.Schedule(cellCount, 32, handle);

        var job2 = new BestCandJob
        {
            candidatesIN = job_candidates,
            candidatesOUT = job_candidates_Updated,
            groupCellsIN = job_groupCells_Updated,
            cand_length = job_cand_counter,
            knownBOOL = knownBOOLS,
            candBOOL = candBOOLS,
            known_length = known_counter,
            best_cand = job_best_cand
        };

        handle = job2.Schedule(handle);

        // double buffering
        var tempCells = job_groupCells;
        job_groupCells = job_groupCells_Updated;
        job_groupCells_Updated = tempCells;

        var tempCands = job_candidates;
        job_candidates = job_candidates_Updated;
        job_candidates_Updated = tempCands;
    }
    handle.Complete();
     
  21. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    <necro threads are the best threads>
    I have two systems that don't depend on one another directly, but one needs to execute after the other, so a priori the job system can't auto-manage dependencies.
    How do I explicitly set one system to execute after another, or, for future reference, after a group of other systems?
    Thank you.

    And the answer is
    [UpdateAfter(typeof(WobblerSystem))]

    (I would so love the generic notation
    [UpdateAfter<WobblerSystem>]
    )
     
    PublicEnumE likes this.
  22. elcionap

    elcionap

    Joined:
    Jan 11, 2016
    Posts:
    138
    Unfortunately C# doesn't support generic attributes.

    []'s
     
    laurentlavigne likes this.
  23. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    Might be worth noting that the UpdateBefore/After attributes can make Systems update in the order you want, but they won’t necessarily control the order of jobs scheduled by those systems.

    If a job scheduled by SystemB has no direct dependency on the components used in the job scheduled by SystemA, it may end up running first.

    If for whatever reason you want to force SystemB's job to be dependent on SystemA's job, then SystemB just has to have access to the JobHandle of SystemA's job.
     
    laurentlavigne likes this.
  24. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    Oh, that's an interesting detail, and I don't understand the difference between job scheduling order and system update order.
    In my case it might be good enough to control update order and not worry about schedule order.
     
    PublicEnumE likes this.
  25. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    Might be useful to know how all the threads are being used. A simple way to think about it is:

    1. All systems in your project are updated, one after the next, on the main thread. No jobs are (normally) running in between system updates. All of the systems update first, in one long batch. While those systems are updating, they can optionally schedule jobs to be run at some point later (often after all of the systems have been updated). You can think of systems as a 'first phase' of work - creating and throwing a bunch of new jobs in a bucket, to be processed in a 'second phase'.

    2. If one of those systems schedules a job, that job is handed to the job scheduler. That scheduler figures out when it can run, based on what other jobs it's dependent on. If it's dependent on nothing, then it will be scheduled to run as soon as possible (whenever a worker thread is free). If it has dependencies on other jobs, it is scheduled to run after those jobs are complete.

    ...(It's worth noting that there are tons of ways to make these loops run differently from how I described. But this is the simplest way to think about what's happening. :) )

    The job scheduler uses a few things to figure out what other jobs a new job should be dependent on:
    1. The JobHandle passed into the Schedule() method.
    2. The component types defined in the EntityQuery used to schedule it (I believe this is correct) OR possibly in the Execute() method of the job itself.
    3. NativeContainers passed in as members of the job (*maybe* I'm fuzzy on this one).​

    It uses this data to figure out when it can most efficiently schedule a job. But it does not know the update order of the system which scheduled it. That's not part of its consideration.

    The [UpdateBefore] and [UpdateAfter] attributes will control the index of a system in the big list of systems to be updated each frame. But that won't cause the jobs scheduled by those systems to run in the same order.

    Simple example:

    1. SystemA updates 1st, and schedules JobA, which uses ComponentTypeA and ComponentTypeB.
    2. SystemB updates 2nd, and schedules JobB, which only uses ComponentTypeC.

    Since there's no overlap between the ComponentTypes used by JobA and JobB, it's entirely possible that JobB will run first. If you want to force JobA to run first, you must find a way to pass JobA's JobHandle into the Schedule method of JobB.

    The two most common ways people have been doing this is:

    1. Including an "AddDependency(JobHandle dependency)" API to SystemB, which can be called by SystemA.
    2. Storing JobA's JobHandle in SystemA, such that SystemB can go grab it before it schedules JobB.

    Both of these will work, though they will add more work for the Systems to do on the main thread, which can slow things down. But it's also worth considering whether this is actually necessary. If the two jobs really aren't using the same ComponentTypes, then why does JobA really need to run first? You'll always know your project best, so if you know there's a good reason, then do what you need to do. :)
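    A minimal sketch of option 2, assuming the JobComponentSystem API of the Entities version current at the time (system and job names here are illustrative, not from the thread):

    ```csharp
    using Unity.Entities;
    using Unity.Jobs;

    struct JobA : IJob { public void Execute() { /* ... */ } }
    struct JobB : IJob { public void Execute() { /* ... */ } }

    public class SystemA : JobComponentSystem
    {
        public JobHandle LastJobHandle; // stored so SystemB can read it

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            LastJobHandle = new JobA().Schedule(inputDeps);
            return LastJobHandle;
        }
    }

    [UpdateAfter(typeof(SystemA))]
    public class SystemB : JobComponentSystem
    {
        SystemA systemA;

        protected override void OnCreate()
        {
            systemA = World.GetOrCreateSystem<SystemA>();
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            // Force JobB to wait on SystemA's job even with no component overlap.
            var deps = JobHandle.CombineDependencies(inputDeps, systemA.LastJobHandle);
            return new JobB().Schedule(deps);
        }
    }
    ```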
     
    Last edited: Dec 30, 2019
    friflo and NotaNaN like this.
  26. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    Oh s***, so when spawning jobs from within a system, which is the recommended pattern, UpdateAfter seems pretty much useless from what you're saying.

    It's weird; so far UpdateAfter has worked consistently for me. Was the job scheduling maybe reworked in version 0.4?

    In any case, what's the way to get access to the JobHandle from the other system to do that dependency stuff? And what's the syntax nowadays? I tried jobH.AddDependency but that didn't trigger autocomplete.
     
  27. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    [UpdateBefore/After] aren't useless; they're very useful! The order in which systems update *does* affect the order their jobs will run in, but only if those jobs work on the same component types.

    I’ll write a code example tomorrow to illustrate that better.
     
    laurentlavigne likes this.
  28. elcionap

    elcionap

    Joined:
    Jan 11, 2016
    Posts:
    138
    If your systems don't share the same data, there is no reason for their jobs not to run in any order. If they do share the same data, the dependency will take care of the order.

    []'s
     
    PublicEnumE likes this.