
Performance overhead of DOTS systems

Discussion in 'Data Oriented Technology Stack' started by maxxa05, Sep 27, 2019.

  1. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    We just started development of our new game using mostly DOTS. At first I was quite happy with the kind of performance I could get, especially when I had thousands of entities to process data on. But then, after we added more and more systems, a bunch of which only process data on queries of 1 or 2 entities, something became increasingly clear: there are going to be a ton of systems, and they all have a non-negligible overhead. To the point where I'm starting to believe it would be more optimal to deal with MonoBehaviours instead.

    So my analysis of the issue is this. For a really simple, run-of-the-mill, Burst-compiled JobComponentSystem that only schedules a job (see spoiler below), I'm getting a 0.007 to 0.01ms overhead on the main thread in a build. I know, it doesn't sound that bad. But I wouldn't be surprised if a finished game had thousands of these. At 10+ms for the update loop alone, now it does look pretty bad. And that's on a really good CPU. I know that if you organize the queries right and they are empty, the system is not run. But even then, the check alone seems to take around 0.001 to 0.002ms, so it's kinda significant on a large scale too.

    I also have a single query in this example. If I need, for example, to get a singleton or to add/remove components, that makes everything even heavier.

    So my question is this: can we hope for that kind of overhead to be lowered in the future? Or is this an intrinsic overhead we have to deal with in DOTS? If that's the case, I fear the performance-by-default claim would only be true for games that often have to deal with tons of similar entities or complex calculations. For the rest, it seems to be quite the opposite.

    Code (CSharp):

    public class DoorInteractorIdleDoorAvailabilitySystem : JobComponentSystem
    {
        private EntityQuery idleEntityQuery;

        [BurstCompile]
        private struct SetIdleAvailableJob : IJobForEach<DoorInteractorAvailability>
        {
            public void Execute(ref DoorInteractorAvailability availability)
            {
                availability.IsClockwiseAvailable = true;
                availability.IsCounterClockwiseAvailable = true;
            }
        }

        protected override void OnCreate()
        {
            var idleQueryDesc = new EntityQueryDesc
            {
                None = new ComponentType[] { typeof(DoorOpened), typeof(DoorAnimation) },
                All = new ComponentType[] { typeof(DoorInteractorAvailability) },
            };
            idleEntityQuery = GetEntityQuery(idleQueryDesc);
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            var idleJobHandle = new SetIdleAvailableJob().Schedule(idleEntityQuery, inputDeps);
            return idleJobHandle;
        }
    }
     
    Seb-1814 likes this.
  2. learc83

    learc83

    Joined:
    Feb 28, 2018
    Posts:
    34
    If you're only acting on 1 or 2 entities, you probably shouldn't schedule a job. When doing something on multiple threads there is always overhead, and it's not always worth it.

    Also, you shouldn't have thousands of systems in production; I don't think the dependency solver would handle that well, and that kind of granularity would be very hard to follow. You'd have to skip through hundreds of files to follow the logic of what's going on.

    And with that many systems you're not getting the benefits of DOTS, because each new system is likely hopping over to a new spot in memory. What's the benefit of tightly packed linearly laid out memory, if you're just going to query for 1 or 2 entities at a time in a thousand different systems? You're essentially just randomly accessing everything at that point.
     
  3. siggigg

    siggigg

    Joined:
    Apr 11, 2018
    Posts:
    235
    Afaik the design is indeed meant to scale to thousands of systems. When there is no work to do, a system should have zero overhead, but I suspect what you are pointing out is the overhead when there are some entities (like a few).

    I might be wrong here, so it would be interesting to hear someone from Unity chime in here?
     
  4. FONTOoMas

    FONTOoMas

    Joined:
    Sep 26, 2015
    Posts:
    2
  5. diesoftgames

    diesoftgames

    Joined:
    Nov 27, 2018
    Posts:
    46
    I'm also kind of curious about this, because my primary motivation for working with DOTS is that it fits better with how I prefer to have my data and logic broken up, and if you're following the single responsibility design principle, you'll likely have lots of systems. Something to keep in mind, though, is that even if you have thousands of systems, do you expect all of those to be running every frame? If your EntityQueries don't have any Entities, the job won't run (a simplification of how systems decide whether they should update, but you get the idea), so I've been trying to be mindful of that when designing systems.

    Looking forward to someday being able to see my FixedUpdate systems in the debugger though so I can at a glance see which systems are running and which aren't like I can with my default Simulation systems :D
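    To illustrate what I mean by being mindful of empty queries, a minimal sketch (borrowing the DoorInteractorAvailability component from the first post; the system name here is made up):

```csharp
using Unity.Entities;
using Unity.Jobs;

public class AvailabilitySkipSystem : JobComponentSystem
{
    protected override void OnCreate()
    {
        // If no entity matches this query, OnUpdate is skipped entirely,
        // so the system costs almost nothing while the query is empty.
        RequireForUpdate(GetEntityQuery(typeof(DoorInteractorAvailability)));
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        // Only reached when at least one matching entity exists.
        return inputDeps;
    }
}
```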
     
  6. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    There is indeed an overhead to an empty-query system, as the check to see whether the query is empty or not isn't free. But yeah, I do try to be mindful of that.

    Also, I won't make systems which do a ton of unrelated stuff, because that would be a pain to maintain (single responsibility principle) and I'm pretty sure it wouldn't be more optimal anyway. I also try to schedule Burst jobs as much as possible, because it seems to use the main thread less than waiting for the query job to end on the main thread and then executing your work on it.

    Edit: The post about mobile does make me more confident about the future performance of DOTS though. I guess I will keep myself up to date about that.
     
    Last edited: Sep 27, 2019
  7. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    7,234
    What if you could group DOTS systems by data load, so that low-load systems could be amalgamated? My theory being that the overheads for DOTS could maybe be mitigated or amortized for these systems.

    Or maybe some of the overheads for DOTS could be reduced, e.g. by going from multi-threading to single-threading.

    After all, your systems are probably just a few lines of processing code.

    Alternatively, can you manually combine systems and just switch/case depending on a flag?
     
  8. learc83

    learc83

    Joined:
    Feb 28, 2018
    Posts:
    34
    Unity has said they will support "many" systems, but I don't think they are talking about thousands. I've never heard anyone mention anything about scaling to thousands of systems. And as far as I'm aware, no one using an ECS architecture (whether they are using Unity or not) organizes their game like that.
     
  9. calabi

    calabi

    Joined:
    Oct 29, 2009
    Posts:
    118
    The Unity dev says right there in the above linked thread they expect about 60 fps performance on mobile with 1000 to 2000 systems.
     
  10. learc83

    learc83

    Joined:
    Feb 28, 2018
    Posts:
    34
    You're right. Thanks! I hadn't seen that. I still don't think an architecture with thousands of systems makes sense from an organizational and performance standpoint.
     
  11. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    7,234
    Maybe once you get over about 20 systems you should re-analyse what they actually do, as I would guess you will start to see common atomic data operations. After all, there are only about 23 basic math and logic operators.
     
  12. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    We are 100% aware of current performance issues with small entity counts combined with many system update calls.
    It's caused by 3 separate inefficiencies in that particular scenario, all of which are either already fixed or assigned to a dev working on fixing them for the next release.

    We have very much noticed the same thing in the DOTS shooter production, given that client-side prediction runs multiple system update ticks per frame against a single character only.

    We are seeing total speedups of 10-50x in these specific low-entity-count cases with the different bug fixes / codegen optimizations applied.


    I'll end by saying that the claim that performance by default in ECS / DoD only applies to many entities is not a correct assumption. DoD has a higher impact when you process more than one thing, but the expectation is that it is faster in all cases. If you have one thing, OO might be only a little bit slower, while when you have many things it is a lot slower.

    The current issues with low entity counts are simply performance bugs that are in the process of being fixed.
     
    Last edited: Sep 28, 2019
    pal_trefall, Vacummus, defic and 19 others like this.
  13. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    I expect that large games with > 1 million lines of game code will easily have thousands of systems.
    We are making sure DOTS can handle that well.
     
  14. maxim-zaks

    maxim-zaks

    Joined:
    Jul 24, 2019
    Posts:
    8
    First of all, the beauty of ECS is that it is relatively easy to replace multiple systems with one.
    So I personally try to go with fine-grained systems, and only if I can find a measurable overhead do I start thinking about joining systems.

    I know of and have worked on games which used ECS with the Single Responsibility Principle in mind and had a big number of systems.
    It was not Unity ECS, but from what I have seen and tried, Unity ECS should be at least on par.

    From my experience, there are three techniques you can apply to reduce the overhead of a system call:
    1. Early exit. The World calling execute on a system should not be a huge overhead, but in the update it is better if you can avoid performing unnecessary work on every tick and identify whether your system can return early.
    2. Write better queries. A system iterates on a group of entities; try to design your components in a way that lets your systems iterate only on the "proper" set of entities / components. Best case scenario, your set is empty and the system has nothing to iterate on. AFAIK in this case the system will be marked as "not running" in the Entity Debugger.
    3. Think about grouping systems (ComponentSystemGroup) and disabling whole groups in cases where it is clear that in the current state those systems have nothing to iterate on.

    That all said, take my advice with a grain of salt and always profile ;).
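    As a sketch of technique 1, an early exit might look something like this (untested; `idleEntityQuery` stands in for whatever EntityQuery the system caches in OnCreate):

```csharp
protected override JobHandle OnUpdate(JobHandle inputDeps)
{
    // Technique 1: bail out before doing any real work.
    // IsEmptyIgnoreFilter is a cheap check on the cached query.
    if (idleEntityQuery.IsEmptyIgnoreFilter)
        return inputDeps;

    // ... schedule the actual job against idleEntityQuery here ...
    return inputDeps;
}
```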
     
  15. mmankt

    mmankt

    Joined:
    Apr 29, 2015
    Posts:
    35
    From my experience, some systems that don't take hundreds of entities, are processed in an IJobParallelFor (or equivalent), and require a post-update command buffer will have enough overhead that they'd be faster as a simple ComponentSystem on the main thread. But it depends on the use case.
     
  16. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    That's good news, thanks!
    If you need to run commands on the main thread directly after, it may indeed be better to run on the main thread directly. But if you only need to modify values on existing components, it's definitely faster to run a job, since you don't have to wait for the query: the query will run in parallel, then your job will run, and you'll already be running another system on the main thread while this is happening. However, if you need to change components after your job, you have to wait for your jobs to complete, so it may not be worth it to schedule a job for that.
    I already use several groups similar to this:
    Code (CSharp):

    public class InteractGroup : ComponentSystemGroup
    {
        [Preserve] public InteractGroup() {}

        protected override void OnCreate()
        {
            RequireForUpdate(GetEntityQuery(typeof(Interaction)));
        }
    }
    It lets me group every system that works on the same component/query together; you know when all the work on those components is done, and if the query is empty, it culls everything with one check.
     
    Vacummus, maxim-zaks and florianhanke like this.
  17. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    Sorry for reviving this, but I just tested the same thing with the new Entities release (0.2.0), and it seems that it's basically the same as before for a simple system (about 0.007ms or more). Is there a reason I don't see the improvements that were supposed to be in the release?

    I'm kinda starting to fear that the ECS won't be as performant as good old MonoBehaviours for games without a bunch of systems operating on a ton of entities.
     
    Last edited: Nov 23, 2019
  18. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    Code (CSharp):

    [AlwaysSynchronizeSystem]
    public class RotationSpeedSystem_ForEach : JobComponentSystem
    {
        // OnUpdate runs on the main thread.
        protected override JobHandle OnUpdate(JobHandle inputDependencies)
        {
            float deltaTime = Time.DeltaTime;

            // Rotate around the up vector, executed immediately on the
            // main thread via .Run() (still Burst-compiled).
            Entities.ForEach((ref Rotation rotation, in RotationSpeed_ForEach rotationSpeed) =>
                {
                    rotation.Value = math.mul(
                        math.normalize(rotation.Value),
                        quaternion.AxisAngle(math.up(), rotationSpeed.RadiansPerSecond * deltaTime));
                })
                .Run();

            // Nothing was scheduled, so there is no job handle to return.
            return default;
        }
    }
    For low entity counts + many systems, this is now the most efficient way of writing that code.
    It uses Burst for the execution, but does not schedule a job. Instead it uses the Burst delegate compiler to run the job directly on the main thread without going through the job system, which at the moment has too much overhead when the code being executed is as simple as the above and just processes 1-2 entities.

    [AlwaysSynchronizeSystem] above makes it so that the system isn't passed a job handle; instead, all systems that write dependent data will simply be synchronized before the system runs. This is an important optimization.

    Let's call this a workaround for now. The end goal is to make it unnecessary to type out [AlwaysSynchronizeSystem].

    We also want to make it more automatic, so you can write code once and then configure globally whether you actually want to schedule jobs or execute on the main thread instead. Additionally, we will make sure that scheduling overhead is less than it is today.

    Do note that you should profile in a player. If you must profile in the editor, then please turn off the job debugger; especially for many systems with a low entity count, the overhead of the job debugger can get huge.

    But at least for the time being there is "a way" of doing it. But clearly there is still more for us to do in this area.
     
    Last edited: Nov 23, 2019
  19. Scorr

    Scorr

    Joined:
    Jul 2, 2013
    Posts:
    54
    The above method does indeed seem to be much faster: I get 0.01-0.02ms in the profiler, compared to 0.02-0.03ms jobified (not sure how to get more granular with the profiler other than profiling manually). Base ComponentSystems show up at about 0.05ms. In a development build (not sure about non-dev, because of the profiler) these numbers jumped all over the place, but the general trend was still the same.

    I find it weird that JobComponentSystem seems to run main thread code more efficiently than ComponentSystem. With 1 entity and 1 component, the base ComponentSystem is not only slower but creates garbage as well (also in a dev build). I assumed this was because of the lambda, but JobComponentSystem's ForEach has the same lambda style and seems to not create any garbage, perhaps because of Burst. It also runs faster, perhaps also because of Burst.

    So as of right now it seems like there is no reason to use ComponentSystems, since JobComponentSystems can now run on main thread. I imagine the same optimizations could be applied to ComponentSystems but then there are two ways to basically do the same thing, with JobComponentSystems being more powerful because it can do both (main thread and job). Is there still a use case for them? Is the plan to deprecate them later on?
     
  20. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    Yes. With these new changes JobComponentSystem should always be used for both main thread & jobified code.

    We introduced the new code-gen based Entities.ForEach only for JobComponentSystem in order to not have any breakage in existing code on upgrade.

    Ultimately I think we need to merge the two into one. Having two separate ones wasn't a great idea in the first place; it somewhat undermined the idea of performance by default. Now we are moving to a place where the simplest code is also the fastest. That's ultimately where we want to be.
    (Unless you really need IJobChunk-style access to the guts, but I think that's pretty rare, at least in game code.)

    Fortunately we can do that in a way where it is easy to migrate and we can leave the old ones around for easy upgrade / deprecation reasons for a while longer. We hope to get this done before end of year.
     
    dzamani, Sarkahn, pal_trefall and 4 others like this.
  21. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    510
    ComponentSystem is still a super important tool until there is a DOTS version available to replace all the current OOP-version features. Besides that, I think something also needs to be done about ComponentSystem to fully eliminate all the GC caused by lambda expressions.
     
  22. Radu392

    Radu392

    Joined:
    Jan 6, 2016
    Posts:
    192
    In the code posted by Joachim, I don't see [BurstCompile] anywhere, yet the job is apparently Bursted. I haven't upgraded to 0.2 yet, so is Burst now on by default for all jobs, or just for the ones with lambda expressions? Is there a way to turn it off for certain jobs, or will Unity now automatically figure out what is and what isn't burstable?

    Also, I seem to remember from a Unite video that that code with the lambda expression doesn't actually produce garbage, but maybe that's just my memory playing tricks on me. Would be nice to have confirmation.

    But anyway, this goes to show that not literally every little thing should be delegated to other cores, as that has inherent costs. This has been said many times over on these forums, and I don't see how Unity can solve this 'automatically'. Scheduling jobs will have overhead no matter what. I don't know if this is possible, but I think the best solution would be a warning from Unity saying that a particular system doesn't have enough work to do to justify being jobified.

    I like the idea of being able to configure jobs globally and maybe be able to switch between main thread and parallel at runtime with the debugger in order to see the difference.
     
  23. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    You can access class components as well as GameObjectEntity components via Entities.ForEach from JobComponentSystem now. It uses codegen'ed foreach which is significantly faster.
     
  24. Vacummus

    Vacummus

    Joined:
    Dec 18, 2013
    Posts:
    130
    He is not declaring any jobs in his code above. He's just processing the data in the OnUpdate function, which runs on the main thread. And I don't think Burst will ever be turned on by default, because you are not allowed to run side effects (such as console logs) in jobs that are Burst-compiled. So you want it to be configurable by devs.
     
  25. Vacummus

    Vacummus

    Joined:
    Dec 18, 2013
    Posts:
    130
    Looks like I was wrong, he is declaring a job in the ForEach function. So Radu, your question still stands.
     
  26. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    What you are looking at above is a burst job.

    In Entities.ForEach it is now on by default.

    You can turn it off using Entities.WithoutBurst().ForEach if you have class components. We made a really nice integration where, when you have a class component passed in, you get a nice compile error right in the IDE right away telling you about the existence of WithoutBurst().

    This is more aligned with our values of performance by default, while still making it easy.
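    A minimal sketch of what that might look like (untested; `ManagedDoorView` and its `ShowAvailable` method are hypothetical stand-ins for any managed class component):

```csharp
protected override JobHandle OnUpdate(JobHandle inputDeps)
{
    Entities
        .WithoutBurst() // required: Burst cannot touch class components
        .ForEach((ManagedDoorView view, in DoorInteractorAvailability availability) =>
        {
            // ManagedDoorView / ShowAvailable are placeholders for illustration.
            view.ShowAvailable(availability.IsClockwiseAvailable);
        })
        .Run();         // class components are processed on the main thread
    return inputDeps;
}
```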
     
  27. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    And yet we did... In 19.3 we introduced an ILPostprocessor feature. In Entities we use it to transform horrible (for performance) lambda expressions into amazing jobified code. If you look in ILSpy, you can see that we generate an IJobChunk and cache the EntityQuery automatically for you in OnCreate. It's the optimal way you would write low-level, high-performance code, with minimal typing...

    It's a bit like magic, but in this case, it's the right kind. Unlike previous mistakes from 15 years ago with magic OnEnable / Update methods, it's all very explicit, and when used wrongly, we give compile errors & warnings right in the IDE.
     
    Last edited: Nov 23, 2019
    felipin, maxim-zaks, dzamani and 14 others like this.
  28. Radu392

    Radu392

    Joined:
    Jan 6, 2016
    Posts:
    192
    Wow, I'm really happy to hear that you guys are sticking to your 'performance by default' principle without cutting any corners! I was in the middle of refactoring my code when this update hit, so I think I'll have to revert and start over again and use this new way of writing jobs.

    But anyway, back on topic. What I was talking about in my third paragraph was that if you only have 1 or 2 operations working on only 1 or 2 entities, then you're much better off doing this work on the main thread instead of going through the scheduling process, which is what your code posted above does (it's a Bursted job running on the main thread). That avoids all the overhead of scheduling, which is quite high at the moment. I know you guys plan on further optimizing the scheduling process, but at the end of the day, even after all those optimizations, will it still be worth it to schedule a job to do a single addition of 2 vectors instead of just doing it on the main thread? Hats off to you if it will be!
     
  29. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    Adding two vectors will never be faster on a job than immediately on the main thread. That said, there can be other reasons, like jobs running before that you would have to sync on before the code can run. But yes, generally speaking, for such simple code main thread execution has less overhead. With Entities.ForEach you can easily make that choice late in the coding process: you can just switch Schedule / Run without changing the content.
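    For illustration, the only thing that changes between the two variants is the terminating call (a sketch based on the rotation example earlier in the thread; the two alternatives below would not both live in the same OnUpdate):

```csharp
float deltaTime = Time.DeltaTime;

// Variant A - jobified: runs on worker threads, returns a dependency handle.
var handle = Entities
    .ForEach((ref Rotation rotation, in RotationSpeed_ForEach speed) =>
    {
        rotation.Value = math.mul(
            math.normalize(rotation.Value),
            quaternion.AxisAngle(math.up(), speed.RadiansPerSecond * deltaTime));
    })
    .Schedule(inputDependencies);

// Variant B - main thread: same body, executed immediately, still Burst-compiled.
Entities
    .ForEach((ref Rotation rotation, in RotationSpeed_ForEach speed) =>
    {
        rotation.Value = math.mul(
            math.normalize(rotation.Value),
            quaternion.AxisAngle(math.up(), speed.RadiansPerSecond * deltaTime));
    })
    .Run();
```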
     
  30. calabi

    calabi

    Joined:
    Oct 29, 2009
    Posts:
    118
    This is probably a stupid question, but would it not be possible to pre-schedule and pre-calculate the data? We have all these cores with spare cycles; I think even the Data Oriented Design book mentions doing that. I guess it depends on the data, but if all states are known and fixed then it shouldn't be too difficult, and even if they're not, it might be possible. I've seen Google Stadia say they are going to use some sort of predictive methods to reduce latency. Jobs are easy to use and create, so why not use spare threads to predict and create the data so it's ready immediately for when you need it?
     
  31. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    I was under the impression that scheduling jobs was almost always better, since there may be ongoing jobs on the same data, so you shouldn't have to wait for that data to be ready before further processing, and since you have to wait for the query-related jobs (GatherComponentDataJob and so forth) anyway, you might as well schedule your job and go do something else on the main thread. Is that gathering stuff still done in jobs like before? If it is, doesn't that defeat the purpose of not scheduling a job, since you have to wait for a scheduled job anyway?
     
  32. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,837
    Yes. Not introducing sync points is a very good reason to schedule a job even though there is very little work. It's a tradeoff, of course. It really depends on how big the previously scheduled jobs are that this sync point would have to wait on. So the only way to know which is better in a particular case is to profile.
     
    pal_trefall likes this.
  33. Lucas-Meijer

    Lucas-Meijer

    Unity Technologies

    Joined:
    Nov 26, 2012
    Posts:
    157
    The word "job" can be confusing here. Joachim uses it to refer to "a unit of code that has to run", whereas others in the thread use it to refer to scheduling the job through the job system, which comes with some overhead.

    When you invoke .Run() on an Entities.ForEach() or a Job.WithCode() statement, the code will run immediately, Burst-compiled, on the main thread, and will not pay any job system scheduling overhead. We spent quite some time making sure this path is fast, as we, and you in this thread, have noticed we were not in an amazing place yet for "1000 systems, 10 entities" kind of scenarios.

    Both Entities.ForEach() and Job.WithCode() will use Burst by default, because we like performance by default. You could argue we should have chosen a different default for "manual jobs declared through a struct", and you'd have a point. When we picked that default, Burst had a smaller featureset.

    We are aware of a few outstanding issues with the new compiler-based Entities.ForEach() and Job.WithCode() that we are fixing (especially when you use variables from outside of the lambda, and then use different ones from different scopes), but overall I hope this way of writing code becomes the default, where you get both convenience and maximum performance in one go. We'll be listening here to your experiences with it to figure out whether that will become true or whether you manage to find problems with it that we haven't yet.

    If you find yourself wanting to use an IJobForEach or IJobForEachWithEntity over an Entities.ForEach(), we would love to hear why.
     
  34. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    145
    Very interesting. Can this be done within a regular MonoBehaviour Update() as well, since this seems to follow the Task() syntax?
     
  35. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    839
    I can give you quite a few reasons actually, although most of them have to do with me trying to associate collections with entities with automatically updating JobHandles (I heavily rely on building lots of acceleration structures).

    But for a simple one, what is the intended equivalent of IJFE.ScheduleSingle? I commonly have a bunch of independent job chains scheduled in parallel to each other, each with a temporary NativeList.
     
  36. s_schoener

    s_schoener

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    1
    No, not right now. It takes some time after compilation to do the IL rewriting and the specific Entities.ForEach call also makes use of the JobComponentSystem itself (e.g. caching the EntityQuery).
     
    pal_trefall likes this.
  37. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    Is it possible to use an EntityQuery with the new JobComponentSystem's Entities.ForEach() (equivalent to Entities.With(query).ForEach() in ComponentSystem)? Is WithStoreEntityQueryInField what I need?

    This is something I use everywhere, I like to know quickly what I'm working on EntityQuery-wise in a system, so I always create explicit EntityQueries.
     
  38. Lucas-Meijer

    Lucas-Meijer

    Unity Technologies

    Joined:
    Nov 26, 2012
    Posts:
    157
    The new Entities.ForEach makes an EntityQuery for you automatically, based on the lambda parameters, .WithAll<> calls, etc. You cannot provide your own, but you can get a reference to the one that was generated for you with WithStoreEntityQueryInField.

    This is useful in cases where you need to know the number of matching entities before you run the Entities.ForEach.
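    Roughly like this (a sketch; the field is filled in by codegen, and DoorInteractorAvailability is borrowed from the first post):

```csharp
private EntityQuery m_Query;

protected override JobHandle OnUpdate(JobHandle inputDeps)
{
    var handle = Entities
        .WithStoreEntityQueryInField(ref m_Query) // capture the generated query
        .ForEach((ref DoorInteractorAvailability availability) =>
        {
            availability.IsClockwiseAvailable = true;
        })
        .Schedule(inputDeps);

    // The stored query can now be used elsewhere,
    // e.g. m_Query.CalculateEntityCount() to count matching entities.
    return handle;
}
```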
     
    pal_trefall likes this.
  39. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    6,928
    I am looking into Entities.ForEach, and I am considering converting some IJobForEach. I already converted some IJobChunk to IJobForEach, to reduce the amount of boilerplate, where applicable.

    However, I looked into the samples on GitHub, and somehow I cannot buy into the concept of that whole lambda approach.
    Maybe if I study it more closely. But for now it puts me off, so I'll personally stick with IJobForEach. It may be just me.
     
    Enzi likes this.
  40. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    145
    Ok, so is there a plan to enable something like this at some point?
     
  41. joepl

    joepl

    Unity Technologies

    Joined:
    Jul 6, 2017
    Posts:
    13
    @maxxa05 We do plan on offering support for explicitly specifying the query. This will likely be needed for iterating through chunks with a similar syntax (since component types can be optional). We would like to make this syntax expressive enough to do anything you would need to write an IJobForEach/IJobChunk or even an IJob/IJobParallelFor.

    @Antypodish You should be able to combine static method calls with your lambdas to bridge the gap if you aren't sold on writing your job code inside of a lambda yet (or it seems messy to). This also lets you be explicit about which locals are captured, since only the parameters you pass will be available inside your static method.
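    For example (a sketch using the component from the first post; the method name is made up):

```csharp
// The job logic lives in a normal static method; the lambda just forwards
// to it, so only the explicitly passed parameters are visible inside.
static void SetAvailable(ref DoorInteractorAvailability availability)
{
    availability.IsClockwiseAvailable = true;
    availability.IsCounterClockwiseAvailable = true;
}

protected override JobHandle OnUpdate(JobHandle inputDeps)
{
    return Entities
        .ForEach((ref DoorInteractorAvailability availability) =>
            SetAvailable(ref availability))
        .Schedule(inputDeps);
}
```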
     
  42. joepl

    joepl

    Unity Technologies

    Joined:
    Jul 6, 2017
    Posts:
    13
    @Nyanpas No current plans. IL post-processing does come at a cost, so we currently limit it to specific methods of specific types.
     
    Nyanpas likes this.
  43. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    I think I'll try to see if I can write my own With(query) extension method, since I always prefer to create explicit queries in OnCreate instead of relying on the automatically generated ones. It's way more readable than having to guess every time what the automatic query will be, and I can use the EntityQuery variable in a RequireForUpdate, for CalculateCount, or for IsEmptyIgnoreFilter if needed.

    Edit: It may be harder than I thought, since it seems there's no way to get the ComponentTypes back from the EntityQuery.
     
    Last edited: Nov 26, 2019
    Enzi likes this.
  44. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    510
    Seems like SystemBase in 0.6 already makes it unnecessary to type out [AlwaysSynchronizeSystem]. How about the "write code once and then configure globally" feature? Will it ship in a future version of Entities any time soon?
     
  45. maxxa05

    maxxa05

    Joined:
    Nov 17, 2012
    Posts:
    113
    So you don't have to use [AlwaysSynchronizeSystem] with SystemBase if you use .Run()? I guess that makes sense, but I just want to make sure.
     
    tarahugger likes this.