Playing by the rules doesn't work for complex systems

Discussion in 'Entity Component System' started by snacktime, May 6, 2018.

  1. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    This isn't so much a complaint as just a way to highlight a real issue with the current job system and ECS.

    In a nutshell, the right kind of complexity can drive much larger throughput with a much lower overall number of entities than a simple system with far more of them.

    The core issue is that we need more control over how shared collections are created and how we query them. Naive filtering doesn't scale well with some types of complexity. You have to solve it at the data layer. Sometimes this just involves creating separate collections out of a larger one. Sometimes it will involve duplicating data. It all depends on the situation and there is no one right answer. You cannot solve it by just making iteration faster.

    It's easy enough using jobs to work around this, but it does involve breaking the rules more or less. It would be nice to see some work going into this area, so that those of us creating complex systems wouldn't be faced with refactoring en masse down the road. I've tried to keep things abstracted in a way where I think they would be simple to refactor based on what I'm guessing will be added. It's about the best I can do.
     
  2. GabrieleUnity

    GabrieleUnity

    Unity Technologies

    Joined:
    Sep 4, 2012
    Posts:
    116
    @snacktime do you have an example of what you have in mind? We have a few plans on exposing more control over how to filter data, but I'd like to understand a bit better what you are looking for.
     
  3. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Sure, I'll just throw out some real data I'm working with to give a better picture of the challenge.

    I have a complex combat-oriented multiplayer game. It's largely NPC agents with a lot of combat AI.

    So let's say I have 200 agents. Agents belong to factions, groups, and classes, and can further be broken down by what weapons they are using and what skills they have slotted. Of the attributes/characteristics that can change, a lot of those changes propagate from the network, so they potentially need to be updated every update.

    On the logic side there is movement, targeting, and item/weapon usage as the primary logic. It gets broken down to a fairly granular level with different logic based on group, class, skills, etc..

    The logic is mostly per individual agent. So worst case each agent has to evaluate every other agent based on attribute values. That's 40k iterations for just a single naive filter.

    Now I'm never actually doing that. What I do is at the start of the job cycle I build out collections. One is a set per faction, another per group, and another per weapon type. Then I build out spatial structures built on those base collections. I also have some other things like damage accumulators that are updated as damage happens, but updated at the start of the cycle to decay them correctly.
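    Very roughly, that start-of-cycle build step looks something like this (a simplified sketch with made-up names, not my actual code):
    Code (CSharp):
    using Unity.Collections;
    using Unity.Jobs;

    // Simplified stand-in for my agent data, just for illustration.
    public struct AgentData
    {
        public int Id;
        public byte Faction;
        public byte WeaponType;
    }

    // Start-of-cycle job: one pass over all agents, writing each id into
    // per-faction and per-weapon lookups that the later jobs read.
    public struct BuildGroupsJob : IJob
    {
        [ReadOnly] public NativeArray<AgentData> Agents;
        public NativeMultiHashMap<byte, int> ByFaction;   // faction -> agent ids
        public NativeMultiHashMap<byte, int> ByWeapon;    // weapon  -> agent ids

        public void Execute()
        {
            for (int i = 0; i < Agents.Length; i++)
            {
                ByFaction.Add(Agents[i].Faction, Agents[i].Id);
                ByWeapon.Add(Agents[i].WeaponType, Agents[i].Id);
            }
        }
    }
    Targeting and movement jobs then walk only the buckets they care about (TryGetFirstValue/TryGetNextValue) instead of scanning every agent.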

    A lot of that data is shared between targeting and movement.

    I am also using the job-friendly pathfinding API and building a LOS map using RaycastCommand, which again targeting and movement both end up sharing the results of.

    The ability to create arbitrary collections and share them would, I think, solve most of this. Better filtering is fine, but filtering on its own won't solve some use cases.

    Like, I have 3 variants of spatial mapping for different use cases. The first task is to get them into ECS and accessible from multiple systems. Additional filtering on them would be great, but they need to be there and sharable as a starting point.
     
    S_Darkwell, MechEthan and one_one like this.
  4. Irushian

    Irushian

    Joined:
    Dec 13, 2015
    Posts:
    7
    I too was wondering this, but more for a coordinate or room system.
     
    S_Darkwell likes this.
  5. GabrieleUnity

    GabrieleUnity

    Unity Technologies

    Joined:
    Sep 4, 2012
    Posts:
    116
    Sorry for the delay. Are you referring to custom containers (https://github.com/Unity-Technologi...t/custom_job_types.md#custom-nativecontainers) or to exposing a way to allow users to control how component data is laid out in memory and fed to the systems?
     
    SugoiDev likes this.
  6. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    I was thinking higher level; the implementation isn't as important. The problem is a very basic one: if you need to query data by dynamic attributes, there are times you need to index/group it in some way so you aren't iterating over all the data just to get a small subset of it.

    As to implementation, I would borrow some ideas from the relational model because it's so easy to understand and I think it fits well. But it's important not to get hung up on comparisons to, say, relational databases, which are just one implementation, because in the context of ECS you would want to define your sets in advance. "In advance" being whatever makes sense: anything from the start of a job to the head of a chain of systems/jobs.

    So in the ECS context you would have something that's an equivalent of a WHERE query. You declare it at compile time, in something like a struct format, which basically defines the components involved. You apply it at runtime, where you set the attribute values used to filter the component data. When applied, it creates a new set of entities/components.

    Accessing the data is then just a matter of asking for components like normal but also specifying the query id. It doesn't change how ECS structures data in any way.
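    To make that concrete, here's purely invented syntax for the kind of thing I mean (none of these types or attributes exist; Target, Health and Faction are placeholders):
    Code (CSharp):
    // Invented syntax, just to show the shape of the idea: declared at compile
    // time, the struct defines which components participate in the query.
    struct HostilesQuery
    {
        public ComponentDataArray<Target> Target;
        public ComponentDataArray<Health> Health;
        [QueryKey] public Faction Faction;   // the dynamic attribute to filter on
    }

    // Applied at runtime with concrete values; the result is a new set of
    // entities/components that any system or job can ask for by query id:
    // var hostiles = EntityManager.ApplyQuery<HostilesQuery>(new Faction { Value = 3 });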

    You could also make it an option whether it filters at query time or creates new sets of data, which gives developers the choice of when to make that trade-off.
     
    zyzyx and Cynicat like this.
  7. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    The first step for us is to add support for reactive systems; we are working on that. Essentially events for add / change / remove.
    When those exist you can build & update spatial structures based on what has changed.

    Not sure there is really much we can do automatically for you here. But of course, if you have a concrete idea of the API you would like to see, that would be useful.
     
    illinar, optimise and starikcetin like this.
  8. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    Will these reactive systems buffer changes to be executed at certain points that tie into the dependency graph, with explicit control if needed (like the other systems), or will they process changes immediately (or will there be an option for either)?
     
  9. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    As a general rule we don't do immediate callbacks. They prevent jobification, and our goal is for all game code, reactive or not, to be writable multithreaded.

    So change tracking is essentially about a system being able to process, in OnUpdate, the entities that have changed since the last time the system ran. That should also work correctly when systems run at different frequencies.
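    A hand-rolled approximation of that idea with today's preview API (a user-maintained version number on the component, not the planned change-tracking API) might look roughly like this:
    Code (CSharp):
    using Unity.Entities;

    // Sketch only: the component carries a version number that writers bump,
    // and the system processes just the entities whose version is newer than
    // the last one it saw.
    public struct Heading : IComponentData
    {
        public float Angle;
        public uint Version;   // bumped by whatever code writes Angle
    }

    public class HeadingChangedSystem : ComponentSystem
    {
        struct Group
        {
            public int Length;
            public ComponentDataArray<Heading> Headings;
        }
        [Inject] private Group _group;

        private uint _lastSeenVersion;

        protected override void OnUpdate()
        {
            uint newest = _lastSeenVersion;
            for (int i = 0; i < _group.Length; i++)
            {
                var heading = _group.Headings[i];
                if (heading.Version <= _lastSeenVersion)
                    continue;                      // unchanged since our last run
                // ... react to the changed heading here ...
                if (heading.Version > newest)
                    newest = heading.Version;
            }
            _lastSeenVersion = newest;
        }
    }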
     
    GarthSmith, illinar and recursive like this.
  10. starikcetin

    starikcetin

    Joined:
    Dec 7, 2017
    Posts:
    340
    What if we decide not to jobify a system? Will we have an option to have it react instantly then?
     
  11. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    My comment on spatial structures was just a final note, it really didn't have anything to do with the core issue I was describing. And reactive systems won't solve the core issue I described.
     
  12. mike_acton

    mike_acton

    Unity Technologies

    Joined:
    Nov 21, 2017
    Posts:
    110
    If you can spec out a scenario/sample that would demonstrate the issue as you see it, perhaps we can build an example that would show how we envision it working (and if it uncovers any problems, all the better for us to solve now).
     
  13. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    I think I did that clearly enough already. Not really sure what's so difficult to understand?
     
  14. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I feel like it would help clarify things a great deal if you could come up with some simple pseudocode of how you currently have to handle a specific case vs. how you'd want things to work ideally.
     
    GarthSmith, 5argon and starikcetin like this.
  15. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    The first paragraph of my third post describes the problem in the simplest terms I can think of. Asking for a use case for that is kind of like asking for a use case for why an RDBMS needs indexing. You don't really need use cases or pseudocode to reason about that. In fact, all the detail I went into most likely just cluttered the issue.
     
  16. mike_acton

    mike_acton

    Unity Technologies

    Joined:
    Nov 21, 2017
    Posts:
    110
    > Asking for a use case for that, is kind of like asking for a use case for why an RDBMS needs indexing.

    No. What I'm asking is for a specific case because it's not clear why the problem you did describe isn't completely solvable using the tools that are already there. You can certainly already cache lookups. You already have hash tables. You can combine lookups into a single result type and either create shared components or specific component types. There are lots of possibilities for broadly solving the type of problem you described. However, you suggested there was a problem, and I would like to be able to analyze that problem.
     
    starikcetin likes this.
  17. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Ok, so how can I create 500 entities, all with the same components, and then query for just 20 of them by a specific attribute of a component, without iterating over all of them to get those 20?
     
  18. mike_acton

    mike_acton

    Unity Technologies

    Joined:
    Nov 21, 2017
    Posts:
    110
    A few options:
    1. Value-as-type is what a SharedComponent is. For one component type value that's relatively stable, that's a good option (minimal sketch below).
    2. If that 20/500 represents one of many exclusive (non-overlapping) groups of that 500, sorting is a reasonably good default. Sort the 500 when that component type has changed. 500 isn't that many, but a merge sort is still probably a reasonable starting point. A single-threaded sort in a job would probably also be fine if you have other work to fill the gap on other threads.
    3. If that 20/500 represents a group that's not exclusive, or is a true filter and not a categorization, you could create new entities which represent your list and update them when the component type in question changes.

    At this small/medium scale, rebuilding the list when the component type changes (e.g. via component version number) would be a reasonable default approach. It's necessarily a relatively stable case (otherwise you could just build the list in temp memory per frame, in which case nothing interesting is needed to solve it).

    But it's the nature of a data-oriented approach that it's very difficult to give specific advice on a general problem. What we are doing is precisely *not* abstracting these problems away. The more concrete sample data I have, the better I can be sure we have a reasonable approach to the problem.
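    For option 1, a minimal sketch of what value-as-type plus a filter could look like (Faction here is a made-up example component, using the current preview ComponentGroup/SetFilter API):
    Code (CSharp):
    using Unity.Entities;
    using Unity.Transforms;

    // Option 1 sketch: the faction value becomes a shared component, so entities
    // with the same faction are grouped together and can be filtered cheaply.
    public struct Faction : ISharedComponentData
    {
        public int Value;
    }

    public class FactionTargetingSystem : ComponentSystem
    {
        protected override void OnUpdate()
        {
            // Normally you'd cache this group once; inline here for brevity.
            var agents = GetComponentGroup(typeof(Faction), typeof(Position));

            // Only entities whose Faction matches the filter are visited.
            agents.SetFilter(new Faction { Value = 3 });

            var positions = agents.GetComponentDataArray<Position>();
            for (int i = 0; i < positions.Length; i++)
            {
                // ... per-agent work on the matching ~20 entities ...
            }
        }
    }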
     
  19. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Ya, so I wasn't really looking for advice on how to solve it generally. I actually laid out a real-world use case in broad terms and described the most efficient way to solve it. I was making the case that the proper place to do what I did *is* in ECS core.
     
  20. sngdan

    sngdan

    Joined:
    Feb 7, 2014
    Posts:
    1,154
    I was following this because it felt like a critical yet constructive discussion.

    But now I am lost ;(

    In a nutshell, what was the problem and what is the ‘best’ solution at the moment?
     
  21. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    I'll try to sum it up as best I can.

    You have data that's logically grouped by a dynamic group of some type; in my case, entity factions are one example. The complexity of the logic and how the data is used means the problem of selecting entities in a group is scaled up. I need to select entities in these logical groups many times. Add in multiple dynamic groups and it scales up more.

    The solution is to logically index the data. In my case, simply by creating arrays of entities (logical, not ECS) belonging to the groups at the start of each job cycle, once per frame. I pay a small cost up front to create those and save something like 100k+ iterations by doing so.

    The groups being dynamic is what throws the monkey wrench into this in the context of ECS. You can't use value as type. So you can't really solve this elegantly within ECS.

    The core set of data is not large. It's when you have loops that do things like: for each entity, evaluate every other entity in range that is a member of groups A, B, and C, with different logic for each group. That's what I mean by complexity being what drives this. You can very easily get into hundreds of thousands of iterations with complex enough code, even with a relatively small data set.
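    To illustrate with made-up names (groupA/B/C being the per-frame arrays of agent indices I described), the shape of the saving is:
    Code (CSharp):
    using Unity.Collections;

    public static class TargetingIllustration
    {
        // Instead of "for every agent, scan all 200-500 agents and test which
        // groups each one belongs to", only the relevant pre-built groups are
        // walked, with different logic per group.
        public static void Run(NativeArray<int> groupA, NativeArray<int> groupB, NativeArray<int> groupC)
        {
            for (int i = 0; i < groupA.Length; i++)
            {
                for (int j = 0; j < groupB.Length; j++)
                {
                    // ... group-B-specific logic for agent groupA[i] vs groupB[j] ...
                }
                for (int j = 0; j < groupC.Length; j++)
                {
                    // ... group-C-specific logic ...
                }
            }
        }
    }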
     
    sngdan likes this.
  22. sngdan

    sngdan

    Joined:
    Feb 7, 2014
    Posts:
    1,154
    If I understand you right, are you saying that once per frame you iterate through the IComponentData arrays that hold "selector data" and then create various NativeArrays with subsets that you feed into jobs (to avoid unneeded iterations in the jobs)?

    And that it would be nice to have some built-in functionality that slices the IComponentData, i.e. give me an array of all entities that have a hit point component with a value between 10 and 30? (Not using a shared component.)
     
  23. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    I'm still just using jobs, not ECS, but yes, same thing really.

    So the average total set of entities is 200, but the max can be 500, so that's what I have to design for.

    I have 4 dynamic groups right now. The number of variants is basically capped by the total number of entities. Each entity could be in a unique grouping, although the norm is something closer to 5-10.

    So at the start of each frame I'm just iterating over all logical entities, and for each entity I write the entity to an array at most 4 times. It's worth noting here that I sometimes create secondary groups based on a logical join of two or more of the 4, but for the sake of illustration it's not necessary to go into the details of that. The math is still basically the same, i.e. it's still complex enough that in the end I need to create several groupings regardless of how I slice it up.

    The key point is that in my feature logic, instead of having every entity evaluate entities in groups B, C, and D, I can say: for each entity in group A, evaluate entities in groups B, C, and D. The actual logic is a bit more dynamic.

    I've measured that it's roughly 5x slower to write to an array than to iterate over it. So it really doesn't take long before creating these collections at the start of the job cycle pays off.
     
  24. Coroknight

    Coroknight

    Joined:
    Jul 10, 2012
    Posts:
    26
    Why do you need to create arrays every frame? Let's say that you have 500 entities and each entity has some unique identifier (like an index). Then you have 500 potential groups.

    Make a 500x500 array of integers that stores the indexes of the entities in each group.
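    As a rough sketch of what I mean (the caps and names are just assumptions):
    Code (CSharp):
    using Unity.Collections;

    // A flat, persistent 500x500 index table plus a per-group count, reused
    // every frame instead of rebuilding arrays.
    public struct GroupTable
    {
        public const int MaxEntities = 500;
        public const int MaxGroups = 500;

        public NativeArray<int> Members;   // MaxGroups * MaxEntities entity indexes
        public NativeArray<int> Counts;    // current number of members per group

        public static GroupTable Create()
        {
            return new GroupTable
            {
                Members = new NativeArray<int>(MaxGroups * MaxEntities, Allocator.Persistent),
                Counts = new NativeArray<int>(MaxGroups, Allocator.Persistent)
            };
        }

        public void Add(int group, int entityIndex)
        {
            Members[group * MaxEntities + Counts[group]] = entityIndex;
            Counts[group] = Counts[group] + 1;
        }

        public int Get(int group, int i)
        {
            return Members[group * MaxEntities + i];
        }
    }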
     
  25. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    The arrays I'm using are pooled/reused. I have an abstraction around the whole thing that contains the group id, arrays for the variants in the group, and an index into each one using a NativeHashMap. And a NativeMultiHashMap to track active variants in a group. Abstracting out the data as you say is a good idea. It's not a large/important savings in time/space right now with my specific data patterns (relatively small number of unique variants per frame), but at some point worth doing.
     
  26. Orimay

    Orimay

    Joined:
    Nov 16, 2012
    Posts:
    304
    Wouldn't you need only a single loop (per frame) to build all the group collections at once? You don't have to iterate separately over the 500 entities for each group. A for loop over 500 entities should be relatively cheap, and then you end up with a set of NativeList collections that hold all the data you wanted, already separated out.
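    Roughly like this, as a sketch (the Agent struct and faction count are assumptions):
    Code (CSharp):
    using Unity.Collections;

    public static class GroupBuilder
    {
        public struct Agent
        {
            public int Faction;
            // ... other selector data ...
        }

        // One pass over all the agents fills every per-faction list at once.
        public static NativeList<int>[] BuildFactionLists(NativeArray<Agent> agents, int factionCount)
        {
            var byFaction = new NativeList<int>[factionCount];
            for (int f = 0; f < factionCount; f++)
                byFaction[f] = new NativeList<int>(Allocator.TempJob);

            for (int i = 0; i < agents.Length; i++)
                byFaction[agents[i].Faction].Add(i);

            return byFaction;
        }
    }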
     
  27. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    I only use a single loop over the entities to build the arrays I query later. In that loop I potentially write out to multiple arrays. But ya, it's pretty cheap. Even in the editor it's under half a ms.
     
  28. MikeMarcin

    MikeMarcin

    Joined:
    May 15, 2012
    Posts:
    17
    Starting at the first layer of your described problem:
    For a query based on faction, why isn't SharedComponentData with ComponentGroup.SetFilter sufficient?
     
  29. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    He said that because the group is dynamic, he could not use value-as-type (ISharedComponentData).

    But if you make weapon type an `ISharedComponentData` and faction an `ISharedComponentData` as well, and add/change/remove them on your entity dynamically (that is, make a dynamic group), then wouldn't `.SetFilter` return the changed group correctly, whether we want all entities with that faction or all with some weapon type? I also wanted to know what prevents him from using this design. There's also `CreateForEachFilter` to use many `ISharedComponentData` at once. I think this is exactly his WHERE query. In the latest v0.0.6 you can even get an `EntityArray` using a filter (not just a ComponentDataArray), so he could still use this to cache his own logical entity list like he does currently, if he is worried about having to use `SetFilter` every frame (when the group does not change).

    The only downside is that you can't say something like "give me all the entities with attack power > n" (where attack power is a floating point), but I don't think that is what he wants, because factions, weapon types, etc. are discrete groups.
     
    Last edited: May 29, 2018
  30. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,114
    Hi @GabrieleUnity, @Joachim_Ante and @mike_acton. I think Unity ECS currently lacks the ability to get the piece of data you want directly, given an index. The core idea is that after you define the data you want to find with the index, it will automatically create the best possible data structure for you, to get the best possible data lookup speed. For example, you want to change the color of all the entities that have a `Position` component with `Position(1, 1)` or `Position(2, 2)`.

    Code (CSharp):
    struct Group
    {
        public int Length;
        [Filter] public Position Position = new Position(1f, 1f);
        [Filter] public Position Position2 = new Position(2f, 2f);
        public ComponentDataArray<Color> Color;
    }
    [Inject] private Group _group;
    Another example: you want to change the color of all the entities that have an `Attack` between 5 and 10.
    Code (CSharp):
    struct Group
    {
        public int Length;
        [Filter] public Attack Attack = 5 < value < 10;
        public ComponentDataArray<Color> Color;
    }
    [Inject] private Group _group;
    I'm not sure whether this magic can happen without putting an attribute on the `IComponentData` to tell Unity ECS to build the data structure. The best scenario is that you don't need to put the attribute on the `IComponentData` at all: it would intelligently scan the `[Inject] private Group _group` declarations inside all your system code, then figure out and build the best possible data structure for you automatically. For example, if you put an `EntityIndex` attribute on the `Position` component, it would build a C# Dictionary. For each new `Position` component added, it would add to the Dictionary, and when you want to find data by `Position` index, it would get it from the Dictionary.

    Code (CSharp):
    [EntityIndex]
    public struct Position : IComponentData
    {
        public float3 Value;

        public Position(float3 position)
        {
            Value = position;
        }
    }
     
    Last edited: Jun 2, 2018