
Buff/Debuff system for ECS?

Discussion in 'Entity Component System' started by Guedez, Jan 17, 2022.

  1. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    So what solutions is everyone using to modify RPG-like attributes on their entities? I could come up with a couple of them, but I am not satisfied with any of them. One prerequisite is that I don't know at compile time what they can affect; any or all of them could be added as a mod, and they must be read from an XML.
    Code (CSharp):
    1. <PlantTraitData>
    2.     <ID>Flood_Vulnerable</ID><!--String ID so that other XMLs can reference it-->
    3.     <icon>
    4.         <path>Items/Traits/Flood_Vulnerable</path>
    5.     </icon>
    6.     <Name>$$Flood_VulnerableName$$</Name>
    7.     <Description>$$Flood_VulnerableDesc$$</Description>
    8.     <TraitEffect><!--This is the actual data that goes to ECS/Burst to change the RPG-like value-->
    9.         <!--The implementation in the question would change how these are defined in the XML-->
    10.         <Description>$$Flood_Vulnerable1Desc$$</Description>
    11.         <Type>BufferWaterExcess</Type><!--Class that is dynamically loaded/instanced through Reflection which implements the effect.-->
    12.         <Application>Multiplicative</Application>
    13.         <ValueBase>1</ValueBase>
    14.         <ValueLevel>-0.25</ValueLevel>
    15.     </TraitEffect>    
    16.     <MaxLevel>3</MaxLevel><!--Goes to TraitEffect as a parameter-->
    17.     <!--Beyond this point it's all about which plants can get this trait and what are the chances it mutate, not really relevant for the question at hand-->
    18.    <TraitType>Negative</TraitType>
    19.    <TraitClass>Natural</TraitClass>
    20.    <Filters>
    21.        <Any>
    22.            <IsCrop/>
    23.            <IsGrass/>
    24.            <IsTree/>
    25.        </Any>
    26.    </Filters>
    27.    <Nu>0</Nu>
    28.    <Ka>0</Ka>
    29.    <Pi>0</Pi>
    30.    <ChanceSpawn>0.02</ChanceSpawn>
    31.    <ChanceLevelUp>0.1</ChanceLevelUp>
    32.    <ChanceLevelDown>0.15</ChanceLevelDown>
    33.    <ChanceMutation>0.01</ChanceMutation>
    34.    <MutatesToID>Drought_Resistant</MutatesToID>
    35.    <CounterID>Flood_Resistant</CounterID>
    36. </PlantTraitData>
    This example is for my plant traits: when you harvest a seed, the traits on the seed have a small chance to change; old ones can be lost and new ones can appear. The traits are applied when the seed is planted and mostly will not change until the plant dies (with rare exceptions, such as items that affect traits on living plants).
    The solutions I came up with are:

    Buffs as ComponentData
    How: A buff will affect the entity it is placed upon, and its effect is implemented through systems.
    GetAllBuffs implementation:
    Code (CSharp):
    var ComponentTypes = EntityManager.GetChunk(Entity).Archetype.GetComponentTypes(Allocator.Temp);
    // Select (not Cast) maps each ComponentType to its managed System.Type
    System.Type[] Buffs = ComponentTypes.Select(T => T.GetManagedType())
                                        .Where(T => typeof(MyBuffInterface).IsAssignableFrom(T)).ToArray();
    ComponentTypes.Dispose();
    Pros:
    • Burstable
    • Can add as many systems as one wants to process as many effects as needed
    • Mods can add or replace systems to change how buffs work, either incrementally tweaking buffs or outright replacing the original implementation
    • Buffs can have synergies by creating systems that read multiple buffs at once
    • One buff can even change the values of another buff with relative ease
    • No random access
    • Trivial implementation
    • Trivial to ensure no repeated buffs
    Cons:
    • Butchered chunks; the most likely situation is each plant ends up in its own chunk
    • The butchered chunks mean there is basically no performance advantage over the other implementations
    • Adding/Removing an effect will cause the entity to change chunks
    • GetAllBuffs and creating a UI element for each buff is terrible, even if possible
    • Tons of Systems overhead
    • Implementing stacking buffs from multiple sources would require a DynamicBuffer on the entity per type of effect that supports it
    Buffs as Entities ~ Shared Component Data
    How: Each buff is an entity, which is then indexed by a SharedComponentData, both to determine its target and to implement GetAllBuffs.
    GetAllBuffs implementation:
    Code (CSharp):
    var EQ = EntityManager.CreateEntityQuery(typeof(MySharedBuffTarget));
    EQ.SetSharedComponentFilter(new MySharedBuffTarget(Entity));
    // ToEntityArray needs an allocator and returns a NativeArray<Entity>
    NativeArray<Entity> Buffs = EQ.ToEntityArray(Allocator.Temp);
    Pros:
    • Burstable
    • Trivial to add/remove buffs from an entity
    • Trivial to implement stacking buffs from multiple sources
    • Adding/removing buffs does not move the target entity to a different chunk
    Cons:
    • Abhorrent performance and memory usage
    • Butchered chunks; the most likely situation is each buff ends up in its own chunk
    • Systems will likely end up running one job per target per buff
    • Everything is random access
    • Incredibly hard if not outright impossible to detect sibling buffs for synergies
    • Tons of Systems overhead
    Buffs as Entities ~ DynamicBuffer
    How: The same as above, but a DynamicBuffer on the target is used to keep track of what affects it rather than SharedComponentData indexation
    GetAllBuffs implementation:
    var Buffs = EntityManager.GetBuffer<MyBuff>(Entity).AsNativeArray().ToArray();


    Pros:
    • Every Pro the above implementation has
    • Slightly less terrible chunk utilization than the above implementation
    Cons:
    • Every con the above implementation has, except the slightly less bad chunk utilization
    • Needs to manage a DynamicBuffer on the target entity in addition to the buff Entities
    Out of ECS ~ NativeCollections
    How: Keep track of one NativeArray<BuffData> for each Entity that has buffs, in whichever way you prefer; for each NativeArray<BuffData>, run a Job that processes all of that Entity's buffs. It might be possible to parallelize it all in a single job using pointers.
    GetAllBuffs implementation: Just go through the NativeArray<BuffData> for its Entity and process it
    Pros:
    • No chunk utilization issues; the memory layout is as good as how you manage your NativeArrays
    • Possibly the fastest performance of the random-access implementations
    • Multiple ways to go about it
    Cons:
    • Incredibly complex/hard to implement
    • Adding/Removing buffs is incredibly complex
    • Limits the kinds of things buffs can do
    • Incredibly hard to extend through mods
    • Need to manage a list of NativeArrays and the NativeArrays themselves
    • Pointers and Reinterpret tricks required
    • Can't make buffs that add buffs
    • Buffs can't have any complex effect that is not directly tied to a float (you can't have "Reduces incoming fire damage by 13%" unless "FireDamageResist" is a float, so it's unlikely you could make a "Reduces incoming fire damage from your back on Thursdays" buff)
    For the last one, my idea of how to implement would be as such:
    Layer buffs into multiple levels to handle additive and multiplicative buffs
    Each Buff is allocated an amount of bytes that can implement the biggest buff
    BuffData is a short for the buff ID + long for expiration date + N# bytes for the buff data
    Each buff type is implemented through a function pointer
    The job will receive a NativeArray of "function pointer" pointers
    "RPG-like-data archetype"s are created, and each system runs on one of those (Plants have health and grow speed, Characters have health, attack and defense, Buildings have health and resistance, etc.)
    Each buff type can only affect one archetype, so you can't reuse an Add10Health buff for different Entity types like you could in the previous implementations
    All modifiable RPG-like data is pre-registered and given an ID per "RPG-like-data archetype"
    All modifiable RPG-like data from the target entity is preloaded into a NativeArray (RPGDATA)
    Create a CacheData NativeArray to give some shared memory for the effects to work with
    On the JOB:
    Read *RPGDATA from the Entity
    Foreach buff:
    Extract its ID from BuffData
    Call its related function pointer with the arguments: (*RPGDATA, *BuffData+10, *CacheData)
    Check for expiration date and add to a List for later removal​
    Write back *RPGDATA into the Entity​

    CacheData would be used for things like (Adds +10% HP) buffs that stack additively
    So Layer N would first run all (Adds +10 HP) buffs, then sum up in CacheData all multiplicative (Adds +10% HP) buffs, then apply all multiplicative buffs from CacheData. This unfortunately means that each layer needs an "ApplyMultiplicatives" effect at its end; Layer N+1 would then do the same. That way, (Adds +10% HP) buffs from gear all stack additively with one another, but stack multiplicatively with (Adds +10% HP) buffs from skills, etc.
    Effects like an aura that (Adds 10% of the owner's HP to all allies in range) would need to refresh their BuffData with the actual value of the buff each "buff value update interval"
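    To make that last idea more concrete, here is a rough sketch of how the function-pointer job could look. Everything here is illustrative: BuffData is collapsed to a single float payload, "slot 0 = Health" is an assumed stat layout, and the effect table would be filled on the managed side with BurstCompiler.CompileFunctionPointer.
    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Jobs;

    // Hypothetical layout: ID + expiration + payload, as described above.
    public struct BuffData
    {
        public short BuffTypeId;     // index into the function-pointer table
        public long ExpirationTick;  // expiration date
        public float Value;          // stand-in for the "N# bytes for the buff data"
    }

    // One static, Burst-compiled function per buff type; the job only sees raw pointers.
    public unsafe delegate void BuffEffect(float* rpgData, BuffData* buff, float* cacheData);

    [BurstCompile]
    public static class BuffEffects
    {
        [BurstCompile]
        public static unsafe void AddFlatHealth(float* rpgData, BuffData* buff, float* cacheData)
        {
            rpgData[0] += buff->Value; // slot 0 = Health in this assumed archetype
        }
    }

    [BurstCompile]
    public unsafe struct ProcessBuffsJob : IJob
    {
        public NativeArray<float> RpgData;    // preloaded stats of one entity
        public NativeArray<BuffData> Buffs;   // this entity's buffs
        public NativeArray<float> CacheData;  // shared scratch memory for multiplicative layers
        [ReadOnly] public NativeArray<FunctionPointer<BuffEffect>> Effects; // indexed by BuffTypeId
        public long CurrentTick;
        public NativeList<int> Expired;       // buff indices to remove later

        public void Execute()
        {
            float* rpg = (float*)RpgData.GetUnsafePtr();
            float* cache = (float*)CacheData.GetUnsafePtr();
            for (int i = 0; i < Buffs.Length; i++)
            {
                BuffData buff = Buffs[i];
                Effects[buff.BuffTypeId].Invoke(rpg, &buff, cache);
                if (buff.ExpirationTick <= CurrentTick)
                    Expired.Add(i);
            }
        }
    }
    The table itself would be filled once at startup, e.g. Effects[id] = BurstCompiler.CompileFunctionPointer<BuffEffect>(BuffEffects.AddFlatHealth), with one entry per buff type that the XML can reference by ID.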

    TLDR: I am currently using the Buffs as ComponentData implementation but want to move on to the Out of ECS ~ NativeCollections implementation, and I wanted to hear the community's ideas/input/suggestions on the subject. My idea for Out of ECS ~ NativeCollections was probably not well explained, so please ask anything you want about it.
     
    Last edited: Jan 17, 2022
  2. swejk

    swejk

    Joined:
    Dec 22, 2013
    Posts:
    20
    In my RPG stat system I am storing the stats in a single NativeHashMap<StatKey, StatModifierValues>.
    Before that I tried solutions with one IComponentData per stat. This might be fine when you only need a few basic stats, or when all units have the same set of stats at all times; otherwise, as you already know, chunk utilization will be next to none.

    I also tried having stats in a BufferElementData, where every stat has a dedicated index.
    This made it easier to generalize stat operations, because you refer to stats via index rather than component type, but again the buffer would be unnecessarily long if you used only a few of the stats.

    Code (CSharp):
    using System;
    using Unity.Entities;
    using Unity.Mathematics;

    public struct StatKey : IEquatable<StatKey> {
        public Entity StatEntity;
        public Entity OwnerEntity;

        // NativeHashMap keys must implement IEquatable<T>
        public bool Equals(StatKey other) => StatEntity == other.StatEntity && OwnerEntity == other.OwnerEntity;
        public override int GetHashCode() => (StatEntity.GetHashCode() * 397) ^ OwnerEntity.GetHashCode();
    }

    public struct StatModifierValues {
        public float Flat;
        public float Additive;
        public float Multiplicative;
        public float Min;
        public float Max;

        public float Value {
            get { return math.clamp(Flat * (1 + Additive) * Multiplicative, Min, Max); }
        }
    }
    Changes to these stats are done directly (when applying instant effects such as damage) or via buffs.

    Code (CSharp):
    public struct Buff : IComponentData {
        public Entity ChangeStatEntity;
        public Entity ApplyToEntity;
        public ModifierValue ModValue; // Which value to change (Flat, Additive, Multiplicative ...)
        public float Value;
    }

    public struct StatManager {
        public NativeHashMap<StatKey, StatModifierValues> Stats;

        public void ApplyBuff(Buff buff) {
            var key = new StatKey {
                StatEntity = buff.ChangeStatEntity,
                OwnerEntity = buff.ApplyToEntity
            };

            if (!Stats.TryGetValue(key, out var statValues)) {
                statValues = GetDefaults(buff.ChangeStatEntity);
            }

            statValues.Flat += buff.Value;
            Stats[key] = statValues;
        }
    }
    To remove buff bonuses I just destroy the buff entity. It has an ISystemStateComponentData applied, which I query for and then revert the buff value that was applied to the stats.
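    For reference, a minimal sketch of that removal pattern might look like this (AppliedBuffState, the stored Flat amount, and the way the hashmap reaches the system are all assumptions for illustration, not the actual code):
    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;

    // Hypothetical system-state component recording what the buff applied,
    // so it can be reverted after the buff entity is destroyed.
    public struct AppliedBuffState : ISystemStateComponentData
    {
        public Entity StatEntity;
        public Entity OwnerEntity;
        public float AppliedFlat;
    }

    public partial class BuffCleanupSystem : SystemBase
    {
        // Assumed to be the same map the StatManager above writes to.
        public NativeHashMap<StatKey, StatModifierValues> Stats;

        protected override void OnUpdate()
        {
            var stats = Stats;
            var ecb = new EntityCommandBuffer(Allocator.Temp);

            // Destroyed buff entities linger with only the system-state component left.
            Entities
                .WithNone<Buff>()
                .ForEach((Entity entity, in AppliedBuffState applied) =>
                {
                    var key = new StatKey { StatEntity = applied.StatEntity, OwnerEntity = applied.OwnerEntity };
                    if (stats.TryGetValue(key, out var values))
                    {
                        values.Flat -= applied.AppliedFlat; // revert what ApplyBuff added
                        stats[key] = values;
                    }
                    ecb.RemoveComponent<AppliedBuffState>(entity); // lets the entity fully die
                }).Run();

            ecb.Playback(EntityManager);
            ecb.Dispose();
        }
    }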

    I am using this system for almost every dynamic value in gameplay.
    Unit health, mana, ability damages, ability cooldowns, fireball_burn_duration, poison_arrow_poison_dot, ...
    Even some static ones, or status effects like IsUndead, IsChannelingAbility, IsStunned, PoisonDot

    Most gameplay interactions are declared as which stats to read for the buff value and which stat to change.
    For example:
    Poison Arrow ability creates buffs:
    buff where Value = GetStat(damageStat, casterEntity), ChangeStatEntity= CurrentHealthStat,
    buff where Value = GetStat(poisonArrowDotStat, casterEntity), ChangeStatEntity= PoisonDotStat
    buff where Value = GetStat(poisonArrowCooldownStat,casterEntity), ChangeStatEntity=PoisonArrowCurrentCooldownStat

    Then you have systems which change the current cooldown, apply damage based on PoisonDotStat, etc.
    Stats are entities, so it is easy to generalize several systems, like FireDotStatSystem and PoisonDotSystem, into a single DotSystem.

    One negative about this approach, though I think you would run into it with any other approach too:
    It gets more complicated when the way the buff Value should be computed involves multiple stats from different sources, the most obvious scenario being damage and damage resistances.
    In such cases it would be nice to have some virtual function that computes the final value. I don't know how to do this, so I am using a kinda cringe way where I encode the postfix expression of the value calculation into BufferElementData (i.e. token[0] = GetStat(fireDamage, caster), token[1] = GetStat(fireResistance, target), token[2] = Subtract).
    In a turn-based RPG it is good enough to abuse it everywhere, but in realtime it might be too slow.
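    For illustration, that postfix idea could be evaluated with a small stack walk like this (the token layout and names are made up for the example):
    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;

    public enum ValueTokenType : byte { Stat, Constant, Add, Subtract, Multiply }

    // One token of the postfix expression, stored in a buffer on the buff entity.
    public struct ValueToken : IBufferElementData
    {
        public ValueTokenType Type;
        public Entity StatEntity;   // used when Type == Stat
        public Entity OwnerEntity;  // whose stat to read
        public float Constant;      // used when Type == Constant
    }

    public static class ValueExpression
    {
        // Evaluates e.g. [fireDamage(caster), fireResistance(target), Subtract].
        public static float Evaluate(DynamicBuffer<ValueToken> tokens,
                                     NativeHashMap<StatKey, StatModifierValues> stats)
        {
            var stack = new NativeArray<float>(tokens.Length, Allocator.Temp);
            int top = 0;
            for (int i = 0; i < tokens.Length; i++)
            {
                var t = tokens[i];
                if (t.Type == ValueTokenType.Stat)
                {
                    stats.TryGetValue(new StatKey { StatEntity = t.StatEntity, OwnerEntity = t.OwnerEntity }, out var s);
                    stack[top++] = s.Value;
                }
                else if (t.Type == ValueTokenType.Constant)
                {
                    stack[top++] = t.Constant;
                }
                else
                {
                    float b = stack[--top];
                    float a = stack[--top];
                    stack[top++] = t.Type == ValueTokenType.Add ? a + b
                                 : t.Type == ValueTokenType.Subtract ? a - b : a * b;
                }
            }
            float result = top > 0 ? stack[top - 1] : 0f;
            stack.Dispose();
            return result;
        }
    }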
     
    eterlan and Guedez like this.
  3. msfredb7

    msfredb7

    Joined:
    Nov 1, 2012
    Posts:
    163
    Unless you already have perf problems, I would go with Buffs as Entities ~ DynamicBuffer.
    * Decent chunk utilisation (depending on the ratio of # buff archetypes vs. # buff entities)
    * Pretty simple
    And most important to me:
    * You use the common Unity API. This means that you don't have to recode a complex API for adding buffs, removing buffs, visiting buffs. It also means you can see the buffs in the inspector as regular entities (much easier to debug/work with). This also means that serializing the world state will catch the buffs. And there are probably other advantages I'm missing.

    We don't have a buff system as complex as yours in our game, but we do have some similarities.

    My rule of thumb: Unless you already foresee big perf issues, go with the easiest to work with first.
     
    Last edited: Jan 17, 2022
    lclemens and Antypodish like this.
  4. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,769
    I agree with @msfredb7
    Dynamic buffers are easy to work with and make it easy to support modding to a certain extent. Having tons of various components is doable, but it can quickly become tangled if not thought through carefully.

    Dynamic buffers can store values and, if needed, enums, which makes debugging via the inspector easy, as you can see which buff is executed by its enum name. But that is just one of many ways to implement buffs. Probably the simplest and quickest.
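    For example, something along these lines (the enum values are just placeholders):
    Code (CSharp):
    using Unity.Entities;

    public enum BuffType : byte { None, FloodVulnerable, DroughtResistant, GrowthBoost }
    public enum BuffApplication : byte { Additive, Multiplicative }

    // One active buff on the plant entity; the enum names make the buffer
    // readable in the Entity Debugger / inspector.
    public struct ActiveBuff : IBufferElementData
    {
        public BuffType Type;
        public BuffApplication Application;
        public float Value;
        public int Level;
    }
    Adding one is then just EntityManager.GetBuffer<ActiveBuff>(plant).Add(new ActiveBuff { ... }).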
     
  5. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    962
    Buffs/debuffs as entities. Create a NativeHashMap with a bitfield of effect types at frame start to have a fast lookup, with either an index to the entity or a small subset of data for values (can be multithreaded).
    Process effect logic in systems with lookups into the NHM. Downside: random read/write memory access of effect values. This is the fastest solution I could come up with. Unless some implementation eludes me, there's always some caveat in memory access. Like, how would you implement in a linear fashion an effect that absorbs damage? I think it's impossible. (But never stop thinking about it ;) )
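    If I read that right, the frame-start lookup could be sketched roughly like this (EffectTarget/EffectInfo and the single-threaded build are assumptions; a multithreaded version would use a parallel writer and combine the bits in a second pass):
    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;

    public struct EffectTarget : IComponentData { public Entity Value; }
    public struct EffectInfo : IComponentData { public uint TypeBit; } // e.g. 1u << effectTypeIndex

    public partial class BuildEffectLookupSystem : SystemBase
    {
        public NativeHashMap<Entity, uint> EffectLookup; // target entity -> bitfield of effect types

        protected override void OnCreate()
        {
            // Capacity would need to be kept >= the number of buffed targets.
            EffectLookup = new NativeHashMap<Entity, uint>(1024, Allocator.Persistent);
        }

        protected override void OnDestroy()
        {
            EffectLookup.Dispose();
        }

        protected override void OnUpdate()
        {
            EffectLookup.Clear();
            var lookup = EffectLookup;
            // Each buff/debuff entity declares its target and its effect bit.
            Entities.ForEach((in EffectTarget target, in EffectInfo info) =>
            {
                lookup.TryGetValue(target.Value, out uint bits);
                lookup[target.Value] = bits | info.TypeBit;
            }).Schedule();
        }
    }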

    You can linearize the access beforehand, but that introduces multiple writes based on which systems require each state, and it would add an extra write operation and overhead, with a very high chance of being slower overall because multiple systems will read or write the same effect(s).

    The shared component data is a needless abstraction IMO.
    DynamicBuffers are trash for performance but easy to understand and good enough for prototyping or a small caster/effect count. Unless you cap the number of buffs, the dynamic buffer will reside in heap memory.
     
    Last edited: Jan 18, 2022
  6. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    Seems the consensus is Buffs should be entities.
    But what if the RPG stats themselves were entities too? Would that have any benefit? That way you don't really need to care which stat the buff is affecting; just get the IStatVal of the target Entity, since it will have only one anyway.
    Something like:
    RPGCharacter Entity
    Attack Stat Entity
    DynamicBuffer<Buffs>
    Buff Entity 1
    Buff Entity 2
    etc​
     
  7. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    962
    Hard to say; the problem in RPGs is that everything can be intertwined. It might work for one example, but it has to be thought through for many different attacks and effects to see if the design holds up.
    I have an IComp with all stats. Honestly, I'm not sure what the advantage is of micro-managing stats. They are hardly more than one float or int value each, and the number of overall stats an RPG has is not that large. We are talking about 50 to 80 bytes of data. So why make it complicated and not pack them together in memory?
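    i.e. something in this direction (the stat names are placeholders):
    Code (CSharp):
    using Unity.Entities;

    // All stats of a unit packed into one component: one lookup, linear chunk access.
    public struct CharacterStats : IComponentData
    {
        public float MaxHealth;
        public float Health;
        public float Mana;
        public float Attack;
        public float Defense;
        public float MoveSpeed;
        public float CritChance;
        // ...a typical RPG rarely needs more than a few dozen of these,
        // so the whole struct stays in the 50-80 byte range mentioned above.
    }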
     
  8. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    What about damage as an entity? I sort it out by target and then apply all damage to a specific target in parallel. It's quite fast, it's linear for the heaviest part and ends up with one write per target. I would approach a buff system in the same manner. Am I missing something?

    You can, but then the access pattern would be messy.
     
  9. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    962
    Depends on what your goals are honestly. It works well enough in the 1k area but doesn't really scale that well. I would never call it bad design if it stays in that area. If you want to push it further the design breaks down though.
    The problem being, entity instantiation, data writing and sorting take a lot of CPU time. In case it's strictly damage that lives on for a few frames, this is okay. For any data that lives only for one frame, Entities is not a good fit. You're better off keeping the data on the stack and processing it directly.
    I've invested quite a few weeks in different approaches, and my current implementation uses NativeStreams and direct processing where possible.
    I've made benchmarks with other approaches and they all performed really badly against it. Once you start breaking down the algorithm into reads/writes and access patterns, it's quite astonishing how much is lost on allocations and writes. Even when laid out linearly, the time lost can't be compensated for. For one-frame data, linearizing it is never worth the effort. (sadly) I wish it were otherwise. The time alone spent allocating memory is huge, and the timings get longer than just reading random memory. Pre-allocating helps, but then there's still writing the data.
    Small anecdote: creating damage data, sorting it and then applying it linearly to one target takes around 10x longer than directly applying the damage to the target with atomic writes (Interlocked), and that was for a test where 8 threads were all fighting to write to the same target, so really a worst case for Interlocked and a best case for the other method.
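    For illustration, a minimal sketch of the atomic-write idea, assuming damage events carry an index into a shared health array (the real setup with entities/chunks needs more plumbing):
    Code (CSharp):
    using System.Threading;
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Jobs;

    public struct DamageEvent
    {
        public int TargetIndex; // index into the Health array
        public float Amount;
    }

    [BurstCompile]
    public unsafe struct ApplyDamageJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<DamageEvent> Events;

        // Many threads may hit the same target, so disable the aliasing restriction
        // and rely on a compare-exchange loop instead of sorting by target.
        [NativeDisableParallelForRestriction] public NativeArray<float> Health;

        public void Execute(int index)
        {
            DamageEvent e = Events[index];
            float* target = (float*)Health.GetUnsafePtr() + e.TargetIndex;
            float oldValue, newValue;
            do
            {
                oldValue = *target;
                newValue = oldValue - e.Amount;
            }
            while (Interlocked.CompareExchange(ref *target, newValue, oldValue) != oldValue);
        }
    }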

    Can't be worse than having DynamicBuffers all over the heap. This would need a very specific memory layout to perform better, and chunks with their 16k allocations fit this quite well. That would also be true for DynamicBuffers with a fixed InternalCapacity, but that's problematic in itself because of the possibly empty space and the awfully slow access time of DynamicBuffers, which I never really figured out the cause of. A fixed DynamicBuffer should perform as well as an IComp, but it doesn't. Not even close.
     
  10. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I've created a similar Stats & Modifiers system recently.

    First, here are my requirements & design constraints for this system:
    • the whole system has to be easy to use, hassle-free, reusable between projects, and have as little "stuff you have to remember" as possible
    • it must be possible for stats to "react" to changes of other stats
      • Ex: I raise my Intelligence stat when I level up, which then makes my MagicDamage stat go up based on a custom rule, which then makes my weapon's TotalDamage stat go up. Basically just imagine a Diablo/DarkSouls/DnD character sheet, and seeing all kinds of secondary stats increasing/decreasing as you change your primary stats.
    • we won't 100% optimize for extremely frequent stat changes, but just in case they happen, we don't want them to be too heavy either
      • we want to avoid relying on structural changes for stat changes, since the performance hit could become serious
      • it would be great if stat changes didn't necessitate any sync points, especially since stat recalculation could be called multiple times per frame (explained later)
    • we also want the system to have an as-small-as-possible constant update cost when stats aren't changing:
      • we don't check every stat every frame for changes
      • we don't want our strategy to involve scheduling one job per stat type. We could easily end up with over 100-200 stat types in a game, so the scheduling cost would be very high and would be constantly present even when little to no changes are happening (remember: no structural changes, so these jobs would be running all the time)
    • we want the possibility of accessing the final stat value in proper DoD fashion when needed
      • If you have an RTS game with 10000 units, and their MoveSpeed is a Stat that is accessed every frame for all units, it needs to be possible to make that final stat value reside directly in a component on the unit entity, instead of getting it from another entity or a dynamicBuffer or a hashmap of some sort
    • stat recalculation of all reacting stats has to be instant, not done over multiple frames. This helps a lot with code simplicity and prevents bugs where you forget that things won't necessarily be up to date.
    • stat recalculation must easily and efficiently be callable multiple times per frame if needed. This could become really useful when we spawn something mid-frame and its stats must be up to date before some later systems do their update.
      • I've encountered this sort of situation a lot with stuff like a weapon (with stats) spawning a projectile (with stats) spawning an explosion (with stats) which deals stats-based damage, all in one frame, with no latency. You may or may not need this, but it's good that it's a possibility
      • in other words; it would be really useful if our stat recalculation approach costs next to nothing if no stats (or very few stats) have been changed. So there's not much of a price to pay for calling it many times
    • it should be as easy as possible for users to implement any kind of reactive behaviour when a specific kind of stat changes (update health in UI when health stat changes, etc...)
    The combination of all these requirements really narrows down our implementation options I think.

    _____________________________________


    Here's what I ended up with:

    Stat
    • Is an Entity
    • Has a "Stat" component that contains BaseValue and FinalValue
    • Has a DynBuffer<StatModifier>
      • A StatModifier is a buffer element with a SourceEntity (the buff entity) and the modifier type/value
    • Has a DynBuffer<StatObserver>
      • These are Entity references to other stat entities that must react to this stat's changes
    StatModifier
    • Is a bufferElement with an enum ModifierType (add, mul, set, etc...) and a few Entity fields for when the modifier needs to include another stat in its calculations
    • When applying a modifier, we do a switch statement on the ModifierType, and based on the type, we do that operation on the stat value
    Buffs
    • Is an entity
    • When created, it adds its StatModifiers to target Stat entities
    • When destroyed, it removes its StatModifiers from target Stat entities
    • It can also just go update its StatModifiers in target Stat entities
    Applying changes to Stats or StatModifiers
    • All stat/modifier changes must be done through "events" (a dynbuffer of stat change events on a StatsManager singleton entity)
      • I prefer a dynbuffer singleton over a NativeList, due to easier dependency handling and Entity Remapping support
    • We work with "events" because it makes things easier (no need to pass all kinds of data arrays to our job in order to make changes, etc...), and because the system must remember to mark these stats Dirty (more explanations in StatsSystem section)
    StatsSystem
    • Processes & applies all stat change events in the dynbuffers on the StatsManager singleton entity
      • Whenever a change happens, the Stat Entity is appended to a DynBuffer<DirtyStats> on the StatsManager singleton entity
    • At this point, we have a DynBuffer<DirtyStats> that tells us exactly which stats need a recalculation. No need to iterate on every single stat in the game on every frame
    • We launch a single-threaded job that iterates through our DynBuffer<DirtyStats>, and we fully recalculate each one of them based on their stack of StatModifiers
    • For each recalculated stat, if it had any StatObservers, we add all of those observer entities at the back of the DynBuffer<DirtyStats> list. That way they'll also be recalculated later

    In the scenario of MagicDamage scaling with Intelligence, the MagicDamage stat is an observer of the Intelligence stat, and MagicDamage has a StatModifier that gives it a value of "TargetStat * MultiplierStat" for example (in this case, TargetStat would be the Intelligence Entity, and MultiplierStat would be a stat of the buff entity). So whenever Intelligence changes (and also when MultiplierStat changes), it adds its observer (MagicDamage) to the dirty stats list, so MagicDamage will be recalculated AFTER the final Intelligence/MultiplierStat have been recalculated. This is why this is a single-threaded job. There is a need for things to be done in a certain order, and in my system I've decided that this order cannot be predicted in advance (I want to be able to create some temporary "NoName" Stats at runtime)
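    To make the data layout above concrete, here is a rough sketch of the components described (field names are guesses for illustration, not the actual code):
    Code (CSharp):
    using Unity.Entities;

    // Lives on each stat entity.
    public struct Stat : IComponentData
    {
        public float BaseValue;
        public float FinalValue;
    }

    public enum ModifierType : byte { Add, Multiply, Set, AddFromStat }

    // Stack of modifiers applied to this stat; SourceEntity is the buff that added it.
    public struct StatModifier : IBufferElementData
    {
        public Entity SourceEntity;
        public ModifierType Type;
        public float Value;
        public Entity TargetStat;     // optional other stat used in the calculation
        public Entity MultiplierStat; // optional other stat used in the calculation
    }

    // Other stat entities that must be recalculated when this stat changes.
    public struct StatObserver : IBufferElementData
    {
        public Entity ObserverStat;
    }

    // On the StatsManager singleton: stats that need recalculation this update.
    public struct DirtyStat : IBufferElementData
    {
        public Entity StatEntity;
    }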

    _______________________

    It's a solution that unfortunately doesn't do much linear data access and is pretty close to what an OOP implementation would look like, but I think this is due to the nature of the problem with the requirements I have. I see bigger issues with most alternatives:
    • Solutions based on structural changes, sync points, and multiple jobs (several stat update job iterations) may give you better linear data access during the stat recalculation phase, but may very well perform much worse overall when factoring in all other costs
    • Solutions based on polling all stats for changes every frame are likely to have a much bigger constant cost that stays even when no stats are changing
    • With solutions based on NativeCollections, you have to be very careful about the fact that they don't have Entity remapping support, which means it's risky to store any Entity field in there beyond the lifespan of a single job
    • With solutions where stats aren't Entities, I've really struggled with making it easy for users to efficiently implement custom OnStatChanged behaviour

    An obvious downside of stats as Entities is the per-frame cost of accessing the final stat value: you always have to get the value from the stat entity. But I have a solution: where it matters, you can write a job that, for each stat changed this frame, will go and write that final value directly as a float field in a component on the Entity that "owns" the stat. It could be in a component specific to the stat, or in a component like "CharacterStats" that holds all final stat values in the same place. This gives us the option of having proper DoD data access to up-to-date stat values. Sadly I haven't found a *generic* way of handling this though. It has to be done manually for each stat that needs it. Source Generators are an interesting potential solution

    I think this solution still has room for improvement performance-wise. Stat recalculation is single threaded for now, but we could multithread things by building graphs of dependent stats and solving each graph in parallel. Unclear if that would be better or worse in common scenarios (the graph building phase may cost more than it saves, performance-wise). But I really appreciate its great versatility. It can pretty much tackle any Stat scenario you could imagine, and it's a bit of a jack-of-all-trades in terms of what scenarios it is optimized for. It works well as a hassle-free all-purpose tool to keep in your library across multiple projects
     
    Last edited: Jan 21, 2022
    Neiist, lclemens, Morvar and 2 others like this.
  11. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    How were they fighting if you write all sorted damage to one target? Why sort them in the first place then?
     
  12. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    962
    They are not sorted. 8 threads write to the same health value.
     
  13. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,769
    Well, that is a design issue. It should never be allowed in the first place. Extremely risky, especially when working with unsafe code.
     
  14. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    962
    What? :confused:
    Name one thing that's risky about it. NativeContainers use atomic operations via Interlocked all over the place. Would you consider their usage in parallel extremely risky?
    Really weird statement you're making. If you want to talk about risk-reward, the reward is huge and the risk is not really there because when the design problem is kept in mind, there's one place that applies damage. Every other system for health runs either before or after.

    Honestly, if you want to talk about parallel programming and try to work your way around every race condition, chances are the solutions to erase the race condition make it worse and just end up as overhead. Computers and programming languages are designed with race conditions in mind. If you don't want to touch it, fine, your choice, but don't make these dogmatic statements. The mindful thing is to evaluate every option and not just mark something as not allowed in the first place.

    As this use case is quite simple, I encourage you to find a better solution that is faster. ;)
     
  15. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    Another question: how would you solve the recursion where StatA gets a 110% bonus from StatB and StatB gets a 110% bonus from StatA?
    Would bonuses from stats be locked to the base value? Or would they be forced to be "the last thing that applies, using the values from before it started applying"?
    Example for the second solution:
    StatA base is 14, after all buffs it is 29
    StatB is 19, after all buffs it is 31
    StatA gets 110% of 31
    StatB gets 110% of 29
    That would require a "BaseValue", "PostBuffs" and "FinalValue" IComponentDatas for the stats, but should ensure no infinite recursions
     
  16. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,769
    You went into the race condition problem, not me.
     
  17. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    962
    I came to the same conclusion. I have BaseStats and FinalStats. At one point I had BuffedStats, but I could optimize that data away for my use case. For your use case I'd recommend using the additional PostBuffs that you're suggesting. A previous RPG design had a more complex buff system, and having the additional data for the calculations was worth its weight in gold to not mess up any buffs or end up with values that are impossible to calculate back from.

    I have a linearized code path for this without any race conditions in my codebase. Should I uncomment it because of your expertly crafted comments? lol
    As you don't seem able to bring anything to the discussion, why don't you just take a backseat in this or write something worthwhile to read?
     
  18. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    Why not use https://github.com/tertle/com.bovinelabs.entities ?

    What about storing the "FinalStatValueIComponentData" TypeIndex and using the internals trick to set its float value through the TypeIndex? It should also be possible to make a CommandBuffer version of the methods below as an extension, although I never tried.
    These have been slowly replacing all of my EntityManager operations.
    Code (CSharp):
    1.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    2.         public static void SetComponentDataReinterpret<T, E>(this EntityManager aThis, Entity entity, E Value) where T : struct where E : struct {
    3.             SetComponentDataReinterpret(aThis, TypeManager.GetTypeIndex<T>(), entity, Value);
    4.         }
    5.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    6.         public static void SetComponentDataReinterpret<T>(this EntityManager aThis, int typeIndex, Entity entity, T Value) where T : struct {
    7.             unsafe {
    8.                 EntityDataAccess* ecs = aThis.GetCheckedEntityDataAccess();
    9.                 ecs->EntityComponentStore->AssertEntityHasComponent(entity, typeIndex);
    10.                 ecs->EntityComponentStore->AssertZeroSizedComponent(typeIndex);
    11.  
    12.                 if (!ecs->IsInExclusiveTransaction)
    13.                     ecs->DependencyManager->CompleteReadAndWriteDependency(typeIndex);
    14.  
    15.                 var ptr = ecs->EntityComponentStore->GetComponentDataWithTypeRW(entity, typeIndex,
    16.                    ecs->EntityComponentStore->GlobalSystemVersion);
    17.                 UnsafeUtility.CopyStructureToPtr(ref Value, ptr);
    18.             }
    19.         }
    20.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    21.         public static E TryGetComponentDataReinterpret<T, E>(this EntityManager aThis, Entity entity) where T : struct where E : struct {
    22.             return GetComponentDataReinterpret<E>(aThis, TypeManager.GetTypeIndex<T>(), entity);
    23.         }
    24.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    25.         public static T GetComponentDataReinterpret<T>(this EntityManager aThis, int typeIndex, Entity entity) where T : struct {
    26.             unsafe {
    27.                 EntityDataAccess* ecs = aThis.GetCheckedEntityDataAccess();
    28.                 ecs->EntityComponentStore->AssertEntityHasComponent(entity, typeIndex);
    29.                 ecs->EntityComponentStore->AssertZeroSizedComponent(typeIndex);
    30.  
    31.                 if (!ecs->IsInExclusiveTransaction)
    32.                     ecs->DependencyManager->CompleteWriteDependency(typeIndex);
    33.  
    34.                 var ptr = ecs->EntityComponentStore->GetComponentDataWithTypeRO(entity, typeIndex);
    35.  
    36.                 T value;
    37.                 UnsafeUtility.CopyPtrToStructure(ptr, out value);
    38.                 return value;
    39.             }
    40.         }
    41.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    42.         public static bool TryGetComponentDataReinterpret<T, E>(this EntityManager aThis, Entity entity, out E value) where T : struct where E : struct {
    43.             return TryGetComponentDataReinterpret<E>(aThis, TypeManager.GetTypeIndex<T>(), entity, out value);
    44.         }
    45.  
    46.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    47.         public static bool TryGetComponentData<T>(this EntityManager aThis, Entity entity, out T value) where T : struct {
    48.             return TryGetComponentDataReinterpret<T>(aThis, TypeManager.GetTypeIndex<T>(), entity, out value);
    49.         }
    50.  
    51.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    52.         public static bool TryGetComponentDataReinterpret<T>(this EntityManager aThis, int typeIndex, Entity entity, out T value) where T : struct {
    53.             unsafe {
    54.                 EntityDataAccess* ecs = aThis.GetCheckedEntityDataAccess();
    55.                 if (!ecs->EntityComponentStore->HasComponent(entity, ComponentType.FromTypeIndex(typeIndex))) {
    56.                     value = default;
    57.                     return false;
    58.                 }
    59.                 ecs->EntityComponentStore->AssertZeroSizedComponent(typeIndex);
    60.  
    61.                 if (!ecs->IsInExclusiveTransaction)
    62.                     ecs->DependencyManager->CompleteWriteDependency(typeIndex);
    63.  
    64.                 var ptr = ecs->EntityComponentStore->GetComponentDataWithTypeRO(entity, typeIndex);
    65.  
    66.                 UnsafeUtility.CopyPtrToStructure(ptr, out value);
    67.                 return true;
    68.             }
    69.         }

    What about recursive buffs? StatA gets 20% of StatB, which gets 20% of StatA?
    Would dependent stat values change inconsistently depending on the order they end up being updated in?
     
  19. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I read the page very quickly, but it looks like this creates entities for events (?). This approach gives you more flexibility and is a good strategy for events in many cases, but since it creates structural changes, I'm almost certain it would be much heavier on performance than DynamicBuffer events. For something like a Stats system, I think we need a more specialized solution in order to perform well at large scale.

    If you can imagine an RTS game where all damage is handled through stat change events, entity events would be causing large-scale structural changes almost all the time during battles. Even worse for a game like Factorio/Mindustry/Satisfactory/OxygenNotIncluded where the many stats of every building/object would be fluctuating constantly every frame

    Not to mention that DynamicBuffer events are as simple as it gets implementation-wise: you add elements to a DynamicBuffer on an entity, and a system later iterates through those events & executes them. Multiple jobs can write to them in parallel by using "ecb.AppendToBuffer()". And if you don't want to poll them every frame, you still have the option of pairing them with a "NewEventsAdded" tag component next to the buffer, which means less structural changes than one entity per event. They're so simple that they don't even need a pre-made library for them; you just write them directly. And unlike events in NativeLists, DynamicBuffer events are safe for storing Entity fields that are guaranteed to be still valid after structural changes

    At a glance, this sounds like it can be promising. But those are areas of the API that I'm pretty unfamiliar with so I'd have to do some tests to see what kind of performance price this has

    This is a valid concern, and I haven't fully decided what I want to do with this issue yet.

    My solution for now is that whenever we add StatObservers, we automatically do a check in the rest of the chain of observers to see if we detect an infinite loop. If so, we cancel the adding of the observer, and we can output an error. This check is only done when observers change, so it's not a performance price that we're paying constantly, and most observer chains will only be 2 entities long either way. The downside is obviously: nothing in the system/APIs prevents you from accidentally creating buffs that will just not do anything in situations of infinite observer loops
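    A sketch of what that check might look like when an observer is added (BufferFromEntity access and the StatObserver element from the earlier sketch are assumptions; it also assumes no loop exists yet, otherwise a visited set would be needed):
    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;

    public static class StatObserverUtility
    {
        // Refuses to add the observer if `observed` is already reachable from
        // `observer`, which is exactly the case that would close an infinite loop.
        public static bool TryAddObserver(Entity observed, Entity observer,
                                          BufferFromEntity<StatObserver> observersFromEntity)
        {
            if (IsReachable(observer, observed, observersFromEntity))
                return false; // adding this observer would create a cycle

            observersFromEntity[observed].Add(new StatObserver { ObserverStat = observer });
            return true;
        }

        static bool IsReachable(Entity start, Entity target, BufferFromEntity<StatObserver> observersFromEntity)
        {
            var stack = new NativeList<Entity>(8, Allocator.Temp);
            stack.Add(start);
            bool found = false;
            while (stack.Length > 0)
            {
                Entity current = stack[stack.Length - 1];
                stack.RemoveAtSwapBack(stack.Length - 1);
                if (current == target) { found = true; break; }
                if (!observersFromEntity.HasComponent(current))
                    continue;
                var observers = observersFromEntity[current];
                for (int i = 0; i < observers.Length; i++)
                    stack.Add(observers[i].ObserverStat);
            }
            stack.Dispose();
            return found;
        }
    }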

    As mentioned somewhere else in my huge post, I want to keep the flexibility of being able to create "NoName" stats at runtime, and set up observers however I want with them. So having a predefined order by stat type would prevent me from doing this. There probably are alternatives I haven't thought about yet though.
     
    Last edited: Jan 21, 2022
  20. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    I was assuming most people would be familiar with it; as far as I can gather, it does not create entities and it's all NativeArray under the hood.
    More info here: https://forum.unity.com/threads/event-system.779711/#post-5189462

    If anything, it would have a lower cost, since it skips the step of getting the TypeIndex.

    Code (CSharp):
    1.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })] //Internals extension API
    2.         public static void SetComponentDataReinterpret<T>(this EntityManager aThis, int typeIndex, Entity entity, T Value) where T : struct {
    3.             unsafe {
    4.                 EntityDataAccess* ecs = aThis.GetCheckedEntityDataAccess();
    5.                 ecs->EntityComponentStore->AssertEntityHasComponent(entity, typeIndex);
    6.                 ecs->EntityComponentStore->AssertZeroSizedComponent(typeIndex);
    7.  
    8.                 if (!ecs->IsInExclusiveTransaction)
    9.                     ecs->DependencyManager->CompleteReadAndWriteDependency(typeIndex);
    10.  
    11.                 var ptr = ecs->EntityComponentStore->GetComponentDataWithTypeRW(entity, typeIndex,
    12.                    ecs->EntityComponentStore->GlobalSystemVersion);
    13.                 UnsafeUtility.CopyStructureToPtr(ref Value, ptr);
    14.             }
    15.         }
    16.  
    17.     //The code above runs on EntityManager, the code below runs on EntityDataAccess, which causes the code above to require "ecs->" on all calls rather than doing it directly. I could probably rewrite it to run on EntityDataAccess so that it would be objectively faster due to being the same except skipping one line of code.
    18.  
    19.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]
    20.         public void SetComponentData<T>(Entity entity, T componentData) where T : struct, IComponentData
    21.         {//EntityManager.SetComponentData<T>
    22.             var ecs = GetCheckedEntityDataAccess();
    23.             ecs->SetComponentData(entity, componentData);
    24.         }
    25.  
    26.         [BurstCompatible(GenericTypeArguments = new[] { typeof(BurstCompatibleComponentData) })]//Default Unity set component data
    27.         public void SetComponentData<T>(Entity entity, T componentData) where T : struct, IComponentData
    28.         {//EntityDataAccess.SetComponentData<T>
    29.             var typeIndex = TypeManager.GetTypeIndex<T>();//Some kind of custom made witchcraft that seems to be similar to a dictionary access, skipped on the custom made API since you already know the type index
    30.  
    31.             EntityComponentStore->AssertEntityHasComponent(entity, typeIndex);
    32.             EntityComponentStore->AssertZeroSizedComponent(typeIndex);
    33.  
    34.             if (!IsInExclusiveTransaction)
    35.                 DependencyManager->CompleteReadAndWriteDependency(typeIndex);
    36.  
    37.             var ptr = EntityComponentStore->GetComponentDataWithTypeRW(entity, typeIndex,
    38.                 EntityComponentStore->GlobalSystemVersion);
    39.             UnsafeUtility.CopyStructureToPtr(ref componentData, ptr);
    40.         }

    Edit: Turns out making my code run on EntityDataAccess instead would only replace all
    ecs->
    with
    aThis.
     
  21. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926

    This makes me feel like it wouldn't be a good fit for my system. I need stat change updates to be callable multiple times per frame and to be fully processed before the next systems in the frame do their update.

    I do also see a "CreateEntities" function in "EntityEventSystem", which makes me think it does end up creating entities for events. I'd have to look at it more carefully to be really sure

    All in all the requirements are so specific that I think it's really worth having a tailor-made event system. I wouldn't want the system to have a third party dependency either way
     
    Last edited: Jan 21, 2022
  22. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,264
    He wrote two versions of the event system in two separate places. The first one used entities as events. The second uses stream buffers and is considered the superior implementation.
     
    lclemens and PhilSA like this.
  23. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    One more thought when it comes to working with a specific order of stat updates by type, instead of working with a "StatObservers" buffer that can lead to infinite loops (StatA depends on StatB depends on StatA)

    Working with a buffer of StatObservers allows us to know specifically which stats need an update when a given stat is changed. We limit our calculations to only the stats that truly need recalculation, and we don't need to poll every stat all the time for changes. This makes the system efficient for scenarios where most stats don't change all the time

    Whereas when working with a specific update order by stat type, I'm not sure we can also have that advantage. If a specific StatB depends on a specific StatA, how will we know we only need to update that specific StatB when that specific StatA has changed? I haven't thought about it for very long, but I'm under the impression that we'd have to either constantly poll every stat for changes in their dependent stats, or manually implement an equivalent of "StatObservers" anyway. Or maybe say that if any StatA has changed, we have to reevaluate all StatBs; that could still lead to too many unnecessary operations unless most stats are changing most of the time.

    Since a stat can depend on a stat on another entity, we can't really assume that only stats belonging to the same Entity have to be recalculated when one of them changes either

    _____________________

    When it comes to events; after some reflection I do think the best would be to support both:
    • DynamicBuffer events: best for peace of mind and a safe default (structural changes can't mess up their Entity fields), but needs sync points if they're written to in parallel
    • NativeStream events: best for performance, no sync points, but you need to be careful to process all events before any structural changes happen
     
    Last edited: Jan 21, 2022
  24. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I was curious about the performance differences between 2 approaches, so I created a stress test to compare the two.

    • DirtyList: The approach described in my posts above.
      • stat change events do their modifications & add the stat entity to a list of dirty stats in a single-thread job
      • a single-thread job updates dirty stat entities in that list one by one by getting/setting components from Entity
      • observers of dirty stats are appended at the back of the dirtystats list, so the job will end up processing them too
    • Polling: An approach focused on benefiting from linear data access and parallelism as much as possible, but with a higher constant cost every frame regardless of changes
      • stat change events do their modifications & set a "isDirty" byte to true in stat components in a single-thread job
      • a StatUpdateJob (IJobEntityBatch) goes through all stat entities and recalculates those that are dirty. We use chunk change checks to skip entire chunks
      • We schedule X amount of StatUpdateJobs every frame, where X is a user-defined "MaxStatDependencyDepth"
      • if a stat has changed, it marks all observer stats as dirty, and so the next instance of the StatUpdateJob in the same frame will end up recalculating those
    Note 1: neither approach uses any structural changes, but they do use an ECB for appending stat change events in parallel to a dynamicBuffer

    Millisecond values represent the total time of the stats update

    Test 1 : 100k stats that never change
    • DirtyList : 0.02ms
    • Polling : 0.2ms
    Test 2 : 100k stats that change every frame
    • DirtyList : 4.2ms
    • Polling : 2.35ms
    Test 3 : 100k stats that change every frame and each have 1 observer which itself has 1 other observer (so 100k chains of 3 dependent stats)
    • DirtyList : 9.5ms
    • Polling : 5.4ms
    Bonus: constant cost of the Polling approach when no stats are changing, depending on stats entities count:
    • 0 - 0.02ms
    • 1000 - 0.08ms
    • 10000 - 0.10ms
    • 100000 - 0.20ms

    The pros/cons of each are pretty much as expected, but the disadvantages of the polling method are much less severe than I expected

    DirtyList
    • Pros
      • Lowest constant cost when nothing changes
      • Very low cost of triggering a statsUpdate at multiple points in the frame
    • Cons
      • Performs much worse when tons of stats are changing
    Polling
    • Pros
      • Performs better when tons of stats are changing
    • Cons
      • Higher constant cost when nothing changes (...but it's much less terrible than I expected)
      • Cost of triggering a statsUpdate at multiple points in the frame multiplies its already-higher constant cost

    So perhaps the best candidate for an "all purpose" stats solution would be the polling approach after all. Especially since we can easily add a mechanism where we globally keep track of whether *any* stat has changed, and only launch the stat update jobs if one has (saving us the constant overhead on most frames)

    But mostly due to the "multiple stat updates per frame" requirement, perhaps keeping both solutions around would be a good idea
     
    Last edited: Jan 24, 2022
    lclemens, calabi and Enzi like this.
  25. calabi

    calabi

    Joined:
    Oct 29, 2009
    Posts:
    232
    That's great, I'm wondering if change filtering could also help at all.
     
  26. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    761
    I could be wrong on this, and I hope someone corrects me if I am, but in Entities 0.17 change filters had a rather limiting restriction: if any system writes to the component, all entities get marked as changed. So if you have 100k entities and 8 systems reading with change filters, and one system performs a write on the component of interest, the 8 reading systems will have to sift through all 100k entities that frame, as if there were no change filter at all. All of the polling implementations I've seen have a system that clears the dynamic buffer at the end of each frame, so my theory is that change filtering would not be of any benefit. Maybe there's a workaround or something I'm unaware of. I also heard a rumor that change filtering in 0.50 is better, so maybe that restriction changed. I've been so busy with other stuff that I haven't had time to dig into it yet.
     
  27. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    What it does is a bit better than this but still not perfect; if anything writes to a component in a certain chunk, the whole chunk will be marked as changed.

    So if you have 100k transform entities, and a system writes to translation on one of them, a system doing change filtering on translation will only iterate on 500 entities (or whatever amount of those entities fit in a chunk) instead of 100k

    Often you'll need an additional mechanism for change filtering on a per-entity basis. I often use an IsDirty component with a single byte, which I must remember to set to 1 whenever I make a change. So taking the example above, your change-filtering job would only have to check 500 IsDirty flags instead of 100k
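    A small sketch of that pattern, reusing the Stat component from the earlier sketch (system and component names are illustrative, and clearing IsDirty back to 0 is assumed to happen in a separate later pass):
    Code (CSharp):
    using Unity.Entities;

    public struct StatDirty : IComponentData
    {
        public byte IsDirty; // set to 1 by whoever changes the stat
    }

    public partial class StatRecalculateSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            // The chunk-level change filter skips untouched chunks entirely; the
            // per-entity IsDirty check skips the neighbours that merely share a
            // chunk with the one entity that actually changed.
            Entities
                .WithChangeFilter<StatDirty>()
                .ForEach((ref Stat stat, in StatDirty dirty) =>
                {
                    if (dirty.IsDirty == 0)
                        return;
                    // ...recalculate stat.FinalValue from BaseValue + modifiers...
                }).ScheduleParallel();
        }
    }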

    Also note: in my results post above, the "polling" approach does use change filtering
     
    Last edited: Apr 24, 2022
    lclemens likes this.
  28. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,264
    To further elaborate, using a change filter to react to an Entities.ForEach only works one way: if you have a very specific query that does not run every frame and that requests the change-filtered component by ref. ComponentDataFromEntity is much more helpful in this regard. It uses a property for writes, which force-updates the change filter (maybe properties are what Entities.ForEach needs?). Otherwise you really need to use IJobEntityBatch for everything to get the full benefits.
     
    lclemens likes this.
  29. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    761
    So are you saying something like this won't get the full benefits of change filters?:

    Code (CSharp):
    1.  
    2. public partial class ImpactWriterSystem : SystemBase
    3. {
    4.     EndSimulationEntityCommandBufferSystem m_ecbEndSys;
    5.  
    6.     protected override void OnCreate()
    7.     {
    8.         m_ecbEndSys = World.GetOrCreateSystem<EndSimulationEntityCommandBufferSystem>();
    9.     }
    10.  
    11.     protected override void OnUpdate()
    12.     {
    13.         EntityCommandBuffer.ParallelWriter ecbEnd = m_ecbEndSys.CreateCommandBuffer().AsParallelWriter();
    14.         Entities.ForEach((...) => {
    15.             ecbEnd.AppendToBuffer(entityInQueryIndex, entity, new ImpactData {...});
    16.         }).ScheduleParallel();
    17.         m_ecbEndSys.AddJobHandleForProducer(this.Dependency);
    18.     }
    19. }
    20.  
    21. public partial class ImpactReaderSystem : SystemBase
    22. {
    23.    protected override void OnUpdate()
    24.    {
    25.       Entities.WithChangeFilter<ImpactData>().ForEach((in DynamicBuffer<ImpactData> impacts) => {
    26.            // do something with impacts here (read only)
    27.       }).ScheduleParallel();
    28.    }
    29. }
    30.  
    31. [UpdateInGroup(typeof(LateSimulationSystemGroup))]
    32. public partial class ImpactCleanupSystem : SystemBase
    33. {
    34.     protected override void OnUpdate()
    35.     {
    36.         Entities.WithChangeFilter<ImpactData>().ForEach((ref DynamicBuffer<ImpactData> impacts) => {
    37.             impacts.Clear();
    38.         }).ScheduleParallel();
    39.     }
    40. }
    41.  
     
    Last edited: Apr 24, 2022
  30. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,264
    ECB will also update change versions more precisely, similar to BufferFromEntity. But why are you using a parallel ECB for a single-threaded job? (You don't have to answer if it was just a quickly written example.)
     
  31. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    761
    Haha sorry, yeah, I just typed that up as an example, so I changed it just now to avoid confusion if anyone reads this thread later.

    Thanks for the info! I am using a couple of ecb+dynamic buffer "event systems" like that, so it's good to know that there is some benefit from change filters. I suspect with an average number of impacts happening in a battle, odds are pretty decent that all chunks will end up getting the change flag and the change filters won't do anything, but it'll at least help when there is relatively little conflict.
     
  32. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    962
    On the topic of "I want to make DidChange work, but I have writes that constantly bump up the version number, so it doesn't":
    You can get ReadOnly pointers and still change the data through them. That's dirty and dangerous, but maybe the trade-off is worth it.
     
  33. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,264
    If you have a dedicated clearing system, stuff its global version in a singleton and have other jobs use that value for DidChange instead of LastSystemVersion.
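    A sketch of that idea (the singleton component and system names are made up; it assumes the singleton entity is created once in OnCreate):
    Code (CSharp):
    using Unity.Entities;

    // Written by the dedicated clearing system every time it actually clears.
    public struct ClearSystemVersion : IComponentData
    {
        public uint Value;
    }

    public partial class ImpactClearVersionSystem : SystemBase
    {
        protected override void OnCreate()
        {
            EntityManager.CreateEntity(typeof(ClearSystemVersion));
        }

        protected override void OnUpdate()
        {
            // ...clear the ImpactData buffers as in the cleanup system above...

            // Publish the global version at which the clears happened.
            SetSingleton(new ClearSystemVersion { Value = EntityManager.GlobalSystemVersion });
        }
    }

    // A reader job can then compare chunk versions against the clearing system's
    // published version instead of its own LastSystemVersion:
    //   bool changed = chunk.DidChange(impactBufferHandle, clearVersion.Value);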