[OLD THREAD] DOTS Polymorphic Components!

Discussion in 'Entity Component System' started by PhilSA, Jun 26, 2021.

  1. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    Update: a newer/better version of this tool is available here:
    https://forum.unity.com/threads/sou...m-in-dots-now-with-source-generators.1264616/

    Links:
    Sample project
    Package download

    _____________________________________

    What does it do:

    This tool gives you a DOTS equivalent of calling interface methods on "objects" without knowing the objects' type in advance, even from inside Bursted jobs. It generates a struct (either an IComponentData or an IBufferElementData) that can assume the role of any struct in the project that implements a given interface.

    For example:
    • you create an IMyPolyComponent interface that has a "DoSomething()" function
    • you create structs A, B, C that implement that interface, each doing something different in DoSomething()
    • you press a button to generate a "MyPolyComponent" component that can assume the role of either A, B, or C, but is considered a single component type in the eyes of the ECS.
    This allows you to iterate over a set of MyPolyComponent components and call DoSomething() on them, and each will run its own implementation of DoSomething(), depending on which of the sub-types (A, B, or C) it has been assigned.

    The generated struct can either be a "union struct" (its size is the maximum size among all of its forms) or a non-union struct (its size is the total size of all of its forms, but with the advantage that it can keep the data of each of its forms).

    _____________________________________

    When to use:

    As you can see in this performance comparison, polymorphic components can be a much better solution than anything involving structural changes if those changes happen relatively frequently. They also have the advantage of making the code much simpler.

    They can also be useful in more specific cases like these:

    _____________________________________

    How to use:

    1. Create the interface that defines your polymorphic component
    you create an interface with a special attribute on it: this defines all the functions that the various types within your polymorphic component can have
    Code (CSharp):
    [PolymorphicComponentDefinition("MyPolyComponent", "_Samples/PolymorphicTest/_GENERATED")]
    public interface IMyPolyComp
    {
        void Update(float deltaTime, ref Translation translation, ref Rotation rotation);
    }
    2. Create the specific structs that are part of the polymorphic component
    you create several structs that implement the interface from point 1: this defines the specific implementations of all the various types that your polymorphic component can assume
    Code (CSharp):
    [Serializable]
    public struct CompA : IMyPolyComp
    {
        public float MoveSpeed;
        public float MoveAmplitude;

        [HideInInspector]
        public float TimeCounter;

        public void Update(float deltaTime, ref Translation translation, ref Rotation rotation)
        {
            TimeCounter += deltaTime;

            translation.Value.y = math.sin(TimeCounter * MoveSpeed) * MoveAmplitude;
        }
    }

    [Serializable]
    public struct CompB : IMyPolyComp
    {
        public float RotationSpeed;
        public float3 RotationAxis;

        public void Update(float deltaTime, ref Translation translation, ref Rotation rotation)
        {
            rotation.Value = math.mul(rotation.Value, quaternion.AxisAngle(math.normalizesafe(RotationAxis), RotationSpeed * deltaTime));
        }
    }
    3. Generate the polymorphic component code
    you press a button to codegen a polymorphic component based on the interface you created in point 1 and the structs you defined in point 2. The generated component has all the methods of the interface from point 1, and it automatically forwards them to whatever specific struct type was assigned to your polymorphic component by the authoring component
    Code (CSharp):
    [Serializable]
    [StructLayout(LayoutKind.Explicit, Size = 20)]
    public struct MyPolyComponent : IComponentData
    {
        public enum TypeId
        {
            CompA,
            CompB,
        }

        [FieldOffset(0)]
        public CompA CompA;
        [FieldOffset(0)]
        public CompB CompB;

        [FieldOffset(16)]
        public readonly TypeId CurrentTypeId;

        public MyPolyComponent(in CompA c)
        {
            CompB = default;
            CompA = c;
            CurrentTypeId = TypeId.CompA;
        }

        public MyPolyComponent(in CompB c)
        {
            CompA = default;
            CompB = c;
            CurrentTypeId = TypeId.CompB;
        }

        public void Update(Single deltaTime, ref Translation translation, ref Rotation rotation)
        {
            switch (CurrentTypeId)
            {
                case TypeId.CompA:
                    CompA.Update(deltaTime, ref translation, ref rotation);
                    break;
                case TypeId.CompB:
                    CompB.Update(deltaTime, ref translation, ref rotation);
                    break;
            }
        }
    }
    4. Call polymorphic functions from a system!
    Code (CSharp):
    public class TestSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            float deltaTime = Time.DeltaTime;

            Entities.ForEach((Entity entity, ref MyPolyComponent polyComp, ref Translation translation, ref Rotation rotation) =>
            {
                polyComp.Update(deltaTime, ref translation, ref rotation);
            }).Schedule();
        }
    }
     
    Last edited: Jun 12, 2022
    bb8_1, Krajca, andreiagmu and 15 others like this.
  2. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    968
    Thanks for sharing!
    What did you end up using this for? Where was it useful?
     
  3. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    In general, I'd say this would be useful for situations where you have references to entities that you know will have a specific kind of function, but could have different implementations of that function (and different data).

    Examples:
    • An "ability system" where each ability lives on its own entity, and you want to call "abilityFromEntity[equippedAbility].Launch()" without knowing in advance what specific type that ability is
    • A state machine where states are Entities
    • Event systems
    The kind of use case that makes this better than current alternatives (like adding events to an event buffer on the target entity and waiting for another job to pick up the event later in the frame) is when instant changes would be preferable. Or when you don't want to create "1 job per type of ability/state" because it would create too many jobs to schedule or too much tedious code to write.

    When you're not operating on massive numbers of entities, I think it definitely helps make your code much simpler, and it possibly even improves performance compared to scheduling tons of jobs. And when dealing with thousands of entities, performance should still be pretty good; it just won't be "the best possible thing"
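    To make the ability example concrete, here is a minimal sketch (hypothetical names: a generated polymorphic MyAbility component with a Launch() method, and an EquippedAbility component holding the ability entity):
    Code (CSharp):
    public class AbilityLaunchSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            // Random-access lookup of the polymorphic ability component
            ComponentDataFromEntity<MyAbility> abilityFromEntity = GetComponentDataFromEntity<MyAbility>(false);

            Entities.ForEach((in EquippedAbility equipped) =>
            {
                MyAbility ability = abilityFromEntity[equipped.AbilityEntity];
                ability.Launch(); // dispatches to whichever sub-type was assigned
                abilityFromEntity[equipped.AbilityEntity] = ability; // write back mutated state
            }).Schedule();
        }
    }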
     
    Last edited: Jun 27, 2021
    andreiagmu, NotaNaN and Ruchir like this.
  4. Enzi

    Enzi

    Joined:
    Jan 28, 2013
    Posts:
    968
    You have a pretty cool way of thinking about code structure.
    I've never thought of putting Update logic in components, but it makes sense in many ways. It makes some of the code in systems way more readable. I've moved a lot of code from system ForEach lambdas into methods, and the code file gets long; scrolling through and maintaining such systems is not that great.
     
    adammpolak likes this.
  5. desertGhost_

    desertGhost_

    Joined:
    Apr 12, 2018
    Posts:
    260
    Really cool stuff, but it is definitely an OOP design. Per the DOTS Best Practices, "Combining data with methods that operate on that data encapsulated into a single structure (or class) is an OOP concept. DOD is based around having structures of data and systems to operate on that data. This means that as a general rule, there should be no methods in components and no data in systems. [...] Methods on components aren’t necessarily harmful, but they indicate a DOD anti-pattern: the “do things one by one” anti-pattern."

    That being said, I could see it being useful in situations where you don't have very many entities being processed (e.g. Camera state machine, Objective / level / mission flow control, character state in a classic FPS/TPS game).
     
    Opeth001 and RaL like this.
  6. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I like to think that the true definition of DoD is "Understand how the CPU works, so that you can make better programming decisions", and not something like "always do a by-the-book ECS implementation", which is what tends to be implied when we hear about DoD

    Or at least, regardless of the true definition, I think the former is a more useful mindset to have than the latter. If the best solution to a specific problem resembles OOP, then that solution would count as "DoD", because it is the best solution to the problem. And that "best solution" must take into consideration usability and maintainability too

    There definitely are caveats to having methods directly in components, though. For example: a method can change data inside the component without you being aware of it, so it becomes easy to forget writing that data back to the ECS. Perhaps I could modify the codegen so that it generates static functions that take the polymorphic component by ref instead, as sketched below. At least that way it would be clearer
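    A minimal sketch of what that generated static-function style could look like (hypothetical; this is not the current codegen output):
    Code (CSharp):
    public static class MyPolyComponentFunctions
    {
        // Taking the component by ref makes the mutation explicit at the call
        // site, so it's harder to forget writing the data back to the ECS.
        public static void Update(ref MyPolyComponent comp, float deltaTime, ref Translation translation, ref Rotation rotation)
        {
            switch (comp.CurrentTypeId)
            {
                case MyPolyComponent.TypeId.CompA:
                    comp.CompA.Update(deltaTime, ref translation, ref rotation);
                    break;
                case MyPolyComponent.TypeId.CompB:
                    comp.CompB.Update(deltaTime, ref translation, ref rotation);
                    break;
            }
        }
    }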

    _________________________________

    Performance-wise, I wouldn't be too surprised if this "polymorphic" approach performed better than an "each different thing has its own job" approach in several cases. If you have 100 different kinds of states/abilities in your game, that would mean scheduling up to 100 jobs with the latter approach. If you have, let's say, under 100 actors that need these states/abilities, I'd be ready to bet that the cost of scheduling up to 100 jobs could be greater than the cost of random data access
     
    Last edited: Jun 26, 2021
  7. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,271
    This seems pretty cool, and I can't wait for something like this to be integrated into the compilation pipeline with C# source generators.

    One thing I noticed briefly scanning the code is that you don't have any protections for Entity fields or BlobAssetReference fields. If these do not receive unaliased memory (or if aliased, aliased by the same type), the memory region they occupy will be clobbered by the remappers.
     
    RahulRaman and adammpolak like this.
  8. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    Oh, that's something I didn't know / think about. Thanks for pointing it out.
    Do you think adding explicit [FieldOffset] attributes to Entity fields when declaring your structs would solve the problem? I'm not very familiar with those concepts
     
  9. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,271
    I doubt that would work. A component could have more than one Entity reference, so every struct would have to keep field offsets clear for the maximum number of Entity references across all component implementations.

    The only way I can think of to solve this is to codegen code that manually packs the union and exposes conversion operations. The manual packing is then aware of Entity and BlobAssetReference fields. That gets especially tricky with nested structs. You can see in this code, as well as the neighboring .gen.cs, how I implemented this manually to circumvent a single BlobAssetReference: https://github.com/Dreaming381/Lati...hysics/Physics/Components/ColliderPsyshock.cs
     
    adammpolak, Egad_McDad and PhilSA like this.
  10. desertGhost_

    desertGhost_

    Joined:
    Apr 12, 2018
    Posts:
    260
    I agree 100% with this. Instead of having a job for each ability state, I spread a state/ability across different entities (1 entity per state/ability) and put different, common components on each state/ability entity. States/abilities are activated using tags.

    Jobs are based on component combinations, in bite-sized chunks. That way you minimize jobs and maximize code reuse, and it limits the number of abilities being processed. This comes with the disadvantage of having to sync some data between the owner and the active ability (delta position, delta rotation, etc.), but this data is typically small and not an issue.

    Some things can be collapsed into a single job and selected with an enum (e.g. a weapon state enum of idle, firing, meleeing, reloading, casting, etc.) to minimize job overhead, as sketched below. Your polymorphic component could even be a good candidate for updating common attributes related to weapon state in this example.
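    For instance, the enum-switch version of that idea might look like this minimal sketch (hypothetical component and enum names):
    Code (CSharp):
    public enum WeaponState : byte { Idle, Firing, Meleeing, Reloading, Casting }

    public struct WeaponStateComp : IComponentData
    {
        public WeaponState Value;
    }

    // One job covers every weapon state; the enum selects the logic,
    // so switching states never requires a structural change:
    Entities.ForEach((ref WeaponStateComp weapon) =>
    {
        switch (weapon.Value)
        {
            case WeaponState.Firing:
                // firing logic
                break;
            case WeaponState.Reloading:
                // reloading logic
                break;
            // ... other states ...
        }
    }).ScheduleParallel();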
     
    PhilSA likes this.
  11. WAYNGames

    WAYNGames

    Joined:
    Mar 16, 2019
    Posts:
    992
    Hi,

    Interesting approach. I myself use some methods in helper structs, but never on components.

    I have two questions/remarks.

    Wouldn't the use of a switch statement in the polymorphic update cause branching and therefore hurt performance? Or is that optimized by Burst?

    Regarding the ability/skill example: you are worried about scheduling lots of jobs, one per ability. As you may know, I'm working on such a system, and my thinking is: how much can a skill really do? You can have hundreds of different skills in your game, but at their core they all produce the same kinds of effects (deal damage / heal / play a sound / VFX / spawn something, ...). I feel like that really limits the number of systems you have to make. I would be interested in your thoughts on that.
     
    desertGhost_ likes this.
  12. phobos2077

    phobos2077

    Joined:
    Feb 10, 2018
    Posts:
    350
    In a project I'm working on, we just used generic systems and put interface methods inside components (so a component type is an argument of the generic system). Works well enough.
     
  13. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I know compilers often optimize switch statements into a jump table past a certain number of cases, but I haven't verified whether that's the case with Burst (I'm not sure I'd know how to decipher the IL / machine code either)

    Regardless, right now I don't really have better ideas on how to handle this. I thought about something involving a table of function pointers, but the docs seem to suggest there are some pretty big limitations on what you can pass as params to function pointers, so I chose to avoid them

    I think the least we can say is that the performance impact of the switch won't be a concern unless you start using this on thousands of entities, while it gives you the benefit of instant changes, simpler code, and fewer jobs to schedule (so potentially better performance at small scales)

    I can definitely imagine that ability logic in many games could be done with only a few jobs, and in that case one-job-per-ability-type would be good. But there are also games that have a few dozen different ways abilities can "operate", and in those cases this tool would help. "Buff systems" are also a good candidate: in the games where I've tried creating buff systems, I often ended up with over 20 different kinds of buff jobs (for >20 different stats that can be buffed)

    This polymorphic component tool really isn't meant to be used everywhere, but there are a few cases where it could make your code much simpler. Here's a better example than the ability system:
    • During your game, you accumulate an ordered list of "events" that must happen in that specific sequence
    • An event that gets processed can create new events that insert themselves in the list right after that event
    If your events were processed with "1 job per type of event", this would be a nightmare, because they must be processed in order instead of in batches of similar types. You'd have to create a SystemGroup containing all of your event-processing jobs, and run the entire group again for every single event you process, one by one, most likely with structural changes between each run.

    With these polymorphic structs, you can just do: foreach(event in buffer) -> event.Play(ref data). In this case, this gives you a massive performance win compared to the other approach
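    A rough sketch of that loop, assuming the component was generated in buffer-element mode (an option added later in the thread) and using hypothetical MyEvent/EventData names:
    Code (CSharp):
    public class OrderedEventsSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            Entities.ForEach((ref EventData data, DynamicBuffer<MyEvent> events) =>
            {
                // Strictly ordered playback: each event fully applies its changes
                // before the next one runs, regardless of its concrete sub-type.
                for (int i = 0; i < events.Length; i++)
                {
                    MyEvent e = events[i];
                    e.Play(ref data); // dispatches to the sub-type's implementation
                    events[i] = e;    // write back any state the event mutated
                }
                events.Clear();
            }).Schedule();
        }
    }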
     
    Last edited: Jun 27, 2021
    NotaNaN and WAYNGames like this.
  14. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,271
    Burst is very aggressive about this. I'd link to an old post I made regarding this topic, but I am too lazy to dig it up.

    Branches and jump tables can be suboptimal, but they are still typically an order of magnitude faster than a cache miss. So union components rather than sparse chunks is usually still a win.
     
    adammpolak, Egad_McDad and PhilSA like this.
  15. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    Pushed an update:
    • option to generate the component as an IBufferElementData instead of an IComponentData
    • option to generate the component as NOT a union struct. This has the downside of making the struct size the cumulative size of all of its possible forms (instead of the maximum size among them), but the upsides are:
      • it can safely have Entity/BlobAssetReference fields
      • it can keep the data of all of its other forms even after it changes form (useful for certain kinds of state machine implementations & such)
    • Entity/BlobAssetReference fields are no longer allowed in polymorphic components that are in union struct mode, because of this
     
    adammpolak likes this.
  16. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    987
    I presume you need to specify the TypeId when using the generated component, but I don't see it set as readonly.

    Edit: Are there any guarantees if, say, I set TypeId = CompA but then assign values to CompB? Would there be errors in that case?
     
  17. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    You'd create authoring components for your specific types like this (note that I just refactored the names "ComponentType" and "TypeId" to "TypeId" and "CurrentTypeId"):
    Code (CSharp):
    [DisallowMultipleComponent]
    public class CompAAuthoring : MonoBehaviour, IConvertGameObjectToEntity
    {
        public CompA CompA;

        public void Convert(Entity entity, EntityManager dstManager, GameObjectConversionSystem conversionSystem)
        {
            dstManager.AddComponentData(entity, new MyPolyComponent { CompA = CompA, CurrentTypeId = MyPolyComponent.TypeId.CompA });
        }
    }
    So at conversion time, it tells the polymorphic component what its type is. However, I want to keep the option open to change its type at runtime, which is why I'm not making it readonly

    No guarantees. I suppose the solution to this would be to codegen all kinds of constructors for the poly component (one constructor for each type it can assume the form of) and make the TypeId private
     
    Last edited: Jun 27, 2021
    adammpolak likes this.
  18. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    987
    I see. IMO, I don't think it's a good idea to allow changing the type at runtime; a regular C# class instance doesn't change type at runtime either. I'd rather have a type-safe constructor for each type. For example:
    Code (CSharp):
    public readonly ComponentType TypeId;

    public MyPolyComponent(in CompA component) {
        this.CompA = component;
        this.TypeId = ComponentType.CompA;
    }

    public MyPolyComponent(in CompB component) {
        this.CompB = component;
        this.TypeId = ComponentType.CompB;
    }
  19. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    The use case I have in mind for this is state machines

    You'd make your state machine a polymorphic component that can assume the form of each state, and then you'd manage state transitions by manually changing the TypeId & assigning state data. Especially useful if your poly component is not in union struct mode (as explained here)

    EDIT: well, now that I think about it, the constructors approach would still allow this sort of thing. I think I'll add it (only when the polyComponent is in union struct mode)
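    For example, a transition would then be written by constructing the new form in place (a sketch using the generated constructors shown earlier in the thread):
    Code (CSharp):
    // Transition to CompB from whatever state the component is currently in,
    // explicitly initializing the new state's data:
    polyComp = new MyPolyComponent(new CompB
    {
        RotationSpeed = 2f,
        RotationAxis = math.up()
    });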
     
    Last edited: Jun 27, 2021
    adammpolak likes this.
  20. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    Update pushed:

    Constructors for setting up the polymorphic component with a specific type, and a readonly TypeId when the component is in UnionStruct mode

    Code (CSharp):
    [Serializable]
    [StructLayout(LayoutKind.Explicit, Size = 20)]
    public struct MyPolyComponent : IComponentData
    {
        public enum TypeId
        {
            CompA,
            CompB,
        }

        [FieldOffset(0)]
        public CompA CompA;
        [FieldOffset(0)]
        public CompB CompB;

        [FieldOffset(16)]
        public readonly TypeId CurrentTypeId;

        public MyPolyComponent(in CompA c)
        {
            CompB = default;
            CompA = c;
            CurrentTypeId = TypeId.CompA;
        }

        public MyPolyComponent(in CompB c)
        {
            CompA = default;
            CompB = c;
            CurrentTypeId = TypeId.CompB;
        }

        public void Update(Single deltaTime, ref Translation translation, ref Rotation rotation)
        {
            switch (CurrentTypeId)
            {
                case TypeId.CompA:
                    CompA.Update(deltaTime, ref translation, ref rotation);
                    break;
                case TypeId.CompB:
                    CompB.Update(deltaTime, ref translation, ref rotation);
                    break;
            }
        }
    }
     
    Last edited: Jun 27, 2021
    adammpolak and davenirline like this.
  21. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    You could also store a byte that states which function this ability calls, and then use an array of functions to invoke the right one. I don't know how that would interact with Burst compilation, but it's an idea.

    I will keep this project in mind when implementing my stuff, in case I find something that would really benefit from it.
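    For reference, a minimal sketch of what that function-table idea could look like with Burst's FunctionPointer API (all names hypothetical; the parameter-type limitations PhilSA mentioned above apply, and AOT platforms need the MonoPInvokeCallback attribute):
    Code (CSharp):
    [BurstCompile]
    public static class StateFunctions
    {
        public delegate void StateFn(ref float value, float deltaTime);

        [BurstCompile]
        [AOT.MonoPInvokeCallback(typeof(StateFn))]
        public static void StateA(ref float value, float deltaTime) { value += deltaTime; }

        [BurstCompile]
        [AOT.MonoPInvokeCallback(typeof(StateFn))]
        public static void StateB(ref float value, float deltaTime) { value -= deltaTime; }

        // Built once (e.g. in OnCreate) and indexed by the byte stored on the component:
        public static NativeArray<FunctionPointer<StateFn>> BuildTable(Allocator allocator)
        {
            var table = new NativeArray<FunctionPointer<StateFn>>(2, allocator);
            table[0] = BurstCompiler.CompileFunctionPointer<StateFn>(StateA);
            table[1] = BurstCompiler.CompileFunctionPointer<StateFn>(StateB);
            return table;
        }
    }

    // Inside a job: table[component.FunctionIndex].Invoke(ref someValue, deltaTime);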
     
  22. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I've pushed a new update that allows defining a "SharedData" struct in the polymorphic component. This is data that's meant to be shared across all of the forms the component can take

    There are a few ways this can be useful:
    • in UnionStruct mode, this gives you a place where you're allowed to store Entity & BlobAssetReference fields
    • when not in UnionStruct mode, this gives you opportunities to reduce the total size of your polymorphic struct by storing common data there instead of in every specific struct.
    • Certain polymorphic components might even want to have 100% of their data in that SharedData struct, with the specific structs just serving to define different modes of operation on that data
    If a valid SharedData type is defined, the constructors of the component are updated to accept that struct as a parameter in addition to the specific struct

    You define the type of your shared data struct in the attribute:
    Code (CSharp):
    [Serializable]
    public struct MyPolyCompSharedData
    {
        public Entity Target;
        public bool TestBool;
    }


    [PolymorphicComponentDefinition(
        "MyPolyComponent", // name
        "_Samples/PolymorphicTest/_GENERATED", // path
        new string[] { "Unity.Transforms" }, // AdditionalUsings
        false, // IsBufferElement
        true, // IsUnionStruct
        typeof(MyPolyCompSharedData) // SharedDataType
        )]
    public interface IMyPolyComp
    {
        void Update(float deltaTime, ref MyPolyCompSharedData sharedData, ref Translation translation, ref Rotation rotation);
    }
    This will place the sharedData struct inside of your generated component, before the specific structs:
    Code (CSharp):
    [Serializable]
    [StructLayout(LayoutKind.Explicit, Size = 32)]
    public struct MyPolyComponent : IComponentData
    {
        public enum TypeId
        {
            CompA,
            CompB,
        }

        [FieldOffset(0)]
        public MyPolyCompSharedData MyPolyCompSharedData;

        [FieldOffset(12)]
        public CompA CompA;
        [FieldOffset(12)]
        public CompB CompB;

        [FieldOffset(28)]
        public readonly TypeId CurrentTypeId;
        // ............
    You can pass that SharedData along by ref when calling your functions:
    Code (CSharp):
    public class TestSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            float deltaTime = Time.DeltaTime;

            Entities.ForEach((Entity entity, ref MyPolyComponent polyComp, ref Translation translation, ref Rotation rotation) =>
            {
                polyComp.Update(deltaTime, ref polyComp.MyPolyCompSharedData, ref translation, ref rotation);
            }).Schedule();
        }
    }

    Moreover, "additional usings" are now auto-detected instead of being added manually
     
    Last edited: Jun 27, 2021
    NotaNaN likes this.
  23. DV_Gen

    DV_Gen

    Joined:
    May 22, 2021
    Posts:
    14
    I wouldn't mind hearing more about that too.
     
    SolidAlloy likes this.
  24. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    You can do something that looks like this:
    Code (CSharp):
    public interface IProcessor
    {
        void DoSomething();
    }

    public abstract class MySystem<T> : SystemBase where T : struct, IComponentData, IProcessor
    {
        public struct MyJob : IJobEntityBatch
        {
            public ComponentTypeHandle<T> ProcessorType;

            public void Execute(ArchetypeChunk batchInChunk, int batchIndex)
            {
                NativeArray<T> chunkProcessors = batchInChunk.GetNativeArray(ProcessorType);

                for (int i = 0; i < batchInChunk.Count; i++)
                {
                    T processor = chunkProcessors[i];
                    processor.DoSomething();
                    chunkProcessors[i] = processor; // write back any state the processor mutated
                }
            }
        }

        protected override void OnUpdate()
        {
            Dependency = new MyJob
            {
                ProcessorType = GetComponentTypeHandle<T>(),
            }.Schedule(GetEntityQuery(typeof(T)), Dependency);
        }
    }
    And then you create variants of that system based on different kinds of component types that implement "IProcessor". They will automatically launch a job that calls "DoSomething" on their respective processor component:
    Code (CSharp):
    public struct ProcessorA : IComponentData, IProcessor
    {
        public void DoSomething()
        {
            UnityEngine.Debug.Log("A");
        }
    }

    public class MyProcessorASystem : MySystem<ProcessorA>
    { }

    public struct ProcessorB : IComponentData, IProcessor
    {
        public void DoSomething()
        {
            UnityEngine.Debug.Log("B");
        }
    }

    public class MyProcessorBSystem : MySystem<ProcessorB>
    { }

    This approach is better than "polymorphic components" in most situations. The downside compared to manually writing every job is that the base generic system/job has to fetch all of the component data that might be necessary across all possible variants, so every variant job fetches all the possible data even if it's not going to use it. But there are ways to arrange things so that the generic system takes a second generic type that defines a "Job Data" struct that holds & fetches ComponentDataFromEntity instances, etc...

    But polymorphic components can become a better approach in cases like these:
    • when there is a need for calling polymorphic methods in a precise order that can't be pre-determined and that can't be batched by type. Ex: implementing an ordered events system where there can be dozens of event types, and each event must fully apply its changes before the next one can be processed. You'd end up facing this sort of use case when implementing a system that plays back a queue of sequential "commands" in a turn-based strategy game, or certain kinds of ability/effects systems where it's important that the effects are applied in the precise order in which they were added, etc... NativeStreams could sometimes be a good alternative in those cases, but they'd have the limitation of not being able to be stored per-entity
    • When dealing with cases where an "event" can create another event upon being processed, and you must make sure you reach the end of the whole event-creation process in one frame. Polymorphic components could potentially save you from having an EventProcessingGroup and having to re-run that group many times per frame. It's up to you to judge which approach is better for your use case. Depending on the number of different event types that might exist at the same time, and the average number of new events created per event, polymorphic components could make an immense difference in the number of jobs you need to schedule (and therefore bring a performance gain)
    • When instant changes from polymorphic methods inside a single job would really help reducing the complexity of your code, and you've determined that the performance impact will either be worth it or will be better than a "multiple-jobs + structural-changes" approach at the scale you'll be operating on. I encountered this situation when creating a state machine for my character movement, but it's a highly-specific use case and would take a wall of text in order to explain it properly
    • When you either need a state machine that changes state often (you want to avoid frequent structural changes), or you just want a really quick & simple state machine on a small number of entities without wasting any time (ex: managing a "Game State" state machine). The polymorphic components tool essentially just acts as a switch-case generator
     
    Last edited: Jun 30, 2021
    DV_Gen likes this.
  25. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    987
    For polymorphic-like execution, we are doing something like this:
    Code (CSharp):
    interface ISomeInterface {
        void Update(in Entity target);
    }

    // Just a sample component where we could get a target
    struct Target : IComponentData {
        public Entity value;
    }

    abstract class SomeInterfaceBaseSystem<TComponentFilter, TProcessor> : SystemBase
        where TComponentFilter : struct, IComponentData
        where TProcessor : struct, ISomeInterface {
        private EntityQuery query;

        protected override void OnCreate() {
            this.query = GetEntityQuery(typeof(Target), typeof(TComponentFilter));
        }

        // Deriving class will override this one
        protected abstract TProcessor PrepareProcessor();

        protected override void OnUpdate() {
            Job job = new Job {
                targetType = GetComponentTypeHandle<Target>(),
                filterType = GetComponentTypeHandle<TComponentFilter>(),
                processor = PrepareProcessor()
            };

            this.Dependency = job.ScheduleParallel(this.query, 1, this.Dependency);
        }

        public struct Job : IJobEntityBatch {
            [ReadOnly]
            public ComponentTypeHandle<Target> targetType;

            public ComponentTypeHandle<TComponentFilter> filterType;

            public TProcessor processor;

            public void Execute(ArchetypeChunk batchInChunk, int batchIndex) {
                NativeArray<Target> targets = batchInChunk.GetNativeArray(this.targetType);

                for(int i = 0; i < batchInChunk.Count; ++i) {
                    // TProcessor is used here
                    // In some cases, we pass the filter component
                    this.processor.Update(targets[i].value);

                    ...
                }
            }
        }
    }
    The good thing about this is that we can put any data in the processor struct, like ComponentDataFromEntity, native collections, or any Burst-compatible data-resolver structs. Something like this:
    Code (CSharp):
    class DerivingClass : SomeInterfaceBaseSystem<Weapon, DerivingClass.Processor> {
        protected override Processor PrepareProcessor() {
            return new Processor() {
                allHealth = GetComponentDataFromEntity<Health>()
            };
        }

        public struct Processor : ISomeInterface {
            public ComponentDataFromEntity<Health> allHealth;

            // We can specify other data source here

            public void Update(in Entity target) {
                Health targetHealth = this.allHealth[target];
                // Do something with targetHealth or other data.
            }
        }
    }
     
    Timboc, Krajca, gentmo and 3 others like this.
  26. DV_Gen

    DV_Gen

    Joined:
    May 22, 2021
    Posts:
    14
    Thanks. This whole thread has been super interesting.
     
    adammpolak likes this.
  27. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I just did a performance comparison between a "polymorphic component" state machine, a "structural changes" state machine, and an "enabled components" state machine. The results were very eye-opening for me.

    Skip to the part in blue text in the "Results" section for final results

    ___________________________

    The test:



    I implemented a state machine that creates the logic you see in the .gif above. There are 4 states (one of which is just an "init" state that performs the initial transition to the first real state). Each state has a random duration in a certain range, and they do the following:
    • StateA : translate the cube to the side (direction flips every time we enter the state)
    • StateB: rotate the cube
    • StateC: translate the cube vertically with a sine function
    I created 3 equivalent versions of the state machine:
    • Polymorphic: this is pretty much exactly the same as the statemachine in the samples of my PolymorphicComponent repo, but with random durations
    • Structural: this is a state machine implemented with structural changes (1 job per function per state, and we add/remove tag components to determine which state function should run). Instead of using an EntityCommandBuffer, the structural changes are done in "batches" of all entities requiring the same change, for increased efficiency. I've attached the implementation files of this version to this post.
    • Enabled Components: similar to the structural changes state machine, except components store an "enabled" field to determine their active status, instead of being added/removed
    And finally, I created a spawner script that spawns 200,000 cube prefabs with one of the 3 state machine solutions on them. I also turned off rendering so that the performance results don't take it into account:



    _____________________________________

    Results:

    (the most pertinent results are in blue below, but I've also added additional stats for those who are curious)

    When making the cubes change states regularly:
    • Average total frame time:
      • Polymorphic: 17.5ms
      • Structural: 48.8ms
      • Enabled Components: 24ms
    • Average time of statemachine jobs only (excludes time spent on structural changes & other jobs):
      • Polymorphic: 4.2ms
      • Structural: 3.2ms
      • Enabled Components: 6.8ms
    • Average total frame time at smaller entity counts (1,000 instead of 200,000):
      • Polymorphic: 3ms
      • Structural: 3.7ms
      • Enabled Components: 3ms
    When making the cubes always stay in state B forever, so that no structural changes happen in any approach:
    • Average total frame time:
      • Polymorphic: 17.5ms
      • Structural: 16ms
      • Enabled Components: 24ms
    • Average time of statemachine jobs only (excludes time spent on structural changes & other jobs):
      • Polymorphic: 4.1ms
      • Structural: 2.8ms
      • Enabled Components: 6.8ms
    _____________________________________

    Conclusion:

    When I initially made this polymorphic component tool, I was convinced the use cases for it would be pretty rare and that it would mostly be about ease of use rather than performance. But I think this test shows pretty clearly that it can bring enormous performance improvements in cases where frequent changes are necessary. Even when there are no frequent changes, the performance difference with the "structural" approach seems fairly small. As for the "Enabled Components" approach, I would imagine its performance cost grows more quickly with the number of states than the polymorphic approach's, but that remains to be confirmed.

    When it comes to ease of use and/or the amount of boilerplate required, I find the polymorphic approach much better than the other two. And I find the "Enabled Components" approach only slightly better than the structural changes approach in that respect

    In short, I would now feel pretty confident using the polymorphic approach for most state machines that involve relatively frequent changes, such as AI, character movement, animation, etc. I hope Unity can eventually provide a similar tool that doesn't require a cumbersome manual codegen step.

    An alternative conclusion I could draw from this is that if you have a relatively "small" number of entities using this logic, like 1,000 or fewer (which, let's be honest, is still a lot in most game contexts), the performance difference becomes small enough to be insignificant, and only your personal preferences on how you like to architect your code really matter
     

    Attached Files:

    Last edited: Jul 1, 2021
  28. WAYNGames

    WAYNGames

    Joined:
    Mar 16, 2019
    Posts:
    992
    I'd be interested to see the same sample as the structural-changes one, but mimicking the enabled/disabled feature discussed a while back.

    So each state has an enabled/disabled bool instead of being a tag component, and you run everything every time, just exiting early if the state is inactive.
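    Presumably something like this sketch (hypothetical component names), where the bool replaces the structural change:
    Code (CSharp):
    Entities.ForEach((ref Translation translation, in StateAData stateA) =>
    {
        // The component is always present; the bool decides whether
        // this state's logic runs, so no add/remove is ever needed.
        if (!stateA.Enabled)
            return;

        // ... StateA logic ...
    }).ScheduleParallel();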
     
    PhilSA likes this.
  29. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    Just tried it; much better than structural changes for performance, but not quite as good as polymorphic

    Total frame time is around 24ms
    "State machine jobs only" time is 6.8ms

    I'll update the other post with these results
     
    Krajca and WAYNGames like this.
  30. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,271
    These results are quite unexpected. At first I thought they were normal, and then I noticed you were using a non-aliased union in the polymorphic case while using separate components in the other cases. It seems like you are memory-bound, and consequently branch-misprediction latency is almost completely hidden, hence why there isn't much of a performance difference with state changes disabled. But it could also be that the CPU is pre-parsing the jump tables. The enabled-components approach is probably slower because it reloads the transform components three times over.

    As for the structural changes, your test is a little flawed. You are using deltaTime for simulation stepping, and time is the criterion for changing states. That means that if something is a little slower, deltaTime increases, which increases the number of entities that need a state change that frame, which increases the amount of structural changes, which increases the frame time, which increases deltaTime. It is a vicious cycle. The correct way to compare is to hardcode deltaTime to some constant.

    I think there's a good chance that once C# source generators arrive you will be able to migrate this package to use it.
     
    SenseEater, NotaNaN and Shinyclef like this.
  31. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    Totally true. I fixed it so that both use a dt of 0.02f, and now the results are much less crazy:

    Polymorphic: 17.5ms
    Structural (batched changes): 48.8ms
     
    Krajca, Egad_McDad, NotaNaN and 2 others like this.
  32. Vacummus

    Vacummus

    Joined:
    Dec 18, 2013
    Posts:
    191
    I do want to offer a word of caution about using these polymorphic components. This is basically mixing OOP into DOTS, and any use of OOP comes with a baggage of pitfalls to watch out for, mainly when doing highly complex work. OOP is fine for simple things. I make use of it for a few things here and there, but when things get complex I stick to DOD (especially for something as complex as state machines). OOP couples data and behavior together and puts the emphasis on smart code instead of smart data layouts. This generally causes your app to get overcomplicated and become too unpredictable. Refactor costs become very high, which in turn makes it harder to optimize your code down the road if you do run into performance issues. It's a common story in the game dev industry: a studio builds a game with OOP and then runs into performance issues towards the end of the project, with little to no option to optimize short of a huge refactor.

    DOD is designed not just to be performant, but also to be highly optimizable (especially towards the end of your project, where you have enough data to reason about what needs to be optimized). DOD keeps data and behavior separate and puts more emphasis on smart data layouts (where most of your complexity is in your data instead of in your code). This helps reduce the complexity of your app, makes it more predictable, and keeps refactor costs low (which again makes it much easier to optimize down the road). There are a lot of articles and books out there that talk about this, so I won't bore you with the details.

    Polymorphism in DOD is pretty easy once you get used to thinking in smart data layouts instead of smart code. I get that it can be challenging if you're coming from an OOP background (like myself; I've been doing it since 2006). But here is a little Stack Overflow question I answered a few years back on the difference between polymorphism in OOP and in DOD, which I hope will help you on your journey to think more in smart data vs smart code: https://stackoverflow.com/questions/53977182/interfaces-in-data-oriented-design/54483503#54483503
     
    MehO, Krajca and andreiagmu like this.
  33. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    @Vacummus
    I agree with most of what you're saying, and I definitely think the proper use cases for it are rare and that it's a "dangerous" tool to have (easily misused). But I do think my "When to use" section in the top post presents some use cases where the usual ECS approaches to these problems wouldn't be great.

    Especially the "ordered events" problem, where a list of events must be processed in a specific order that is independent of their type. I'm under the impression that the only way to solve this DoD-style would be to have an "EventProcessingSystemsGroup" and re-update the group manually 100 times in a row for 100 ordered events. At that point, I'd start asking myself whether I'm prioritizing "ECS purism" over "whatever works best". But it's always possible that there are better implementations that I simply haven't thought of.

    And as for performance, the best DOD-style implementation of a state machine I was able to come up with still performed worse than the polymorphic approach, and its performance would degrade faster as more states are added
     
    Last edited: Jul 9, 2021
  34. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    The main reason I hesitate to use polymorphism is that very often it's a clue that you are missing a better, more concrete set of abstractions that you simply aren't seeing.

    Ability/skill/effect systems are a good example; I quite often see people miss how to do them well. They just haven't discovered how to reduce the problem well.

    Also, where you would use polymorphism generally isn't a performance-sensitive context, so that it diverges from DOD is sort of a given. It has to, in order to fit well, because most high-level problems in games are inherently not DOD-friendly and gain little benefit from being so.

    Our entire ability/effect system lives outside ECS and is mostly classes with a bit of internal pooling/reuse. It just gains nothing of significant value from a DOD design. And since it's a shared library, I can iterate on it much faster using proper test-driven development in VS outside of Unity. It's just a total win not being in ECS.
     
  35. RoughSpaghetti3211

    RoughSpaghetti3211

    Joined:
    Aug 11, 2015
    Posts:
    1,709
    Could you point me to the example where the component contains update logic? I have also never thought of doing this.
     
  36. mikaelK

    mikaelK

    Joined:
    Oct 2, 2013
    Posts:
    284
    Interesting. (Edited)
    Has anyone actually ever run into performance problems with state machines and with normal use of events? Isn't the EntityCommandBuffer made to solve the issue with structural changes?

    My current workflow is to write non-performance-critical code the OOP way and use ECS for whatever shows up in the profiler. That way I get the best of both worlds. If OOP code needs data from ECS, I make a function that pulls that data out of the system. Personally, I wouldn't use a state machine as a driver for AI
     
    Last edited: Jul 11, 2021
  37. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    How would a switch case calling static Burstable methods perform? The "state machine" component would just be an integer stating its state, and a switch inside the job would call the correct state code. Is this already what the polymorphic component does? Would that break SIMD optimization?
     
  38. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    The ECB's role is to allow structural changes to be queued from within jobs and applied later, but it doesn't make structural changes any faster than they would normally be. When we reach the command buffer system, the structural changes are just done normally, one by one, on the main thread, and they have pretty much the same cost they'd have if you did them directly with the EntityManager. Also note that if I use an ECB in my stress test instead of the "batched structural changes" approach, the total frame cost nearly doubles

    Yeah, that is pretty much all there is to it. No doubt it must break SIMD optimizations, but this is a small sacrifice made to avoid a much bigger one: the cost of structural changes. At the end of the day, with all other factors taken into account, the version that breaks SIMD (switch statement) performs much better than the version that is SIMD-friendly (structural changes)
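    For reference, the "batched structural changes" mentioned above rely on the EntityQuery overloads of the EntityManager APIs, which change a whole query's worth of entities in one operation instead of one by one (a sketch with hypothetical StateA/StateB/WantsStateB tag components):
    Code (CSharp):
    // One batched structural change per transition kind, applied to every
    // entity requesting that transition at once:
    EntityQuery toStateB = GetEntityQuery(typeof(WantsStateB));
    EntityManager.AddComponent<StateB>(toStateB);
    EntityManager.RemoveComponent<StateA>(toStateB);
    // Remove the request tag last, since the query keys on it:
    EntityManager.RemoveComponent<WantsStateB>(toStateB);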
     
    Last edited: Jul 12, 2021
  39. Vacummus

    Vacummus

    Joined:
    Dec 18, 2013
    Posts:
    191
    Oh yeah, I can def see how your polymorphic approach can be faster in certain use cases. But not in all, and there lies the problem that I feel is being a little overlooked here. It's very hard to tell early in game development where your bottlenecks will be (with any approach, really). This only becomes clear during the later stages of development, when you actually have enough data to reason about the bottlenecks. And that's when it becomes important to have the flexibility to optimize. With the polymorphic approach, not only will it be very hard to see and isolate your bottlenecks, it will also be very hard to optimize: SIMD benefits are thrown out the window, L1/L2 cache lines will not be utilized properly (which in turn will cause multithreading to be underutilized due to false sharing), and you have no way of assigning more resources to where your bottlenecks are. So when you choose OOP to solve a complex problem, you are giving away the ability to reason about where your performance issues are, and you are very limited in how you can optimize. And sometimes that's OK; not everything needs to be DOD. For simple things, this is fine. But for complex things, I think it's important to understand what you are giving up with the polymorphic approach.

    I have never had a need for "ordered events", so I don't have experience solving that problem, but I would imagine it can be solved with a very simple DOD queueing pattern. What DOD approaches have you tried for this problem? And again, solving this problem with your polymorphic approach is great for simple use cases. But if ordered events were going to be used a lot in my game, I would stick with DOD for the flexibility and optimizability it offers as things get more complex.
     
  40. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    If the goal is optimizability, and the OOP approach performs much better for a given problem than the DOD approach (as demonstrated in the stress test above), then wouldn't that make us choose OOP over DOD in that situation? I know OOP won't be better in most situations, but I also never claimed it would be

    The best DOD-style approach I could think of for the ordered events problem is explained in the previous post, and it would perform considerably worse than an "OOP" approach. But it's always possible that there's a better DOD solution I just haven't thought of. At this point, I think we'd need to wait for someone to suggest a better DOD approach

    It's true that SIMD and cache misses matter, but if coming up with SIMD-friendly approaches means we also have to add heavy structural changes, then the bottom line is that the SIMD-friendly approach performs worse than the non-SIMD approach once all other factors are taken into account
     
    Last edited: Jul 12, 2021
    Goldseeker and Krajca like this.
  41. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,271
    I understand where you are coming from, but this solution is far from backing you into a corner. Remember, DoD is all about choosing the right data structure for the problem at hand based on what is efficient for the hardware to transform, rather than some abstract model. OOP happens to have a data structure too, and as it turns out, something closely resembling that data structure is a perfect fit for data that frequently requires different transform logic (note: I'm not using the term "transform" in the TRS sense). It's not quite the full OOP model, because there are no external references and the data layout is a little more customizable. Both of those are big wins. In addition, the reason this structure is optimal is that it capitalizes on the hardware prefetcher for data, which is one of the best optimizations you can make, second only to using Burst (which this also does). There's very little transform-logic coherence, so trying to separate things based on required transform logic introduces random access, and a lot of it (this manifests as the structural changes, which are quite expensive).

    So already we have the most optimal data prefetching. There's no random memory access, so no false sharing. Transform logic lacks coherence, so we are just going to rely on the branch predictor for that. Cache efficiency is still tunable, as there are multiple union layout options. It isn't perfect yet, but I'm totally willing to help improve this area once we can adapt this to C# source generators. And that just leaves us with SIMD. Here's the thing about SIMD: Burst has two optimization modes for it. One of them is loop-based SIMD, in which batching transforms makes sense. The other is operation-based SIMD. What people don't realize is that the former is very fragile, and most Burst code by default is scalar with a little bit of the latter sprinkled in; the performance gains mostly come from other compiler tricks. And unless you are specifically processing data using batch APIs (using IJobEntityBatch instead of Entities.ForEach), odds are you aren't seeing the effects of loop-based SIMD. But operation-based SIMD is still on the table and still offers plenty of room to optimize.

    My point is, under a DoD lens, this is a really strong default for managing complexity, and the tradeoffs it makes benefit you far more often than they hurt you.
     
    Krajca, Occuros, PhilSA and 3 others like this.
  42. sexysoup

    sexysoup

    Joined:
    Apr 22, 2014
    Posts:
    17
    I think the orthodox DOD approach would be to have three different components corresponding to the three different states an entity can be in. On a state change, the existing component is removed and the new component is added to the entity.
    This makes sure entities in the same state have their state data packed together in memory.
    Performance-wise, I think the two approaches are going to be very close, because the size differences between the three components are negligible. The structural changes will probably cost a lot more CPU cycles than the extra branching in your polymorphic implementation plus the tiny bit of cache wasted by unioning the three structs.
     
    Krajca likes this.
  43. mikaelK

    mikaelK

    Joined:
    Oct 2, 2013
    Posts:
    284
    If you are concerned about performance, why would you want to go this route? Wouldn't it be better for performance to agree on, say, how many states an entity can be in at the same time, and then add that many components? Inside each of these components would be an integer representing a state, also mapped to an enum. If a state changes, you just change the integer (with the help of the enum to keep it readable). In the job you check the integer/state and execute the function for that state.

    That should be a lot faster, if not the fastest way, to check and change the state of a soldier, for example?
    Idk what I'm missing here?
     
  44. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    Huh, I didn't know it was possible for a component to have another struct in it. Maybe I didn't consider it because it would be a copy, not a reference.
    But polymorphic components are working great! Now I need one "generic" system instead of one per state. It is a huge win if I can preserve even most of the performance!

    But after that, just use an integer in them? Do you mean adding the same number of systems too? I don't get how that is supposed to work.

    It seems to be the same as what the polymorphic components presented here are doing. I mean, they only have one component with one integer/enum, and a switch triggers a function based on its value, but it sounds similar to your description.
     
  45. mikaelK

    mikaelK

    Joined:
    Oct 2, 2013
    Posts:
    284
    You would then need 1 system that has a job for each state component. One job would contain the trigger code for all the states in its state component.

    It wouldn't get super messy if you refactor the code into functions.

    This is just a quick example and could be improved in a lot of ways, so don't hate me for posting it.
    But anyway, this should be a lot more performant than both of the above ways of doing the thing.
    You could improve this by adding an enemy component or a player component to only target a specific group, combining dependencies to run both jobs in parallel, etc.

    Code (CSharp):
    this.Dependency = Entities.WithName("ProcessStatesGroup1Job")
        .WithBurst()
        .ForEach((Entity entity, int entityInQueryIndex, ref Timer timer, in StateGroup1 stateGroup1) =>
        {
            if (stateGroup1.state == EnemyStates.angry)
            {

            }
            if (stateGroup1.state == EnemyStates.fleeing)
            {

            }
            if (stateGroup1.state == EnemyStates.panic)
            {

            }
        }).ScheduleParallel(this.Dependency);
    Entities.WithName("ProcessStatesGroup2Job")
        .WithBurst()
        .ForEach((Entity entity, int entityInQueryIndex, ref Timer timer, in StateGroup2 stateGroup2) =>
        {
            if (stateGroup2.state == EnemyStates.inLove)
            {

            }
            if (stateGroup2.state == EnemyStates.curious)
            {

            }
            if (stateGroup2.state == EnemyStates.bored)
            {

            }
        }).ScheduleParallel(this.Dependency).Complete();
     
    Last edited: Jul 16, 2021
    Krajca likes this.
  46. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    Yeah, so it's the same. The only benefit of polymorphic components is packed data: you can have different data per state, and it's packed, while in your case you have all the data in one component, so it takes more space. Anyway, that's how I understand it.
     
  47. IgreygooI

    IgreygooI

    Joined:
    Mar 13, 2021
    Posts:
    48
    I have been looking for this for a long time. I really like this way of doing polymorphism. However, HPC# does not provide a lot of tools for it; it feels like working in a systems language like C. So I turned to those languages for inspiration and found that Rust's enum is exactly this kind of tagged union and could be a model for HPC#. However, I also found that combining "[StructLayout(LayoutKind.Explicit)]" with generics is not allowed by the C# runtime. That prevents using a generic tagged union for other things like monadic error handling, which, compared to throwing exceptions, is a much safer way to handle errors in Burst in my opinion (throwing exceptions is a nightmare for unmanaged memory; not that the official Unity packages follow this).

    At the end of the day, as I mentioned, HPC# is basically C: most C# features don't really work, and we have to ensure the code compiles under both Mono and Burst. What we can do to go further on the existing platform is probably codegen. Thus, many thanks for this.
     
  48. mikaelK

    mikaelK

    Joined:
    Oct 2, 2013
    Posts:
    284
    That, and there was also talk about cache misses and SIMD compatibility.

    The solution I gave should be good in both of those cases, since it can be Burst compiled and the data layout is optimal for that.
    Basically a data-oriented design, as far as I can see. Please correct me if I'm wrong.

    I don't know how big your entity count is, but storing an entity's state in a struct as an integer is quite optimal, I think.
     
  49. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,271
    I'll just throw this out here: your solution is effectively enabled components, except with the boolean fields packed into one place. You still have the issue of iterating over the data more times than necessary, which is likely why the polymorphic components win out.
     
    Occuros likes this.
  50. mikaelK

    mikaelK

    Joined:
    Oct 2, 2013
    Posts:
    284
    I'm a little confused here. If I have to process AI every frame for every enemy entity, then how am I iterating over the data more times than necessary?

    If it's about the number of states an enemy can be in, you can limit it to one.

    Or you can also query all the state components; that way you only need 1 job and go through the data once.

    If an enemy can be in a few states at the same time, for example cautious and moving, then wouldn't you still have to go through the data? (Assuming comparing integers is not going to be an issue for performance.)


    I'm not trying to prove that the polymorphic way is wrong, just trying to make sense of the best way to do a simple state machine with ECS
     
    Last edited: Jul 17, 2021