Discussion Dependency Injection in Unity - Vampire Survivors uses Zenject!

Discussion in 'General Discussion' started by TheNullReference, Apr 11, 2023.

  1. TheNullReference

    TheNullReference

    Joined:
    Nov 30, 2018
    Posts:
    222
It's an awesome video, but his examples weren't broad enough for me to get a full idea of how to implement DoD.

For his rocket example, how do you apply this data to a transform? Does the transform have to become part of the data struct now? The same goes for meshes, animations, etc. What if a particular rocket can be affected by a button press that changes its direction? How do I isolate that particular rocket?

I'd love an OOP/DoD system where basically anything with more than 2 instances gets treated like a "particle", in his words. Maybe a drawback of DoD is setting up a manager to handle 1000x instances for your player class, which could have been a singleton.

A survivors game is a good example to test the theory. The main player should be OOP and benefit from OOPy things like having a lot of the UI bound to it. Enemy behaviour would be cool to build as DoD.

Also, I agree that Vampire Survivors wasn't a good example of Zenject. There were a couple of unit test scripts in there, so maybe they thought they were going to do a lot more testing?
     
  2. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,292
You can do both in the same project, e.g. keep the visual layer / UI / services as OOP while the simulation is DOD / ECS.
The "pure" Entities approach is convoluted right now and has a higher barrier to entry for beginners. Baking is overengineered in my opinion, as it tries to cover each and every case instead of focusing on the actual game development flow. And subscenes... well, they aren't the best thing in terms of maintainability. Hopefully this will improve in future versions.

Later on you can always move more MonoBehaviours over to the Entities side once you've got proper solutions for them. Right now I'd say baking and subscenes are the biggest pain points, which can be avoided completely if MonoBehaviours are used for data authoring. It's up to you though. As you can see, VS tries to mimic exactly that, but without ECS.

A custom bootstrap is the way, though it's a more advanced topic.

Because usually you don't do that. Sort groups against groups instead; it's far simpler and more efficient. Use the Systems window to visualize what's going on and to save some time. Plus, you can later do job balancing based on groups instead of micromanaging systems. It's a case-by-case thing, though.

Use IJobEntity. It's as simple as it gets. The query can be defined separately, with no attributes at all.
It might be hard to get used to, but once you realize different job structs can be reused across different systems as atomic ops, it reduces the codebase drastically.
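A minimal sketch of what that looks like, assuming the com.unity.entities 1.x package (this only compiles inside a Unity project with Entities installed; the component and system names are made up for illustration):

```csharp
using Unity.Burst;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

public struct Velocity : IComponentData
{
    public float3 Value;
}

// The Execute signature declares what the job reads and writes;
// no query attributes are needed on the job itself.
[BurstCompile]
public partial struct MoveJob : IJobEntity
{
    public float DeltaTime;

    void Execute(ref LocalTransform transform, in Velocity velocity)
    {
        transform.Position += velocity.Value * DeltaTime;
    }
}

[BurstCompile]
public partial struct MoveSystem : ISystem
{
    [BurstCompile]
    public void OnUpdate(ref SystemState state)
    {
        // The same job struct could be scheduled from other systems too,
        // optionally against a narrower EntityQuery.
        new MoveJob { DeltaTime = SystemAPI.Time.DeltaTime }.ScheduleParallel();
    }
}
```

This is the "atomic op" idea from above: MoveJob is self-contained, so any system that wants movement can schedule it.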
     
    Last edited: Aug 22, 2023
  3. TheNullReference

    TheNullReference

    Joined:
    Nov 30, 2018
    Posts:
    222
I appreciate the responses. Can you give a little more detail on what you mean by using MonoBehaviours as authoring? I thought the point of baking was to turn MonoBehaviours into entities?
     
  4. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,292
Basically, you split data like you would with "pure" Entities / DOD.
But the data is authored via a MonoBehaviour wrapper at runtime, which can also be kept on or added to the Entity
(via EntityManager.AddComponentObject).

Then you query that MonoBehaviour directly from a managed system and call any method you need.
Data can be taken from anywhere at this point.
It's more like the old "Convert & Keep" workflow; this is often referred to as a "Hybrid" solution.

It's still DOD in concept: logic is kept in the systems, and the MonoBehaviour is treated as "data" / authoring, but it can also be an access point. Pretty much all previous logic is supported: plugins from the Asset Store, legacy code, anything you might need.

Tests can be applied on top of the systems or jobs. Decoupling is achieved by splitting data and logic into separate systems. Basically it ticks all the checkmarks that DI does, with none of the cons:
no black boxes (injects), no extra interfaces required, no reflection involved at runtime.
As a bonus, you get easy access to multithreading.

Baking works differently. It removes the MonoBehaviour completely at the editor phase, because SubScenes don't know how to store managed objects properly. So you have to recreate it from a system, for example.
Baking "boosts" loading speed a bit, but it's also a pain to work with, depending on what you're trying to do or use.

If you want more advanced authoring for the hybrid setup, you could check out EntitiesExt.
See EntityBehaviour for how to do it. It's in my sig as a repo, or available via OpenUPM as an actual package. Or here's an example of how such authoring looks.
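A rough sketch of the hybrid pattern being described, assuming the Entities 1.x package (not runnable outside Unity; RocketView, RocketSpawner and RocketPresentationSystem are illustrative names, not from EntitiesExt):

```csharp
using Unity.Entities;
using Unity.Transforms;
using UnityEngine;

// Managed component holding a reference to the GameObject side.
public class RocketView : IComponentData
{
    public Transform Transform;
}

public static class RocketSpawner
{
    // At spawn time, attach the GameObject-side data to the entity
    // via EntityManager.AddComponentObject, as described above.
    public static Entity Spawn(EntityManager em, Transform viewTransform)
    {
        var entity = em.CreateEntity(typeof(LocalTransform));
        em.AddComponentObject(entity, new RocketView { Transform = viewTransform });
        return entity;
    }
}

// Managed components require a managed SystemBase (no Burst here),
// which is fine for a presentation layer touching UnityEngine objects.
public partial class RocketPresentationSystem : SystemBase
{
    protected override void OnUpdate()
    {
        foreach (var (localTransform, view) in
                 SystemAPI.Query<RefRO<LocalTransform>, RocketView>())
        {
            view.Transform.position = localTransform.ValueRO.Position;
        }
    }
}
```

The simulation stays in unmanaged systems/jobs; this managed system only copies results out to the MonoBehaviour side once per frame.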
     
    Last edited: Aug 22, 2023
    TheNullReference likes this.
  5. TheNullReference

    TheNullReference

    Joined:
    Nov 30, 2018
    Posts:
    222
Hmm, interesting, I will have to check it out. So that means the components are managed and can only use SystemBase?

From my understanding of "worlds", they can essentially act as application states, which is incredibly powerful, but I don't think the examples use them at all. I'm quite disappointed with the example repo; it's where I got all my examples of nested structs and 5 attributes per class, so I assumed that was standard.
     
  6. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,292
Component objects are always managed, but they are kept away from the chunk as an array of references.
ComponentData can be either, but usually should be structs / unmanaged.

This means you can process any amount of data in jobs / unmanaged ISystems, then at the end of the simulation display it via the visual layer (e.g. MonoBehaviour UI) / a managed SystemBase, the same way you would with a default MonoBehaviour implementation.

You don't really need more than one world in most cases, unless you're doing some weird kind of server-client simulation sort of thing.

I find the manual to be more reliable in terms of examples. E.g.
    https://docs.unity3d.com/Packages/com.unity.entities@1.0/manual/iterating-data-ijobentity.html
     
    Last edited: Aug 22, 2023
  7. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,763
The transform is held in three arrays (NativeArray<float3> position, NativeArray<quaternion> rotation, NativeArray<float3> scale) as part of the SomeData struct; then you access by index across all the arrays. The separate arrays are more performant.

It's harder for animations since Animator is entirely object-oriented, but there's the option of using the Playables API, where things like transition weights and animation timings can be batch processed as part of a hybrid approach. You'll have to wait for Unity's Animation package for true DOD. Also, visuals are separate from data: the struct likely has an int property that references visuals in another array in a different script. Just like you access position by index, you also get visuals by index, then update animations, meshes, etc. Regarding meshes, the talk references compute shaders, but that's foreign to me since I need to support WebGL and compute shaders aren't available there.

As for isolating a particular rocket, you can assign unique IDs to all rockets (which would be another array), or you could just use the index and have a bool or enum property as part of the data struct that enables some kind of special behaviour. The button press then sets the bool or enum by index/ID, and the system runs an if statement to check whether any special behavior applies; if not, it runs the regular logic. Basically, everything happens at the system level rather than the object level.

    I'm still quite new to performant code in Unity, but this is my understanding so far.
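A plain-C# sketch of that layout (a tiny Float3 stands in for Unity.Mathematics.float3 so it runs outside Unity; all names are illustrative):

```csharp
using System;

// Minimal stand-in for float3 so the sketch compiles without Unity.
struct Float3
{
    public float X, Y, Z;
    public Float3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

class RocketSim
{
    // Parallel arrays: index i describes rocket i across all of them.
    public Float3[] Positions;
    public Float3[] Velocities;
    public bool[] Reversed;     // per-rocket flag, e.g. flipped by a button press

    public RocketSim(int count)
    {
        Positions = new Float3[count];
        Velocities = new Float3[count];
        Reversed = new bool[count];
    }

    // The "system": one loop over all rockets; the flag isolates
    // special behaviour per index without touching any objects.
    public void Step(float dt)
    {
        for (int i = 0; i < Positions.Length; i++)
        {
            float sign = Reversed[i] ? -1f : 1f;
            Positions[i].X += Velocities[i].X * dt * sign;
            Positions[i].Y += Velocities[i].Y * dt * sign;
            Positions[i].Z += Velocities[i].Z * dt * sign;
        }
    }
}
```

Setting `sim.Reversed[7] = true` from a button handler is the whole "isolate that particular rocket" mechanism: the data changes, and the system picks it up next frame.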
     
    Last edited: Aug 22, 2023
  8. TheNullReference

    TheNullReference

    Joined:
    Nov 30, 2018
    Posts:
    222
Yeah, that makes more sense: it's about ensuring that all the indices match for the "same" rocket. Hmm, I guess it actually doesn't matter, because the matched indices will naturally "become" synced. I think where it might get challenging is if you have 2 or more managed references inside the rocket, for example an animator and a material; it's likely you want a particular animator and a particular material on that rocket. Then again, maybe I'm applying OOP brain to a DoD problem.

In ECS I assumed that components were treated differently, so for example I can have two rocket types, one with a fuel component and one without. Both rockets would want to use the same movement system, but only one type would deplete fuel. But because there's a different number of components, the indices won't naturally sync anymore: fuel[152] belongs to position[542], that type of thing. Things I'm sure Unity ECS has sorted. Makes me think you could probably use a MonoBehaviour to author such a connection.

In the video example, I would take "own your data" to heart, and probably find a way to either have one rocket type or two systems.
     
  9. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,763
In my mind the approach is probably something like a RocketSystemLogic.cs that iterates over the RocketData struct fully in parallel using Jobs, and then a VisualLogic.cs which handles setting the transform, animator, materials, etc. Those are held in their own arrays with their own update loop, since a lot of this is tied to the main thread and can't be fully parallelized. As for handling different rocket visuals/behavior, it's likely just an enum property on the RocketData; then you simply run a switch statement or instantiate from a different pool based on the enum, i.e. approach it from the top down rather than the bottom up, and separate data from its visual representation.

My ECS knowledge is very basic, but my understanding is that ECS assigns a unique ID to each entity, and systems iterate only over entities with matching components, so you add or remove components to match different systems.
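That matching idea can be sketched in plain C# (a toy model only: Unity ECS actually stores components in chunks grouped by archetype, which is what keeps the indices in sync; all names here are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record Position(float X);
record Fuel(float Amount);

class World
{
    // entity id -> (component type -> boxed component value)
    public Dictionary<int, Dictionary<Type, object>> Entities = new();
    int _next;

    public int Create(params (Type, object)[] components)
    {
        Entities[_next] = components.ToDictionary(c => c.Item1, c => c.Item2);
        return _next++;
    }

    // Yields ids of entities that have every requested component type;
    // a "movement system" asks for Position, a "fuel system" for Fuel too.
    public IEnumerable<int> Query(params Type[] required) =>
        Entities.Where(e => required.All(e.Value.ContainsKey)).Select(e => e.Key);
}
```

So the fueled rocket and the fuel-less rocket both match `Query(typeof(Position))`, but only one matches `Query(typeof(Position), typeof(Fuel))`, which is how one movement system can serve both types.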
     
  10. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,134
I don't think the rocket example is a good one; it is a very limited-size object! Also, if you have huge arrays, then separate arrays will surely mean you are jumping around in memory as you access each array?!
So using structs, as he goes on to, makes sense, but eventually you might also end up with a huge struct for more complex entities...
I am unsure of the issues with structs vs classes, given that recommendations are generally to use structs for up to 16 bytes (4 floats) of data. But I guess you split the data up into portions, so a struct for position, velocity, etc., separated from other data.
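That split-into-portions idea is often described as separating "hot" from "cold" data; a plain-C# sketch (all names illustrative):

```csharp
using System;

// Hot data: read and written every frame, kept small so the per-frame
// loop streams only these structs through the cache.
struct RocketMotion
{
    public float X, Y, Vx, Vy;
}

// Cold data: touched only at spawn, on death, or for UI; lives in its
// own array and never enters the hot loop's cache lines.
struct RocketInfo
{
    public int VisualId;
    public float MaxFuel;
}

static class RocketStep
{
    public static void Step(RocketMotion[] motion, float dt)
    {
        for (int i = 0; i < motion.Length; i++)
        {
            motion[i].X += motion[i].Vx * dt;
            motion[i].Y += motion[i].Vy * dt;
        }
    }
}
```

Two parallel arrays (RocketMotion[] and RocketInfo[]) share the same index per rocket, so the combined struct never has to grow huge.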
     
  11. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,763
    Even if rockets are small objects, the principle can be applied to larger and more complex entities that are processed in the main loop.

    The idea behind using separate arrays is to take advantage of data locality, i.e. you process all positions, then all rotations and finally all scales. When data is accessed sequentially from memory, it can be fetched into the cache more efficiently. This is why the video emphasizes storing data in separate arrays.

As for struct size, I can't comment much. Modern CPUs have a common cache line size of 64 bytes on most architectures, so 64 bytes or a small multiple of it would be the upper limit for reducing cache misses. But for Jobs and Burst purposes, you'd want to keep structs generally small, so it's easier to parallelize and easier to avoid race conditions and related issues.
     
    Last edited: Aug 22, 2023
    TheNullReference, Macro and andyz like this.
  12. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,134
Yes, if something is just moving in a straight line, but you don't process all things separately. For instance, you might get the angle from the target to the rocket's position and then update the rocket's angle and position together, so they tend to be tied together, is what I meant.
But yes, separation as far as possible reduces cache misses... though your sanity may take a hit doing such optimisation away from tidy objects!
If you are not dealing with hundreds of entities, you should probably forget most of this, imho.
     
    Macro likes this.
  13. Macro

    Macro

    Joined:
    Jul 24, 2012
    Posts:
    45
This is kinda where I stand on it all. ECS in Unity is kinda like async/await, where you add it to one part and then need to start adding it, or at least accounting for it, in other areas. And while I think ECS (and async/await :D) are great things, it's just a way of doing stuff.

If I was making a point-and-click game, I would be more inclined to bypass ECS and just use OO/editor approaches for the most part, whereas if I were making an RTS game or a low-level simulation, then I would probably need ECS to keep a reasonable performance level. But again, I would probably just put the core simulation elements into ECS and keep the rest of the stuff outside it.

On this note, one thing that has annoyed me for a while with ECS frameworks is that they don't like events. I know you can express the same notion in ECS by adding a `TriggerSomethingComponent` to an entity and then having a group target just that, but when you want your UI to trigger something from multiple places this becomes a bit of a pain, as it's not clear unless you know that paradigm, and your UI also has to have knowledge of the entity and component, whereas a pub/sub system seems more applicable. As long as your data is in the right format, it shouldn't matter whether your system is triggered by an event, an observable, an update loop, a thread, etc. I haven't looked at Unity ECS recently, but back when I did, you didn't really seem to have control over how stuff was executed; it was just normal Update- or Job-based execution.
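A toy, non-Unity illustration of that "event as component" pattern: the UI tags an entity (here just an id in a set, standing in for adding a TriggerSomethingComponent), and a system consumes and clears the tags each frame. All names are made up:

```csharp
using System;
using System.Collections.Generic;

class ShopTriggers
{
    public readonly HashSet<int> Pending = new();   // entities "tagged" this frame
    public readonly List<int> Handled = new();      // for observing the result

    // UI-side call: the moral equivalent of AddComponent<TriggerSomethingComponent>.
    public void Request(int entity) => Pending.Add(entity);

    // System-side: runs once per frame over exactly the tagged entities,
    // then removes the tags, like an ECS group targeting the component.
    public void RunSystem()
    {
        foreach (var e in Pending) Handled.Add(e);
        Pending.Clear();
    }
}
```

This shows the pain point being described: every UI call site needs to know about the entity and the tag, which is exactly the coupling a pub/sub layer would hide.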

From the performance perspective there is always going to be the argument that "you should not be wasteful with resources" (especially on mobile), but it feels like what I mentioned before: you learn ECS and suddenly everything needs to be ECS (to be fair, you did say earlier that you can do hybrid approaches etc.), but this is just "another way" of doing things.
     
  14. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,763
It depends how things are structured. You can calculate position and rotation in a job fully sequentially, then apply the result on the main thread by calling transform.SetPositionAndRotation. All of that still runs in the update loop within a single frame, so sequential data processing is preferable from a perf standpoint.

Yeah, the separate-arrays thing definitely only shows gains when dealing with a large number of instances, but the tangent of this thread is Vampire Survivors, so I think it applies.
     
    andyz likes this.
  15. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,134
I may be going off-topic here, but I don't know how this bit works. If you are using arrays, or an array of structs instead of classes, what happens in DOD or Unity's Jobs when you need to remove an entity? Say you have 100 missiles and some of them are destroyed. Do you remove them and reshuffle the arrays, flag them as dead... what is the idea there? Or do you regenerate the arrays for each job?!

    (JB does not mention this bit)
     
  16. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,877
A lot of the time in DOD you are constructing fresh structures each frame, due to the temporary nature of the data and the way you use it. You are filling temp structures to be computed that frame, rather than creating persistent structures that live across multiple frames. You then extract the results, do something with them, and the next frame you need computation, you construct and compute again.

That's a major oversimplification of many aspects, but it should give you enough of a pointer on how this often works.
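The per-frame rebuild can be sketched in plain C# (illustrative names; shown for one array, but the same compaction would run over each parallel array using the same alive flags):

```csharp
using System;

static class MissileBuffers
{
    // Each frame, compact the live entries into a fresh (or reused) buffer
    // and run the frame's work over that, instead of deleting in place.
    // Returns the live count; dstX[0..count) is this frame's working set.
    public static int CompactAlive(float[] srcX, bool[] alive, float[] dstX)
    {
        int count = 0;
        for (int i = 0; i < srcX.Length; i++)
        {
            if (!alive[i]) continue;   // destroyed missiles are simply skipped
            dstX[count++] = srcX[i];
        }
        return count;
    }
}
```

Flagging as dead plus periodic compaction like this avoids shifting the whole array on every kill, at the cost of one linear pass per frame.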
     
    andyz and Ryiah like this.
  17. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,763
As MadeFromPolygons mentioned, structs are redone every frame or overwritten as a consequence of some operation; you can also modify NativeArray values in place. In the video, Jason Booth says CPU processing is fast, and while there is lots of memory these days, memory speed/bandwidth is still limited, so you structure the data in the way that's most efficient for the CPU to process memory-wise. Go door to door rather than across town with a couple of detours along the way.
     
    Last edited: Aug 22, 2023
    Ryiah likes this.