
Question How do DOTS and non-DOTS parts of Unity fit together?

Discussion in 'Entity Component System' started by flatterino, Jul 21, 2022.

  1. flatterino

    flatterino

    Joined:
    Jan 22, 2018
    Posts:
    17
    Hi,

    One thing I can't figure out is how the non-DOTS parts of Unity fit into the DOTS way of working.

    For example, I know that animation isn't DOTS-enabled at this point. So if I wanted to pursue a pure DOTS workflow, how would I fit animated characters into my game?

    Correct me if I'm wrong, please: the DOTS authoring workflow means that you're supposed to use the Unity Editor as you normally would, using GameObjects to set up whatever you might need, including non-DOTS functionality, because all of it will be converted to its DOTS equivalents at runtime. Right?

    However, in a regular, non-DOTS workflow, if I wanted to have a humanoid character walking, I would attach an Animator component to a GameObject and work with that.

    Am I correct in assuming that, since animation isn't DOTS-ready, I should keep doing that within the DOTS authoring workflow, and that my GameObject (and Animator) would get converted into Entity and System equivalents that would handle the animation just as I would expect on a regular GameObject? And how would I even reference and interact with that "converted" Animator if, say, I wanted it to play a different animation depending on the result of some DOTS job?

    Hopefully my questions are not too confusing to follow.

    Thanks a lot.
     
  2. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I'm not sure if my approach is good or not, and I haven't found a lot of resources on Hybrid DOTS best practices. But here's what I do for a pure-DOTS character controller that has a non-DOTS animated mesh.

    Just so you know what we're going for before I start the detailed explanations, here's what we want:
    • Have a DOTS prefab, and a GameObject prefab (just the "visuals" of the character)
    • Have systems that auto-spawn and auto-destroy the "visuals" prefab whenever the DOTS character entity is spawned/destroyed. That way you don't need to remember to manually spawn and set up two things every time you want to instantiate a character
    • Have systems that compute animation values in bursted jobs
    • Have non-bursted main thread systems that write those values to Mecanim Animators
    ___________________


    Two separate prefabs
    I have one prefab for my DOTS character (physics, movement components, etc.), and one prefab for the non-DOTS "visuals" of my character (skinned mesh renderer).




    A managed ECS component to remember the "visuals" prefab
    During conversion, I add a "class-based IComponentData" component to my pure DOTS prefab. The purpose of this component is to hold a ref to the "visuals" prefab of my character
    Code (CSharp):
    using System;
    using Unity.Entities;
    using UnityEngine;

    [Serializable]
    public class PlatformerCharacterHybridData : IComponentData
    {
        public GameObject MeshPrefab; // the GameObject-only "visuals" prefab
    }


    A system to manage spawning of the "visuals" prefab based on the DOTS prefab

    I then have a system that schedules two jobs:

    One non-bursted main thread job iterates on all characters with a "PlatformerCharacterHybridData" that haven't been initialized yet. This job instantiates the "Visuals" prefab for that character, and adds a new "class-based ISystemStateComponentData" to the character entity.

    The purpose of that component is to remember the GameObject instance of the spawned character visuals, as well as cache a reference to its Animator so we don't have to GetComponent every frame. It's also the component that tells us that the hybrid init step has been done, and it's an "ISystemStateComponentData" because we'll need to use these references when the entity gets destroyed (more on that in the next paragraph):
    Code (CSharp):
    using System;
    using Unity.Entities;
    using UnityEngine;

    [Serializable]
    public class PlatformerCharacterHybridLink : ISystemStateComponentData
    {
        public GameObject Object;     // spawned "visuals" instance
        public Animator Animator;     // cached so we don't GetComponent every frame
    }
    Another non-bursted main thread job detects destroyed character entities by checking if they have a "PlatformerCharacterHybridLink" ISystemStateComponentData, but not the regular character components. This job handles destroying the "visuals" GameObject when the character entity is destroyed
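
    Roughly, the shape of that spawn/destroy system looks something like this. This is just a simplified sketch of what I described, not my actual code; the system name is made up, and the exact structural-change APIs will differ a bit between Entities versions:
    Code (CSharp):
    using Unity.Entities;
    using UnityEngine;

    public partial class PlatformerCharacterHybridSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            // Spawn pass: entities that have the hybrid data but no link yet
            Entities
                .WithoutBurst()
                .WithStructuralChanges()
                .WithNone<PlatformerCharacterHybridLink>()
                .ForEach((Entity entity, PlatformerCharacterHybridData hybridData) =>
                {
                    GameObject instance = GameObject.Instantiate(hybridData.MeshPrefab);
                    EntityManager.AddComponentData(entity, new PlatformerCharacterHybridLink
                    {
                        Object = instance,
                        Animator = instance.GetComponent<Animator>(),
                    });
                })
                .Run();

            // Destroy pass: the system state link is still there, but the regular character
            // data is gone, which means the entity was destroyed
            Entities
                .WithoutBurst()
                .WithStructuralChanges()
                .WithNone<PlatformerCharacterHybridData>()
                .ForEach((Entity entity, PlatformerCharacterHybridLink link) =>
                {
                    GameObject.Destroy(link.Object);
                    EntityManager.RemoveComponent<PlatformerCharacterHybridLink>(entity);
                })
                .Run();
        }
    }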


    A system to handle animation parameters
    Some bursted jobs write some animation values into a pure DOTS component during the DOTS character update. Then, a non-bursted main thread job iterates on all characters, gets their animator reference through the "PlatformerCharacterHybridLink" component, and writes the previously-mentioned "animation data" from the ECS component to the Animator parameters
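
    Here's a rough sketch of that last step too. The component, its fields, and the Animator parameter names are all made up for the example:
    Code (CSharp):
    using Unity.Entities;

    // Unmanaged component that the bursted jobs write animation values into
    public struct CharacterAnimationData : IComponentData
    {
        public float Speed;
        public bool IsGrounded;
    }

    // Non-bursted main thread pass that copies those values to the Mecanim Animator
    public partial class CharacterAnimatorSyncSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            Entities
                .WithoutBurst()
                .ForEach((PlatformerCharacterHybridLink link, in CharacterAnimationData animData) =>
                {
                    // Parameter names match whatever is set up in the AnimatorController
                    link.Animator.SetFloat("Speed", animData.Speed);
                    link.Animator.SetBool("IsGrounded", animData.IsGrounded);
                })
                .Run();
        }
    }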
     
    Last edited: Jul 22, 2022
    apkdev, koirat and mbalmaceda like this.
  3. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I'm almost convinced there's gotta be a simpler way, but since I'm mostly doing this temporarily until pure DOTS animation comes out, I haven't spent too much time thinking about it

    At least I think it might be possible to turn this into general-purpose components that can be reused for anything in your game that requires animation, but in general you should mentally prepare yourself for Hybrid workflows to make things a lot more complicated than if you worked with either pure GameObject or pure DOTS (for now, at least)

    One additional source of complexity that I haven't mentioned in my approach is object pooling. You not only have to worry about pooling the character visuals prefab, but you also need to pool the "managed IComponentDatas" that you add to your entities, since they'll be creating GC allocs as well. If we were working in pure DOTS, we could simply not care about pooling at all
     
    Last edited: Jul 21, 2022
  4. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,983
    Managed ICDs support ICloneable and IDisposable, so theoretically you could tie those to a pooling mechanism and potentially even pool the ICD too. I haven't tested this yet though, because I haven't needed it, which brings me to my next point.

    It seems like you all have the same goal of using hybrid primarily for character animation. Why are you going through this pain? It's not like DOTS Animation is around the corner. It is scheduled to arrive after the 1.0 release (which is for 2022 LTS), and even then it will probably take a few releases before it is actually usable. That means it is at least a year out, probably more like two.

    Meanwhile, there are free community DOTS animation solutions that work today. Why are they being overlooked? Lack of awareness? Lack of learning resources? Lack of features? Or are you all afraid to touch something made by someone other than Unity?

    These aren't rhetorical questions. I want to know this, because I believe the lack of involvement in community-based solutions is holding the DOTS community back.
     
    Elapotp and Krajca like this.
  5. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    To be honest I have no idea where to look for proper skinned meshes in 0.51 myself and I do get around a bit. The last I heard of anything that would work very well is your own framework!
     
    Krajca and DreamingImLatios like this.
  6. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I can't speak for others, but for me these are my reasons:
    • I need this as part of an asset store project, so it makes sense to only use stuff that is as "standard" as possible. And of course, it has to be stuff that I can legally distribute on the asset store
    • Official DOTS Animation will inevitably be a thing. There is risk involved in using a third party solution, because you can never be sure that it'll remain supported (especially after the official one releases). So you are at risk of ending up in a situation where the official DOTS animation solution releases in a year or two, but you're stuck with a project built on an old, unsupported third party solution that you have to maintain and improve yourself (instead of having all this stuff done for free by Unity engineers)
    • I also see value in using a "standard" tech, because of all the resources, tools and community that will build around it
    Overall, I just feel like I prefer the peace of mind of waiting for the official DOTS Animation.

    If I was working on a non-asset-store project and I felt pressure to start implementing the final animation solution right now, I'd have to look for a third party solution. But as of right now, I prefer delaying the implementation of animation until later, and just do other stuff in the meantime.
     
    Last edited: Jul 21, 2022
    WAYNGames and DreamingImLatios like this.
  7. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,983
    So to start with, the official ECS samples updated the Skinned Mesh Rendering sample to do simple animation on the humanoid characters. I believe I remember seeing a GitHub repo somewhere that used a similar technique (it was embedded in some experimental project). There are also a few vertex/bone animation texture solutions around GitHub. One of them is this: https://github.com/joeante/Unity.GPUAnimation/tree/361571ebd05fbf6c8321c28e4908010ad29f7d2f Note the commit hash, as future updates are against some internal DOTS version. Also pay attention to the forks that have their own compatibility improvements.
    There's also these:
    https://github.com/maxartz15/VertexAnimation
    https://github.com/felipemcoliveira/com.felipemcoliveira.crowdmorph

    But of course I do have my own solution, so I am a little biased. :p

    The Asset Store point is a strong but very specific argument. Though I do see a lot of assets which offer optional integrations with 3rd party solutions.
    The "wait for official DOTS Animation" point doesn't make sense to me, though. Whether it is hybrid or third party, it can still be a temporary solution, and you can switch to Unity's later when you are ready. So pick between the two based on which has less pain.

    I can't speak for the other solutions, but at least for me, it would take some horrific incident involving me directly for me to stop continuing to support my solution. There's only a few engines out there that provide good authoring and build solutions as well as access to native performance code. And most of those rely on C++ OOP which is going to fight me every step of the way when doing the animation things I want to do. I also don't believe Unity is going to solve my animation problems for me since my problems are unique. Not that any of this will change your mind, but I figured I'd mention it anyways.
     
    hippocoder likes this.
  8. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I'm sure it must be obvious for you, but for other people who don't really know you it still feels like a very big risk

    Regarding your framework, I'd still like to ask; is it possible to import only your animation package individually, without any reliance on any other part of the framework (not even the "Core" part, if any)? This is usually what frightens me when I hear the word "framework". I'm picturing in my mind something that'll have a big influence on my architecture, add all kinds of extra systems, do all kinds of things that I'm not 100% aware of, and complicate things a lot when I make a netcode game where systems must be moved to prediction groups. What I really want is a fully-independent package that does only animation and absolutely nothing else
     
    Last edited: Jul 21, 2022
  9. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,574
    The current project I am working on has the same concerns regarding 3rd party animation assets that @PhilSA mentioned.
    You are active now, and that is a superb contribution to the community. I love your contributions, and you have already been developing your package for a long time.

    Yet it is impossible to tell, and neither can you, whether you will stop developing it tomorrow, for whatever reason. I hope you won't. But I cannot take that risk in our project, which is meant to be developed over an extended period of time, plus supported further ahead (months/years). There is also the requirement of training everyone on the team to use the 3rd party assets/framework.

    Hence my reasoning to limit the use of 3rd party assets.
    I hope that explains my specific case.
     
    bb8_1 and DreamingImLatios like this.
  10. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,983
    That's a good point and one I am already aware of. There's not much more I can do about it, but I also don't think it is the only reason.

    If the Package Manager handled git dependencies properly, I would have split this up into a bunch of small packages so that people could use exclusively what they need. There is some shared functionality especially in Core that doesn't make sense to reinvent or copy for each feature. Animation in particular uses a blob asset generation utility and a mechanism for passing containers between systems which is needed for accurate culling. It also uses some physics utility functions for culling as well.

    Instead of a bunch of packages, the latest feature release addresses this issue by making all invasive features opt-in, with the exception of a few critical essentials that everything relies on. If one of these few core essentials breaks your DOTS project, that is a bug and I will take it very seriously. If you were to install the package and not do anything else, the only things that would run are a couple of conversion systems that work on custom types (so they wouldn't do anything), and a conversion hook that checks for a special interface to customize the conversion world, which wouldn't exist in your project (so it would do nothing). Everything else, including the critical essentials, has to be enabled via the ICustomBootstrap API. There are also some editor create menus, but who counts those?

    I would be lying if I didn't call it a "framework" since there are critical essentials. But it behaves much more like a toolkit. I wish there was a better word for it.
     
    Occuros, bb8_1, Krajca and 1 other person like this.
  11. flatterino

    flatterino

    Joined:
    Jan 22, 2018
    Posts:
    17
    Thank you everyone for your replies, I really appreciate them, but could we please go back to the original topic? That is, coordinating DOTS and non-DOTS parts of Unity.

    I'm thinking out loud here: I could maybe operate and compute stuff in DOTS as much as I can, and then have a System or Systems that, at the end of each frame, after all other Systems have finished doing their work, are set up to "translate" the result of these computations to their respective non-DOTS parts of Unity (GameObjects, animation, whatever).

    The way I'm thinking about it, this "translation layer" would be one-directional as much as possible: for example, animations exist outside of DOTS, and obviously depend on the state of things like Transform, velocity, etc. which can be computed in DOTS. At the same time, things like Transform, velocity, etc. don't depend at all on the state of the animation. At least they don't have to. So the information flow would be [DOTS] → [non-DOTS], but not the other way around.
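
    Just to make that concrete, something along these lines is what I'm picturing running late in the frame. The names are completely made up, and I'm not sure about the exact API details; it's only meant to illustrate the one-directional idea:
    Code (CSharp):
    using Unity.Entities;
    using Unity.Transforms;
    using UnityEngine;

    // Made-up managed component that remembers which GameObject "mirrors" this entity
    public class GameObjectMirror : IComponentData
    {
        public Transform Transform;
    }

    // Runs after the simulation systems and only writes DOTS -> GameObject, never the other way
    [UpdateInGroup(typeof(LateSimulationSystemGroup))]
    public partial class MirrorTransformsSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            Entities
                .WithoutBurst()
                .ForEach((GameObjectMirror mirror, in LocalToWorld localToWorld) =>
                {
                    mirror.Transform.SetPositionAndRotation(localToWorld.Position, localToWorld.Rotation);
                })
                .Run();
        }
    }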

    So, my (new) two questions would be:
    1. Going with such an approach, how much (approximately) would I be leveraging the performance improvements that DOTS brings to the table? In other words, nobody wants to deal with 100% of the new complexity for just 40% of the performance benefits. Would this be the case?
    2. Given that there's no way to fully leverage DOTS as of now, would I see a big performance improvement from just using the C# Job System along with a traditional OOP & GameObject based design?
    Thank you all.
     
  12. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    You've definitely got the right idea, especially the "one directional" part.

    And personally, my preferred approach is to always go with DOTS systems writing to GameObjects, rather than GameObjects "reading" from an Entity assigned to them. That way, all the logic in your game remains implemented in systems instead of being split between systems and MonoBehaviour updates. You also keep the benefits of explicit system update orders, and a single system's update is a lot more efficient than many individual MonoBehaviour Updates.

    In my first post, the "A system to handle animation parameters" paragraph basically describes this. First, we compute all anim values on the pure DOTS side of things, and then something just passes that data along to mecanim on the main thread. The rest of that post mostly describes how to auto-spawn and auto-destroy the GameObject prefab whenever the ECS prefab is spawned/destroyed

    As for your first question, I guess this will heavily depend on the specific case. Even when just talking about characters, if your movement code is very simple and your Mecanim animator is extremely complex, then the benefits won't be as big as in the opposite scenario. Most of the time, though, it will be worth it.

    And like I said earlier, GC allocs are something you'll have to deal with when going hybrid.

    As for your second question: yes, it's something a bunch of games have been doing to try & benefit a bit from DOTS even though they have a vanilla Unity project. But the usage of jobs is much more limited in that scenario, because almost none of the OOP Unity API is job-compatible.

    For example: raycasts & physics queries in general are very often a significant part of a game's frame time (used for AI detection, character controllers, projectiles, interactable objects, etc.). But since the OOP Unity physics APIs are not job-compatible, there's no way to do any of those physics queries in jobs, although there have been talks of exposing a job-friendly physics API in the future. We do have the "Batch raycast APIs" right now, but I've found their usability pretty limited compared to being able to do queries directly in jobs like DOTS Physics allows you to do, and they don't have the full set of features the non-batch APIs have.
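
    For reference, the batch raycast API usage looks roughly like this (a trimmed-down, made-up example; the RaycastCommand constructor signature has changed a bit across Unity versions):
    Code (CSharp):
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    public class BatchRaycastExample : MonoBehaviour
    {
        void Update()
        {
            var commands = new NativeArray<RaycastCommand>(2, Allocator.TempJob);
            var results = new NativeArray<RaycastHit>(2, Allocator.TempJob);

            commands[0] = new RaycastCommand(transform.position, Vector3.down, 10f);
            commands[1] = new RaycastCommand(transform.position, transform.forward, 5f);

            // The raycasts themselves run on worker threads, but building the commands
            // and reading the results still happens on the main thread
            JobHandle handle = RaycastCommand.ScheduleBatch(commands, results, 1);
            handle.Complete();

            for (int i = 0; i < results.Length; i++)
            {
                if (results[i].collider != null)
                {
                    // hit something
                }
            }

            commands.Dispose();
            results.Dispose();
        }
    }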

    In other instances, there's a way to use jobs by first converting your OOP data to a job-friendly format (ex: convert your UnityEngine.Transforms to a NativeArray<RigidTransform>), then running a job that does operations on these, and finally converting your job-friendly format back to the OOP format (ex: write your NativeArray<RigidTransform> back to UnityEngine.Transforms). The job part performs just as well as in pure DOTS, but you're paying a significant price for converting your data to and from a job-friendly format every frame.
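
    Something along these lines, just to illustrate the convert → job → reconvert pattern (a made-up example, not from an actual project):
    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using Unity.Mathematics;
    using UnityEngine;

    public class TransformJobExample : MonoBehaviour
    {
        public Transform[] Targets;

        [BurstCompile]
        struct MoveUpJob : IJobParallelFor
        {
            public NativeArray<RigidTransform> Transforms;
            public float DeltaTime;

            public void Execute(int index)
            {
                RigidTransform t = Transforms[index];
                t.pos.y += 1f * DeltaTime;
                Transforms[index] = t;
            }
        }

        void Update()
        {
            // 1. Convert the OOP data to a job-friendly format (this copy has a cost every frame)
            var rigidTransforms = new NativeArray<RigidTransform>(Targets.Length, Allocator.TempJob);
            for (int i = 0; i < Targets.Length; i++)
            {
                rigidTransforms[i] = new RigidTransform(Targets[i].rotation, Targets[i].position);
            }

            // 2. Do the heavy work in a bursted parallel job
            new MoveUpJob { Transforms = rigidTransforms, DeltaTime = Time.deltaTime }
                .Schedule(Targets.Length, 32)
                .Complete();

            // 3. Write the results back to the OOP format (another copy)
            for (int i = 0; i < Targets.Length; i++)
            {
                Targets[i].SetPositionAndRotation(rigidTransforms[i].pos, rigidTransforms[i].rot);
            }

            rigidTransforms.Dispose();
        }
    }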
     
    Last edited: Jul 22, 2022
    bb8_1 likes this.
  13. d3eds

    d3eds

    Joined:
    Jul 17, 2022
    Posts:
    53
    What PhilSA said, but also, after coming to terms with Jobs:

    Burst, with SIMD considerations in how you structure data and iterate over it, can (combined with Jobs) see you getting speed improvements of more than 1000x on things that are suited to these ways of doing things.

    That sounds hyperbolic. But it's not. The learning is hard work, but no harder than ECS in Unity.

    And Burst and Jobs are basically finished and production ready.

    In doing this, you'll be making tiny systems of your own, in a sort of ECS-lite manner, and learning how to optimise exactly that part of your game that needs this kind of boost, and can still do everything else in an OOP and Monobehaviour manner.

    No need to change rendering or anything else.
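
    As a trivial illustration of the kind of self-contained bursted work I mean (a made-up example; the real gains come from how you lay out and batch your own data):
    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;

    // Tiny example of a self-contained bursted job: no ECS, no rendering changes,
    // just a chunk of data laid out linearly so Burst can vectorize the loop.
    [BurstCompile]
    public struct ApplyGainJob : IJobParallelFor
    {
        public NativeArray<float> Samples;
        public float Gain;

        public void Execute(int index)
        {
            Samples[index] *= Gain;
        }
    }

    // Usage from any MonoBehaviour or plain C# code:
    //   var samples = new NativeArray<float>(48000, Allocator.TempJob);
    //   new ApplyGainJob { Samples = samples, Gain = 0.5f }.Schedule(samples.Length, 1024).Complete();
    //   samples.Dispose();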
     
  14. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,983
    Sorry about that. In general I feel that if you rely on conversion, you want systems to access GameObjects. If you instead rely on procedurally creating entities in an otherwise classical Unity project, then going the other way around may make more sense.

    To your first question: if you assume DOTS frame time is negligible compared to classical Unity because it is that fast, then your speedup is inversely related to the amount of stuff left as MonoBehaviours. In practice it isn't quite that good, but it is a good ballpark.
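    For example, as a rough ballpark: if MonoBehaviours account for about 20% of your frame time and everything you move to DOTS becomes effectively free, the ceiling is roughly a 5x overall speedup (1 / 0.2); if half the frame stays in MonoBehaviours, the ceiling is only about 2x.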
    To your second question: if you have some really expensive code, yes.
     
    bb8_1 likes this.
  15. flatterino

    flatterino

    Joined:
    Jan 22, 2018
    Posts:
    17
    I appreciate all the replies, friends. Thanks.

    Is there any way as of today to know whether you're taking advantage of SIMD instructions other than examining the ASM output from Burst by hand? What about designing and structuring your data and systems? How can you tell, at design-time, what would be more (or less) efficient?
     
  16. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,983
    Most of the time you don't need SIMD. Burst, good memory layouts, and clever algorithms account for the majority of DOTS performance.
     
  17. d3eds

    d3eds

    Joined:
    Jul 17, 2022
    Posts:
    53

    There isn't a great answer: lots of reading and experimenting. Note that I say "with SIMD considerations", meaning the knowledge, in my case, came from having grown up using Assembler; I apply that thinking to what I read about target devices and their SIMD capabilities, then hypothesise and experiment from there.

    The results have been stunning for the DSP stuff I've been doing with this, to the point where I don't feel the need to drop down into C to make an audio plugin.

    The greatest annoyance in this process is how long Burst compilation takes.

    The detective work to eek out more performance is a sick, OCD-like fun - and rewarding.