
Freeform Animation : Modular Rigging

Discussion in 'Animation Rigging' started by awesomedata, Mar 20, 2019.

Thread Status:
Not open for further replies.
  1. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Could we get some details as to where/how this is taking place?

    For example:
    • Are there tools being developed to help users create modular rigs (or is this API-only?)
    • What scope are you guys looking at for this feature-set?
    • Does this implement C# Animation Jobs natively somehow (to help with ControlRig performance?)


    • Is this (quite awesome) feature related somehow?:
      https://docs.unity3d.com/Packages/com.unity.animation.rigging@0.1/manual/index.html
    • Will there be any way to visualize our modular rigs moving in the editor (perhaps via Playables?)

    I'd love to beta-test these features, but I have no clue what functionality (or scope) you guys are aiming at with this right now. Any info on this (very ninja-like) feature-set would be highly appreciated!! D:
     
    NotaNaN and Oblivionized like this.
  2. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,980
    I've tried asking about this and haven't gotten replies for like a year. It's a bit annoying that no one is acknowledging this
     
    awesomedata likes this.
  3. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    I completely agree -- it's like animation is always left out in the cold.

    For something as vital as that which creates "the illusion of life" -- Unity seems oddly focused on almost anything else besides animation. :/

    @RomainFailliot
    I would really like to know more about this feature and what it is intended for. Anything you or anybody involved with animation know about this feature would be great...
    For example, might this be something the ThirdPersonController team could use in their physics-driven approach?

    Also -- is Kinematica still a thing, or has it been tossed to the sidelines too?
    Maybe we've been waiting on ECS to help boost animation performance perhaps?

    I've been very concerned about Unity's animation future... and I'm not the only one... Please update us? Pretty please?
     
    NotaNaN and Oblivionized like this.
  4. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    Hi awesomedata,

    Just to reassure you, there are great efforts in progress here. The Animation Rigging package is initially being released as preview for 2019.1. The documentation can be found here https://docs.unity3d.com/Packages/com.unity.animation.rigging@0.2/manual/index.html

    We just delivered a GDC Developer Days presentation, which was recorded and can be seen for free on the GDC Vault here https://www.gdcvault.com/play/1026151/. We will also follow up with a blog post and more tutorial content very soon.

    Thanks for your interest, and my apologies for not seeing this earlier! We will be in touch with updates as they become available.

    -Dave
     
    Last edited: Apr 18, 2019
  5. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    Here are some more answers to your questions:
    • Are there tools being developed to help users create modular rigs (or is this API-only?)
      • Yes. Totally modular. Rigs are built from general-purpose constraints for users to assemble in any creative way they want.
      • All C# code in the package is open source and easily extensible for building your own constraints. This was a fundamental design goal: to enable community developers to extend functionality, because each game design may have custom needs.
    • What scope are you guys looking at for this feature-set?
      • The Animation Rigging package initial release in 2019.1 enables runtime rigging. Following this we will develop keyframe animation authoring tools for creating animation clips on control rigs in Unity. While we are in preview we will be paying close attention to how the community uses it so we can build more efficient artist workflows.
    • Does this implement C# Animation Jobs natively somehow (to help with ControlRig performance?)
      • Yes. The Animation Rigging package is built on the C# Animation Jobs API. With this we can hijack the animation stream and get more precise control before the animation is pushed out to GameObjects. Also, since the rig constraints are jobs, you get safe multi-threading for free.
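    For a rough idea of what building on that API looks like, here's a minimal sketch (not code from the package -- the struct and field names are made up):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Animations; // UnityEngine.Experimental.Animations on 2018.x

    // A tiny constraint-style animation job: it runs inside the animation
    // update, reads the evaluated pose from the AnimationStream, offsets one
    // bone's rotation, and writes it back -- all before the result is pushed
    // out to the scene's GameObjects.
    public struct BoneOffsetJob : IAnimationJob
    {
        public TransformStreamHandle bone; // bind via animator.BindStreamTransform(...)
        public Quaternion offset;

        public void ProcessRootMotion(AnimationStream stream) { }

        public void ProcessAnimation(AnimationStream stream)
        {
            bone.SetLocalRotation(stream, bone.GetLocalRotation(stream) * offset);
        }
    }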
     
    _met44, Orimay, Oblivionized and 3 others like this.
  6. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Thanks @davehunt_unity -- You've got me very excited!


    For a long time, I've been trying to do this (for obvious reasons):

    [embedded video: procedural animation]

    That is procedural animation at its finest IMO, but I've got a better approach that would work for Unity:


    1. "By-module" access to specific groups of named bones (stored in a specific animation clip) at a specific keyframe or time-marker (with an optional custom interpolation argument -- to handle overshoots).

    2. Custom interpolation methods (bicubic, linear) alongside a custom Animation Curve or function (e.g. spring-damping -- see the sketch after this list).

    3. Modular Rig Grammar -- The system itself handles building the current pose from individual modules: i.e. upper torso module --> humanoid arms, insect arms + lower torso module -> human legs -> human left leg + human right leg |or| insect legs -> insect left leg (x3) + insect right leg (x3)



    4. Individual modules inside modules -- (i.e. arm module bone chains) each having their own custom interpolation weightings per-bone or per-chain (i.e. for spring-damping) applied down these bone chains, making it simple to add a little procedural bounciness or other secondary motion. Essentially, damping and springiness vary and fall off gradually down bone chains.
      This approach enables a LOT of secondary animation out of a very small number of frames!
      (Watch the ears/arms/hands of the rabbit in the GDC video carefully please!)

    5. Active ragdoll and procedural pose-matching for individual modules, of course. Everyone wants that. :/ Let me also sometimes overshoot the target pose using interpolation like spring-damping.

    6. Use "Rig Modules" to label and retarget certain groups (and apply certain kinds of procedural animation) based on each module TYPE, letting users combine these across different kinds of rigs so that users can target certain bone names and groups (for modular retargeting and applying procedural animation), letting them eventually take on scriptable behaviors too based on their type (i.e. TYPE LEG: Left human leg, Left front dog leg, Left Spider leg, TYPE TAIL: Tentacle/Tail, Ponytail, Side-to-Side Spine Swaying, Scorpion Tail, etc.) -- This will also help with quickly (and procedurally) animating armors and other special types of decor for characters based on module (i.e. TYPE ITEM: a cape, dangling strings or TYPE HAIR: hair style 1 or 2.)

    ====== EDIT: Clarified some stuff.

    Also, here's one place the "module" concept is needed in this workflow:

    [attached image: Modular Rigging - Modules.png]
     
    Last edited: Apr 15, 2019
  7. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Is this presentation still coming? -- Maybe I sound impatient, but I was really hoping to see this before implementing my own solution.
    The overall setup and workflow you guys have planned is currently lost on me (it seems there is a LOT to do to make this system flexible and usable enough for what I'm wanting it for...)


    There are very specific things I want to have control over in regards to a modular rigging/animation workflow --

    For example:

    My idea of how this should work has to do with interpolation and "secondary animation" mostly, along with propagating data down "modules" consisting of bone chains (some containing _additional_ slots for more modules) that can be plugged-in or swapped, or simply retargeted (based on bone names), and animated separately, while the system puts together the resulting pose dynamically, using per-bone and per-pose weighted (custom) interpolation. In general, only two poses are needed in memory at the same time, and these can change at any point during the blend. Rather than being keyframe animated, the poses are retargeted to named bones over a sequence of poses, using only modules explicitly identified in the clip that it expects to animate.
    A clip essentially determines what separate rigs it animates. From this point, clips can be merged/combined into virtual clips (i.e. consisting of arm modules + leg modules + torso modules = a humanoid module + merged arm/leg/torso virtual clip = a humanoid animation clip that can be edited and propagated up/down the chain and generate separate files for each arm/leg/torso module separately and automatically, while also making a combined humanoid virtual clip). Fully animated clips can be imported from a package like Blender, and Unity would separate out the bones by name (based on the predefined modules they belong to) and generate separate clips for each module, including the final, resulting, merged virtual clip that shows the entire animation as it was authored externally in Blender.

    I can give more detail if needed, but please see my (heavily-edited) post above for a better idea of what tools I need in this Modular Rigging toolset. -- If the current/planned feature set can already do everything mentioned above, I would totally love to see how it might work!! :D
     
    Last edited: Apr 15, 2019
  8. MattRix

    MattRix

    Joined:
    Aug 23, 2011
    Posts:
    121
    awesomedata and davehunt_unity like this.
  9. dibdab

    dibdab

    Joined:
    Jul 5, 2011
    Posts:
    976
    does this mean that the IK here is not in LateUpdate?
     
  10. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    Correct. The constraints in the Animation Rigging package are jobs and therefore do not use LateUpdate. Through jobs you have access to the Animation Stream before it is pushed out to game objects.

    For more information about Animation C# Jobs, check out Romain's blog post here: https://blogs.unity3d.com/2018/08/27/animation-c-jobs/
     
    Last edited: Apr 18, 2019
    Oblivionized and dibdab like this.
  11. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    Hi awesomedata,

    Thanks for all of the suggestions. These are the types of things we are interested in hearing about, so we will keep them in our notes for further development. We are aware of the bare-bones (pun intended) nature of Animation Rigging v1 preview. This is intentional because we believe we will come to better solutions through hearing your feedback while it's in preview, so keep it coming!

    -Dave
     
    NotaNaN, Oblivionized and awesomedata like this.
  12. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    I edited my post above to include the link to our GDC presentation. Here it is again, and props to MattRix for finding it first!

    https://www.gdcvault.com/play/1026151/
     
    Oblivionized likes this.
  13. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Here's a thought for something akin to a "constraint" that might be really nice to add:

    • What if the character has a rigidbody that applies some physics bounce in a particular direction (based on its own motion vector) to a modular grouping of various limbs? It would apply an increasing tolerance that cascades down the bone chain (i.e. floppy arms, but not floppy fingers -- as long as the hands are a separate rig module.)

    In that Procedural Animation video I linked to up top, they do something like this with the arms and the ears.


    • Also.. if one were to combine this with another module "constraint" effect... For example, what if a "pose-matching" constraint was applied to the same module simultaneously (so floppy arms and shoulders combined with a "reaching" or "punching" animation?)

    • I think a "center of mass" sort of "constraint" would be really nice to have too (again, applying to a particular set of bones and/or modules in a module) -- I've seen a Maya script that does this, but I've never seen it apply weights or constraints to joints automatically in realtime. The guy who did this script was the animator who worked on the VR game with the mouse. He said he essentially used google to give him the weight of each individual body part of a human's anatomy and used this to help him calculate the center of mass for a whole human body. When applied over the frames of an entire animation, this gives him realistic recoil when the body moves quickly and does flips and whatnot.
    Just some thoughts! :)
     
    Last edited: Apr 18, 2019
    Oblivionized likes this.
  14. CodeKiwi

    CodeKiwi

    Joined:
    Oct 27, 2016
    Posts:
    119
    I really like the new Freeform Animation system. I upgraded the JiggleChain demo from the video to 0.1.4 to try to get used to the syntax. I attached the code in case anyone wants to try it. I tested it in the damp demo scene: I removed the damped constraints, added a JiggleChainConstraint, then set root to MRoot, tail to pivot8, and stiffness to 0.25. It's great that the source code for the other constraints is included to compare against.

    I’m making something similar based on the animation bootcamp video. I create a base pose prefab with the character and a pose component that lists all the bones (the animator is removed from the character). Then I create prefab variant for each pose e.g. Run0-4. I use a component to set the variant pose from an existing clip or I can just manually pose it in Unity. I have two blending components that I’ll probably change over to the new animation system. The first is just a standard blend that can have a negative weight for anticipation or a weight greater than one for overshot. The other takes four poses and does bicubic interpolation. I then use a controller to push new poses to the bicubic interpolation e.g. next run step or jump anticipation blend followed by jump. The referenced poses can be in the scene or just direct links to the prefabs. I’m planning on including settings on the poses that match the rig e.g. crouch pose might cause the arms to be more bouncy with a reduced jiggle chain stiffness than the run pose. I might also try a center of mass constraint like you mentioned and maybe try some of the techniques from the Ubisoft's IK Rig video.
     

    Attached Files:

  15. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    @davehunt_unity

    I've been fleshing out some of my ideas above. -- I still have a ways to go, but I've come up with (and documented) a better "Freeform" modular-rigging approach that takes into account ideas from Ubisoft and some GDC talks I've seen:


    Source-Data:
    1. Select target model(s) to define the hierarchical layout for the skeleton(s) that will eventually be used to define all possible modules that can be used/retargeted later on.
    2. Grab a list of each model's bone chains (in a hierarchical format) and process them based on module connectivity, constraint ripples, and "terminator" bones.
    3. Collect any relevant Animation Clips (and frames) you want to include or source from for animating each module, and store them (per-module) as pose data.


    Clip-Keys (and) Clip-Sequences:
    1. Animation clips can be sourced into keys known as "Clip-Keys", which are really just "masked" bone hierarchies defined by a list of modules. These source from "Clip-Layers" that live inside the Clip-Keys, with a particular "Clip-Module" containing all the data needed to evaluate an animation (also known as a "Clip-Sequence").
    2. "Clip-Modules" are the backbone of the Clip-Key data making up the Clip-Sequence.
    3. "Clip-Modules" are just a fast way to organize, mask, and ultimately evaluate an animation key-pose consisting of multiple "Clip-Layers" (which are just groups of modules that may or may not be necessary "by layer") that may or may not exist in a given model's bone hierarchy or keyframe data.
    4. "Clip-Layers" are combined to make up a "Clip-Key" (or modular animation frame) that is then evaluated for existing (and required) modules based on what "Clip-Layer" data exists in the current "Clip-Key".
    5. "Clip-Keys" get interpolated within the "Clip-Sequence" based on the below specs:
    6. Poses are stored per-module (and per "Clip-Layer"), and can be based on bone names OR chain indexes, just in case names are not reliable, but hierarchy or layout IS.
    7. Time flow can either be Linear _or_ it can be managed by Animation Curves.
    8. "Clip-Keys" consist of various poses animated with different kinds of interpolation (such as Bicubic) or Animation Curves converted into mathematical functions.
    9. Secondary animation is managed by "Constraint-Effects" instead of being based directly on keyframes (which produces more dynamic animations without the need to author time-consuming specialized animations.)


    Clip-Layers (and) Clip-Modules:
    1. Poses from modules are combined on "Clip-Layers" to create a single pose that varies based on what modules are included or masked off (i.e. a legs module, a torso module, a head module, an arms, and hands module == "Human Clip-Module" -- Another example: a multi-spider-leg module, a head-with-horn-slots module, with a couple of eye-pegs attached to the horn slots = "Crab/Spider Clip-Module")
    2. Animations relying on "Clip-Layers" that don't exist simply ignore those layers (i.e. A crab without eye-pegs might be a spider, so it can use the "Crab/Spider Clip-Module" animations/poses -- it just ignores any eye-peg animation processing that might be required, since the required bone-chain isn't present in the spider model (and therefore doesn't get a "module" assigned to it for "eye-pegs".)
    3. Clip-Layers may be marked as "mandatory" (i.e. the rig _must_ include bones for it), but are treated as "optional" by default (meaning that if there are no bones or animation for the Clip-Layer, its animation processing is simply skipped.) -- See the sketch below for how this data model might fit together.
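    To make the terminology concrete, here's one hypothetical way the containment could look in code (nothing official -- every type name here is invented):

    Code (CSharp):
    using UnityEngine;

    // Hypothetical containment model for the Clip-* terms above.
    public class ClipSequence
    {
        public ClipKey[] keys;          // evaluated in order to produce the animation
        public AnimationCurve timeFlow; // linear by default, or curve-driven (point 7)
    }

    public class ClipKey
    {
        public ClipLayer[] layers;      // combined to form one modular key-pose
    }

    public class ClipLayer
    {
        public bool mandatory;           // optional by default (point 3 above)
        public ModulePose[] modulePoses; // skipped if the rig lacks the module
    }

    public class ModulePose
    {
        public string moduleName;          // matched by module type/name
        public Quaternion[] boneRotations; // per-bone pose, by bone name or chain index
    }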


    Modules:
    1. Mask out different bones/chains to be tagged/labeled as part of different "modules".
    2. A module can be set as a "mirror" of the other side based on name -- i.e. you have a single leg module rather than separate left/right leg modules -- which helps quickly mask off parts of the skeleton.
      (The naming convention for this masked mirroring should look for "Left", "L", or "l" with an "_", a "-", or no separator at all, either before _OR_ after the bone name (i.e. "Left" or "left") -- it should allow names like "boneleft" or "leftbone", as well as "l_bone" or "bone-left", to be robust in its retargeting capabilities.)
    3. A mirrored module should be stored with an extra bit (i.e. 1 or 0) to indicate whether IT is the original bone or not.
      The original bone should contain any extra settings for its partner (i.e. which axes to mirror or flip, if any) -- this avoids having to process the partner at all in most cases.
      If IT is the copy, it does nothing and lets its partner position/rotate it during the partner's turn.
      ALL of this extra info could probably be stored in a single byte _and_ be processed for both bones while processing the partner -- resulting in a much cheaper operation (see the sketch after this list).

      A similar thing could be done with "Constraint-Effects" below.

    4. Each module has only one "start" or "input" bone, but can have many output "slots" where other modules can be plugged into it.
    5. The final bone on each chain inside a module has a "slot" that continues processing the next attached module's "start" or "input" bone -- and if nothing is plugged into this, it is considered a "terminator" bone (which basically indicates the last bone in a chain.)
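    Here's a similar hypothetical sketch for the module/mirror encoding in points 2-5 above (again invented -- it just makes the single-byte mirror idea and the slot/terminator idea concrete):

    Code (CSharp):
    using UnityEngine;

    // All of the mirror state packed into one byte, as point 3 suggests.
    [System.Flags]
    public enum MirrorFlags : byte
    {
        None   = 0,
        IsCopy = 1 << 0, // the copy does nothing; its partner drives it
        FlipX  = 1 << 1, // axes to mirror, stored only on the original
        FlipY  = 1 << 2,
        FlipZ  = 1 << 3,
    }

    public class RigModule
    {
        public string name;            // e.g. "Leg" (mirrored via the naming convention)
        public Transform inputBone;    // the single "start"/"input" bone (point 4)
        public Transform[] chainBones; // the bone chains inside this module
        public RigModule[] slots;      // plugged into the last bone of each chain;
                                       // a null slot marks a "terminator" bone (point 5)
        public MirrorFlags mirror;
    }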


    Constraint-Effects:
    1. Constraints can be applied to groups (and/or chains) of modules as "effects".
    2. Constraint "effects" ripple down each subsequent module.
    3. The context for each "ripple" can be based on an (all-at-once OR individual) flat user-selected list of modules, an overall (combined/singular) hierarchy of user-selected groups of individual modules, or individual (separate) per-module or per-bone-chain hierarchies of a user-selected group of individual modules.
    4. Rippled values may be applied as constant values, or as values modified by an animation curve, until "terminated" by the module (i.e. no further input plugged into the slots at the last bones of each bone chain).


    There's a lot more I can go into, but these are really the barebones of what's necessary for a great hybrid of something like Overgrowth's low-keyframe procedural animation and the Ubisoft video about "Modular Rigging" -- all without resorting to a node-graph.

    The initial data (and nearly everything else) can come from the inspector and a traditional dopesheet layout.

    Am I on the Unity payroll yet? :) -- If so, I don't mind making user-friendly GUI "icing" to go along with all that great-tasting "cake" there... *wink, wink* :D
     
    Last edited: Apr 23, 2019
    NotaNaN, Oblivionized and CodeKiwi like this.
  16. dibdab

    dibdab

    Joined:
    Jul 5, 2011
    Posts:
    976
    would this approach work with mecanim?

    if yes, it would be great if you could include a basic example of
    1. getting the animation stream
    2. adding rotations to certain bones
    3. pushing out to gameobject

    this would mean a performance gain,
    as we could eliminate animator layers (like upper-body etc.) and LateUpdate (see the sketch below)
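    A rough sketch of those three steps, reusing the illustrative BoneOffsetJob from post #5 above (whether this coexists cleanly with a full mecanim setup is exactly the open question):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Animations;
    using UnityEngine.Playables;

    // 1) gets the animation stream: the controller's evaluated pose flows
    //    into the job; 2) the job adds a rotation to a bone in the stream;
    // 3) the output pushes the modified pose out to the GameObjects --
    // no LateUpdate and no extra animator layer involved.
    public class StreamRotationExample : MonoBehaviour
    {
        public RuntimeAnimatorController controller;
        public Transform bone;

        PlayableGraph graph;

        void OnEnable()
        {
            var animator = GetComponent<Animator>();
            graph = PlayableGraph.Create("StreamRotationExample");

            var controllerPlayable = AnimatorControllerPlayable.Create(graph, controller);

            var job = new BoneOffsetJob
            {
                bone = animator.BindStreamTransform(bone),
                offset = Quaternion.Euler(0f, 45f, 0f),
            };
            var scriptPlayable = AnimationScriptPlayable.Create(graph, job);
            scriptPlayable.AddInput(controllerPlayable, 0, 1f);

            var output = AnimationPlayableOutput.Create(graph, "Anim", animator);
            output.SetSourcePlayable(scriptPlayable);
            graph.Play();
        }

        void OnDisable()
        {
            if (graph.IsValid())
                graph.Destroy();
        }
    }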
     
  17. DerekMcKinley

    DerekMcKinley

    Joined:
    Jun 24, 2014
    Posts:
    19
    Hello, this video also talks about applying physics to character animation. It would be incredible if we had examples of such systems in Unity.

    [embedded video]
     
    dibdab likes this.
  18. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    I also just wanted to point out that this is actually **better** than what was proposed as Kinematica IMO:

    [embedded video: the "IK Rig" GDC presentation]

    @davehunt_unity :

    Just a quick question --
    -- Is something like the above being considered with the Kinematica / Modular Rigging featureset?
     
    Oblivionized and Mixa1985 like this.
  19. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    Hi awesomedata,

    Yes, the IK Rig presentation is very cool -- I was in the audience and saw it live in 2016. It shows the power of using constraints (similar to the ones we have in Animation Rigging) for animation instead of bones, and how powerful that can be for a wide variety of real-time animation production needs. I believe many of these types of things would be possible to achieve building on top of the Animation Rigging package, perhaps with a few custom constraints and supporting tools and gameplay systems. I'd be excited to see what someone such as yourself could do to extend Animation Rigging in these kinds of directions.

    I would like to clarify that the ideas in Alexander's IK Rig presentation solve an entirely different set of problems than Kinematica does. Kinematica provides motion synthesis from a library of animations input by the user, so that developers don't have to construct their own animation state machines.

    These two ideas are actually complementary. In fact, Alexander Bereznyak and Michael Buttner were colleagues working together at the same company before Michael joined Unity. I feel it's a bit off to say that one idea is better than the other, because it's actually really nice to have both. That said, thanks for your comments -- it's really nice to see that these things are important to Unity users, and we will definitely take that into account.
     
  20. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    I think this example might already exist, if you are talking about building a custom constraint for Animation Rigging. In our GDC Developer Days talk, Olivier Dionne was the third presenter, and he covered two examples of how to build your own constraints. Or, if you want to dig deeper into how to access the Animation Stream, the whole package is open-source C#, so feel free to explore how we are doing it. Hope that helps!
     
    Oblivionized likes this.
  21. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    "barebones" lol! I see what you did there.

    Thanks for describing your ideas here. It's pretty clear how you are suggesting the implementation could look, and I have my own ideas about how this might be useful in animation productions. I would actually really like to hear more from you about what problems this solves in the production of games (again, I have my own interpretations, but I want to hear yours). What's really helpful for us is to know what users need in practical, important situations that real game productions face. The more of these examples you can provide, the more strength it adds to your suggestions, which will help folks like me build a strong argument for why we should spend development time on it.

    Thanks again for all your great ideas and suggestions!
     
    Oblivionized and awesomedata like this.
  22. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    This is super cool! Really excited to see where you are going with this. It would be great to see any videos or gifs of your constraints in action. Definitely share them here if you get a chance!
     
    Oblivionized likes this.
  23. dibdab

    dibdab

    Joined:
    Jul 5, 2011
    Posts:
    976
    is the ninja not included?
    I don't see any humanoid rig model, nor a full-body IK example.

    in
    https://github.com/Unity-Technologies/animation-jobs-samples

    there are examples, but they're so abstract
    and there are so many questions...

    what is SyncIK for?
    the FullBodyIK demo works only in editor if the effector is selected...

    Code (CSharp):
    private void SyncIKFromPose()
    {
        var selectedTransform = Selection.transforms;

        var stream = new AnimationStream();
        if (m_Animator.OpenAnimationStream(ref stream))
        {
            AnimationHumanStream humanStream = stream.AsHuman();

            // Don't sync if the transform is currently selected.
            if (!Array.Exists(selectedTransform, tr => tr == m_LeftFootEffector.transform))
            {
                m_LeftFootEffector.transform.position = humanStream.GetGoalPositionFromPose(AvatarIKGoal.LeftFoot);
                m_LeftFootEffector.transform.rotation = humanStream.GetGoalRotationFromPose(AvatarIKGoal.LeftFoot);
            }

            // ...the same pattern repeats for the other effectors...

            // A stream opened with OpenAnimationStream must be closed again.
            m_Animator.CloseAnimationStream(ref stream);
        }
    }
    [attached screenshot: animjobs.jpg]
    there's no animator controller assigned --
    it gets the stream from a clip (so again, quite restricted use)

    Code (CSharp):
    var clip = SampleUtility.LoadAnimationClipFromFbx("DefaultMale/Models/DefaultMale_Humanoid", "Idle");
    var clipPlayable = AnimationClipPlayable.Create(m_Graph, clip);
    clipPlayable.SetApplyFootIK(false);
    clipPlayable.SetApplyPlayableIK(false);
    is it not possible to get the stream from the controller?
    it didn't work with root position, only rotation -- has this changed since?

    a Google search for AnimationStream or AnimationHumanStream returns virtually nothing except Unity's own posts -- and it's been in Unity since August 2018.
    it would be a shame if the same thing happened as with humanPose muscles, where people were still asking years later what it's supposed to do and how it's supposed to work

    a search on the Asset Store for 'playables' returns only 'Default Playables', which doesn't even include humanoid animation...

    all while such interesting things were being talked about, like mixing animator controllers in playables back in 5.3
    [attached screenshot: playablectrl.jpg]

    still have to watch the GDC talk
    okay, I've seen it already

    I think it's not for what I'm interested in:
    1. getting the animation stream
    2. adding rotations to certain bones (IK, humanPose, or else)
    3. pushing out to gameobject

    now that I come to think of it, it might not be possible to do what I thought it would...
    would the whole animator controller/mecanim setup have to be rewritten into playables to make use of the animation stream before it is actually applied to characters?
     
    Last edited: May 29, 2019
  24. Kybernetik

    Kybernetik

    Joined:
    Jan 3, 2013
    Posts:
    2,570
    You might be interested in my Animancer plugin which is built on the Playables API (link in my signature). None of the examples explicitly show blending between animator controllers, but the Locomotion/Linear Blending example shows how you can play a single controller and it would be pretty straightforward to add a second one so you can blend over to it or other separate AnimationClips.

    Mecanim is already built on playables. The Animator.playableGraph property exposes its graph, and if you get the playable graph visualiser you can use it on that graph.
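    For reference, the raw Playables wiring for that kind of controller blending looks roughly like this (a hedged sketch, not Animancer code -- class and field names are made up):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Animations;
    using UnityEngine.Playables;

    // Cross-fades two AnimatorControllers through one mixer in a custom graph.
    public class ControllerBlend : MonoBehaviour
    {
        public RuntimeAnimatorController first;
        public RuntimeAnimatorController second;
        [Range(0f, 1f)] public float blend;

        PlayableGraph graph;
        AnimationMixerPlayable mixer;

        void OnEnable()
        {
            var animator = GetComponent<Animator>();
            graph = PlayableGraph.Create("ControllerBlend");
            mixer = AnimationMixerPlayable.Create(graph, 2);

            graph.Connect(AnimatorControllerPlayable.Create(graph, first), 0, mixer, 0);
            graph.Connect(AnimatorControllerPlayable.Create(graph, second), 0, mixer, 1);

            var output = AnimationPlayableOutput.Create(graph, "Blend", animator);
            output.SetSourcePlayable(mixer);
            graph.Play();
        }

        void Update()
        {
            // Weights drive the cross-fade; either input could just as easily
            // be an AnimationClipPlayable instead of a controller.
            mixer.SetInputWeight(0, 1f - blend);
            mixer.SetInputWeight(1, blend);
        }

        void OnDisable()
        {
            if (graph.IsValid())
                graph.Destroy();
        }
    }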
     
  25. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419


    So I can give you a few problems this type of system solves (and why I'd be excited about helping it come to life!):


    1) Complex (i.e. Third-person) Character Controllers
    (Simultaneous Physics-informed Animation & Animation-informed physics)


    Even Unity itself is struggling to combine Animation and Physics while also offering easy-to-replace physics logic and animations in its own Third-person Character Controller solution.
    The problem is (lack of) modularity.

    Not having separate logical systems where physics can INFORM animations (rather than control them), while animation itself also informs PHYSICS (rather than impose upon it) is very detrimental in complex character controllers -- especially in cases where one needs to override the other temporarily (such as in complex IK situations -- see IKRig video). This issue is ever-present in action-based character controllers like those found in something like Zelda BotW or Super Mario 64 (or Overgrowth, as mentioned above.)

    With Animation-Controllers, currently, you can either have root-motion-driven Physics or rigidbody physics-driven Animation in Unity -- but get ready for some work if you want to design/develop a system where you need both.
    Add procedural animation (even basic IK) into the mix (with a standard Animation Controller), and you might as well have a team of programmers at your disposal to accomplish this goal (as an artist) during this century.

    An actually reusable, generic, modern character controller cannot (efficiently) be made without something like the system I described to help physics INFORM the animation as the animation itself INFORMS the physics. Each system would be independent and would make its own decisions (based on the information of the other). This not only helps with modularity, but by combining specific modules that deal with both sides separately, each can also take into account the state of the other, allowing for more granular control, by automating certain things (and only overriding themselves when necessary -- i.e. when a leg IK sensor hits a wall at the same time as the head IK sensor (i.e. raycast), forward momentum can be stopped thanks to the simple information from these sensors.)

    A Super Mario 64-style (or Overgrowth-style) controller, for example, could be created using the methods I've described above by developing a set of Physics modules that get informed by the Animation modules, where the Animation modules are generally procedural (and can even play full-body, physics-informed clips, using my method above using a Bone Module mask, which contains a list of "modules" consisting of all the character's body part "modules" that contain the exact bones for each body part). Physics could tell Animation which parts to blend, and animation could pick and choose whatever "Constraints" it wants to apply to the modules down the bone chains (as well as what Animation Curve to use to weight these.)

    Not only would this work with modular Visual Scripting really well, but Unity's own (Zelda BotW-inspired) character controller could be easily improved with this tech too!


    2) Animation logic in Action Games + Adopting the new DOTS Mindset
    (Modules, modules, modules -- Switching to "modular-thinking" from "solutions-based" thinking)


    Action-based games are notoriously difficult to program because you are constantly combining physics and animation into transient (fluid) visual states. This is inescapable -- the real world kinda works this way.

    Intuitively, most approach animation following the "state-machine" model (i.e. Mecanim).
    But problems quickly arise when you've got to deal with upper-body or lower-body animations separately, or hand-positioning on weapons, feet IK (but only sometimes), blending animations, blending IK, dealing with actual physical momentum, grabbing doors, interacting with characters, facial animation during all this, etc. etc. etc.
    While the new procedural "Constraints" alleviate some of this, the new workflows are very tedious, and new (and old) users will likely still revert to "solutions-based" instead of "module-based" thinking, simply because of the enormity of the task of authoring animations (and scripts) -- in conjunction with one another, simultaneously -- to handle each of their "special" cases.

    The problem is -- All of these cases are "special" cases.

    Logic exists behind each and every animation in games -- no matter the animation type.
    Whether it's a one-shot clip, a transitional blended-clip combination, or even procedural constraints that ripple down a ponytail (with a moving, articulated, monster-hand at the end!) -- it all requires logic.


    The "Animancer" plugin (shamelessly plugged by the author above) exists because people naturally want to get their animation (and their game logic) closer together (for easier logic editing).
    But there's a problem with this -- The "in-code" animation approach is that animation and logic quickly become TOO tightly-coupled, and one begins to quickly dominate the other with an iron-fist. This becomes VERY apparent in any animation-heavy game -- (i.e. action games like fighting games, brawlers, etc.) -- due to the many special-case states these sorts of games generally require.

    Very few animations are ever explicitly "state-based" -- even punch/kick states tend to take into account whether you're in the air, on the ground, or crouching first.
    These kinds of 'semi-states' are prime candidates for "DOTS Visual-Scripting" modules, while also being very useful to animators, since specific things can be done in a "crouch" that is also a "punch" by playing a different upper-body animation (such as a chop rather than a punch). The animation wouldn't care about the lower body, yet it could play all the same -- all without complicated "Animation Layer" setups.

    The holistic system proposed will easily reduce the amount of states these games need to manage because clips can be played per-module OR per-groups-of-modules, while also being blended using constraints and AnimationCurves that ripple down the module's bone chains (OR chains of modules themselves).

    Currently, Mecanim's "solution" to "general animation" is "playing/mixing an animation clip", but the real "problem" is (and always has been) applying clips to procedural routines -- i.e. game animation -- in general.
    Game animation has always included procedural stuff also (like IK, or facial animation, or a sword on the back of a character, or a character's long hair) that also must interact with the logic behind the animation's physical transience (be that purely physics, purely visual, or some combination of them both).

    However, "state-based" animations has always been unwieldy here because of only one reason:

    Animation is not a problem of identifying states -- it is a problem of recognizing transience.
    This is as true for each kind of animation as it is for the kinds of logic (physics or otherwise) behind animations.
    Action/fighting games just make this painfully obvious -- yet, as developers, we've never accepted this reality.

    Yet, recognizing transience is kinda hard... We simplify it in our minds as explicit "states"... but at a certain point, this breaks down, and we can no longer see them as such. Take any fighting game, then increase the number of overall (base) states (i.e. crouch, stand, jump, walk) then try to add various attacks or actions to these. A stupidly unwieldy character-controller occurs when you can do something like "crouch-walk" or "dash-jump" or "air-dash" as base states -- especially when you suddenly need to put in a large number of attacks or actions in these base states too. What if your character has various expressions with attacks in each attack state (if they are poisoned or excited or PO-ed?)

    How do you group, separate, or apply these tiny state changes? -- All in all, module-based animation is the only practical thing that can save the day when not sure exactly how to split states up in terms of their transience.


    3) Easier Animation Authoring + Flexible/Automated Bone Retargeting
    ("Transience by modularity" -- Modular retargeting and modular 'state' construction/anims)


    "Mecanim" spits in the face of the idea of "transience" that I've mentioned above.
    It's not a terrible system, but it is not a system suited for implementing game animation.

    Games -- by their very nature -- are procedural.

    The whole principle of game animation boils down to the idea of procedural transience.

    Transience comes from the idea of a state being a "state" temporarily (which sometimes consists of smaller and smaller "substates", which increasingly become more and more unwieldy since they're traditionally tied to a main "state", which can, and does, change often). A state (and its substates) are always on their way to transforming into a completely different representation of what it once was -- especially in terms of animation in games.

    The idea of an animation "state" really needs to be redefined for the procedural transience of games.

    A transient "state" only maintains an overall form in explicit circumstances, otherwise it's represented by groups of smaller and smaller parts and their sub-parts. These groups can also maintain a state i.e. "semi-states", and these "semi-states" are ultimately just larger and larger groups of parts (and sub-parts) that are combined to create the current transient "state" of the object. These parts, themselves, can change, too, because they are made up of other parts/modules and combined into their own individual "states" to some extent. The "semi-states" are rarely intended to represent the whole (global) "state" of the object/player, but in simple cases, this can work too. Instead, they are each able to be swapped in/out completely with one another. Ultimately you can get a hand that is three-fingered with tons of animations that can be swapped with a mitten, who can be flexible only at that one joint it is connected to the rest of the arm. This means that mittens can still move at the wrist using the three-fingered version's animations -- they just can't flex any fingers like their three- or five- fingered counterparts since the bones don't exist to be repositioned.

    To be transient, therefore, means "to be constructed in such a way that each tiny module can be varied and swapped around (singly, or in groups), ultimately leading to as many different configurations or variations as you have modules for -- all easily modifiable with logic, since you're usually only modifying the local animation rather than the global animation (though the global can be modified easily too)."
    Global animations are simply broken down into their smaller (constituent) groupings of parts automatically (based on the bone names in the groups they were split into). This means animation can propagate both upward _and_ downward.


    If modules (and the bones they are composed of) are named consistently (on a per-module basis), this can lead to automatic retargeting (and procedural, low-cost mirroring) of individual bones, combined into individual modules, which are combined back into complete animations -- ignoring bones that either aren't present or aren't named properly, and letting each bone be animated local to its initial (untranslated) position according to the modular skeleton being built.
    Animation clips could easily be authorable in the Editor. Start with a full-body mask as the main animation mask (including visible meshes and bones), and upon saving, have changes ripple down to the individual submodules, populating each of their animation clips when the full-body animation mask is saved. If you're animating a three-fingered character's hand gesture, it can work for a five-fingered character, and vice versa; but if you're authoring a three-legged character and your character only has two legs, an animation can be set to play only when there are modules marked as "three-legged" in a particular slot -- i.e. octopi, robots, and spiders all have legs that slot at the base of the spine/root, so this module slot can look for module types tagged as "three-legged" for those animations to play.

    These individual module sockets can pass down offsets for specific bones (which can be mirrored, offset, and stretched from their original position) based on which modules are attached. Some slots can also be relative to another rig's setup (i.e. a three-fingered dwarf rig vs. a female rig vs. a tall hero rig could all be exposed as sliders that let the rig module slots individually morph between various positioning offsets / joint configurations). These are just ideas of course, but it helps to explain how procedural transience is helpful for detailed animations!

    There are more complex rigs out there -- like those in "The Last Guardian" -- but the system described above would allow them to be created in a way that is both universal and usable for everything from tiny one-shot animations all the way up to large-scale logic-based rigs like those in the slides there -- all without a node-graph or complex behavioral trees.

    Almost all logic can be done via general scripts applied to bones/rigs, physics predictive sensors (i.e. to calculate arcs of footsteps), or virtual bone sensors (to calculate timing of a duck animation so the character doesn't hit his head, or to trigger the timing of an arc calculation so the character will lift his foot over a small rock).
    A visual DOTS-based scripting solution could work well to help with this, assuming the graphs could be applied to specific (or mirrored) bones inside of specific modules (alongside their constraints, of course).


    Sorry it took so long -- Please let me know if this helps, @davehunt_unity! :)
     
  26. giles_pixeltoys

    giles_pixeltoys

    Joined:
    Sep 30, 2016
    Posts:
    1
    Any information on how Animation Rigging will play with Timeline? Currently, Timeline clips seem to override any rigging constraint setup.
     
  27. SniperED007

    SniperED007

    Joined:
    Sep 29, 2013
    Posts:
    345
    Do you have updated scripts that support Animation Rigging 0.2.3?
     
  28. CodeKiwi

    CodeKiwi

    Joined:
    Oct 27, 2016
    Posts:
    119
    Sorry, I haven’t upgraded to 0.2.3 yet. It should be fairly easy to convert but I’m currently working on some other projects. I probably won’t get around to updating it.
     
  29. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    NotaNaN and elcionap like this.
  30. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
  31. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    So, I've seen this TINY note about animating rigging keyframes in Timeline mentioned in 2019.3's new features; however, I've not seen a workflow mentioned regarding this. I was just wondering -- is anyone sharing this workflow with the community, or has it not been finalized?

    @davehunt_unity mentioned there were tools coming, but so far I've only seen something just above API-level programmer tools. If you guys need help in this department, I don't mind designing some better tools for authoring the rig with constraints, but inspector-focused tooling is really hampering this workflow artistically. I've got a detailed idea of how this would work, but it would require a better interface for constraints in the overall workflow. I'm considering making this interface for you guys, but I'm not sure whether such tools are already in the pipeline, and I don't want to waste my time reinventing the wheel if something similar (or better) is coming -- especially since DOTS animation is just on the horizon.


    You guys have done a great job providing some really amazing capabilities for animation. I can clearly see the potential, but I'm worried it's not likely to gain traction without some much-needed user-friendly "UI flair" to put the exclamation point on how these features will ultimately speed up people's workflows.
    Have you guys looked into how the constraint system in Dreams (PS4) works? -- They've got modular rigging already. And it works like a dream, forgive the pun. Might be worthwhile to check out if you don't want my help.
     
    Last edited: Nov 7, 2019
    NotaNaN likes this.
  32. Zaax

    Zaax

    Joined:
    Oct 14, 2017
    Posts:
    16
    Can we get something like this in the upcoming Dots/Animation authoring package?
     
  33. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    32
    Timeline is supported with Animation Rigging constraints starting with Unity 2019.3.
     
    NotaNaN likes this.
  34. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    I had no idea this was already a thing! -- Go Disney!

    I requested this a few weeks ago in the uMotion / Very Animation assets, but couldn't explain it very well, so I was rejected each time. I'm really glad to see a video showcasing those ideas. Thanks for posting it!


    I really believe we need this kind of keyframing system in Unity. It would save loads of money on expensive mocap setups, and keyframing is really the only factor that's holding computer animation back from being as good as (and more flexible than) mocap. Plus, with in-scene widgets like this, it's more straightforward and fun too.

    Please push this up to the powers-that-be at Unity to make this happen, @davehunt_unity.
     
    NotaNaN likes this.
  35. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,980
    My god, that's amazing. I never realised how lacking the normal way of keyframing is until I just saw that. Now I can't unsee it :(
     
    NotaNaN and awesomedata like this.
  36. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I'm surprised it's still not widespread; that idea has been in the air for SOOOO LONNGGGG. What prevents adoption?
     
    NotaNaN and awesomedata like this.
  37. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Wow! Tangent Space Control seems so intuitive to use. I think even a 7-year-old could create good-looking animation. My animation skill is still at 7-year-old level. ^^
     
    awesomedata and NotaNaN like this.
  38. NotaNaN

    NotaNaN

    Joined:
    Dec 14, 2018
    Posts:
    325
    Wow to the above video. I would absolutely love Tangent Space Control as a feature for Unity's new animation system! Though, do we have any news on when the next preview of the package will be released? Last time I checked it was still scheduled for 2019.3, is this still true or has something changed?
    Also, @awesomedata, your massive post on animation (while a really long read) is simply top-notch. I've read it once -- and just now I've read it again! Thank you for taking the time to write it up! Hopefully the Unity Team will take all you said into consideration for the final result of the new Animation Rigging Package. (I'm sure they will, considering how amazing ECS and DOTS are. They probably already have a huge plan all written out! I just can't wait to get my hands on it...). Which brings me back to my question; any news on the 2019.3 update to the package? Or are we still in the dark?
     
    davidebae18 and awesomedata like this.
  39. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Hi, I'm reviewing Animation Rigging and it is really interesting. I can see myself using it instead of FinalIK for my current project; it's a lot more flexible and general, and I expect it to perform even better when the DOTS version becomes available.
    But, in order to make the call, I would like to ask a couple of questions.

    1. When will the DOTS version of Rigging be available? Any ETA?

    2. When the DOTS version is available, will the current Constraints be ported to DOTS? I don't expect them to convert automatically, but I plan to use Unity's new DOTS Constraints as examples for converting. I'm asking whether I might have to rewrite all of my custom Constraints again.

    3. Is there a Rigging roadmap? I'm really curious how it will work with the rest of the system, particularly with Kinematica.

    Thanks, and please keep up the good work.
    Cheers!
     
  40. simonbz

    simonbz

    Unity Technologies

    Joined:
    Sep 28, 2015
    Posts:
    295
    Hi,

    We don't have an ETA, really.

    Writing constraints for DOTS Animation is very similar to Animation Rigging. Parts will need to be rewritten, but the jobs themselves should be fairly easy to convert to the new system.

    The workflow itself is still largely undetermined, and will probably be adapted in order to better integrate with DOTS-Physics and the DOTS transform system.
     
    awesomedata, chrisk and NotaNaN like this.
  41. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Thanks for the answer.
    I understand it's undetermined for the future, so let me ask this question instead:
    How about right now? Is Animation Rigging actively being worked on in its current form (Playables)? Or is it waiting until DOTS takes shape? I hope not. If DOTS Rigging is going to be similar, I hope you keep working on the Playables version. I would like to see more constraints, for example.

    I'm asking this because it seems that DOTS is amazing, but it puts everything on hold, and we won't see the light of day until, a few years from now, DOTS finally takes shape. I love DOTS and hate DOTS for this reason.

    I believe Kinematica has been delayed more than a year because of DOTS, and I don't want to see this happen to Rigging. Rigging is already a very usable piece of tech, and I would like to see it actively supported instead of waiting for DOTS. In my honest opinion, DOTS will take at least a year or two before it becomes usable.
     
    Last edited: Jan 14, 2020
    NotaNaN likes this.
  42. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Those are my sentiments exactly.

    Mecanim is a frustrating experience when trying to do procedural animation. It seems like we should have a stopgap UI in the meantime to enable this kind of thing a bit better for now -- even if it's just a GameObject-based prototype. This is why the Freeform Animation package has been so interesting to me. Even if it means learning something brand new, I'm tired of fooling around with Mecanim's API for this kind of thing!
     
    NotaNaN and chrisk like this.
  43. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Also, if you'd like, I wouldn't mind helping you guys solidify this workflow. I think about these subjects probably more than is healthy. I've got enough experience developing tool workflows for a variety of game styles that I can offer very practical suggestions and approaches. Once I understand what you're going for (and what your specific limitations are), I can contribute a solid, practical direction for workflow and UX that feels fun and intuitive (and won't get in anyone's way). I think that's what you're after?

    Hand-tweaked procedural animation and modeling combined with programming physics and logic (with a great UX and workflow) was what I was under the impression you guys wanted. If that's the case, I can definitely help you out. Just let me know!
     
    Last edited: Jan 14, 2020
    NotaNaN likes this.
  44. DavidGeoffroy

    DavidGeoffroy

    Unity Technologies

    Joined:
    Sep 9, 2014
    Posts:
    542
    Hello @chrisk, I'm the Kinematica team lead.

    The delay has nothing to do with DOTS, and everything to do with building solid user workflows for a technology where there are no industry standards. This needs several iterations, which take time. The Kinematica prototype we showed was technically solid, but the authoring workflow was nonexistent. That is what we have been working on for the past year.

    Kinematica is still being developed on Animation Playable Jobs, expressly so we have the liberty of delivering a version before DOTS is stable.
     
    TyrannicGoat likes this.
  45. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Hi, thanks for dropping by.
    Kinematica was by far the most exciting announcement when it was first shown at Unite 2018, but it totally dropped off the radar for more than a year. I was led to believe it was waiting on DOTS, just like many other tech stacks. A preview version was promised for the end of 2018, and you must have had some ideas to show back then. Otherwise, how could it be delayed more than a year without any info?

    Anyway, I'm happy to hear that it's still actively being developed, and I'm really looking forward to getting my hands on it. To me, having a good animation system matters more for the gaming experience than having pretty graphics. If I had to choose just one, I'd rather have a stick-figure character with really awesome animation than anything else.
    I can picture how Kinematica and the Rigging system could work together hand in hand. It would totally change how we work with animations.
    Please don't let us down with further delays, and I'd appreciate it if you could share what you have so far.
    Thanks.
     
  46. DavidGeoffroy

    DavidGeoffroy

    Unity Technologies

    Joined:
    Sep 9, 2014
    Posts:
    542
    > A preview version was promised for the end of 2018, and you must have had some ideas to show back then. Otherwise, how could it be delayed more than a year without any info?

    I understand the frustration. I don't have a better answer than: there was a disconnect internally about what it actually entails to ship a preview version that's solid enough for wide distribution. I was not attached to the project at that point, but I apologize on behalf of Unity for the confusion and the disappointment.

    As for the lack of updates: the underlying tech has stayed mostly the same for the past year while we were building the workflows; half-finished menus and tools don't make interesting demos. Once we have something exciting to show, we will give another update.
     
    TyrannicGoat, NotaNaN and chrisk like this.
  47. simonbz

    simonbz

    Unity Technologies

    Joined:
    Sep 28, 2015
    Posts:
    295
    Hi,

    I'll try and shed some light on that.

    There is still active work in Animation Rigging. We are applying the finishing touches to the bi-directional baking workflow (Freeform) that will be available in Unity 2020.1.

    However, it is true that we are shifting our efforts to the new DOTS workflow. So while you can expect some new features in 2019.3 and 2020.1, things will slow down in subsequent releases until we can move the existing workflow to DOTS. The DOTS animation architecture lets us go beyond the limitations of classic Unity (e.g. physics/animation blending, which is not possible in the current framework), and that is why we believe it's necessary to make the shift now.

    That doesn't mean Animation Rigging will be left out, though; we'll continue to maintain the package and make sure it works for production needs.
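    For reference, the current GameObject-based workflow stays as it is today: constraints live on a Rig under the Animator, and the RigBuilder (re)builds them into the PlayableGraph. A minimal sketch of wiring a constraint from script -- in practice this is usually done in the Inspector, and the bone references here are placeholders:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Animations.Rigging;

    // Sketch only: wires a TwoBoneIKConstraint at runtime. Bone fields are
    // placeholders; normally you assign all of this in the Inspector.
    [RequireComponent(typeof(RigBuilder))]
    public class ArmRigSetup : MonoBehaviour
    {
        public Transform upperArm, forearm, hand, ikTarget;

        void Start()
        {
            var rigGO = new GameObject("ArmRig");
            rigGO.transform.SetParent(transform, false);
            var rig = rigGO.AddComponent<Rig>();

            var ik = rigGO.AddComponent<TwoBoneIKConstraint>();
            var data = ik.data;            // constraint data is a struct
            data.root = upperArm;
            data.mid = forearm;
            data.tip = hand;
            data.target = ikTarget;
            ik.data = data;

            var builder = GetComponent<RigBuilder>();
            builder.layers.Add(new RigLayer(rig));
            builder.Build();               // rebuild the PlayableGraph
        }
    }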
     
    NotaNaN and chrisk like this.
  48. Grimreaper358

    Grimreaper358

    Joined:
    Apr 8, 2013
    Posts:
    789
    So will DOTS Animation include Animation Rigging as well, or will that package just stay as it is now?
     
  49. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Thanks, guys. We'll see what the future holds, and I hope it's worth the long wait. DOTS is a step backward taken to make a giant leap, I'm sure, and I'm still optimistic, but it could have been handled more smoothly. I still think it will be a while before we're all ready to adopt DOTS; not everyone needs it or wants it. DOTS could have been developed on a separate branch without dragging everyone down -- no one would have a problem making the switch once it was finalized. Instead, while things are in flux, we all have to take the step backward together. I strongly believe that could have been avoided.

    DOTS has been the focus for the past couple of years, but most people don't know how to use it or care about it. It benefits only the very few, at a cost to everyone, because it has a huge impact on everything: Editor instability, regression bugs, and putting the brakes on things like making the Editor stable and performant.

    Well, I know you'll be busy preparing for GDC, and I think GDC is very important: it's the first major event of each year, and it sets the tone and goals for the rest of it.
    If I can give some input before things are finalized, here is what I would like to see at this year's GDC.

    The recurring theme for the past couple of years has been "Performance, Performance, Performance" and "Performance by Default", and we saw DOTS everywhere.

    To be honest, I love DOTS, but I don't think most of us will be ready to make the switch for another year or two, and initially it will benefit very few people. Instead, I would like Unity to think about how it can help many more people right now by easing what causes the most pain -- and I can say with certainty that the clunky and unstable Editor has been the most painful.

    I hope I don't have to prove this, but the Editor workflow has stayed mostly the same for the past ~10 years while its performance has slowly degraded. Yes, features keep being added, but hardware has been getting faster too. The clunkiness is compounded by ever-growing average project sizes, and the pain is real.

    What I hope to hear at this GDC is something like this: "We have focused on performance for the past couple of years, and we have results: many games now run faster than was previously possible. We believe we are on the right track, and we are now extending that performance focus to the Editor while improving its workflow. We will not only help you make games that run faster -- we will help you make games faster."
    I'm pretty sure you'll agree that the Editor leaves a lot to be desired right now, and many users struggle with how slow and unstable it is. It feels like I spend more time fighting the Editor and searching for workarounds than making my game.

    If you agree with me, please pass these concerns along to the decision-makers, and let's try to make this a successful year for everyone. In that spirit, there are a few things I would like to see.

    1. Create a separate DOTS branch and do the latest-and-greatest development there. Anyone on the fast track can use that branch. I'm sure it would help Unity too, since you could focus on a few things while worrying less about backward compatibility.

    2. Port the Editor to .NET Core. Mono is the source of much evil: for instance, the AppDomain reload reloads everything even when a single line of source has changed, and we can't get rid of that until we move to .NET Core. Mono served its purpose, but it's a thing of the past, and we will need to move to .NET Core sooner or later. .NET Core would improve the Editor's performance significantly without any special optimization work; it is simply several times faster than Mono, period. Like the DOTS branch, please create a .NET Core branch so we can test it independently, and merge it into the main branch once it's ready. (In the meantime, 2019.3 can at least skip the domain reload when entering Play Mode -- see the sketch below.)
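    A minimal editor-side sketch of that Play Mode option, assuming the 2019.3 EditorSettings API; note that statics and event subscriptions then have to be reset by hand:

    Code (CSharp):
    using UnityEditor;

    // Editor-only sketch (place under an Editor folder): skip the Mono domain
    // reload when entering Play Mode. Statics and event subscriptions are no
    // longer reset automatically, so reinitialize them manually, e.g. with
    // [RuntimeInitializeOnLoadMethod].
    public static class FastPlayModeSettings
    {
        [MenuItem("Tools/Enable Fast Play Mode")]
        static void EnableFastPlayMode()
        {
            EditorSettings.enterPlayModeOptionsEnabled = true;
            EditorSettings.enterPlayModeOptions = EnterPlayModeOptions.DisableDomainReload;
        }
    }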

    3. Improve the Editor workflow. I mean real improvements, not improvements for the sake of saying so. Unity keeps stressing that "Unity is simple to use", but simple is not the same as easy, and I think Unity is far from easy to use. I would welcome a very sophisticated Editor if it got the job done faster; easy (intuitive) to use, even with a learning curve, should be the goal. Right now it takes too many clicks and too much bouncing back and forth between windows, with locking and unlocking, inconsistent UI conventions, and fundamentals missing that any good editor has.

    4. Make the Editor scale to large projects. This was a huge surprise to me when I started working on a large project (~100GB); I've heard of ~1TB projects, so I can hardly call mine large in comparison. But the Editor kept getting slower and slower, to the point where I often had to click a checkbox three times. It was outside my control, and I had no solution until I heard about Addressables. That gave me an idea: divide the project into smaller pieces and load the Addressables from another project. I almost got it working, and it's worth sharing with the many others suffering from the same issue. However, one piece of functionality is missing: loading an Addressable into the Editor. I tried to reach out to the Addressables dev, but he flat-out refused my request, saying it goes against core Unity usage. !#@$!@#$ -- not sure what to say. All I'm asking is to load a non-editable static scene into the hierarchy; just displaying it wouldn't affect anything, and aren't we doing enough workarounds already? I think it's very irresponsible. If the Editor worked fine with large projects, I wouldn't have to do this kind of monkey work; it wasn't my choice in the first place, just one of my desperate attempts. Long story short: please make the Editor scale to large projects, or give us a good workaround. I've described one possible workaround, and I'd like to hear what other Unity developers have to say about loading Addressables into the Editor. Here is a link:
    https://forum.unity.com/threads/using-addressable-to-save-our-lives.793422/
    You can find more threads requesting the ability to load Addressables into the Editor.
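    For anyone who wants to try the same workaround, the runtime half is just the standard Addressables call; the missing piece is doing the equivalent in edit mode. A minimal sketch (the address string is a placeholder):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.AddressableAssets;
    using UnityEngine.SceneManagement;

    // Runtime sketch: additively load a static environment scene that was
    // built as an Addressable (possibly from another project). The address
    // "StaticEnvironment" is a placeholder.
    public class StaticSceneLoader : MonoBehaviour
    {
        void Start()
        {
            Addressables.LoadSceneAsync("StaticEnvironment", LoadSceneMode.Additive);
        }
    }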


    TL;DR: I would like the theme of 2020 to be "Performance by Default for the Editor". How? By focusing on ease of use. I'm sure that would benefit everyone, not just the few who are already in DOTS land.

    Thanks for reading my input.
     
  50. NotaNaN

    NotaNaN

    Joined:
    Dec 14, 2018
    Posts:
    325
    While we're talking about DOTS, I'd like to throw in my two cents as well. The top four things that have frustrated me about the DOTS workflow, in no particular order:

    (1) Editor performance -- which, even with my smaller projects, has become a massive issue (I've had to test my game in slow motion for quite a while now...).

    (2) Lack of support for proper presentation entities -- what do I mean by that? Currently, you are almost forced into using GameObjects to present your game unless you want to write a lot of systems from the ground up.
    Is that an inherently bad workaround? No.
    Is it a good, non-time-consuming workaround that won't cause issues down the road? DEFINITELY NOT.
    I've been using GameObjects to present my entities for the past month, and while it was fun back when my project was simpler, it is now a madhouse.
    And I'm struggling to keep my sanity.
    I've had to build crazy systems just to make things function properly, and if I had known how much manual labor it would take -- on top of a tangled web of dependencies and nuances that only its creator could understand -- I would have written my own systems to make presentation entities viable instead. To clear up any confusion: I am not against the conversion workflow! The conversion workflow works quite well (see the sketch below), minus the boilerplate and the amount you need to learn. But using GameObjects for in-game presentation is a nightmare, which is why I'd love to see some bare-bones support for presentation entities (a basic sprite renderer plus basic 2D animation playback would be awesome). (Or at least tell people NOT to use GameObjects for anything but authoring...)
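    To illustrate what I mean by the conversion workflow, a minimal sketch with illustrative names, assuming the Entities 0.x-era API:

    Code (CSharp):
    using Unity.Entities;
    using UnityEngine;

    // Illustrative names, Entities 0.x-era API. The GameObject is authoring
    // data only; at runtime you get an entity carrying a MoveSpeed component.
    public struct MoveSpeed : IComponentData
    {
        public float Value;
    }

    public class MoveSpeedAuthoring : MonoBehaviour, IConvertGameObjectToEntity
    {
        public float speed = 2f;

        public void Convert(Entity entity, EntityManager dstManager,
                            GameObjectConversionSystem conversionSystem)
        {
            dstManager.AddComponentData(entity, new MoveSpeed { Value = speed });
        }
    }

    With a ConvertToEntity component (or a SubScene) on that GameObject, conversion runs and the GameObject itself goes away -- which is exactly why the presentation side then needs its own answer.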

    (3) Boilerplate City, here we come! -- I'm not sure there's a way around what I'm doing, but for about a year now I've had to write manual chunk iteration for a stupid number of my systems, with a stupid amount of non-jobified code to boot. Why? Because as far as I know, there is no nice, efficient way to iterate over entities when your query holds twenty different components with half of them being IBufferElementDatas. Do you know how long it takes to write a chunk-iteration system over a query of twenty? At least a minute per component, probably two, and it physically hurts (see the sketch below for what just two components looks like). If there is any way around this, SOMEBODY TELL ME -- because as far as I know there isn't, and it makes me spend more than half my time writing boilerplate instead of logic. I don't like it one bit.
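    For anyone who hasn't written one, here is what manual chunk iteration looks like with just two components (Fall is an illustrative component, and the handle type names shifted between preview versions, so treat this as an Entities 0.x-era sketch). Now imagine the handle/GetNativeArray pair repeated twenty times:

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Transforms;

    // Illustrative component; in practice there would be ~20 of these.
    public struct Fall : IComponentData { public float Speed; }

    [BurstCompile]
    public struct FallJob : IJobChunk
    {
        public float DeltaTime;
        public ComponentTypeHandle<Translation> TranslationHandle;
        [ReadOnly] public ComponentTypeHandle<Fall> FallHandle;

        public void Execute(ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex)
        {
            // One handle + one GetNativeArray per component -- the boilerplate.
            NativeArray<Translation> translations = chunk.GetNativeArray(TranslationHandle);
            NativeArray<Fall> falls = chunk.GetNativeArray(FallHandle);

            for (int i = 0; i < chunk.Count; i++)
            {
                Translation t = translations[i];
                t.Value.y -= falls[i].Speed * DeltaTime;
                translations[i] = t;
            }
        }
    }

    Entities.ForEach in a SystemBase cuts most of this down for small queries, but (as far as I can tell) its lambda tops out at eight parameters, which is what pushes twenty-component archetypes back to manual chunk iteration.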

    (4) Documentation? Never heard of it! -- We all knew someone would say something like that, but for something as complicated as ECS, Jobs, and Burst, plus the authoring workflow, we really need better documentation. I realize DOTS is ever-changing, but surely ECS has reached a point where it's reasonably steady and not going to change drastically? Surely the same goes for Jobs? And what about Burst -- is there going to be a change so massive that it would make half the documentation obsolete? I don't know where you're heading; I like where we are now, and I'm hyped for the future, but could we make a New Year's resolution about proper documentation? A lot of people would love that, and you'll have to write it sooner or later. Why not sooner?

    Anyway, those are my major gripes right now. DOTS is awesome, but it has some practicality issues, and it would feel great to have them ironed out. I hope more quality-of-life features, in addition to Editor performance, are on the roadmap!

    The Unity Team is doing great. Cannot wait to see what you guys come up with next. :D
     
    Last edited: Sep 11, 2020
Thread Status:
Not open for further replies.