
Freeform Animation : Modular Rigging

Discussion in 'Animation' started by awesomedata, Mar 20, 2019.

  1. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746
    Could we get some details as to where/how this is taking place?

    For example:
    • Are there tools being developed to help users create modular rigs (or is this API-only?)
    • What scope are you guys looking at for this feature-set?
    • Does this implement C# Animation Jobs natively somehow (to help with ControlRig performance?)


    • Is this (quite awesome) feature related somehow?:
      https://docs.unity3d.com/Packages/com.unity.animation.rigging@0.1/manual/index.html
    • Will there be any way to visualize our modular rigs moving in the editor (perhaps via Playables?)

    I'd love to beta-test these features, but I have no clue what functionality (or scope) you guys are aiming at with this right now. Any info on this (very ninja-like) feature-set would be highly-appreciated!! D:
     
  2. GameDevCouple_I

    GameDevCouple_I

    Joined:
    Oct 5, 2013
    Posts:
    1,620
    I've tried asking about this and have not gotten replies for like a year. It's a bit annoying that no one is acknowledging this
     
    awesomedata likes this.
  3. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746
    I completely agree -- it's like animation is always left out in the cold.

    For something as vital as the thing that creates "the illusion of life" -- Unity seems oddly focused on almost anything else besides animation. :/

    @RomainFailliot
    I would really like to know more about this feature and what it is intended for. Anything you or anybody else involved with animation knows about this feature would be great...
    For example, might this be something the ThirdPersonController team could use in their physics-driven approach?

    Also -- is Kinematica still a thing, or has it been tossed to the sidelines too?
    Or maybe we've been waiting on ECS to help boost animation performance?

    I've been very concerned about Unity's animation future... and I'm not the only one... Please update us? Pretty please?
     
  4. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    Hi awesomedata,

    Just to reassure you, there are great efforts in progress here. The Animation Rigging package is initially being released as preview for 2019.1. The documentation can be found here https://docs.unity3d.com/Packages/com.unity.animation.rigging@0.2/manual/index.html

    We just delivered a GDC Developer Days presentation, which was recorded and can be seen for free on the GDC Vault here https://www.gdcvault.com/play/1026151/. We will also follow up with a blog post and more tutorial content very soon.

    Thanks for your interest, and my apologies for not seeing this earlier! We will be in touch with updates as they become available.

    -Dave
     
    Last edited: Apr 18, 2019
    GameDevCouple_I and awesomedata like this.
  5. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    Here are some more answers to your questions:
    • Are there tools being developed to help users create modular rigs (or is this API-only?)
      • Yes. Totally modular. Rigs are built from general purpose constraints for users to assemble in any creative way they want.
      • All C# code in the package is open source and easily extensible to build your own constraints. This was a fundamental design goal, to enable the community developers to extend functionality because each game design may have custom needs.
    • What scope are you guys looking at for this feature-set?
      • The Animation Rigging package initial release in 2019.1 enables runtime rigging. Following this we will develop keyframe animation authoring tools for creating animation clips on control rigs in Unity. While we are in preview we will be paying close attention to how the community uses it so we can build more efficient artist workflows.
    • Does this implement C# Animation Jobs natively somehow (to help with ControlRig performance?)
      • Yes. The Animation Rigging package is built on the C# Animation Jobs API. With this we can hijack the animation stream and get more precise control before the animation is pushed out to GameObjects. Also, since the rig constraints are jobs, you get safe multi-threading for free.
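
    To make that concrete, here is a rough sketch of what a constraint-style job can look like (illustrative only -- the DampJob name and values are made up, not the package's actual code):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Animations;

    // Hypothetical job that blends one bone's rotation toward a target,
    // running inside the animation stream before results reach GameObjects.
    public struct DampJob : IAnimationJob
    {
        public TransformStreamHandle bone;   // bound via Animator.BindStreamTransform
        public Quaternion target;
        public float weight;

        public void ProcessRootMotion(AnimationStream stream) { }

        public void ProcessAnimation(AnimationStream stream)
        {
            // Read from and write back into the stream -- no LateUpdate involved.
            var current = bone.GetLocalRotation(stream);
            bone.SetLocalRotation(stream, Quaternion.Slerp(current, target, weight));
        }
    }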
     
    joshcamas, awesomedata and teutonicus like this.
  6. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746
    Thanks @davehunt_unity -- You've got me very excited!


    For a long time, I've been trying to do this (for obvious reasons):

    [embedded video: GDC talk on Overgrowth's procedural animation]

    That is procedural animation at its finest IMO, but I've got a better approach that would work for Unity:


    1. "By-module" access to specific groups of named bones (stored in a specific animation clip) at a specific keyframe or time-marker (with an optional custom interpolation argument -- to handle overshoots).

    2. Custom interpolation methods (bicubic, linear) alongside a custom Animation Curve or function (i.e. spring-dampening) -- see the sketch after this list.

    3. Modular Rig Grammar -- The system itself handles building the current pose from individual modules: i.e. upper torso module --> humanoid arms, insect arms + lower torso module -> human legs -> human left leg + human right leg |or| insect legs -> insect left leg (x3) + insect right leg (x3)



    4. Individual modules inside modules (i.e. arm module bone chains), each having their own custom interpolation weightings per-bone or per-chain (i.e. for spring-dampening) applied down these bone chains, to make adding a little procedural bounciness or other secondary motion simple. Essentially, dampening and springiness would vary and fall off gradually down bone chains.
      This approach enables a LOT of secondary animation out of a very small number of frames!
      (Watch the ears/arms/hands of the rabbit in the GDC video carefully please!)

    5. Active ragdoll and procedural pose-matching for individual modules of course. Everyone wants that. :/ Let me also sometimes overshoot the target pose using interpolation like spring/dampening.

    6. Use "Rig Modules" to label and retarget certain groups (and apply certain kinds of procedural animation) based on each module TYPE, letting users combine these across different kinds of rigs so that users can target certain bone names and groups (for modular retargeting and applying procedural animation), letting them eventually take on scriptable behaviors too based on their type (i.e. TYPE LEG: Left human leg, Left front dog leg, Left Spider leg, TYPE TAIL: Tentacle/Tail, Ponytail, Side-to-Side Spine Swaying, Scorpion Tail, etc.) -- This will also help with quickly (and procedurally) animating armors and other special types of decor for characters based on module (i.e. TYPE ITEM: a cape, dangling strings or TYPE HAIR: hair style 1 or 2.)

    ====== EDIT: Clarified some stuff.

    Also, here's one place the "module" concept is needed in this workflow:

    [Attached image: Modular Rigging - Modules.png]
     
    Last edited: Apr 15, 2019
  7. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746
    Is this presentation still coming? -- Maybe I sound impatient, but I was really hoping to see this before implementing my own solution.
    The overall setup and workflow you guys have planned is currently lost on me (it seems there is a LOT to do to make this system flexible and usable enough for what I'm wanting it for...)


    There are very specific things I want to have control over in regards to a modular rigging/animation workflow --

    For example:

    My idea of how this should work mostly has to do with interpolation and "secondary animation", along with propagating data down "modules" consisting of bone chains (some containing _additional_ slots for more modules). Modules can be plugged in, swapped, or simply retargeted (based on bone names) and animated separately, while the system puts together the resulting pose dynamically using per-bone and per-pose weighted (custom) interpolation. In general, only two poses are needed in memory at the same time, and these can change at any point during the blend. Rather than being keyframe-animated, poses are retargeted to named bones over a sequence of poses, using only the modules the clip explicitly identifies and expects to animate.
    A clip essentially determines what separate rigs it animates. From this point, clips can be merged/combined into virtual clips (i.e. arm modules + leg modules + torso modules = a humanoid module, and the merged arm/leg/torso virtual clip = a humanoid animation clip that can be edited and propagated up/down the chain, generating separate files for each arm/leg/torso module automatically while also making a combined humanoid virtual clip). Fully animated clips could be imported from a package like Blender, and Unity would separate out the bones by name (based on the predefined modules they belong to) and generate separate clips for each module, including the final merged virtual clip that shows the entire animation as it was authored externally in Blender.

    I can give more detail if needed, but please see my (heavily-edited) post above for a better idea of what tools I need in this Modular Rigging toolset. -- If the current/planned feature set can already do everything mentioned above, I would totally love to see how it might work!! :D
     
    Last edited: Apr 15, 2019
  8. MattRix

    MattRix

    Joined:
    Aug 23, 2011
    Posts:
    88
    https://www.gdcvault.com/play/1026151/
     
    awesomedata and davehunt_unity like this.
  9. dibdab

    dibdab

    Joined:
    Jul 5, 2011
    Posts:
    797
    does this mean that the IK here is not in LateUpdate?
     
  10. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    Correct. The constraints in the Animation Rigging package are jobs and therefore do not use LateUpdate. Through jobs you have access to the animation stream before it is pushed out to GameObjects.

    For more information about Animation C# Jobs check out Romain's blog post here https://blogs.unity3d.com/2018/08/27/animation-c-jobs/
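
    As a rough illustration of the wiring (a sketch, not the package's code -- DampJob is the hypothetical job from my earlier post):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Animations;
    using UnityEngine.Playables;

    // Hypothetical runner: hooks a job into an Animator via a PlayableGraph,
    // so it processes the stream each frame with no LateUpdate code at all.
    [RequireComponent(typeof(Animator))]
    public class DampJobRunner : MonoBehaviour
    {
        public Transform bone;   // bone to damp (assumed assigned in the Inspector)
        PlayableGraph m_Graph;

        void OnEnable()
        {
            var animator = GetComponent<Animator>();
            m_Graph = PlayableGraph.Create("DampJobGraph");

            var job = new DampJob
            {
                bone = animator.BindStreamTransform(bone),
                target = Quaternion.identity,
                weight = 0.5f
            };

            // The job runs as part of the graph's animation pass.
            var playable = AnimationScriptPlayable.Create(m_Graph, job);
            var output = AnimationPlayableOutput.Create(m_Graph, "output", animator);
            output.SetSourcePlayable(playable);
            m_Graph.Play();
        }

        void OnDisable()
        {
            if (m_Graph.IsValid())
                m_Graph.Destroy();
        }
    }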
     
    Last edited: Apr 18, 2019
    dibdab likes this.
  11. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    Hi awesomedata,

    Thanks for all of the suggestions. These are the types of things we are interested in hearing about, so we will keep them in our notes for further development. We are aware of the bare-bones (pun intended) nature of Animation Rigging v1 preview. This is intentional because we believe we will come to better solutions through hearing your feedback while it's in preview, so keep it coming!

    -Dave
     
    awesomedata likes this.
  12. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    I edited my post above to include the link to our GDC presentation. Here it is again, and props to MattRix for finding it first!

    https://www.gdcvault.com/play/1026151/
     
  13. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746
    Here's a thought for something akin to a "constraint" that might be really nice to add:

    • What if the character has a rigidbody that applies some physics bounce in a particular direction (based on its own motion vector) that applies to a modular grouping of various limbs? It would apply an increasing tolerance that cascades down the bone chain (i.e. floppy arms, but not floppy fingers -- as long as the hands are a separate rig module.)

    In that Procedural Animation video I linked to up top, they do something like this with the arms and the ears.


    • Also... if one were to combine this with another module "constraint" effect... For example, what if a "pose-matching" constraint were applied to the same module simultaneously (so floppy arms and shoulders combined with a "reaching" or "punching" animation)?

    • I think a "center of mass" sort of "constraint" would be really nice to have too (again, applying to a particular set of bones and/or modules in a module) -- I've seen a Maya script that does this, but I've never seen it apply weights or constraints to joints automatically in realtime. The guy who did this script was the animator who worked on the VR game with the mouse. He said he essentially used google to give him the weight of each individual body part of a human's anatomy and used this to help him calculate the center of mass for a whole human body. When applied over the frames of an entire animation, this gives him realistic recoil when the body moves quickly and does flips and whatnot.
    Just some thoughts! :)
     
    Last edited: Apr 18, 2019
  14. CodeKiwi

    CodeKiwi

    Joined:
    Oct 27, 2016
    Posts:
    53
    I really like the new Freeform Animation system. I upgraded the JiggleChain demo from the video to 0.1.4 to try to get used to the syntax. I attached the code in case anyone wants to try it. I tested it in the damp demo scene: I removed the damped constraints and added JiggleChainConstraint, then set root to MRoot, tail to pivot8, and stiffness to 0.25. It's great that the source code for the other constraints is included to compare against.

    I'm making something similar based on the animation bootcamp video. I create a base pose prefab with the character and a pose component that lists all the bones (the animator is removed from the character). Then I create a prefab variant for each pose, e.g. Run0-4. I use a component to set the variant pose from an existing clip, or I can just manually pose it in Unity. I have two blending components that I'll probably change over to the new animation system. The first is just a standard blend that can have a negative weight for anticipation or a weight greater than one for overshoot. The other takes four poses and does bicubic interpolation. I then use a controller to push new poses to the bicubic interpolation, e.g. next run step, or jump anticipation blend followed by jump. The referenced poses can be in the scene or just direct links to the prefabs. I'm planning on including settings on the poses that match the rig, e.g. a crouch pose might make the arms bouncier, with a reduced jiggle chain stiffness compared to the run pose. I might also try a center of mass constraint like you mentioned, and maybe some of the techniques from Ubisoft's IK Rig video.
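
    In case it's useful, the two blends boil down to something like this (a rough sketch only -- my actual components operate on full poses, not single vectors, and the four-pose blend is a Catmull-Rom style cubic):

    Code (CSharp):
    using UnityEngine;

    // Sketch of the two blends described above. A weight below 0 gives
    // anticipation, above 1 gives overshoot; the cubic blend passes
    // through p1..p2, using p0/p3 as tangent helpers.
    public static class PoseBlend
    {
        public static Vector3 Linear(Vector3 a, Vector3 b, float weight)
            => a + (b - a) * weight;   // deliberately unclamped

        public static Vector3 Cubic(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
        {
            return 0.5f * ((2f * p1) +
                           (-p0 + p2) * t +
                           (2f * p0 - 5f * p1 + 4f * p2 - p3) * (t * t) +
                           (-p0 + 3f * p1 - 3f * p2 + p3) * (t * t * t));
        }
    }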
     

    Attached Files:

    davehunt_unity and awesomedata like this.
  15. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746
    @davehunt_unity

    I've been fleshing out some of my ideas above -- I still have a ways to go, but I've come up with (and documented) a better "Freeform" modular-rigging approach that takes into account ideas from Ubisoft and some GDC talks I've seen:


    Source-Data:
    1. Select target model(s) to define the hierarchical layout for the skeleton(s) that will eventually be used to define all possible modules that can be used/retargeted later on.
    2. Grab a list of each model's bone chains (in a hierarchical format) and process them based on module connectivity, constraint ripples, and "terminator" bones.
    3. Collect any relevant Animation Clips (and frames) you want to include or source from for animating each module, and store them (per-module) as pose data.


    Clip-Keys (and) Clip-Sequences:
    1. Animation clips can be sourced into keys known as "Clip-Keys" -- essentially "masked" bone hierarchies, with a list of modules defining the masked hierarchies. Clip-Keys source their data from the "Clip-Layers" inside them, and a particular "Clip-Module" contains all the data needed to evaluate an animation (also known as a "Clip-Sequence").
    2. "Clip-Modules" are the backbone of the Clip-Key data making up the Clip-Sequence.
    3. "Clip-Modules" are just a fast way to organize, mask, and ultimately evaluate an animation key-pose consisting of multiple "Clip-Layers" (which are just groups of modules that may or may not be necessary "by layer") that may or may not exist in a given model's bone hierarchy or keyframe data.
    4. "Clip-Layers" are combined to make up a "Clip-Key" (or modular animation frame) that is then evaluated for existing (and required) modules based on what "Clip-Layer" data exists in the current "Clip-Key".
    5. "Clip-Keys" get interpolated within the "Clip-Sequence" based on the below specs:
    6. Poses are stored per-module (and per "Clip-Layer"), and can be based on bone names OR chain indexes, just in case names are not reliable, but hierarchy or layout IS.
    7. Time flow can either be Linear _or_ it can be managed by Animation Curves.
    8. "Clip-Keys" consist of various poses animated with different kinds of interpolation (such as Bicubic) or Animation Curves converted into mathematical functions.
    9. Secondary animation is managed by "Constraint-Effects" instead of being based directly on keyframes (which produces more dynamic animations without the need to author time-consuming specialized animations.)


    Clip-Layers (and) Clip-Modules:
    1. Poses from modules are combined on "Clip-Layers" to create a single pose that varies based on what modules are included or masked off (i.e. a legs module, a torso module, a head module, an arms, and hands module == "Human Clip-Module" -- Another example: a multi-spider-leg module, a head-with-horn-slots module, with a couple of eye-pegs attached to the horn slots = "Crab/Spider Clip-Module")
    2. Animations relying on "Clip-Layers" that don't exist simply ignore those layers (i.e. a crab without eye-pegs might be a spider, so it can use the "Crab/Spider Clip-Module" animations/poses -- it just ignores any eye-peg animation processing, since the required bone-chain isn't present in the spider model and therefore doesn't get an "eye-pegs" module assigned to it.)
    3. Clip-Layers may be marked as "mandatory" (i.e. the rig _must_ include bones for it), but are treated as "optional" by default (meaning that if there are no animations or bones for the Clip-Layer, that animation-processing is simply skipped).


    Modules:
    1. Mask out different bones/chains to be tagged/labeled as part of different "modules".
    2. A module can be set as a "mirror" to the other side based on name -- i.e. you have a leg module, rather than a left/right leg module -- doing this can help quickly mask-off parts of the skeleton.
      (The "naming" convention for the above masked mirroring should look for i.e. "Left, L, or l, with an "_" , a "-" , or finally no space either before _OR_ after the bone name (i.e. for "Left" or "left") -- It should allow for names like "boneleft" or "leftbone", as well as "l_bone" or "bone-left" to be robust in its retargeting capabilities.)
    3. A mirrored module should be stored with an extra bit (i.e. 1 or a 0) to indicate whether IT is the original bone or not.
      The original bone should contain any extra settings for the partner (i.e. which axes to mirror or flip, if any) -- this avoids having to process the partner at all in most cases.
      If IT is the copy, it does nothing and lets its partner position/rotate it during their turn.
      ALL of this extra info could probably be stored in a single byte _and_ be processed for both bones while processing the partner -- resulting in a much cheaper operation. (See the data-layout sketch after this list.)

      A similar thing could be done with "Constraint-Effects" below.

    4. Each module has only one "start" or "input" bone, but can have many output "slots" where other modules can be plugged into it.
    5. The final bone on each chain inside a module has a "slot" that continues processing the next attached module's "start" or "input" bone -- and if nothing is plugged into this, it is considered a "terminator" bone (which basically indicates the last bone in a chain.)
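
    A rough data-layout sketch of the module idea above (all names are made up):

    Code (CSharp):
    using System;

    // Hypothetical layout for a "module": one input bone, bone chains whose
    // terminators expose slots for other modules, and mirror settings packed
    // into a single byte (point 3 above).
    [Serializable]
    public class RigModule
    {
        public string inputBone;        // the single "start"/"input" bone
        public string[] chainBones;     // bones in this module's chain(s)
        public RigModule[] slots;       // modules plugged into terminator bones

        public byte mirrorFlags;        // bit 0: is-copy, bits 1-3: flip X/Y/Z
        public bool IsMirrorCopy => (mirrorFlags & 1) != 0;
        public bool FlipX => (mirrorFlags & 2) != 0;
        public bool FlipY => (mirrorFlags & 4) != 0;
        public bool FlipZ => (mirrorFlags & 8) != 0;
    }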


    Constraint-Effects:
    1. Constraints can be applied to groups (and/or chains) of modules as "effects".
    2. Constraint "effects" ripple down each subsequent module.
    3. The context for each "ripple" can be based on an (all-at-once OR individual) flat user-selected list of modules, an overall (combined/singular) hierarchy of user-selected groups of individual modules, or individual (separate) per-module or per-bone-chain hierarchies of a user-selected group of individual modules.
    4. Rippled values may be applied as constant values, or as values modified by an animation-curve, until "terminated" by the module (i.e. no further input plugged into the slots at the last bones of each bone chain) -- see the sketch below.
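
    Something like this is what I picture for the ripple (a sketch -- a real version would presumably run as an animation job):

    Code (CSharp):
    using UnityEngine;

    // Sketch of a "Constraint-Effect" rippling down a bone chain: a weight
    // from an AnimationCurve falls off along the chain until the terminator.
    public static class ConstraintRipple
    {
        public static void Apply(Transform[] chain, AnimationCurve falloff, Quaternion offset)
        {
            for (int i = 0; i < chain.Length; i++)
            {
                // Normalized position along the chain drives the falloff curve.
                float t = chain.Length > 1 ? (float)i / (chain.Length - 1) : 0f;
                float weight = falloff.Evaluate(t);
                chain[i].localRotation = Quaternion.Slerp(
                    chain[i].localRotation,
                    chain[i].localRotation * offset,
                    weight);
            }
        }
    }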


    There's a lot more I can go into, but these are really the barebones of what's necessary for a great hybrid of something like Overgrowth's low-keyframe procedural animation and the Ubisoft video about "Modular Rigging" -- all without resorting to a node-graph.

    The initial data (and nearly everything else) can come from the inspector and a traditional dopesheet layout.

    Am I on the Unity payroll yet? :) -- If so, I don't mind making user-friendly GUI "icing" to go along with all that great-tasting "cake" there... *wink, wink* :D
     
    Last edited: Apr 23, 2019
    CodeKiwi likes this.
  16. dibdab

    dibdab

    Joined:
    Jul 5, 2011
    Posts:
    797
    would this approach work with mecanim?

    if yes, it would be great if you could include a basic example of
    1. getting the animation stream
    2. adding rotations to certain bones
    3. pushing out to gameobject

    this would mean a performance gain,
    as we could eliminate animator layers (like upperbody etc.) and LateUpdate
     
  17. DerekMcKinley

    DerekMcKinley

    Joined:
    Jun 24, 2014
    Posts:
    10
    Hello, this video also talks about the application of physics in character animation. It would be incredible if we had examples of such systems in Unity.

    [embedded video]
     
    dibdab likes this.
  18. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746
    I also just wanted to point out that this is actually **better** than what was proposed as Kinematica IMO:

    [embedded video: Alexander Bereznyak's IK Rig GDC presentation]

    @davehunt_unity :

    Just a quick question --
    -- Is something like the above being considered with the Kinematica / Modular Rigging featureset?
     
    Mixa1985 likes this.
  19. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    Hi awesomedata,

    Yes, the IK Rig presentation is very cool -- I was in the audience and saw it live in 2016. It shows how powerful using constraints (similar to the ones we have in Animation Rigging) for animation, instead of bones, can be for a wide variety of real-time animation production needs. I believe many of these types of things would be possible to achieve by building on top of the Animation Rigging package, perhaps with a few custom constraints and supporting tools and gameplay systems. I'd be excited to see what someone such as yourself could do to extend Animation Rigging in these kinds of directions.

    I would like to point out a clarification: the ideas in Alexander's IK Rig presentation solve an entirely different set of problems than Kinematica does. Kinematica provides motion synthesis from a library of animations input by the user, such that developers don't have to construct their own animation state machines.

    These two ideas are actually complementary to each other. In fact, Alexander Bereznyak and Michael Buttner were colleagues working together at the same company before Michael joined Unity. I feel it's a bit off to say that one idea is better than the other, because it's actually really nice to have both. That said, thanks for your comments -- it's really nice to see that these things are important to Unity users, and we will definitely take that into account.
     
    awesomedata likes this.
  20. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    I think this example might already exist, if you are talking about building a custom constraint for Animation Rigging. In our GDC Developer Days talk, Olivier Dionne was the third presenter, and he covered two examples of how to build your own constraints. Or, if you want to dig deeper into how to access the animation stream, the whole package is open-source C#, so feel free to explore how we are doing it. Hope that helps!
     
  21. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    "barebones" lol! I see what you did there.

    Thanks for describing your ideas here. It's pretty clear how you are suggesting the implementation could look, and I have my own ideas about how this might be useful in animation productions. Still, I would really like to hear more from you about what problems this solves in the production of games (again, I have my own interpretations, but I want to hear yours). What's really helpful for us is knowing which practical, important situations real game productions need handled. The more of these examples you can provide, the more strength it adds to your suggestions, which will help folks like me build a strong argument for spending development time on it.

    Thanks again for all your great ideas and suggestions!
     
    awesomedata likes this.
  22. davehunt_unity

    davehunt_unity

    Unity Technologies

    Joined:
    Nov 13, 2017
    Posts:
    13
    This is super cool! Really excited to see where you are going with this. It would be great to see any videos or gifs of your constraints in action. Definitely share them here if you get a chance!
     
  23. dibdab

    dibdab

    Joined:
    Jul 5, 2011
    Posts:
    797
    is the ninja not included?
    I don't see any humanoid rig model, nor a fullbodyIK example.

    in
    https://github.com/Unity-Technologies/animation-jobs-samples

    there are examples, but so abstract
    and so many questions...

    what is SyncIK for?
    the FullBodyIK demo works only in editor if the effector is selected...

    Code (CSharp):
    private void SyncIKFromPose()
    {
        var selectedTransform = Selection.transforms;

        var stream = new AnimationStream();
        if (m_Animator.OpenAnimationStream(ref stream))
        {
            AnimationHumanStream humanStream = stream.AsHuman();

            // don't sync if transform is currently selected
            if (!Array.Exists(selectedTransform, tr => tr == m_LeftFootEffector.transform))
            {
                m_LeftFootEffector.transform.position = humanStream.GetGoalPositionFromPose(AvatarIKGoal.LeftFoot);
                m_LeftFootEffector.transform.rotation = humanStream.GetGoalRotationFromPose(AvatarIKGoal.LeftFoot);
            }

            // etc..
    [Attached image: animjobs.jpg]
    there's no animator controller assigned --
    it gets the stream from a clip (so again, quite restricted use)

    Code (CSharp):
    var clip = SampleUtility.LoadAnimationClipFromFbx("DefaultMale/Models/DefaultMale_Humanoid", "Idle");
    var clipPlayable = AnimationClipPlayable.Create(m_Graph, clip);
    clipPlayable.SetApplyFootIK(false);
    clipPlayable.SetApplyPlayableIK(false);
    is it not possible to get the stream from the controller?
    it doesn't work with root position, only rotation -- has this changed since?

    a google search for AnimationStream or AnimationHumanStream returns virtually nothing except Unity's own posts -- and it's been in Unity since August 2018.
    it would be a shame if the same thing happened as with humanPose muscles, where people were still asking years later what it was supposed to do and how it was supposed to work

    a search on the Asset Store for 'playables' returns only 'default playables', which doesn't even include humanoid animation...

    all while there were such interesting things being talked about, like mixing animator controllers in playables, back in 5.3
    [Attached image: playablectrl.jpg]

    still have to watch the GDC talk
    okay, I've seen it already

    I think it's not for what I'm interested in:
    1. getting the animation stream
    2. adding rotations to certain bones (IK, humanPose, or else)
    3. pushing out to gameobject

    now, come to think of it, it might not be possible to do with it what I thought it would be...
    would the whole animator controller/mecanim setup have to be rewritten in playables to make use of the animation stream before it is actually applied to characters?
     
    Last edited: May 29, 2019
  24. Kybernetik

    Kybernetik

    Joined:
    Jan 3, 2013
    Posts:
    438
    You might be interested in my Animancer plugin which is built on the Playables API (link in my signature). None of the examples explicitly show blending between animator controllers, but the Locomotion/Linear Blending example shows how you can play a single controller and it would be pretty straightforward to add a second one so you can blend over to it or other separate AnimationClips.

    Mecanim is already built on playables. The Animator.playableGraph property exposes its graph, and if you get the PlayableGraph Visualizer you can use it on them.
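
    For example (a minimal sketch -- the component name is made up):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Playables;

    // Grab Mecanim's underlying graph for inspection,
    // e.g. with the PlayableGraph Visualizer package.
    public class GraphPeek : MonoBehaviour
    {
        void Start()
        {
            PlayableGraph graph = GetComponent<Animator>().playableGraph;
            Debug.Log(graph.GetPlayableCount());  // the controller's playables live here
        }
    }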
     
  25. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    746


    So I can give you a few problems this type of system solves (and why I'd be excited about helping it come to life!):


    1) Complex (i.e. Third-person) Character Controllers
    (Simultaneous Physics-informed Animation & Animation-informed physics)


    Even Unity itself is struggling to combine Animation and Physics while also offering easy-to-replace physics logic and animations in its own Third-person Character Controller solution.
    The problem is (lack of) modularity.

    Not having separate logical systems -- where physics can INFORM animations (rather than control them), while animation itself also informs PHYSICS (rather than imposing upon it) -- is very detrimental in complex character controllers, especially in cases where one needs to override the other temporarily (such as in complex IK situations -- see the IK Rig video). This issue is ever-present in action-based character controllers like those found in Zelda BotW or Super Mario 64 (or Overgrowth, as mentioned above.)

    With Animation-Controllers, currently, you can either have root-motion-driven Physics or rigidbody physics-driven Animation in Unity -- but get ready for some work if you want to design/develop a system where you need both.
    Add procedural animation (even basic IK) into the mix (with a standard Animation Controller), and you might as well have a team of programmers at your disposal to accomplish this goal (as an artist) during this century.

    An actually reusable, generic, modern character controller cannot (efficiently) be made without something like the system I described to help physics INFORM the animation as the animation itself INFORMS the physics. Each system would be independent and would make its own decisions (based on the information of the other). This not only helps with modularity, but by combining specific modules that deal with both sides separately, each can also take into account the state of the other, allowing for more granular control, by automating certain things (and only overriding themselves when necessary -- i.e. when a leg IK sensor hits a wall at the same time as the head IK sensor (i.e. raycast), forward momentum can be stopped thanks to the simple information from these sensors.)
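
    As a sketch of what I mean by "sensors" (names made up -- real sensors would be smarter than bare raycasts):

    Code (CSharp):
    using UnityEngine;

    // Sketch of the "IK sensor" idea: raycast probes at the head and leading
    // foot let physics consult animation-space information before applying
    // forward momentum.
    public class IKSensors : MonoBehaviour
    {
        public Transform head;           // assumed assigned in the Inspector
        public Transform leadingFoot;
        public float probeDistance = 0.5f;

        public bool PathBlocked()
        {
            bool headHit = Physics.Raycast(head.position, transform.forward, probeDistance);
            bool footHit = Physics.Raycast(leadingFoot.position, transform.forward, probeDistance);
            return headHit && footHit;   // both sensors agree: stop forward momentum
        }
    }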

    A Super Mario 64-style (or Overgrowth-style) controller, for example, could be created with the methods I've described above by developing a set of Physics modules that get informed by the Animation modules, where the Animation modules are generally procedural (and can even play full-body, physics-informed clips via a Bone Module mask -- a list of the character's body-part "modules" containing the exact bones for each body part). Physics could tell Animation which parts to blend, and Animation could pick and choose whatever "Constraints" it wants to apply to the modules down the bone chains (as well as which Animation Curve to use to weight them).

    Not only would this work with modular Visual Scripting really well, but Unity's own (Zelda BotW-inspired) character controller could be easily improved with this tech too!


    2) Animation logic in Action Games + Adopting the new DOTS Mindset
    (Modules, modules, modules -- Switching to "modular-thinking" from "solutions-based" thinking)


    Action-based games are notoriously difficult to program because you are constantly combining physics and animation into transient (fluid) visual states. This is inescapable -- the real world kinda works this way.

    Intuitively, most approach animation following the "state-machine" model (i.e. Mecanim).
    But problems quickly arise when you've got to deal with upper-body or lower-body animations separately, or hand-positioning on weapons, feet IK (but only sometimes), blending animations, blending IK, dealing with actual physical momentum, grabbing doors, interacting with characters, facial animation during all this, etc. etc. etc.
    While the new procedural "Constraints" alleviate some of this, the new workflows are very tedious, and new (and old) users will likely still revert to "solutions-based" instead of "module-based" thinking, simply because of the enormity of the task at hand: authoring animations (and scripts) -- in conjunction with one another -- simultaneously -- to handle each of their "special" cases.

    The problem is -- All of these cases are "special" cases.

    Logic exists behind each and every animation in games -- no matter the animation type.
    Whether it's a one-shot clip, a transitional blended-clip combination, or even procedural constraints that ripple down a ponytail (with a moving, articulated, monster-hand at the end!) -- it all requires logic.


    The "Animancer" plugin (shamelessly plugged by the author above) exists because people naturally want to get their animation (and their game logic) closer together (for easier logic editing).
    But there's a problem with this -- The "in-code" animation approach is that animation and logic quickly become TOO tightly-coupled, and one begins to quickly dominate the other with an iron-fist. This becomes VERY apparent in any animation-heavy game -- (i.e. action games like fighting games, brawlers, etc.) -- due to the many special-case states these sorts of games generally require.

    Very few animations are ever explicitly "state-based" -- even punch/kick states tend to take into account whether you're in the air, on the ground, or crouching first.
    These kinds of 'semi-states' are prime candidates for "DOTS Visual-Scripting" modules, while also being very useful to animators, since specific things can be done in a "crouch" that is also a "punch" by playing a different upper-body animation (such as a chop rather than a punch). The animation wouldn't care about the lower-body, yet it could play all the same -- all without complicated "Animation Layer" setups.

    The holistic system proposed here would easily reduce the number of states these games need to manage, because clips can be played per-module OR per-group-of-modules, while also being blended using constraints and AnimationCurves that ripple down the module's bone chains (OR chains of modules themselves).

    Currently, Mecanim's "solution" to "general animation" is "playing/mixing an animation clip", but the real "problem" is (and always has been) applying clips to procedural routines -- i.e. game animation -- in general.
    Game animation has always included procedural stuff also (like IK, or facial animation, or a sword on the back of a character, or a character's long hair) that also must interact with the logic behind the animation's physical transience (be that purely physics, purely visual, or some combination of them both).

    However, "state-based" animations has always been unwieldy here because of only one reason:

    Animation is not a problem of identifying states -- it is a problem of recognizing transience.
    This is as true for each kind of animation as it is for the kinds of logic (physics or otherwise) behind animations.
    Action/fighting games just make this painfully obvious -- yet, as developers, we've never accepted this reality.

    Yet, recognizing transience is kinda hard... We simplify it in our minds as explicit "states"... but at a certain point, this breaks down, and we can no longer see them as such. Take any fighting game, then increase the number of overall (base) states (i.e. crouch, stand, jump, walk) then try to add various attacks or actions to these. A stupidly unwieldy character-controller occurs when you can do something like "crouch-walk" or "dash-jump" or "air-dash" as base states -- especially when you suddenly need to put in a large number of attacks or actions in these base states too. What if your character has various expressions with attacks in each attack state (if they are poisoned or excited or PO-ed?)

    How do you group, separate, or apply these tiny state changes? -- All in all, module-based animation is the only practical thing that can save the day when not sure exactly how to split states up in terms of their transience.


    3) Easier Animation Authoring + Flexible/Automated Bone Retargeting
    ("Transience by modularity" -- Modular retargeting and modular 'state' construction/anims)


    "Mecanim" spits in the face of the idea of "transience" that I've mentioned above.
    It's not a terrible system, but it is not a system suited for implementing game animation.

    Games -- by their very nature -- are procedural.

    The whole principle of game animation boils down to the idea of procedural transience.

    Transience comes from the idea of a state being a "state" temporarily (which sometimes consists of smaller and smaller "substates", which increasingly become more and more unwieldy since they're traditionally tied to a main "state", which can, and does, change often). A state (and its substates) are always on their way to transforming into a completely different representation of what it once was -- especially in terms of animation in games.

    The idea of an animation "state" really needs to be redefined for the procedural transience of games.

    A transient "state" only maintains an overall form in explicit circumstances, otherwise it's represented by groups of smaller and smaller parts and their sub-parts. These groups can also maintain a state i.e. "semi-states", and these "semi-states" are ultimately just larger and larger groups of parts (and sub-parts) that are combined to create the current transient "state" of the object. These parts, themselves, can change, too, because they are made up of other parts/modules and combined into their own individual "states" to some extent. The "semi-states" are rarely intended to represent the whole (global) "state" of the object/player, but in simple cases, this can work too. Instead, they are each able to be swapped in/out completely with one another. Ultimately you can get a hand that is three-fingered with tons of animations that can be swapped with a mitten, who can be flexible only at that one joint it is connected to the rest of the arm. This means that mittens can still move at the wrist using the three-fingered version's animations -- they just can't flex any fingers like their three- or five- fingered counterparts since the bones don't exist to be repositioned.

    To be transient, therefore, means "to be constructed in such a way that each tiny module can be variable and can be swapped around (singularly, or in groups), ultimately leading to as many different configurations or variations as you have modules for -- and all being easily modifiable with logic, since you're usually only modifying the local animation rather than the global animation -- but these can be modified easily too."
    Global animations are simply broken down into their smaller (constituent) groupings of parts automatically (based on the bone-names in the groups they were split into). This means an animation can be edited both upward _and_ downward.


    If modules (and the bones they are composed of) are named consistently (on a per-module basis), this can lead to automatic retargeting (and procedural, low-cost mirroring) of individual bones being combined into individual modules, which are combined back into complete animations -- ignoring bones that either aren't present or aren't named properly -- letting each bone be animated local to its initial (untranslated) position according to the modular skeleton being built.
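
    The name-matching part could start out as simple as this (a sketch of the convention from my Modules notes above):

    Code (CSharp):
    using System.Text.RegularExpressions;

    // Sketch of the mirror-name convention: "Left"/"L" joined by "_", "-",
    // or nothing, before or after the bone name (e.g. "l_bone", "bone-left",
    // "leftbone", "LeftArm").
    // NOTE: a bare "l" with no separator also matches names like "lamp",
    // so a production rule would need to be stricter.
    public static class MirrorNames
    {
        static readonly Regex LeftSide = new Regex(
            @"^(left|l)[-_]?\w+$|^\w+?[-_]?(left|l)$",
            RegexOptions.IgnoreCase);

        public static bool IsLeftSide(string boneName) => LeftSide.IsMatch(boneName);
    }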
    Animation clips could easily be authorable in the Editor. Start with a full-body mask as the main animation mask (including visible meshes and bones), and upon saving, have changes ripple down to the individual submodules, populating each of their animation clips when the full-body animation mask is saved. If you're animating a three-fingered character's hand-gesture, it can work for a five-fingered character, and vice-versa; but if you're authoring a three-legged animation and your character only has two legs, the animation can be set to play only when there are modules marked as "three-legged" for a particular slot -- i.e. octopi, robots, and spiders all have legs that slot at the base of the spine/root, so this module slot can look for those types of modules tagged as "three-legged" for the animations to play.

    These individual module sockets can pass down offsets for specific bones (which can be mirrored, offset, and stretched from their original position) based on which modules are attached. Some slots can also be relative to another rig's setup (i.e. a three-fingered dwarf rig vs. a female rig vs. a tall hero rig can all be labeled with sliders that let the rig module slots individually morph between various positioning offsets / joint configurations). These are just ideas of course, but it helps to explain how procedural transience is helpful for detailed animations!

    There are more complex rigs out there -- like those in "The Last Guardian" -- but the system described above would allow them to be created in a way that is both universal and usable for everything from tiny one-shot animations up to large-scale logic-based rigs like those in the slides there -- all without a node-graph or complex behavior trees.

    Almost all logic can be done via general scripts applied to bones/rigs, physics predictive sensors (i.e. to calculate arcs of footsteps), or virtual bone sensors (to calculate timing of a duck animation so the character doesn't hit his head, or to trigger the timing of an arc calculation so the character will lift his foot over a small rock).
    A visual DOTS-based scripting solution could work well to help with this, assuming the graphs could be applied to specific (or mirrored) bones inside of specific modules (alongside their constraints, of course).


    Sorry it took so long -- Please let me know if this helps, @davehunt_unity! :)
     
  26. gppt

    gppt

    Joined:
    Sep 30, 2016
    Posts:
    1
    Any information on how Animation Rigging will play with Timeline? Currently, Timeline clips seem to override any rigging constraint setups.