
Freeform (Procedural) Animation: Modular Rigging -- A new approach

Discussion in 'Animation Rigging' started by awesomedata, May 27, 2019.

  1. awesomedata

     Joined: Oct 8, 2014
     Posts: 1,419
    https://forum.unity.com/threads/freeform-animation-modular-rigging.648088/#post-4402987

    I have posted a detailed design concept in the above thread regarding a better approach to modular rigging workflows. This design allows us to take our Modular Rigs and apply Procedural Animation to them (in-engine) using constraints and "module" components in a similar way one might combine and overlay multiple filters in Photoshop to achieve a specific image result. In our case, we would achieve a specific kind of rig.

    Please see my posts in that thread.

    Right now, working with Modular Rigs (mainly creating and applying constraints en masse) is quite painful for those of us who do not have special tools and/or are not mathematically inclined.


    Modular Rigging is too granular at the moment, meaning it lacks very important cross-over functionality that should be applicable to any group of bones. Call these groups "Bone Modules": a module contains a group of bones (such as an arm or a hand), can be displayed however the user likes, and can contain an Animation Curve that defines Global and Local Constraints applied all the way down the bone chains of that module.

    This "masking system" might be somewhat akin to the Humanoid diagram. The head is a "module" since it has eye/neck bones; the body is a "module" since it has sockets and bones for the limbs; and the hands and feet are each separate modules, since each has fingers/thumbs/toes appended to it and a socket (wrist or ankle) that lets it plug back into the body module. The hands could be mittens, bug-hands, ninja-turtle three-fingered hands, or whatever -- and this would be mirrored over to the other side (based on bone-naming conventions). The user would click in the Scene view to add bones to the mask manually, or bone names would be matched against the various hand/foot/body "modules" and slotted in automatically based on naming conventions (to speed up rigging).
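    To make the idea concrete, here is a rough C# sketch of what a "Bone Module" asset could look like. Everything here is hypothetical -- BoneModule, WeightFor, and the menu path are made-up names for illustration, not an existing Unity API:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical "Bone Module": a named group of bones (e.g. "LeftHand")
// plus a curve that weights constraint effects down the module's chain.
[CreateAssetMenu(menuName = "Rigging/Bone Module")]
public class BoneModule : ScriptableObject
{
    public string moduleName;        // e.g. "LeftHand", "Head"
    public List<string> boneNames;   // matched against the skeleton by naming convention
    public AnimationCurve constraintFalloff = AnimationCurve.EaseInOut(0, 1, 1, 0);

    // Evaluate the constraint weight for bone i of n in the chain
    // (0 = socket/root end, 1 = tip end).
    public float WeightFor(int boneIndex, int boneCount)
    {
        float t = boneCount > 1 ? boneIndex / (float)(boneCount - 1) : 0f;
        return constraintFalloff.Evaluate(t);
    }
}
```

    Automatic slotting by naming convention would then just be a string match between boneNames and the imported skeleton.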


    Procedural animation is _just_ at our fingertips:

    • Constraints, if they could be applied as a ripple all the way down a bone chain (with an Animation Curve defining the weight of their effect on each bone in the chain), would allow easy non-scripted ways to make a rig behave as we want it to. For example, with a spring-dampen Constraint down an arm chain or a finger chain, an "ease-out" Animation Curve would decrease the "spring" factor on the digits farther from the socket origin of the hand's Bone Module. Applying a curve to the "dampen" factor is possible too, and an "ease-in" Animation Curve would look neat there. Think of the visual possibilities!

    • Virtual bones would allow us to build "sensors" attached to "modules", which we could then script for a given set of bones ("Bone Modules" would also contain sensor indexes -- i.e. sensor 1, sensor 2, etc.). For example, while in a "walking" state, "sensor 1" checks for a ceiling and "sensor 2" checks where the next foot can be placed (or, if the foot would go through a wall, it signals the calling code to put the player into a "stopping" state). Programming sensors as if they were throwaway variables is a great way to program animation without complex state machines.
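    As a concrete (if simplified) sketch of the first bullet -- one constraint rippled down a whole chain, with an Animation Curve weighting its effect per bone -- here the "constraint" is just a rotation lag ("dampen") applied after the Animator poses the chain. A real spring-dampen constraint would be more involved; class and field names are illustrative:

```csharp
using UnityEngine;

// Sketch: one "dampen" effect applied down a bone chain, with an
// AnimationCurve deciding how strongly each bone is affected.
public class ChainDampenSketch : MonoBehaviour
{
    public Transform[] chain;   // bones ordered root -> tip
    public AnimationCurve weightAlongChain = AnimationCurve.Linear(0, 0, 1, 1);
    [Range(0, 1)] public float maxLag = 0.5f;

    Quaternion[] lagged;

    void Start()
    {
        lagged = new Quaternion[chain.Length];
        for (int i = 0; i < chain.Length; i++)
            lagged[i] = chain[i].rotation;
    }

    // LateUpdate so we run after the Animator has written the pose.
    void LateUpdate()
    {
        for (int i = 0; i < chain.Length; i++)
        {
            float t = chain.Length > 1 ? i / (float)(chain.Length - 1) : 0f;
            float w = weightAlongChain.Evaluate(t) * maxLag;
            // w = 0: bone follows the animation exactly; w -> 1: bone lags behind.
            lagged[i] = Quaternion.Slerp(chain[i].rotation, lagged[i], w);
            chain[i].rotation = lagged[i];
        }
    }
}
```

    Swapping the curve (ease-in, ease-out, custom) changes how the effect distributes down the chain without touching any code.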

    These are "killer-app" features for procedural animation. I hate to think that a keyframing approach is being built on top of a crusty old "frame-by-frame" system that requires artists to do the transform and blending math manually, without a proper hierarchy of easy-to-use tools enabling the fundamentals of procedural animation.

    Is something like this being considered? -- Or do we have to program it ourselves?
     
  2. awesomedata
    @davehunt_unity -- Also, don't forget about the "Center of Mass" and the "Spring + Dampen" constraints!

    These constraints are special and would have to be applied to a chain of bones (and/or modules) all at once (again, using an Animation Curve to distribute the weight down the chain of a particular module / group of bones), helping characters remain upright (or maintain a sense of weight) and stay "jiggly" (or not) where it counts most!

    A "Pose-Blend" constraint would be nice too. This would allow one to blend a group of bones (a Bone Module) to a particular pose keyframe from any Animation Clip -- separate Bone Modules could grab separate poses from separate Animation Clips. Legs could run while the torso aims and the arms and hands shoot. If a module is missing a bone (or has an extra bone not found in the Animation Clip), those bones are simply ignored, and the next constraint is free to process whichever leftover bones it wants. Constraints can be weighted, of course, so pose-blending could be controlled with a sliding value, letting other constraints applied to the module have a greater effect on the final bone position if desired.
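    A minimal sketch of what such a "Pose-Blend" constraint might boil down to, assuming the pose has already been sampled from a clip keyframe into a name-to-rotation table (names are illustrative, and a Dictionary would need a custom inspector to edit):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of a "Pose-Blend" constraint: blend a module's bones toward a
// stored pose by a single slider weight. Bones missing from the pose are
// skipped, leaving them free for other constraints, as described above.
public class PoseBlendSketch : MonoBehaviour
{
    public Transform[] moduleBones;                    // the "Bone Module"
    public Dictionary<string, Quaternion> targetPose;  // sampled from a clip keyframe
    [Range(0, 1)] public float weight = 1f;

    void LateUpdate()
    {
        if (targetPose == null) return;
        foreach (var bone in moduleBones)
        {
            // Extra bones not present in the pose are ignored entirely.
            if (targetPose.TryGetValue(bone.name, out var rot))
                bone.localRotation = Quaternion.Slerp(bone.localRotation, rot, weight);
        }
    }
}
```

    The weight slider is what makes this stackable: at 0.5 the pose only half-wins against whatever earlier constraints produced.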

    Just my two-cents.

    You said "keep it coming" lol -- So I did.
     
    sinjinn, NotaNaN and ForMiSoft like this.
  3. awesomedata
    Sadly, these ideas were great, but Animation Rigging seems to have been abandoned (with regard to new workflows and/or features).

    Hopefully DOTS Animation will consider some of these thoughts?
    What say you, @Unity?
     
    sinjinn and NotaNaN like this.
  4. sinjinn

     Joined: Mar 31, 2019
     Posts: 149
    Do you use Animation Rigging right now? How do you use it?
     
  5. awesomedata
    I do -- In terms of how I use it, I'm mainly just making my own tools for UI/UX as I need them so that I can manipulate the animation much more easily. I don't aim to spend a lot of time on these tools though, since DOTS Animation is supposed to be in the pipeline at some point.

    That being said -- I'm not sure how much longer we're looking at on that front, so I am not entirely sure this is going to be possible. I would like to start moving to a DOTS-only approach, but I'm pretty much using Animation Rigging for nearly everything animation-related right now.
     
    sinjinn and NotaNaN like this.
  6. sinjinn
    Yeah, as a beginner I got caught up in all the DOTS stuff last year, but now it's on the back burner, so I just decided to use the current tools for now.

    Should I use 10 TwistChain Constraints for the fingers?
     
  7. awesomedata
    It depends on your project and what you're trying to do.
    In VR, for example, this could be fine (though the second option might still be a bit better?) -- In a typical character though, you'd have three things:

    - an animation rig for the hands,
    - an animation rig for the rest of the character's body,
    - 'animation' clips with the hands in different poses (that are played only on the hand rig, rather than on the whole body rig that doesn't include the hands).

    The idea is that, if you want a peace sign and have three animation clips -- open hand, fist, peace sign -- you set the peace-sign animation's weight to 1 (on the hand rig, not on the body/character rig) and all the others to 0.
    This poses the fingers correctly. You can also "blend" the peace-sign animation with the open-hand clip to "lift" the thumb, ring, and pinky/small fingers all at once (toward an "open hand" pose, without ever playing the "open hand" animation directly).
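    In code, this kind of setup usually comes down to an Animator layer masked to the hand bones (via an AvatarMask) whose clips and weight are driven independently of the body layer. A minimal sketch, assuming the controller already has a layer named "Hands" and a state named "PeaceSign" (both names are illustrative):

```csharp
using UnityEngine;

// Sketch: drive hand poses on a masked Animator layer, independent of the
// body layer. "Hands" and "PeaceSign" are assumed to exist in the controller.
public class HandPoseSwitcher : MonoBehaviour
{
    public Animator animator;
    int handLayer;

    void Start()
    {
        handLayer = animator.GetLayerIndex("Hands"); // layer masked to the hand bones
        animator.SetLayerWeight(handLayer, 1f);
    }

    public void ShowPeaceSign(float blend)
    {
        // Cross-fade toward the peace-sign pose on the hand layer only;
        // the body layer keeps playing its own clips untouched.
        animator.CrossFade("PeaceSign", 0.2f, handLayer);
        // Lowering the layer weight blends back toward the underlying pose
        // (e.g. open hand) without ever playing that clip directly.
        animator.SetLayerWeight(handLayer, blend);
    }
}
```

    The same separation works with Playables mixers instead of Animator layers if you want to avoid the state machine entirely.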

    Does this make sense?
     
    Last edited: Sep 28, 2020
    sinjinn and NotaNaN like this.
  8. sinjinn
    Yes. Absolutely. That is very clear and useful.
     
    awesomedata likes this.
  9. sinjinn
    Hello again. I guess I'm here for wisdom.

    About the Animator... is there a way to make it behave procedurally? I'm just getting into it. There's that jarring thing when switching animations. I need to look into what Blend Trees are.

    I've been reading a lot of mixed things about the Animator. There are always complaints on the forum, and then there's the Animancer website, where I just gave up following the reasons why Mecanim is bad... none of which I can attest to. I just adopted Bolt, and I'm trying to integrate animation that way because typing C# is very difficult for me.

    I liked what Wolfire Games presented, because there is no jarring change between animations. But how does one achieve that in the Animator? And how does that affect Animation Rigging?

    Also... is there a way to sidestep the Animator completely, with just Bolt? I mean, if we're just dragging windows about, Bolt is very much doing the same things. Can I reference the animations that way?

    What are your thoughts?
     
  10. awesomedata
    First, I want to mention that Mecanim and the Animator are completely different systems.

    So, technically, the Animator itself is really the underlying "transition" mechanism, which used to be built underneath the Mecanim "State Machine" logic. The two have since been decoupled over the years, especially upon the release of Animation Rigging. Originally, it was impossible to "blend" more than 2 animations at a time. Then came Mecanim, which let users blend as many as they want (with BlendTrees) -- but at a huge CPU cost. The two approaches are not "compatible" out of the box either -- at least with Unity's approach to state machines. To make things "simple" for users, Unity thought it was a good idea to couple state and animation logic together in a weird and convoluted way that only _seems_ straightforward on paper. This is likely what causes the "jarring" that people complain about: the "logic" must still rely on frames being complete, while blending is somewhat independent of the state logic (as far as I can tell) and must operate as fast as possible (and therefore be executed, and possibly distributed, across multiple CPU cycles to render an animation frame). This decoupling was necessary for something like Animation Rigging or Playables / Timeline to work.

    To answer your question though: underneath the Mecanim (state machine) layer is still the Animator (just decoupled from its Mecanim counterpart's state-machine approach). Nowadays, however, it is based on the Playables API (the same thing that runs both Timeline and Animation Rigging), and as far as I can tell, Mecanim is hooked into this Playables API as well, in order to standardize functionality across the different tools. Because you can use Playables (aka the Timeline methodology) to control animations, they are now (by default) able to be controlled procedurally through the Playables API (just like what Unity does with Animation Rigging). So, theoretically, you could even remake Animation Rigging completely if you wish -- as long as you use the Playables API as a base.
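    For example, the standard minimal use of the Playables API plays a clip on an Animator directly, with no Animator Controller or state machine involved -- everything about the playback is then scriptable:

```csharp
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

// Minimal Playables setup: play one AnimationClip on an Animator directly,
// bypassing the Mecanim state machine entirely.
[RequireComponent(typeof(Animator))]
public class PlayClipDirectly : MonoBehaviour
{
    public AnimationClip clip;
    PlayableGraph graph;

    void Start()
    {
        graph = PlayableGraph.Create("DirectClip");
        var output = AnimationPlayableOutput.Create(graph, "Anim", GetComponent<Animator>());
        var clipPlayable = AnimationClipPlayable.Create(graph, clip);
        output.SetSourcePlayable(clipPlayable);
        graph.Play();
    }

    void OnDestroy() => graph.Destroy();
}
```

    From here, an AnimationMixerPlayable lets you blend any number of clips with per-input weights you set from code -- which is the procedural control being described above.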



    Regarding the Wolfire Games approach -- you simply cannot do this in the Animator itself. You need to go a bit lower-level. They directly change the actual frame-by-frame interpolation to "animate" pose-to-pose (based on a curve used as the interpolation), which is usually defined in the animation clip itself (through the .fbx file as it is imported into Unity). Interpolation is usually determined by Unity at a low level, but custom interpolation can still be written in Unity.
    In Wolfire's approach, before applying this interpolation, they also plug the physics simulation into the initial pose and target pose, and they base the secondary "motion" (floppy ears, heavy arms) on their character controller's invisible collider movements (to handle side-to-side motion of the ears when running and turning, for example). This allows some bones to have greater stiffness than others along different axis directions. Watching the motion carefully, you will notice that when an arm moves to its target pose (say, in the walk/run), there is no sense of "floppiness" in the forward/backward direction on the hands or arms -- the arm bones are only "floppy" (less stiff) in the vertical direction, and while running, no vertical motion is being made. However, when the character jumps, or LANDS after a jump, the arms do appear slightly "softer" or "floppier" than during the run. This is because the "rolling" sphere they use to move (or "skate") the character around ends up applying an upward/downward "force" to the arm bones of the (closest) target pose while evaluating the curve. The most important thing to note is that physics isn't applied per-bone (although some bones react more heavily to physics in a particular direction -- i.e. the ears react more "floppily" to forward/back movement of the collider, whereas the arms react more "floppily" to up/down forces on the invisible collider as it reacts to the world).
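    Stripped of the physics layer, the core pose-to-pose idea is small enough to sketch: instead of playing frames, you interpolate between two stored key poses, with a curve shaping the transition. This is my reading of the technique, not Wolfire's actual code; all names are illustrative:

```csharp
using UnityEngine;

// Sketch of curve-driven pose-to-pose interpolation: two key poses and an
// AnimationCurve replacing the usual linear frame interpolation. Per-bone,
// per-axis "stiffness" reacting to the collider would be layered on top.
public class PoseToPoseSketch : MonoBehaviour
{
    public Transform[] bones;
    public Quaternion[] poseA;   // e.g. left-foot-planted key pose
    public Quaternion[] poseB;   // e.g. right-foot-planted key pose
    public AnimationCurve transition = AnimationCurve.EaseInOut(0, 0, 1, 1);

    // phase runs 0 -> 1 per step, driven by the character controller's motion.
    public void ApplyPose(float phase)
    {
        float t = transition.Evaluate(phase);  // the curve IS the interpolation
        for (int i = 0; i < bones.Length; i++)
            bones[i].localRotation = Quaternion.Slerp(poseA[i], poseB[i], t);
    }
}
```

    Driving phase from the rolling sphere's traveled distance (rather than from time) is what makes the result feel grounded rather than canned.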

    Does this make sense?


    To answer your question though -- as far as I know, interpolation itself cannot be controlled through Animation Rigging, or even the Animator alone. However, there is a more general (underlying) API, introduced just before Animation Rigging (and which both Animation Rigging and Playables seem to use), that allows one to modify the actual interpolation mechanism: C# Animation Jobs, I believe it's called. I've never delved too deeply into this system (as Animation Rigging solves most of my problems), but I would still be interested if you were to build a version of this Wolfire Games system on top of the Animation Jobs / Playables API. Unlike a lot of people who have tried to really understand and emulate the Wolfire Games procedural animation system before, I'm pretty positive I've actually cracked the "secret" formula, so I should be able to answer any questions you might have should you (or anyone else) try tackling a system like this. I would totally partner up with you. :)
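    For reference, a minimal C# Animation Job looks roughly like this. The job struct runs inside the animation stream itself -- after clips are evaluated, before the pose reaches the transforms -- which is exactly where custom interpolation could be injected. (On older Unity versions these types live in UnityEngine.Experimental.Animations; the no-op job here is just a placeholder for real logic.)

```csharp
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

// A job that runs inside the animation stream. A real implementation would
// modify the rotation here (spring, lag, custom easing); this one passes
// the animated value through unchanged.
public struct PassThroughJob : IAnimationJob
{
    public TransformStreamHandle bone;

    public void ProcessRootMotion(AnimationStream stream) { }

    public void ProcessAnimation(AnimationStream stream)
    {
        bone.SetLocalRotation(stream, bone.GetLocalRotation(stream));
    }
}

[RequireComponent(typeof(Animator))]
public class AnimationJobSetup : MonoBehaviour
{
    public Transform targetBone;
    PlayableGraph graph;

    void Start()
    {
        var animator = GetComponent<Animator>();
        graph = PlayableGraph.Create("JobGraph");
        // Bind the bone into the stream, wrap the job in a playable, and
        // route it to the Animator's output.
        var job = new PassThroughJob { bone = animator.BindStreamTransform(targetBone) };
        var scriptPlayable = AnimationScriptPlayable.Create(graph, job);
        var output = AnimationPlayableOutput.Create(graph, "out", animator);
        output.SetSourcePlayable(scriptPlayable);
        graph.Play();
    }

    void OnDestroy() => graph.Destroy();
}
```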


    Finally -- the explanation above should answer your question here. Technically, the answer is "sort of, but not entirely". Bolt doesn't have the ability to handle Animation Jobs (as far as I'm aware -- I may be wrong), so the Wolfire Games approach wouldn't be possible using Bolt. However, the Animator itself is just a final "layer" that pulls together all the various Playables into a single frame to be rendered, so bypassing the Animator doesn't make much sense in general. If you modify the Playables (which synthesize the procedural animation under the hood) by using the C# Animation Jobs approach to affect the interpolation, the Animator is still necessary to display the result on-screen for now. Under the hood, though, you finally have access to everything -- assuming you know what level of access you need, and assuming your "animation" is actually bone-based. Sadly, Bolt is _extremely_ limited in what it can currently do with C# classes, etc.

    Hopefully that explains what you need. :)
     
    Macklehatton and NotaNaN like this.
  11. sinjinn
    Hey, thank you very much for the reply. It looks very detailed, and I will be going over it over the next couple of days so I can process it properly. You definitely live up to your name.:D
     
    NotaNaN and awesomedata like this.
  12. sinjinn
    Wow. Thanks for the answer. I really appreciate it.

    Maybe it is just a mental thing of me trying not to use the Animator when really I should learn it. I have been trying to go around this whole animation stack, which is probably not smart. However, I have been trying to learn the basics, and now I'm somewhat OK with knowing it might take a few more weeks until I know how to integrate Animation Rigging with the Animator. I really took the approach of "I like Animation Rigging, and Mecanim/Animator is supposed to be bad, so I will try to use the one without the other", when in fact Animation Rigging is built with integration with the Animator in mind.

    As for the Playables API, I was looking into that, but it seems not to have any kind of learning path -- no tutorials. Maybe sometime in the future, when I know more of Unity and can read/write code more comfortably, I can try to make sense of it.

    It takes years to learn this stuff. I wish I knew all about how these systems integrate so I could start actually making a game, but I always seem to be learning foundational stuff.

    Anyway, thanks for the guidance, oh wise one. I may pop up with more questions later, once I'm more familiar with the basics of the animation systems.
     
    NotaNaN likes this.
  13. awesomedata
    Get used to this. -- It's not fun. :(


    The reason I am so vocal around here is because Unity doesn't seem to realize -- most people, even those with plenty of knowledge, are always "supposed to be" making games -- yet, instead they're always learning foundational stuff. BUT, out of those who manage to make a game in the process of endless digging into the technology they need, these people simply perpetuate the myth that they already have the tools they need to do everything they want to do. How else would they have made a game?

    However, like you pointed out about the Playables API -- there are typically no "learning paths" (as you put it) to much of the important fundamental knowledge needed. Those who have attained that knowledge have either A) written the tools/papers themselves to show and demonstrate (empirically) that knowledge is out there, or B) have some "inside" connection with those who have that knowledge already (who have not yet demonstrated it). But for those who happen to know stuff about the Playables API without following these two paths, the only other option is that these people have been involved in social circles where this knowledge is experimented on and then passed around freely. So that might be a good place to start, rather than doing it all on your own.
    Like you said -- it takes time. But you don't have to do everything all by yourself. That's what friends are for. And it takes time to make friends too...


    This is why it sucks. So much of this knowledge is spread out, and knowing what it takes to put it all together into one cohesive beast is not for the faint of heart -- and it's not like anyone tells you either.

    I personally have been making small prototypes since the late 90's, but I have been interested in game technology since I first laid eyes on Super Mario Bros. I've always been thinking about it. Though, even I have yet to actually complete a full-fledged game. Not because I can't -- but because I don't want to. Not just yet.
    I came close once; however, small prototype after small prototype eventually made me realize (as you seem to have) that there was always some technological roadblock that prompted me to seriously study more and more of the fundamentals in order to realize my vision. The technology (and the tools to build it) simply weren't there yet. Years later, my life has become less about making games and more about trying to find the right tools/mindset to help OTHER people make games. This teaching has become the source of my knowledge. And as much as it sucks for me to never have finished a full game, I have made plenty of completely functional prototypes from just about every dimensional genre, and even one full-scale (networked) game that was just too early for its time -- so I know how, and can truly explain in great (nuanced) detail, what is necessary to make a game of ANY scale or type, 2d or 3d, with any kind of team (or even completely alone).
    I've known quite a few people over the years in the AAA space -- and I don't envy them. Only their technology.
    And if you're wondering why I've never completed making a game -- The answer is, deep down, I never wanted "making games" to be a job -- at least not without the proper (fun) tools to do it. You don't typically get those as an indie developer. Had I been a programmer/artist for a AAA title, I might have had access to those tools. But those came just a little after my time.
    Though, now that I have the skills, I kind of wouldn't mind a job designing the _tools_ for making games. I just haven't stumbled upon that job _just_ yet.

    If I had it to do over again, I wouldn't change anything. Somebody has to push for better tools and workflows -- so it might as well be me. :/

    However, the way I look at it -- I think it is too easy to be ambitious in game design. You are much better-off using the few tools you have available, and making them work as best as you are able to. And if you find yourself lacking in skill on some technical front -- simply limit yourself willfully to using something less technical, but five-times as creative. This is what it takes to make a hit in game design -- and this is something anybody can do.
     
    Last edited: Oct 21, 2020
    NotaNaN likes this.
  14. sinjinn
    Yeah, this foundational-stuff thing... it's so true. The tool chain is so long that you can't really specialise and stay in one area, and there are so many systems to learn. You run into roadblocks where you need to learn another part of Unity, and that takes months, in parallel with other things. At least it's fun.
     
    NotaNaN likes this.
  15. awesomedata
    That depends on what it is you're trying to do -- and on what kind of person you are.
    I tend to find it fun learning new systems -- but that fun quickly fades when I realize how poorly designed some of them are (and how many tools/workarounds I will need to achieve my vision). At that point, it can often become overwhelming to justify.

    This foundational stuff can definitely get in the way if you're not careful -- and Unity is designed in a way that promotes endless foundational learning, so you've gotta watch out.
    It's like rebuilding your car's engine every time you want to drive a different speed to the grocery store. Sure, the parts are all there -- but what is the point in knowing the makeup of the engine and how to tweak the combustion system when all you want to do is speed up (or slow down) a little along the way?

    In Unity, this nuance is lost -- You have a speed-up system, a slow-down system, and a combustion system that this all extends from -- but each "system" is built in isolation. While this is fine when it has to do with one function and scope, a complex scope that may or may not have multiple parallel functional equivalents is another story (i.e. not just speed and braking on tires, but also on airplane wings and jet boosters too, all in the same vehicle). At this point, it is a great idea to have a single system to handle speed-up and another system for slow-down. There is a lot of complexity there, and the nuance achieved by placing it in separate systems makes sense. However, Unity does this with the smallest of features (for example, Scene Visibility, which should really be expanded into editor LOD/Streaming territory rather than visibility of gameobjects as the be-all-end-all of its functionality), and it is this precise tendency that holds its technology -- and users -- back.
     
    NotaNaN likes this.
  16. sinjinn
    It's a tough cookie for me, speaking on how an engine should work. But I do know that when I decided to do this, I thought it would be so much simpler. Like... I thought they'd have a player controller "as-is" with all the functionality built in. And I thought the animation system would be human-friendly.

    It's all very logic-driven, which is great when you know the logic, but not so great when you have to learn it, and it seems so abstract.

    But I'm gambling that if I just stick with it, things will start to feel more intuitive. I think Unity will strive for this upper layer where the minutiae are all dealt with, but that could be decades away -- if that kind of higher-level game development environment is even viable. Such as Dreams on the PS4.
     
  17. awesomedata
    Man, do I feel you here! -- This is what had me split between Unity and Unreal at the very beginning. Unreal had all the tools built-in with basic, albeit crappy, interfaces (but it was ultimately still a beast to run the games), and Unity was just overall more flexible/faster to implement things in, despite not having the tools (and I could run games it made easily), so I opted to either make (or buy, if I didn't want to or know how to make) the tools I needed. The interface and UX was 1000x more important to me than the functionality of the tool itself, and Unreal didn't impress me on this front. :(

    That being said -- I think this is what drives me to push so hard at Unity to do better!
    It has so much potential -- yet there are so many flaws in its implementations!


    This is understandable -- I'm not sure your level of knowledge on game engines, but it sounds like Unity is your first experience. However, my situation is different. I've worked with many engines -- though I've only taken two seriously.

    I never wanted to be "that guy" who bitches and complains about everything that is not up to his "standards" though, and yet, at the same time, I realize that I have no right to complain after the fact, when Unity inevitably misses the mark without my input. So rather than simply keeping my mouth shut and passively letting Unity become a pile of S*** that can't be cleaned up, I'd rather state my thoughts / feelings -- and hear (or not) the development team's response -- whether I like that response or not -- so I can further address any misunderstandings, on either side. They aren't game developers after all -- there are plenty of nuances they either don't fully grasp or understand. On the flipside, I AM a tool developer though, and I know how tools -- especially for artists/designers and even games -- should be made. So I feel obligated to speak up a little more than the average person due to my experience alone.


    Some (light) backstory about my journey if you're interested, lol:
    Way before Unity, GameMaker caught my eye (way back in the 90's) with its promise of "Easy Game Development!". After studying games for YEARS on my own -- trying to work out how they were made under the hood using things like a Game Genie and Game Shark on various consoles, before I even knew it was possible to program a game outside of C++ or Assembly -- I was already skeptical of the "easy" part that was promised. However, since it was primarily a 2d engine at the time, programmed in Delphi, I didn't expect much; everything else that existed then was extremely limited or overly specific to a single game genre. The game I wanted to make, however, combined 2d fighting games with RPG and online multiplayer. After giving it a try and getting nearly 100% through my project, the experience surprised me with how flexible and intuitive game development could actually be. The major reasons I didn't make it all the way through were that the technology was just too slow, and that the author sold his technology to some yoyos who thought the future of games was tile-based, bitmap-driven, 2d platformers. So yeah. After working with the program for years, developing many vertical-slice prototypes, I changed engines. :/

    Unity's base workflow (compared to Unreal and others) was the closest to GameMaker's I could find in my years of searching (gameobjects were simply "objects") -- but it was lightning fast compared to what I was used to. However, the more I dug into Unity, the more I realized it had a hugely steep learning-curve if I wanted to achieve feature-parity with what I had achieved in GameMaker. And in some cases, some of the things I wanted to do (that I had already achieved in GameMaker) simply weren't even possible in Unity at the time (i.e. loading external resources at runtime, for example, which was core to my gameplay). 2d was also in its infancy in Unity, so it was up to me to rewrite all of the systems GameMaker handled for me in Unity -- which was harder than it sounds, since Unity left a lot to be desired with 2d motions/collisions. So as I awaited 2d to improve, I went on to learn other bits of the engine to play around with what I already knew about 3d (while also studying other game engines -- including both 2d and 3d, both independently developed and corporate products, etc.) In the process, I realized there weren't many engines with a lot of promise, just Unity and Unreal -- and Unreal was too heavy. Unity, on the other hand, was too unpredictable in its API (plus everything was hidden in a black-box nobody could get to; so when physics was broken -- it was really broken). The crazy thing is, Unreal had all the features I needed (except 2d) -- but Unity had the better (more-flexible) workflows and an easy way to develop your own tools. So in the end, with no other decent alternatives -- I opted to stick with Unity and push it to be a better engine. This was the only way I could see (as a non-C++ master) getting a game engine that had the best features of all engines -- in one package. This was my dream -- and still is.


    PS:
    To be clear -- I am not saying GameMaker was a silver bullet. It still had _plenty_ of issues (even UX issues) at its core.
    Speed was the most crippling issue of all though.
    This was partially because it was TOO flexible. That is, it made EVERYTHING unique 100%. It didn't allow sweeping changes to code or behaviors unless _everything_ was written as a script. Scripts were not instances though -- they were variables -- with string execution. This was horrible for both memory and speed (not to mention code security). Almost everything was string-based. It was great for prototyping ideas with flexible code. It had a simple-to-understand UX too -- but GameMaker couldn't make a full-featured game (much less a multi-platform game) "easily" to save its life. It was just too inefficient. Though, compared to C++ or Assembly, I guess one could argue. The core wasn't designed for anything ambitious at all. To be fair though, the author was a Professor at a University who taught game design. He was just one person who was working on this in his spare time for his students, so you really couldn't expect much. That said, you could load/unload resources later on (he was great about implementing user requests, and that did help), but this was entirely too much work for such a core feature as loading/unloading objects/resources. So there were flaws aplenty. However, for what it was -- a tool for game prototyping -- it was great and straightforward, and was a prototype authoring tool that I remember fondly. :)



    This is something I'm hyper-focused on.
    I can't wait for decades to build a proper game development toolchain either.
    However, Unity (with the help of some third-party tools) is actually really close to this already.

    VisualScripting is the last step of the journey for me as far as intuitive major tools that I prefer not to build alone. Once I have a solid option for coding in Unity that is flexible and scalable (and finally enjoyable to work with), I will have gotten to the point of needing nothing from Unity that I can't do in external packages (that translate right into the editor). Houdini and Blender have provided the level of tooling I need in terms of art and level design. Akeytsu and Houdini (plus Animation Rigging -- and eventually DOTS animation) would do it for me for animation. UI Builder has nearly fixed my issues with UI. Only visual codebase authoring is left, and I have provided detailed feedback to (and am in contact with) the team that develops this portion of Unity. While I'm not particularly pleased with the direction they've taken with VS so suddenly, I've made it known (very clearly) why that is the case. They have definitely pivoted to some extent from their original direction thanks to my feedback (and the help of others who feel the same way), so it looks like they're going to deliver something decent. I'm still fighting right now to ensure they have a full grasp of what a system like this would look like in the end, but I think my work is nearly done on this front. Beyond this, you have your "foundational knowledge" to create any kind of game you want -- and the tools to do that intuitively -- all without Unity's future involvement. Hobbyists might have to wait longer, but serious designers will soon have what they need.

    Speaking of Dreams --
    Dreams on the PS4 is something the Product Managers at Unity pointed out to me nearly 6 months ago that they'd like to achieve, but perhaps with windows. They are definitely looking into this on some level. However, even some of these guys are frustrated that Unity is, for example, still looking at heightmap-based terrains for their "Environment System" instead of actual meshes, as well as other 2003-era plans. The reason for such slow software development is a lot of internal hemming and hawing about software direction. Unity saves a lot of money on R&D by eyeballing the open-source community and technology whitepapers, rather than having a director for their overall technology. Unreal Engine, on the other hand, clearly has this director, but isn't always rewriting its whole codebase to be more transparent and modular the way Unity is; with Unity, a lot of parts need to fit back together (with better design/performance), which clearly needs some thought and foresight.
    That being said -- Dreams is a higher-level development environment. However, it exploits VR and motion controls for "artist-friendly" development. To its detriment, I doubt some of those controls are as intuitive as they seem to some artists. As a result, a development environment like this needs some serious considerations behind its vision. The biggest plus with Dreams is that they had a guy behind the system who is an artist himself. He had enough intuition to tell them to ditch the windows-based "dialogue-box" interfaces for context-sensitive actions (for example) -- which is something very few tools actually do (except really REALLY good ones). But this is a simple UX trick, and nothing more. A tool like Unity needs hundreds (or thousands!) of these. Though Unity, with the right backend technology, can (very quickly) have an interface like this -- if they choose to hire the right UX designer.

    All in all -- it's not a gamble (anymore). Unity has become a powerhouse right under our noses. Just learn some basics across all elements of a game's technology (i.e. shaders, animation technology/principles, meshes/materials and some basic optimization, and how to read code) and the third-party tools that help with content development. Once a proper Visual Scripting solution appears, the rest should fall into place -- if you've got a clear idea of the design you're after, of course. However, I am planning to make this process much easier for everyone.


    This is the key -- and is exactly why, to expand on my previous paragraph, I actually plan to create a start-to-finish "learning path" and "tool pipeline" setup to share the most intuitive tools and the most optimal learning paths with the designer community. At the moment, I am just waiting to see Unity's new VS solution. If I can simply write a small plugin to improve the intuitiveness of writing code/tools for game design -- i.e. Freeform Animation authoring -- this will be the cherry on top of intuitive game design. So don't worry too much. I plan to really help anyone who wants to learn the basics of game development technology quickly -- showing them the most intuitive tools available to them at the time -- right at their fingertips. :)

    If I cannot accomplish my vision of a badass visual scripting tool within the next year and a half (with Unity's VS tool -- and I don't mean Bolt 1), I will write my own visual scripting tool to supplement the intuitive scripting and animation processes Unity lacks. None of these tools will take long to build, as there are plenty of other tools to be used as my foundation, if necessary. In the meantime -- keep learning -- and don't give up!
    Unity is worthwhile to know -- just stick to the parts you know for now, and know that the future is not far away. :)
    Hope for an intuitive development process is right around the corner! :)
     
    sinjinn and NotaNaN like this.
  18. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Talking about Unreal. I got a new 1TB SSD so finally have space for Unreal. So far, I'm kind of kicking myself I didn't do it 2 years ago. A big issue for me as a beginner was getting trees and foliage into the game. Unity was a real pain to do this in with the render pipelines. Even the split between HDRP and URP is just a complication that so far Unreal doesn't have. It seems so easy to work with on the surface.

    I thought I'd give Unreal 7 full days. I gave up and came back to Unity a day or 2 early, just to be more familiar and do actual stuff. But the trees I got in the recent bundle are not playing nice again with the render pipelines. It's probably my fault, I'm using the latest beta. I don't know. Seems Unreal is really good at just putting everything in front of you rather than sending you on another journey of discovery...

    I'm kind of frustrated because I've learnt many parts of many systems in Unity, and I don't really want to give that up, but I guess I'll leave it to whichever can draw me in more. The prospect of making epic landscapes with no render pipeline issues, less asset hunting, etc., is very appealing.

    You said you use Houdini. I also downloaded that once but decided to focus on Unity instead. How does it fit into your workflow? I'm interested to try it but I'm fearing my focus is getting too spread out. As a solo dev I think I should really get the whole toolchain sorted for a simple game. But just out of curiosity I wonder how people use it, because it seems very powerful.

    In defense of Unity though, now that I have booted into URP, it's nice to have something run smooth on a low-end machine. And the interface is very nice and orderly, and subdued. And obviously I've spent a lot of time with it and am familiar with how things operate, and sometimes don't. And the 2D tools are great. SpriteShape, etc.

    Bolt also...it's just nice looking, and shader graph, and VFX graph. I think visually, Unity is much more pleasing. I don't know how much that matters to me, but I think it might.

    So, I hope, now they have money from their IPO, they really match the content delivery of Unreal. Otherwise, I think they might be losing many developers who don't really want to spend money on assets up front.

    I think perhaps the game has changed. Unreal is legit providing a library of world-building assets, which to us means we don't have to budget for those things, don't have to worry about compatibilities, etc. I don't know what Unity is doing with their photogrammetry purchase, but they better implement a workflow that matches Unreal's for HDRP if they want to compete in that space.
     
    Last edited: Nov 11, 2020
  19. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    As somebody who has used A LOT of game engines over my 20+ years making toolchains and pipelines to make all kinds of different game styles, I have found that most game engines (except Unity) focus on getting something simple in front of the user, and some basic (intuitive) tools to operate on that simple thing in a simple way.
    However, the moment this fails is the moment you need to go deeper than the "simple" tools allow -- and it is precisely THIS moment when "simple" is really tested in most engines.

    Unreal requires you to get into Blueprints almost right off the bat. Like Houdini (which you mentioned), you have a hell of a lot of nodes -- and you must already understand them (and how to use them efficiently) before you even begin. This is a LOT to ask of a new user, but those devoted enough can make some headway fairly quickly. The problem arises when these nodes require you to know more stuff about game development more generally than you're actually ready for (and the underlying structure of things like shaders, models, etc. -- creeps up on you later on as an invisible requirement of knowledge). Unity's "workflow" pretty much requires you to face much of this stuff off the bat (since it is necessary to make anything), while Unreal's workflow lets you ease in on this (since it pretty much handles shaders, rendering, etc. for you until you're ready to take control of this stuff -- but it tends to botch things up on a performance level for everything except for high-end games). While this handholding sounds great -- in theory -- the very fact that it obfuscates these things from you at first keeps you from genuinely "getting stuff done" since (whether in Unity or Unreal) you still need to understand what you're actually doing under the hood (i.e. with shaders, models, memory, etc). With this knowledge, jumping ship from Unity gives you a head-start in Unreal (since it seems to be a lot easier to understand thanks to Unity's pain points), but as with anything "different" enough, new pain points tend to arise when you have to find out how to "unwrap" Unreal's pretty package and start to scratch at what's under the surface when you're ready to get a bit serious.

    All in all -- "on the surface" is really what it's all about with most game engines and beginners. The moment you have to let go of the hand that guides you, things get complicated -- fast. You're like a scared child in a forest of syntax and C++ becomes the wolf that is staring you down. Unity, in contrast, gets you into its (heavily unorganized and extremely complex) "guts" pretty much immediately, and with a little battle experience, something like C++ doesn't seem like such a huge challenge -- just a bit unnecessary, considering the tools you've already got to work with (like DOTS and ECS for performance and render pipelines made to specifically target either low or high-end hardware).
    On its surface, Unity seems more complicated (its UX is definitely needlessly so sometimes), but it is clear about the problems of performance in that it leaves a lot up to you to handle your design -- whether that's based on performance or ease-of-use. The C++ world that Unreal is silently leading you toward, without ever telling you performance problems exist, is more troublesome to me in the long run than a terrible UX. If a false sense of comfort is what you want, Unreal is better -- though even simple games run like crap on lower-tier hardware. Optimization in C++ is often necessary, as Blueprints tend to be useful only as prototype architecture. Unity at least doesn't lie to you with the technology itself. It is only _partially_ ready. Unreal, on the other hand, makes you believe it is the full package -- until it isn't. But by that time, you've already invested so much time/energy/money into getting things to where you want them to be -- you simply cannot go back. This is true in more ways than one. UE4 is a closed ecosystem after all. At least with Unity you have an out: all your assets are yours. You can take them wherever you want -- even to Unreal. While it is nice overall to have tons of photogrammetry worldbuilding assets at your disposal in UE4 for "free" -- it does seem everything has a cost. In my experience, it is extremely important not to overlook that cost early on.


    While this is a smart move in theory -- in practice, I've found that working just one step above what you're comfortable with in complexity is the only way to ensure your development methods will grow with you. A "simple" game tends to be the suggestion to those who want to go MMORPG (who clearly won't last long in gamedev) -- but a proper toolchain and pipeline may not even be necessary with some "simple" games, and might end up building false hope that things can be expanded upon later. In practice, this is often not the case. Generally, the most "simple" things get thrown out, not reused. So keep this in mind.
    I'm not saying be ambitious -- but I am saying do some "stretches" first before you do the exercise, and you're less likely to pull a muscle later, as things are already more flexible before you start. This metaphor applies to your tool chain especially. Stretch it first -- then use that "stretched" version in production for your "simple" game. You'll find it serves you better in both limiting yourself, and fitting your exact needs -- simultaneously.


    This is a good question -- Houdini is great at all kinds of things in Unity (especially geometry/texturing), but its new thing is going to be "animation" too -- i.e. rigging and whatnot -- which means it will fit in well with Freeform Procedural Animation in Unity. This is a new set of features I didn't even know were coming, so I'm not certain on my eventual Unity setup just yet, but I'm positive this is just one more way Houdini can help my asset workflow.
    To answer your question though -- think of Houdini as another Unity with UX features that Unity itself tends to lack (i.e. object placement, scattering, booleans, hooks into sculpting apps like Blender or Zbrush, automatic texturing, LOD stuff, etc.) I use it and Blender for pretty much all modeling / texturing I need to do in the context of game design.
    Without the right introduction, learning Houdini can be overwhelming. Most people who teach it come from VFX and film backgrounds. Few approach it from a gamedev point of view. Before you go down that path, I'll get back with you on some training materials that might make that introduction a bit smoother / easier to grasp. Stay tuned. :)



    This is absolutely true. I completely agree. I think Unity already understands this on some level -- the "Snaps" series of assets Unity has released is supposed to build into that process. I think Unity will find out soon enough that you can't beat "free", however. Enough people are jumping ship right now that they may wake up to the reasoning behind it. Sadly, I'm not sure enough people at Unity really understand where the process is failing them. It isn't photogrammetry, nor is it even Visual Scripting (as much as it would help them to innovate in this area). It is the fact that @Unity itself is being run by "in the trenches" programmers and "college grad" designers who don't have battle-hardened design-programming experience (i.e. Houdini). This "hard" design stuff is generally left to a marketing team that is equally ignorant of the practical day-to-day development life of the average developer. This is the perfect storm of what we developers *don't* actually need (or want) -- including the AAA developers they cater to. :/

    It honestly scares me that people are jumping ship to Unreal for art assets -- because Unity is the better engine at the end of the day. However, if you can get a character in the game world and running around in less than 5 minutes from start-to-finish, that says a lot about the engine's priorities as a whole. The problem is -- how we all interpret those priorities tends to be a little harder to intuit when you don't actually make games with your own engine. This is where Unity really falters compared to UE -- and where people probably doubt @Unity the most. Sadly free assets wouldn't save them -- but it _definitely_ couldn't hurt to "repair" their image a little bit. :(
     
    sinjinn and NotaNaN like this.
  20. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Well I am fully back in Unity now. It was a nice little distraction, I got to see the other side, and I still have Unreal installed, but I think it's time to refocus on Unity and gain some deeper understanding. Thanks for all the advice on that.

    It's actually very impressive how you can write all that out. Certainly when I'm focused on a topic I can write a lot, but you seem to be quite skilled at it. And it's quality knowledge.

    I tried to get into Houdini a couple of years ago, but it was kind of alien to me, not being versed in coding or anything, but I think I might have a look at it again. Actually I think I might download it... later. New 3D software, in my limited experience, always sends me back to Unity. Just the immense learning time. And I think maybe I should stick with Unity and then see where I'm limited in a way that I would need something external. But I'm always looking to gain that procedural edge, because it seems to be multiple times more powerful than regular work.
    But I know, the way to learn it is to use it. And to use it means to not use Unity, and in the end I will make a little progress and return to Unity anyway. So maybe I should not do that. Lesson learned. :) But if ever you want to lay down some Houdini knowledge, I'm all ears.
     
  21. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    I'm glad you got to dig around there -- It's never a bad idea to look into the competition. You might bring back something to the thing you're more comfortable with that you never knew you needed. So you always get _something_ valuable out of it -- even if it's just the knowledge that the tool isn't for you without some vast improvements in certain areas (which is really pretty common).

    Thanks for that! :)

    Honestly, I just don't like it when people who can really help someone (or at least offer insight) instead choose to "half-ass" their responses. I simply wait to respond when I have the time (rather than responding in a half-assed way).

    It blows my mind why other skilled people out there don't do this too. So I lead by example. :(

    The good thing about the way I suggest to work is that you're working in Unity and Houdini side-by-side simultaneously, each complementing the other's weaknesses. Neither alone is enough to make a game. But both -- together -- give you a really badass tool.


    Haha! -- Give me some time. You'll be glad you did! :)
    Houdini is the kind of program you'll definitely want someone to hold your hand with at first. You can probably still figure some stuff out and be productive with it -- but the workflows for what you'll want to do in gamedev are not easy to spot within its myriad of options / workflows / settings, since it was originally designed for rendering -- not gamedev -- and this shows in its (somewhat outdated) design and interface. However, it can still be intuitive -- and its (more hidden) features do seriously rock for a full-featured gamedev workflow (without a S***-ton of Unity assets to eat your wallet -- and your life).
     
    NotaNaN likes this.
  22. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    "and your life"

    The road is too long. The end is nowhere near. The more I walk on it, the further it goes. I would seriously like to use some expletives, but in fear of Unity mods rightfully banning me, I will not.

    Currently messing with Cinemachine and its cameras, which don't seem to want me to get anything done. And that damn animation system. Man, I really need some kind of structured learning system. I was all set to go to meetups at the beginning of the year, and then the pandemic happened.

    I tried discord, but it doesn't seem to want me on there. Won't let me save my account. Also, can't really get things done or sorted that way.

    Forums are a waste of time. Youtube vids are great. Practical is the best, but roadblocks, they are the worst.

    Hows it going Awesome Data? What are you working on?
     
  23. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Going to provide some well-researched workflows and advice in the form of some videos. Keep me on it though, as I didn't consider animation as part of that process. Thanks to you, I've included that bit in my pipeline.

    I am currently doing this in between work and finishing up some design skeletons I planned to give Unity on two of its most important iterations (Visual Scripting and Tools/Overlays/Shortcuts) as feedback for a better design _months_ ago.

    You?


    Haha -- I feel you man.

    Truth be told -- The "standard" for making games has increased so exponentially that nothing you can ever do as a one man team ever feels "good enough" to be released. And to simply get started is a huge hassle for anyone. So how do the people who actually _do_ make games really succeed?

    My theory is that there are TWO possible answers to that:

    1) They work within the limits of what they already have/know and make something cool/creative out of that.
    2) They hire somebody with more knowledge than them and simply direct them to create it for them. ($$$)


    The two "possibilities" above have two completely different mindsets behind them:

    1) You can throw together any fun/funny bit of goofiness/nonsense, and it'll be fine! -- it's just a game!
    or
    2) You have to be tedious about every little detail because it has to be good enough to earn back your investment.


    The problem with most people is that they want to start with 2 and integrate 1 (without money to do either), or start with 1, but end up doing 2 instead (again, penniless). Sure, there are variations to this process -- but from everything I've seen, it's really just one of these two methods.
    However, the best method I've seen is that it's better to work within your limits and do _either_ 1 or 2 -- depending on what you can afford or how much you know how to do (and know you _can_ do in a reasonable time -- for sure).

    Honestly -- making games can be extremely fun, or lead to soul-crushing depression.
    Only we, the designers, decide which is which. I went the long way around (that endless road you were referring to) -- but I only did this knowing there was a "better" way -- and I actively chose the absolute "worst" one (for the purposes of research -- and eventually skills -- not _specifically_ to make a game).


    That being said -- I can't speak to your personal experience, but I'm sure you'd have more fun with 1 rather than 2 -- but if you're going the route of the 2 mindset, you seriously need a (functional) team behind you. If that's not your bag (it hasn't really been mine), I can say from experience that I've had more fun with 1 than all other methods (and was able to escape the soul-crushing depression bit -- even after it consumed me on my first "official" game when I realized it wasn't going to work with the business model I wanted to use).

    The lack of skill is what will be your undoing -- if you do not focus on what you _can_ do (rather than what you _can't_ do, but could learn pretty quickly). I don't know where you're starting in your skill level, but ideally, just touch on the broadest range of skills possible (shaders, modeling, animation, textures) and only delve into the bits you actually find fun (in your spare time). The rest of the skills, use them to develop actual _game_ assets. Do three or four of each kind (and use that knowledge to determine the overall _scope_ of your first project). For logic, use state machines as much as possible. See Fungus, for example -- it is extremely versatile (and free) and can easily be used alongside Bolt and code, when (and if) you decide to go that route. Lastly, you will need to know how to implement general systems and localized systems -- and learn when (and where) to use both. This is a bit harder of a skill to learn at first, and is somewhat akin to learning game design itself (when designing game "feel"), but with practice, you'll get a feel for when you're doing it right (for the type of game you're working on).
    I'll go more in-depth in my videos, but I think that's a good starting point for now. Maybe the road won't feel as 'endless' as it did to you before. :)
     
    sinjinn and NotaNaN like this.
  24. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Aye. I guess the problem arises when there's some new part of the system that is required, e.g. the character controller, which has been hard to get into for a couple of years. I'm making progress, but it's not as fast as it should be. Meanwhile, my other skills -- like level building, general basic engine functionality, windows, and a broad understanding of how everything is put together -- are pretty good. Some tools I'm very happy with, but to get the actual functionality I need to interact with, well, I guess it's just the player controller that's holding me back. And it is integral to what I want, so can't be skipped. I need to know it.

    It's like I've got knowledge and understanding about most other systems, and now, to get those systems to work together, I need to work on my player controller/animation system knowledge. And that is a dense subject for me, not because it's too hard, but because there's so much to it. These systems interact with other systems. So I check out those systems. I come back to this system, and well... now another part of it interacts with another part of something else.

    It's really like we are a blind spider or a caveman, or like Dark Souls. The web is there, we just need to walk on every bit of it till we see the connections. However, that's a bad analogy, because webs seem somewhat procedurally shaped. So consider the caveman. He is exploring tunnels, and can only ever know a single tunnel at a time. Eventually he sees how little bits connect. But he can't have that "big map" without walking down every avenue a few times. Or in Souls, it's like, "yes, I am past the Undead Burg, I know how the fire shrines work. I can get past the knights with shields by just waiting for their tells." But then the boss shows up, and keeps killing me. Again and again, and again. Every time you have the opportunity to learn a new method, a new tell, all of which at some point leads to you eventually conquering that boss, once, then twice, then infinitely, because it has become easy.

    The character controller and animation system have become my Ornstein and Smough.

    I'm happy that the problem is in my head, because I know it's a war of attrition until all the pennies have dropped and it becomes somewhat familiar, and then comfortable, and then easy. It seems that's just the way education works in general.

     
  25. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    That's interesting. It sounds like you're starting with a pretty complex character controller from the outset.

    I, too, did this with my first game. However, it was 2D and megaman(x)-esque. I didn't have to compete with duck-and-cover systems, or smooth animation transitions. Instead, I was fortunate enough to discover state machines, and tried integrating them into my complex model of how I wanted to do fighting abilities on a megaman(x) style fighting game -- abilities that the player themselves could "program" with a simple, visual, interface made mostly of simple menus.

    One thing I've learned about systems is that too many of them makes things needlessly complex.

    ---


    In my particular case, I found that I had, for example, a "move" state, a "dash" state, and an "airdash" state. But I had to figure out how to make these work with "uppercut/shoryuken"-styled states (including the specialized physics that go with these kinds of states) while simultaneously counteracting the physics of other states (i.e. airdash or movement). This, as you can probably tell, was a bit of a curious problem to solve.

    How would _you_ solve it? -- Before reading on, really think about that question. (This is just one problem out of a HUGE number of others -- This was going to be a realtime online fighting game -- before such a thing existed).


    ---

    In my case, I needed physics to know about states, or I needed states to know about my custom physics.

    Or so I thought.

    Then I realized that I could simply provide "arguments" to my state scripts. Then, in my state scripts, I could simply cancel out whatever axis I needed to have control over during that state (i.e. vsp = 0; hsp = 0.5 -- if I needed my character to only move a certain speed in the horizontal direction -- such as during an "airdash" state).
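    That "arguments" idea can be sketched in a few lines. (This is a hypothetical, engine-agnostic Python sketch of the concept -- the `State` class, `apply`, and the vsp/hsp names are illustrations, not actual GameMaker or Unity code.)

    ```python
    # Sketch of "state arguments": each state declares which velocity axes it
    # overrides, so the physics code never has to know about states at all.

    class State:
        def __init__(self, name, vsp=None, hsp=None):
            # None means "leave this axis to the normal physics update"
            self.name = name
            self.vsp = vsp  # vertical speed override
            self.hsp = hsp  # horizontal speed override

        def apply(self, velocity):
            """Return the velocity with this state's overrides applied."""
            vx, vy = velocity
            if self.hsp is not None:
                vx = self.hsp
            if self.vsp is not None:
                vy = self.vsp
            return (vx, vy)

    # An "airdash" cancels gravity and forces a fixed horizontal speed --
    # exactly the vsp = 0; hsp = 0.5 example above.
    airdash = State("airdash", vsp=0.0, hsp=0.5)
    print(airdash.apply((0.1, -9.8)))  # -> (0.5, 0.0)
    ```

    The nice property is that a plain "move" state (no overrides) passes the physics through untouched, so adding a new state never requires touching the physics code.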

    However, I wanted the player to be able to perform a physical attack during this state (and not just shoot bullets), such as performing a sword slice. Because the physics of the airdash state are still the same, but the sprite turns into a sword-slashing animation, the question for the "airdashslash" state becomes "do I really _need_ a new state -- or is there any other way to approach this?"

    The arguments helped a lot -- I could simply set a flag that allowed the animation to play across the same number of frames as the length of the dash (and display the attackbox of the slash during 1/3 of the dash -- start, middle, or end). This meant that "dashslash" was not really a "state" -- but a mini-process that "interrupted" the standard operations of the state.
    Nowadays, I would do this with a "Tag" system (i.e. in the "dash" state, if we find the "Air" and "Dash" and "Slash" tags, we perform the "airdashslash" behavior described previously -- otherwise, we do either the "Air" or "Air" + "Dash" behaviors).

    To compound the issue -- let's say I needed to perform a Shoryuken immediately after the AirDashSlash, since the input was put in when the input system said it was okay to accept input again. Today, I would make a state for the Shoryuken that checks for a more generic "NonInterruptable" and "Attacking" tag on the object/entity tagged as "Player". If it finds that, it does nothing -- at least until that tag disappears. At that point, it creates its own "Attacking" / "NonInterruptable" tags. However, states are nothing more than tags themselves -- so it inherently looks for its own "Shoryuken" tag alongside "NonInterruptable" and "Attacking" tags. If these tags are all there, it executes the state's default behavior -- as long as the "Air" tag is not present -- as you cannot do a Shoryuken while in midair. Not without a Game Genie of course. ;)

    Since "states" are just individual systems that have a respective "Tag" index associated with their name, state systems can be grouped and activated/deactivated by logic alone. For example, the moment the Shoryuken state recognizes the player is on the ground again (a "GroundCollision" tag is temporarily added during that frame by a separate system that processes gameplay based on collisions with certain things -- i.e. a "GameplayCollisionManager"), it 'disables' itself by removing the tags it would be searching for, so that it effectively doesn't execute anymore.
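    The tag logic above boils down to set operations. Here's a minimal, engine-agnostic Python sketch (the `can_execute` helper is my invention for illustration -- not a Unity or Kinematica API; the tag names are the ones from the example):

    ```python
    # A "state" system runs only while all of its required tags are present
    # on the entity and none of its forbidden tags are.

    def can_execute(entity_tags, required, forbidden=frozenset()):
        return required <= entity_tags and not (forbidden & entity_tags)

    player = {"Player", "Shoryuken", "NonInterruptable", "Attacking"}

    # Shoryuken needs its own tag plus the attack "lock" tags, and is
    # blocked in midair -- no Game Genie here.
    shoryuken_required = {"Shoryuken", "NonInterruptable", "Attacking"}
    print(can_execute(player, shoryuken_required, {"Air"}))  # -> True

    # The frame a collision system adds "GroundCollision", the Shoryuken
    # system removes the tags it searches for -- disabling itself.
    player.add("GroundCollision")
    if "GroundCollision" in player:
        player -= shoryuken_required
    print(can_execute(player, shoryuken_required, {"Air"}))  # -> False
    ```

    Grouping and deactivating whole families of states then becomes set arithmetic on tags, rather than a web of state-transition calls.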


    ---

    This is a little bit simplified compared to a modern controller -- but the concepts are still the same. You identify all of your unique states, and see which ones can (logically) contain the slight differences in graphics/behavior. Then you decide (based on logic -- not behavior) how those systems should interact. If it starts getting too convoluted -- you're missing some simplification process somewhere. And by "simplification" -- I simply mean based on "logic" and not strictly concepts or behavior.

    Had I created a "ChangeState" system and then realized I needed a "GuessSubState" system, I would already be on the wrong path. None of this has any bearing on the legit "state" of the object (i.e. Type, Properties, Behavior, Verbs that it is inherently capable of). In regards to a 3D animation / IK setup -- use gameobjects to trigger "state" as a sort of probe into the environment/player for proper behavior (rather than arguments -- like I did originally). And definitely use "Tags" wherever possible to preserve your hierarchy of logic and "contain" that logic as best as you possibly can using those very same "Tags" alongside state tags.

    If you've checked out Kinematica -- even they are using "Tags" in their animation system to bypass "states" in a bit of a naive way. (They call them "Traits", but they are effectively the same thing). My concept is just a bit more broad and multi-purpose than theirs -- and should work better for your situation, I would think. But they have a bit of a learning curve, as they are still a brand-new concept (and not on the market yet, lol) -- though, here's a sneak-peek. ;)
     
    NotaNaN likes this.
  26. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    It's definitely not complex. I'm actually working on multiplication, while you are describing algebra. Not that I don't understand the broad strokes you are describing, but it's not at a place that I am yet.

    With the rest of the engine, I believe I am OK. I have worked with it enough that I can do the non-coding stuff easily. But with the code, states, logic, animation states/logic, etc., I am very new to it, so I just have to grind at it more. I believe states will come into play now that I have my character moving left and right. There's no animation on him except idle, so currently he just floats left and right, and jumps. I'm using the Easy Movement Controller asset to control him.

    Next steps:

    - apply walking/running animation using standard Mixamo assets (Mecanim fun! /s)
    - use Animation Rigging to control context sensitive actions (e.g. OnTriggerStay, if square button pressed, Knock on door)

    Luckily I've been reading/watching videos/practicing these parts for the past year, so I'm familiar with the basic ways these work, but I am in no way comfortable with them yet, especially when it comes to scripting them. Hopefully in the next month or two it will be a different story, and then I can get into the higher-level logic stuff you are describing.

    So, do you have a Youtube channel where your videos will be aired? I also am hoping to use Unity for videos too, but not tutorials. Maybe commentary/comedy.
     
  27. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Ah, I see -- That makes sense. Sorry I misunderstood.


    The coding style of one person can totally wreck a beginner's understanding of how things should work. Everyone's own approach is different. (This is the problem with not having "standards" in design for these kinds of things.) I've used a few different character controller assets to see how they are generally designed, and speaking from experience, Opsive's Third-person Character controller is the most advanced Mecanim one I've seen. But, while I respect the time/effort it took to create it (and the ability to make "lemonade" out of the "lemon" that Mecanim was back when this was designed), this is by no means the way a character controller should be approached. Cheaper options are way better for that. However, if you want something that really "just works" -- and you don't care why or how (or how performant, flexible, expandable, or etc. it is) -- Opsive has a market.

    However, since I can't find "Easy Movement Controller" on Google, I can't speak to the specific asset you are using, but you definitely need to learn how to deal with movement/collisions/raycasts/spherecasts/etc. before moving further into buying assets for this purpose. While, in theory, it's a nice idea to purchase a character controller that "just works" -- the reality is that the struggle with _any_ (purchased) character controller will always be its (fundamental) _design_ at its core (and the unnecessary man-hours that go into learning this hard/sad truth -- especially when you've spent the money on something). You will always be at war with it throughout your whole project if it doesn't inherently fit your design needs from the outset (or you can't -quickly- modify it on very fundamental levels). But beware -- if it combines animation into the logic for the character controller itself -- you might as well throw it away. It becomes too complicated to salvage 98% of the time (unless you're well-versed in the coding concepts involved -- but at that point, you're not far from making your own -- which is what I would suggest you start out with. See "catlikecoding" on Google -- they have some good stuff on this subject).



    But to clarify why I say you might have to throw it out:
    Mixing in graphical "state" with player state is the biggest problem with Opsive's controller -- and the problem with all the controllers I've found on the Asset Store (and even w/Mecanim itself). Visual state should never be dependent upon logical state -- and yet, to make things "simple" on the user (and exponentially harder later on), naive solutions tend to do this. "Look! I can make an idle state! And a walk state! And a run state! And wait -- I have to make walk transition to jump. And also run transition to jump. And walk to attack. And run to attack... And stand to attack........ Oh... Wait. What state am I transitioning from when I am getting hit? -- ALL of them...??? -- But what about getting knocked-down?"

    It _seems_ straightforward -- until it's not.
    The number of states, as you can see, doesn't matter. It's the number of transitions that eventually make you want to rip your eyeballs out.

    Animancer was a cheap/easy way to sidestep Mecanim (while still using it), but it got too expensive (imo).

    The cheapest / best / most scalable way to go forward (as of this writing) is to learn to use Timeline to handle your animation "states" (in lieu of Mecanim) -- and use Animation Rigging to do anything special within those states (i.e. opening doors w/ Animation Rigging is the equivalent to changing sprites in my previous example). If you want to get started with this, there was a ninja "runner" demo in the examples Dave Hunt (@davehunt_unity) provided that showed how to handle stuff like running and attacking at the same time. It used MegaCity assets to look cool. You should really look in that direction before you commit to something like Mecanim (unless you legitimately do not expect to ever have a large amount of transitions -- i.e. a board game, for example). However, when it comes to enemies that aren't very complex, Mecanim can still be useful, so the skills you've learned so far are still applicable (just not to very complex characters -- such as the player himself -- though your mileage may vary).



    I am still debating the platform right now. I'll let you know when I decide on the specifics though. No worries. :)
     
    sinjinn and NotaNaN like this.
  28. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    It is "Easy Character Movement" or "ECM" and it was part of a humble bundle. Maybe you know it.

    Anyway. I have been following along with player movement Bolt tutorials. But then I guess I have to package up what I make? So I can use it later? I don't have that process too well mapped out. I start a new project, I look at the prospect of remaking a tutorial character controller... I remember I have ECM, so I use it "in the meantime".

    My wish is to dive deeper into character controls, but it's been a sticking point for me. My mind tends to wander off. So I decided to work on further parts of my game, and use ECM in the meantime. It seems to function fine for now, and maybe my use cases are so simple that I don't need to know how it works. It has come to a point of choosing my battles. I know it's something I have to contend with eventually. Thanx for the recommendations. I will check them out.

    You say Animancer is too high-priced? But aside from the price, if you had to choose a workflow without considering price, would you use Animancer?

    "The cheapest / best / most scalable way to go forward (as of this writing) is to learn to use Timeline to handle your animation "states" (in lieu of Mecanim) -- and use Animation Rigging to do anything special within those states"

    What are you saying? Are you saying that there is a way to avoid Mecanim altogether? and use the amazing, beautiful, and intuitive "Timeline" to control my animation state? Are you serious right now?

    I'm starting a different paragraph, but I'm still shell shocked. Where can one learn how to do this oh wise one?
     
    NotaNaN likes this.
  29. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Ah! -- I do know this one! Thanks for clarifying.
    And yes -- although I've not bought it myself, it seems like a pretty solid (physics-only) base controller (which is what you need, since you'd want to handle animation separate from physics wherever possible).


    ($80 -- yikes!)
    It is, when Unity provides an option that's just as good -- for free.



    Yes -- I am.

    Freeform Animation Rigging has actually been set up to work alongside Timeline in Unity 2020 (mostly for the purposes of its "blending" features, so that you can use it to both edit and display your animation clips with "weighted" and blended values -- i.e. for IK). However, most "blending" can be done with the standard "weight" slider features in the Animation Rigging examples. Since this "weight" slider blending can be preserved in Timeline and even exchanged or manipulated on the fly, you can easily set up multiple "states" using Timeline playback and blend between them (i.e. idle to walk to run) while even altering properties on the gameobject itself (rotation based on input direction, for example) -- sparing you from having to deal with Mecanim variables and tedious state-transition nightmares.
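    As a rough sketch of what driving those "weight" sliders looks like in code (this is a hypothetical component, not from the package samples -- but `Rig.weight` is the real Animation Rigging property, and the `blend` field could just as easily be keyed from a Timeline track):

    ```csharp
    using UnityEngine;
    using UnityEngine.Animations.Rigging;

    // Hypothetical sketch: cross-fade between two rig "states" by driving
    // the Rig components' weight sliders mentioned above.
    public class RigBlend : MonoBehaviour
    {
        public Rig idleRig;   // rig configured for the idle pose
        public Rig walkRig;   // rig configured for walking

        [Range(0f, 1f)]
        public float blend;   // 0 = idle, 1 = walk (animatable from Timeline)

        void Update()
        {
            idleRig.weight = 1f - blend;
            walkRig.weight = blend;
        }
    }
    ```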

    Sorry -- there isn't a full-on (demonstrated) workflow for this right now, but it is indeed possible.


    https://www.gdcvault.com/play/1026754/Technical-Artist-Summit-Freeform-Animation

    Go to 28:30 and see the keyframe blending via Timeline to get a gist of the basic idea behind using Timeline for the layered "state" of your animation transitions using tracks and playables (i.e. how to go from idle to walk to run).


    https://www.gdcvault.com/play/1026151/Introducing-the-New-Animation-Rigging

    See 16:00 to see the walking around portion for the MegaCity ninja demo.

    I don't remember if Unity did anything with Timeline in the project files for this example to handle the running (or if they still used Mecanim for the left/right and jump/land motions), but even if they used Mecanim for the movement / jumping portions, they literally only _had_ to use one state (for air / ground) and the other locomotion "states" come for free via simple scripting -- i.e. if horizontal movement != 0 and on ground, then blend in walk/run animation clip (depending on absolute speed) -- else if _not_ in air (i.e. on ground = 1), then blend in an Idle animation clip. You could even set a "landing" animation using a virtual bone + gameobject to "check" for the ground and blend that in _before_ changing to the track with the "idle" / "run" animations on it, if desired. I don't remember how Unity does it for the ninja, but a simple gameobject (or virtual bone) is okay to use for checking the environment.
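    The "one state plus simple scripting" locomotion logic described above might look something like this (a sketch, not Unity's actual ninja-demo code -- `BlendTo` is a placeholder for whatever clip/track blending you drive, e.g. Timeline weights or Playables):

    ```csharp
    using UnityEngine;

    // Sketch of the simple locomotion logic described above.
    public class Locomotion : MonoBehaviour
    {
        public float runThreshold = 3f;  // speed at which walk becomes run

        public void Tick(float horizontalSpeed, bool onGround)
        {
            if (!onGround) return;  // the single air/ground "state" is handled elsewhere

            float speed = Mathf.Abs(horizontalSpeed);
            if (speed > 0f)
                BlendTo(speed < runThreshold ? "Walk" : "Run", speed);
            else
                BlendTo("Idle", 0f);
        }

        // Placeholder: drive your track/clip (or rig) weights here.
        void BlendTo(string clip, float speed) { }
    }
    ```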

    Other "states" such as striking a ninja pose or swinging a sword could easily use Timeline to blend them in/out by making Timeline select a particular track/clip (beneath an "Idle" animation in the Timeline) and blending between them (or different rigs -- in the case of an upper-body slash that can happen while running / moving).
    There's a lot that can be done here -- so looking into Timeline (particularly Playables) alongside Animation Rigging is really worth the effort, imo. Even if you can't entirely avoid Mecanim (because your blending needs lots of state-based setup), you aren't tied down to it either, and can use Animation Rigging in the cases where it makes sense.

    In the most simple case -- you have an "Attacking" and "NotAttacking" state, and everything else can be handled by Animation Rigging's "rig-blending" (i.e. the rig's "weight" sliders) alongside visual script logic.


    Even if I had the money to pay $80 on Animancer, this is the workflow I would recommend.
    Its skills translate to other parts of the game design after all -- not just animation. :)
     
    NotaNaN and sinjinn like this.
  30. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    OK. Well. I guess I will just have to try to understand what you've written there. It may take me a few weeks though.
     
  31. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    So odd I didn't get a notification of this reply -- but again, don't worry.

    I didn't get much time this weekend to focus on what I had intended to, though I definitely plan to attack the subject of animation with the video setup I have in mind.
    My focus is on Unity -- not just Houdini -- so this sort of non-conventional workflow stuff is right up my alley. :)
    For now, it might be worth it on your part to just study the Timeline and Animation Rigging packages / examples a bit and get comfortable with doing things with them (particularly writing your own game logic for Timeline using Playables -- and building a skeleton rig whose animation rig and clips you can control using Timeline). Those GDC videos have their own Unity Blog posts with the files you need to look at as an example. Brackeys has a good Animation Rigging video too, to teach you the fundamentals of setting up a basic rig and controlling it via code.

    Hope this helps in the meantime! :)
     
    NotaNaN likes this.
  32. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Definitely does. Thanks for your help.
     
    awesomedata likes this.
  33. Nova1824

    Nova1824

    Joined:
    Sep 25, 2020
    Posts:
    10
    That was one deep rabbit hole. I enjoyed going through your discussion here.

    If you don't mind me joining in, I am also looking to develop my skills in procedural animation. I'm going to school for Mechanical Engineering, so my background is more on the logic and algorithm side of things. The procedural approach appeals to me for this reason, and I've spent about 30 hours over the last two weeks diving into the Rigging System, Inverse Kinematics, Mecanim, procedural animation approaches, retargeting, active ragdolls, etc. I'll post a few of my resources for reference:

    David Rosen's Example: (the man, the myth, the legend)



    Ubisoft's Alexander Bereznyak on IK Rigs:


    Procedural Motion Unity Tutorial:


    Brackeys: Animation Rigging and Ragdolls

    https://www.youtube.com/watch?v=Htl7ysv10Qs&t=425s

    Interesting Active Ragdoll Technique:
    https://www.youtube.com/watch?v=-pX-PobRLzk

    Unity Animation Rigging Constraint Example:
    https://www.youtube.com/watch?v=ajmp3J7N3Ow

    Jay Hosfelt from Epic Games: Animation Bootcamp
    https://www.youtube.com/watch?v=a-zKMzboOec


    Before coming here I wasn't aware of the capabilities of Timeline. So thank you for bringing that to my attention.

    What I would like to accomplish:
    - Character body can interact with environment (foot placement, prop attachment, doesn't run into walls, etc)
    - Character aims attacks and movement
    - Fast iterations on animations (different weapons, characters, etc. should be easy to add)
    - Character doesn't snag on terrain geometry
    - Shouldn't sacrifice playability for realism
    - Animations should be context sensitive and fluidly transition without snapping or locking the player out of input
    - Limb movement, center of mass trajectory, and head position should be roughly physically accurate and pleasing to watch

    What I am trying to accomplish right now:
    - Character walks forward with IK foot placement using Animation Rigging, IK constraints, and procedural code to move foot targets

    I still don't know if I can avoid using Mecanim. The Animation Rigging package looks quite powerful. I'm not sure if I have to use Character Joints and Rigidbodies to accomplish the effect I'm going for. I'll keep you posted as I learn/experiment more. I would love to replicate the Overgrowth animation system as best I can. Maybe I can put that Engineering education to good use, lol.
     
  34. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Sounds good. I think I might do the same. I've mainly been getting used to Bolt for the past couple of months, and figuring out a metroidvania type player controller, but really looking forward to integrating Animation Rigging onto the player soon.

    So far, I think the first hurdle, if you want to have any kind of individuality, like odd limbs, is that you have to make the bones in Blender. So I also spent a month trying to learn Blender, with its awkward interface, but haven't rigged an actual character yet.

    Step 1. Rig character.

    Step 2. I guess this one would be to put all the right constraints in the right places in Unity.

    Step 3: scripting interactions

    Step 4: Give Up

    Just kidding. It's just that there's so much to do. The alternative is to just use the already rigged ninja character. Which is great, because I want to make a ninja game also.

    So, I have the character already. Already rigged. With constraints added. And then maybe I should use the new Unity Procedural Animation project to just make him walk.

    I always learn something when thinking out loud like this. To be honest, with this thread here, I will probably re-read it once every month or two to understand a little bit more of the things that are said but that I can't currently understand. Even with the Wolfire Games video, I've watched it about five times across two or three years, and each time it's like I understand so much more than the first time. So it's good we get more content to keep us focused on what we want to achieve, which is to have the ability to control characters as we want -- and not in the janky way game characters usually move.
     
  35. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
  36. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Thanks! -- I like to think it isn't too terrible, but it can get rather complex when you're not sure about the angle you need to approach the subject.


    So here are some key takeaways (for how I approach procedural animation), then I'll follow up with your specific situation and how I might approach that:


    In my mind, the way I think of procedural animation is simple: "It's just Animation."

    Animation is nothing more than the movement of a physical 'thing' from one place to the next over time. How it moves (under the hood) is irrelevant to Animation -- only that it moves where (and how) it's intended to move in the end.

    Therefore, there are plenty of 'tricks' one can use to move things the way they want them to move (in an intelligent way).

    Procedural animation is just one 'trick' to move a thing in an intelligent way, but the 'intelligent' part (at least as far as proceduralism is concerned) is mainly due to how much time is saved in keyframing dumb/repetitive actions (and the cost of storing these in memory and picking between them so often) when compared to how vastly (and how often) they change over time.


    I would never suggest using Procedural Animation for less complex characters. A player character with complex motion states / physics / IK targets, or an AT-ST chicken walker from Star Wars that can fall over due to its size or have one of its legs blown off, is the perfect complexity for procedural animation. When you simply want a moving player character with basic IK and use a different player model during cutscenes -- the CPU cost could outweigh the benefits it provides in many cases. Animation is easy to make with procedural characters, true, but if your game has lots of complex stuff going on behind the scenes, and many different types of characters with complex animations all evaluating at once... you might want to test how much you're pushing the CPU.

    That being said -- cost isn't always so prohibitive. For example, getting back to Star Wars, that IK-rigging video by Ubisoft (with the differently-sized Star Wars characters) is actually a great case for procedural animation. The skeleton is always the same -- minus the limb size / positioning (such as in the case of Chewbacca and Leia).
    In a situation like this, only the overall length of the bones needs to be processed by the CPU -- everything else (angle, speed, the animation definitions of rotation / timing / etc.) can be used from memory as-is. Weapon animation is simply a redirection of particular limbs to their positioning when a particular weapon is being held. Letting Chewbacca hold a lightsaber is useful if he must hold a knife or sword at some point, and the cost of this kind of thing isn't terrible when the number of characters (and attacks / weapon styles) are ever likely to expand -- for example with DLC and user-generated content.
    If most characters are humanoid (or even partly humanoid -- i.e. they have legs/torso but not arms/head), the cost of their CPU animations / memory savings has already paid for itself. Since human legs move the same with or without a torso/arms/head/feet/fingers/toes, these can be processed in parallel with standard human animations when the legs/torso of the partly humanoid monsters are animated -- saving CPU cost while adding maybe only one 'animation' for a unique monster-like run (that could possibly be given to a human character later too -- if desired). But a tentacle is a tentacle -- whether it is a belt, a whip, or even just a bit of kelp swaying in the ocean. So the resources a procedural approach can save here (if not using a shader-based approach like you _should_ be using for kelp) can all add up to a low-cost, beautiful, and dynamic world -- when used creatively. Think of NES Super Mario Bros., where the cloud and the bush were the same drawing (but used convincingly in wildly different ways).



    - Character body can interact with environment (foot placement, prop attachment, doesn't run into walls, etc)
    This can be done with a simple (kinematic) sphere collider and empty gameobjects/transforms at particular locations on the character's joints.

    - Character aims attacks and movement
    Attacks can be done best with procedural animation, but if you only have a few, you can turn on/off procedural animation for just those attacks and movements that have lots of variation -- and otherwise use Mecanim for the more basic/common animations.

    - Fast iterations on animations (different weapons, characters, ect should be easy to add)

    Same as my previous answer. Adding most kinds of attacks/characters/movements/props is easier with procedural, but if you want active-ragdoll-style animations, procedural is essentially mandatory at this point (in terms of performance). The slight downside is that a Mecanim-procedural hybrid isn't an easily-viable candidate right now because of the complexity of managing two systems with two different ways of 'executing' an animation, especially when an active ragdoll system is involved. This requires you to handle your animation fades/blending manually -- which is essentially what you should be doing anyway at this point. Timeline can be very helpful here, though. Please let me know if you get a system up and running using Timeline -- I'd love to check it out. There are some helpful "Timeline Event" systems available.

    - Character doesn't snag on terrain geometry

    This simply requires the "active-ragdoll" geometry to interact only on certain collision layers at certain times. In general, only certain active-ragdoll geometry (such as hands/feet) should collide with the environment -- typically using sphere colliders. These should not impact (or collide with) the main sphere (body) portion of the character controller; they should be on separate layers that both interact with the environment -- but not with each other. When the player falls down, the sphere collider on the main body is simply disabled, and the active-ragdoll colliders take over kinematic motion until the player gets to his feet and the main body sphere collider is enabled again.
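    A sketch of that layer setup (layer numbers are placeholders for your own project's layers; `Physics.IgnoreLayerCollision` is the real Unity API for this):

    ```csharp
    using UnityEngine;

    // Sketch: ragdoll limb colliders and the main body sphere both collide
    // with the environment, but never with each other.
    public class RagdollLayers : MonoBehaviour
    {
        const int BodyLayer = 8;   // main sphere (body) collider
        const int LimbLayer = 9;   // hand/foot sphere colliders
        public SphereCollider bodySphere;

        void Awake()
        {
            // Body and limbs live on separate layers that ignore each other.
            Physics.IgnoreLayerCollision(BodyLayer, LimbLayer, true);
        }

        // When knocked down, the body sphere turns off and the limb
        // colliders take over until the character is back on its feet.
        public void SetKnockedDown(bool down) => bodySphere.enabled = !down;
    }
    ```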

    - Shouldn't sacrifice playability for realism

    This is why the sphere collider is such an important piece of the equation. Check out how the early Overgrowth animation video (the first one you posted) works. The main sphere collider used for the body is made invisible while the squash/stretch of the triple-sphere approach is used for determining head position and body collisions. This is a fighting style action-game with very specifically damageable areas on the body after all. This was necessary for Overgrowth.

    - Animations should be context sensitive and fluidly transition without snapping or locking the player out of input

    This is where you need environmental "sensors" that swap between hand positions/feet poses depending on the main animation state. That, or you could use something like Kinematica (which is akin to taking a rocket-launcher to crush an ant IMHO -- at least without the right tooling and UI to support a faster workflow).

    - Limb movement, center of mass trajectory, and head position should be roughly physically accurate and pleasing to watch

    This is where you need to study Overgrowth's character solution carefully alongside the GDC presentation. The "physically accurate" portion is because of the "ice-skater" approach he takes in developing his character controller using a wheel-spoke/sphere collider at its core, with the 'floppy-bits' animating on their own layer to add flourish and 'realism' to the trajectories of the main spherical mass (i.e. the part on "ice-skates"). Until the legs are animated, the character appears to be skating on ice -- which is exactly why all the floppy-bits follow (and are pleasing to watch).

    What I am trying to accomplish right now:
    - Character walks forward with IK foot placement using Animation Rigging, IK constraints, and procedural code to move foot targets

    You don't really need explicit IK constraints for the feet -- only an "active-ragdoll" with collision spheres for the feet (on a different collision layer than the main body collision sphere). See the video at this current time to show what I mean:



    The foot targets should move with the animation -- not the other way around.

    Only in explicit circumstances does one need to move the active-ragdoll feet colliders / IK constraints. For example, when sticking a foot to the wall when climbing it like spiderman (like they do it in Breath of the Wild, for example).
    Overgrowth has a similar climbing placement mechanic (when placing hands/feet on the wall), but they use a cylinder collider for that test to know where to put the hands on the wall and whether the player is high enough to climb up and stand on it.


    That being said -- I hope I've clarified some things for you guys.



    Also:

    That looks like a good tutorial at first glance (it partly inspired my AT-ST comment above, but I've been rewatching the Mandalorian too so... that too.) -- but I'm always a bit wary when the tutorial says "some custom C# code" (which usually means A LOT of "custom C# code") and then even moreso when it says a "Marketing Team" created it -- even when it is a so-called "Technical Marketing Team"... lol D:

    There's so little info on this subject though, so I still suggest you take what you can get my friend. Life has kicked me in the teeth lately, so my Houdini tutorials are slow-going right now.

    That said -- please let me know how it goes (and if this one's actually worth a watch!)
    Unity's tutorials are infamously misleading most of the time, but they do get it right once in a while!
     
    ProceduralCatMaker and NotaNaN like this.
  37. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Yeah, I haven't touched it yet. I just hacked together a 2D character controller which I am happy with, for now, I guess. I've kinda been down on a lot of this stuff for the past week or so, after reading the comments on some Reddit posts about why gamedevs are miserable, and how working on "the one game" is a bad idea. And then looking at the PSN store and seeing the literally hundreds of games on sale, and here I am 4 years in with nothing to show for it yet -- it has kinda got me down.

    However, the prospect of adding procedural legs to my character is something that I'm looking forward to, even if I copy and paste from the tutorial.

    Out of interest, now that Houdini Engine is free for Unity, I am curious whether you have a Houdini license. It seems at the moment you still need to buy it to create assets, even if it's just the Indie license at $400 for 2 years. Kinda steep when compared to FREE. At this point, though, I think that's an area I shouldn't be concerned with, because it's the core mechanics in Unity that are important for me.

    Life likes to kick you in the teeth; I hope you are resilient to whatever has happened.

    Those fully procedural characters are great, but I am interested in how the procedural part of it looks with an animation playing on it. It would seem (I've not really gone deep into it yet) that we could get some interesting results.
     
  38. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Thanks. And right back at you.

    Gamedev is quite a heavy load to take on. Considering I'm 20+ years into it on a professional level and I still have very few personal accomplishments to show for it (aside from too much knowledge and a few prototypes), I'm not much better off than you are. So don't beat yourself up too badly for not "measuring up" to others. We've all been where you are at some point.

    To be more fair to yourself (and to others), a typical Bachelor's degree is 4 years.
    And that is only in a single discipline.
    One of the reasons I've been researching gamedev for over 20 years now is because of just how freaking HARD it is to make the damn things.

    But to be clear -- I'm not complaining about it.
    Instead, I've decided to spend the last 20 years tackling the problems that prevent me (and others) from easily developing a game based on one's vision. Most of that problem is tools -- the other parts are architecture -- and lastly, design -- but more specifically the affordances chosen for those designs.

    Design itself though -- is hard -- or easy -- depending on where you are emotionally.


    I won't get into it much now, but people tend to overcomplicate problems that are actually really easy to solve -- (at least if all sides of the many facets of the problem are felt enough to be considered, of course). However, people's ability to do that is stunted because we lack a true sense of sympathy for ourselves -- and for others -- and this lack of sympathy actually prevents us from seeing (or communicating) the true nature of our development problems simply because we cannot feel them.
    Notice I said "sympathy" -- not "empathy" -- as the former requires you actually _feel_ the visceral pain caused by your problematic approach _emotionally_ first, letting you more easily put your finger on the offending facets of an otherwise 'intangible' problem.

    For solving problems -- This is my secret sauce.

    Sadly, I can't tell you that it doesn't suck to see hundreds of games on sale, then wonder why the hell you aren't on that list. However, I can tell you that such a list isn't meant for everyone.
    I'm not sure why you (and so many others) seek to make games, but at the end of the day, a vision is only as strong as your deepest desire to realize it.

    Not everyone is born with the same skills (or is privy to the same skillset), and because tooling is so limited right now (and because usability problems are being solved in a half-ass way -- in general -- right now), without that fire inside to drive you beyond the despair, or without that brazen "If I can only make S***, then I'll just make S***" attitude, people who envision quality can't really reach the quality they envision -- at least without money (or highly skilled friends who have a lot of spare time to toss our way).
    I know it's not an ideal answer, but if you can see beyond the darkness and hopelessness of those statements -- then maybe you really do have a fire inside you haven't yet realized -- and maybe you really can overcome the limits put on you by fate. That choice, however, is up to your attitude -- and the fire that burns within you that fuels that attitude.


    This tells me that you've got what it takes. Keep those moments alive, and you'll go where you want to go.


    Indeed -- This is the true strength of procedural animation. You can get some puppeteer behavior without it, but active ragdolls is the way to go for realism.


    Off and on. (As you said, it's expensive). However, there are times when Houdini helps me research tools / workflows that I can't achieve in other software yet. But as a regular tool, to make truly visually-stunning games, Houdini (or a tool capable of generating LOD, texturing, UVs, and dynamic meshes almost automatically) is almost a requirement.
    Thankfully -- Blender is getting there with its "Geometry Nodes" system. Sadly, it still has a long ways to go to match up to Houdini's game-asset workflow.
    That said -- I can see where some people download software and learn it / get used to it / etc. -- and pay for it once the software has more value to them than just being a fun toy to play with. It seems like Unity has finally embraced this mindset too, and I hear it's better for their bottom-line anyhow. So I'm not one to scoff at doing what you have to do to get the tools to get you to where you need to be -- as long as you're not genuinely hurting the developer. After all -- if a person couldn't afford it before, learned how to use it and got a job or made money, then was able to purchase it after that, you are more in the black than you were before. So don't delude yourself into thinking others haven't thought about going this route if they had to -- those "hundreds of games" probably use Houdini at some point in their workflows too. Especially if they made more than one game. You _need_ a pipeline for something like that, and Houdini is a pipeline in and of itself -- at least for art assets.
    Like I said -- Blender is quickly catching up. Houdini won't be the only game in town forever. It's just that, right now, you'd be amazed at the kind of stuff it can do for games. It really is like magic sometimes.

    A close second though is something called Archimatix -- which is expensive (and sometimes buggy), but still limited to non-boolean operations. However, it can be reasonably-priced if you consider using it alongside Blender or something to quickly block out environments (for a mid-range workflow).
    Just something to think about if Houdini is really scary to you -- especially because of its price.


    Good luck, man. I hope this helps a bit.
     
    NotaNaN likes this.
  39. Nova1824

    Nova1824

    Joined:
    Sep 25, 2020
    Posts:
    10

    Thank you for taking the time to reply. I'll see what I can come up with. Once I get my hands dirty with the animation I bet I'll have some more specific questions. Your comments have helped me organize the approach I will take, as well as narrowing down what's involved. Right now I'm grappling with getting a basic rigid body character controller written. Spent a couple days trying to remove camera jitter. Had a separate issue with the cursor not being locked (that bug went away on its own o_O). Learned a lot about Cinemachine, Unity's physics engine, and how to handle input.

    Btw, this is one of the best tutorials I've found thus far. If you run into anyone looking for a place to start learning some of the basics / intermediate topics of character control or shaders I highly recommend sending em this way:
    https://catlikecoding.com/unity/tutorials/movement/

    I'm looking forward to seeing what Timeline is all about and becoming more familiar with Mecanim (although the farther away from Mecanim I can get, the better lol)
     
    awesomedata and NotaNaN like this.
  40. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    No problem! -- Feel free to ask anything! -- I'm glad I could help even a little!


    You know, I've seen these plenty of times, but I don't know why I haven't recommended them to anyone. I guess I simply forgot about how great these were for beginners (who don't mind reading a little text).

    Ray Wenderlich is another one (of the Super Mario 64 HD fame): https://www.raywenderlich.com/unity

    Since everything tends to be focused on videos these days, it's easy to see how great content like this can be passed up without even knowing it.
    Besides, it's not easy to throw together compelling tutorial videos quickly, so writing what you know tends to be a bit more convenient for people who have a lot of knowledge to share (with very little time to spare).
     
    Nova1824 and NotaNaN like this.
  41. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Oh boy. You sure can write. This is one for the philosophers of Athens. And plagiarists to pass off as their own. Take a bow. Much respect.

    But to the subjects at hand. I've switched to Unreal Engine. I've gone to the dark side. Not completely, obviously, because I like the way Unity looks. It's way more clean. However, I'm not finding it easy to work with, especially on the asset/render pipeline side of things. Let me go ahead and talk about it for way too long, and in way more detail than is necessary for anyone but myself to hear. Just to match your word count.

    Trees. Always purple. For months. Mtree, the procedural tree asset, purple. Humble Bundle nature environment assets, shader errors, looks translucent. Update materials. Sometimes works, sometimes not.

    There IS a method to fix this. I must learn that method. Along with other methods.

    I mean, I'm not anti-Unity. I really like it. It just seems too fractured. Even to me as a newcomer, I'm finding they're trying to do many things at once, and I, as a game designer, have enough to think about already without so many things to deal with other than making the game.

    I appreciate how they've come up, but this split personality thing is a problem I wish I didn't have on my agenda.

    2. The visual scripting is behind at the moment. Like, by 10 miles. I don't want to trash them. I'm not trashing them. I'm just not going to type code. It's just a snoozefest. However, I NEED to code right now. I can do it. There are ways to learn how to do it. I have to look at examples of C# and Animation Rigging, and translate them into Bolt.

    And I have done this. I have taken code from a video, and like a good coder I, diligently, went on Discord, and asked someone to translate it for me, and it has worked.

    And now that it is in Bolt I can maybe experiment on it, and set up the rigs, and.......you know.....make a character, with all the IKs, twist bones, etc.

    So I see the new Control Rig in Unreal. It's basically Animation Rigging, but they have it all set up for you. They have Manny the mannequin, with some animations on him, and they have the whole system in visual scripting, ready to modify.

    I mean. Here I am, feeling like some kind of rodent searching a rubbish pit for how to go about building a rig from scraps that get thrown down randomly some place, and over there, they're offering me a steaming hot pizza.


    So, yeah. I'm going to see if I can get some understanding of Unreal, and obviously I'm Unity for life, but I just think I'll be learning much faster if things are set up more easily for me. Unity gave us the rigs and assumed we were all programmers and knew what to do with them, or could figure it out. Or maybe their strategy is to let developers make the rigs and sell them on the Asset Store, I dunno.

    And that paired with the absolute barrier it is to make progress in C#, and with Bolt, even after the acquisition, not really being pushed yet as a real answer (otherwise why has it been a year and still no official tutorials on how to integrate Animation Rigging or ANYTHING with it?). It's like just another tool, take it or leave it.

    I dunno if this is some kind of "Tomorrow syndrome", but it really is like "It's going to be good soon". I think Unreal seems to meet developers closer to home. Unity is like, "Yeah, you need to walk further, around those pits, and take the 3rd exit at the 2nd crossroad, the 11th at the roundabout, be here by noon, and oh wait, we can give this to you, but it's flat-packed so you have to spend some time assembling it yourself."

    None of this is legit my opinion. Obviously I was trying to match your word count. It's satire. I think Cinemachine is the one thing that's amazing, and I don't see any replacement, and obviously 2D. But on the 3D side? I dunno. I'm thinking in a few years maybe Unity will have more VS infrastructure and more stability, but at the moment I'm over on Unreal.

    Ok man. That took way too long, and I probably insulted the Unity overlords in some way. I try to be neutral, and I will probably come running back to Unity, certainly after I've learned more of Unreal for a couple of weeks. But how'd I do on word count? That's the most important thing.
     
    awesomedata, Nova1824 and NotaNaN like this.
  42. Nova1824

    Nova1824

    Joined:
    Sep 25, 2020
    Posts:
    10
    I haven't engaged on a lot of forums prior to this so this thread has a lot of novelty to me. I have to say, checking in with you guys and keeping up a long term conversation is really freaking cool. It feels like we got a little squad going of people trying to sort out this animation stuff. The more I understand about it, the more arcane it begins to appear lol. Today I found a good article. I'll share some links :


    Here is a full on research paper comparing Key Framing to Hybrid animation (half procedural):
    http://bth.diva-portal.org/smash/get/diva2:1118379/FULLTEXT02.pdf

    I'd be interested to hear what your thoughts are on this approach:
    https://deepmotion.medium.com/procedural-animation-for-characters-via-scripting-in-c-e60435da9e13


    P.S. Progress update: I got the character procedurally leaning into acceleration and rotating toward the velocity vector. A lot of the time was spent selecting the right object space and brushing up on quaternion / vector math. After some bug squashing I successfully have a static character pose riding an invisible Segway.
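    That "lean into acceleration, rotate toward the velocity vector" behavior can be sketched in a few lines of Unity C#. Everything below (class name, fields, gain values) is a hypothetical illustration of the general technique, not Nova1824's actual code:

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: tilt a character into its acceleration and face
    // its horizontal velocity -- the "invisible Segway" look described above.
    public class ProceduralLean : MonoBehaviour
    {
        public Rigidbody body;          // the character's rigidbody
        public float leanPerAccel = 2f; // degrees of lean per m/s^2 (tune to taste)
        public float turnSpeed = 8f;    // how quickly we slerp toward the target pose

        Vector3 lastVelocity;

        void FixedUpdate()
        {
            // Estimate acceleration from the change in velocity between steps.
            Vector3 accel = (body.velocity - lastVelocity) / Time.fixedDeltaTime;
            lastVelocity = body.velocity;

            // Face the horizontal velocity direction (only if actually moving).
            Vector3 flatVel = Vector3.ProjectOnPlane(body.velocity, Vector3.up);
            Quaternion facing = flatVel.sqrMagnitude > 0.01f
                ? Quaternion.LookRotation(flatVel, Vector3.up)
                : transform.rotation;

            // Tilt around the axis perpendicular to the acceleration,
            // proportional to its magnitude, so the body leans "into" it.
            Vector3 flatAccel = Vector3.ProjectOnPlane(accel, Vector3.up);
            Quaternion lean = Quaternion.AngleAxis(
                flatAccel.magnitude * leanPerAccel,
                Vector3.Cross(Vector3.up, flatAccel.normalized));

            // Blend toward the combined lean + facing rotation.
            transform.rotation = Quaternion.Slerp(
                transform.rotation, lean * facing, turnSpeed * Time.fixedDeltaTime);
        }
    }
    ```

    The key design choice is working in world space and deriving acceleration from successive velocity samples, which is exactly the "selecting the right object space" problem mentioned above.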
     
    NotaNaN likes this.
  43. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    That looks like a great book. I think it's a level lower than I want to aim right now, but definitely something to read over the year. Since it was written in 2017, and was probably in production for a year or more, I'm wondering whether these ideas have already been implemented in Animation Rigging and Control Rig.
     
  44. Nova1824

    Nova1824

    Joined:
    Sep 25, 2020
    Posts:
    10
    I think the Animation Rigging package is very powerful and definitely has what is needed to do this procedural animation. But there are so many different ways to do procedural animation. Sometimes it's just secondary motion on a rabbit ear, and sometimes it's full active ragdoll. I kind of want to build something in between that will speed up the animation authoring pipeline. Ideally, I would be able to save an entire character pose at different keyframes and have the procedural system blend realistically between these poses, similar to how Overgrowth demonstrated. But I am running into decisions about what controls or blends with the other. For example, do I create a procedurally driven Control Rig that blends with the baked animation? And if so, how should I sync up the motion of the Control Rig with the baked animation? Should I make two rigs -- one for the baked animation and one for the physically based -- then have the character rig blend between those? That seems process-intensive and over-engineered. At this point I have read a TON about this topic, so the information is swimming in my head. I just need to run through scenarios and tinker with Unity to see what workflow is best.

    My goal is still to make an idle and running animation that have been authored with minimal keyframes, which react to the environment (foot placement, etc) and trace realistic motion paths (animated objects appear to have weight and momentum). Then blending between those two animations based on a state machine.
     
  45. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    Yeah. But that is a lot to think about, and difficult to implement, as in, a lot of research is required. But I guess you are doing a robotics PhD, and this is your path. And that is very rewarding in the long term, and I suppose great to tinker with when you can get your head around it, but it does seem difficult to implement quickly into a sellable game, for me.

    I suppose that whole system you are describing is essentially part of the player controller. And that would be one part of a stack of technology. Others include narrative, visuals, AI, sound, and gameplay.

    Narrative, Visuals, and Sound are mostly benign. They don't need to be worried about till much later. But game design is core, and that drives the gameplay, and that gameplay is tightly bound to the PlayerController.

    So it's essential we create a great PlayerController.

    Also, we have new tools. These tools, Animation Rigging and Control Rig need to be tightly integrated into our PlayerController.

    Current Plan: Learn to use Control Rig because of visual scripting interface. Cross apply those learnings into Unity and Animation Rigging.
     
    Nova1824 likes this.
  46. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Honestly, I'm not trying to defend Unity here -- not at all.
    The biggest problem right now is the feeling of that "Tomorrow Syndrome" that is pretty widespread across Unity as a company right now.



    Not sure if that's an insult or just a bit of salty sarcasm (directed at me, but really triggered by my seeming attempt to keep you faithful to Unity -- which was not my goal at all, since I really just wanted you to stick to your guns, whatever they may be).
    I'll take it as the latter, as a personal attack doesn't seem to make any sense here.


    You did great. Though, I don't agree that word count is the most important thing. Only saying (fully and precisely) what you mean to say -- and to whoever cares to listen -- no matter how many words that takes.

    If you (or anybody else) don't care to read (and therefore listen to) what I have to say, then so be it. What can (or should?) I even do to accommodate other people's (often varying) lack of patience? -- I tend to enjoy reading well-written and well thought-out commentary. Otherwise I probably wouldn't be using the forums.


    Getting past the jabs at my verbosity:

    Man, I think just about anyone who has put serious time into Unity feels you on that one -- I know I do.

    I tend to have a lot more patience about this kind of thing than most do. Almost everything I've ever had in life I've had to struggle for -- and despite my pain, I've not regretted it for the most part. There's a kind of wisdom (and sadness) you only learn when you have faced pain and desperation for so much of your lifetime. That rodent analogy isn't far off either. The desperation acts as a catalyst that lets one see resources that those who have not faced that level of desperation tend to overlook -- very important (and very fundamental) things they do not think important enough to notice, much less analyze.


    Even in an environment of scarcity, and despite the barren-looking landscape, the warm-blooded rodent never starves.
    The same cannot be said for the man awaiting the preparation of his next meal.



    I'm not sure how 'obvious' that 'satire' is. Those sound like real feelings to me. And legitimate ones at that.
    Even if they are only 'half-truths', they do have an effect on how you see the world. That said, I wouldn't take them too lightly.



    Getting back to the discussion:

    Gotta be careful with "tightly-integrated" when it comes to input and triggers.

    While input is eventually necessary to animation, and animation is sometimes necessary to input -- sometimes it has nothing to do with animation or input at all (i.e. when it comes to environmental triggers for example).

    A PlayerController should be nothing more than something that says "Hey! -- I want to execute a 'grab' animation." and also something that says "I want to execute a 'walk' animation" sometimes too. However, the PlayerController itself should NOT be controlling the values that say "It's okay to execute a 'walk' animation" much less the ones that say "It's okay to do a 'grab' animation -- but only as long as you're not executing the 'walk' animation."
    Generally, an AnimationController should be responsible for this -- just not Mecanim (wherever possible) because it is lacking a friendly way to handle the dynamism the PlayerController sometimes needs. This is what Kinematica aims to deliver -- but fails to do so effectively, especially in a user-friendly (practically-applicable) way.

    What I am referring to in terms of 'dynamism' is when, say, the PlayerController says to "walk" but the AnimController says to "duck" (since there is a block over your head) or "fall" (since there is no place to actually 'walk' beneath you). Everything here relies on external 'sensors' and 'colliders' as well as environmental 'triggers' to tell the AnimController what needs to be done. This is what the Uncharted guys and Assassin's Creed guys did (using Houdini to generate the environments + triggers for export to the game engines).
    These environment triggers told the player 'how' to animate (with the AnimController) -- and the PlayerController told the physics (i.e. the invisible Colliders that made up the Player's body) what parts of his body are likely to need to move. With a 'walk' the whole avatar needs an animation -- but with a 'duck' or 'crouch', only the knees need to be constrained a bit to bend -- which is handled automatically by the environment sensors telling the AnimController when to 'bend' the knees for a 'crouch' -- and this lets him 'walk' at the same time, thanks to the constraints existing solely on the knee joints / hips. He may even be able to 'grab' while walking while also crouching, if the PlayerController doesn't check the AnimController for whether he is 'crouched' before executing a 'grab' at a nearby item on the floor. This is up to the designer though, as without the right joint transitions between the two actions, it can look hokey.
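    As a rough illustration of that division of labor -- and to be clear, every class, field, and method name below is made up for the sketch, not Unity's API or anyone's actual project -- the PlayerController only *requests* actions, while the AnimController consults its sensors to decide what the environment actually allows:

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: the PlayerController states intent only.
    public class PlayerController : MonoBehaviour
    {
        public AnimController anim;

        void Update()
        {
            // "I want to walk" / "I want to grab" -- nothing more.
            if (Input.GetAxis("Vertical") > 0f) anim.Request("walk");
            if (Input.GetButtonDown("Grab"))    anim.Request("grab");
        }
    }

    // Hypothetical sketch: the AnimController owns the sensors and decides
    // whether (and how) a requested action becomes an animation.
    public class AnimController : MonoBehaviour
    {
        public Collider headSensor;   // overlaps geometry above the head
        public Collider groundSensor; // overlaps geometry under the feet

        public void Request(string action)
        {
            bool grounded     = Touching(groundSensor);
            bool blockedAbove = Touching(headSensor);

            // The environment can override the requested action.
            if (action == "walk" && !grounded)     action = "fall";
            else if (action == "walk" && blockedAbove) action = "crouch-walk";

            Play(action);
        }

        bool Touching(Collider sensor) { /* overlap query, omitted */ return false; }
        void Play(string animation)    { /* drive rig constraints / clips here */ }
    }
    ```

    The point of the sketch is only the direction of the dependency: intent flows from the PlayerController to the AnimController, and the environmental "is this okay?" logic lives entirely on the AnimController side.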


    This isn't a bad plan -- but remember: Unreal isn't bullet-proof.
    It has plenty of fatal flaws in its design pipeline too. Plenty of things get overlooked there too. Thankfully animation in UE is generally not terrible, but do keep in mind that a lot of custom work is still going to be necessary in UE as well -- if you want a robust system. The biggest problem with Unreal is how its 'gotchas' tend to 'getcha' later than Unity's do. Even so -- if you're good at working within limits -- this shouldn't do you much harm. At least you'll have the barebones to work from, unlike with Unity. :)



    ---------------


    I need a little time to look into these more, but they look like solid bits of info at first glance.



    This is a good start.

    This is a bit harder with procedural -- though, if you use active ragdoll (rather than state machines), the time you spend should be A LOT more productive.

    Unity is the antichrist to me for pushing Mecanim as THE way to make an Animation Controller.

    Mecanim's Animation Controller is highly limited as its focus is 100% not procedural.

    I wrote a post once (back before Freeform Animation Rigging was ever even a glimmer in Unity's eyes) about why Mecanim sucked. This, however, was more an infographic explaining why realistic (procedural) animation should NEVER be controlled by triggering States within State Machines.

    This is because, in general, Animation is the difference from one frame to the next -- and it is essentially stateless at its core. To illustrate this point: Am I picking my nose -- or just scratching it? You'll never know until my hand moves in one way or the other to confirm (or deny) your suspicion.
    The confirmation (or denial) is the 'State' you exist in, and the traveling of my hand toward my nose (or the motion away from it) is never accounted for in the State Methodology that Mecanim / Unity employs. The AnimationController's job is to account for this 'inbetween' movement too -- with the 'sensors' I referred to earlier.
    The PlayerController's job is to simply move the body around physically (remember the ice-skating bunny in that GDC talk? -- That's what the PlayerController does for the most part) -- The PlayerController handles physics and even a little book-keeping sometimes -- but not determining the "state" of an animation. There is no point to this, as animation is best left 'stateless' outside of the 'actions' being performed (or not) by the PlayerController (which determines when and where these actions are okay -- such as 'climbing' or whatnot.)

    I think that last part is where most people tend to get confused with what belongs where. They just assume the Player is some monolithic program (the PlayerController) that does EVERYTHING that might ever involve the player. But this will lead you down a terrible path. One that isn't easy to come back from -- if it's possible at all.

    The best bet is to have a clear distinction between the two -- and at that point -- you'll be good to go. :)
     
    sinjinn, Nova1824 and NotaNaN like this.
  47. Nova1824

    Nova1824

    Joined:
    Sep 25, 2020
    Posts:
    10

    Once again, thank you for spending your time sharing your knowledge/experience. I had one point of clarification on this comment. I totally get where you are coming from when you say that animation should be stateless and the playerController should only be in charge of moving the player around. Transience is the term you used previously, I believe. However, doesn't there still need to be a way for the animationController to determine what constraints/rules to procedurally apply in a given scenario? In the climbing example:

    The playerController would handle moving the player around, but what determines that instead of moving one foot in front of the other and swinging the arms (walking animation), the character should now move the arms and feet up and down to resemble climbing? Would the AnimationController be the only thing using sensors/parameters to realize the state it is in? And what about the case where an ability is activated, so now the character has to jump into the air, point a weapon, and shoot in a direction? Handling that without any state seems very complicated. What about using a mix of state control and transient/procedural animation? So there is a procedural locomotion state, attacking state, etc. that all use constraints + physics + sensors to move realistically, but the operation is being driven by the intended action.

    Still need to spend more time thinking, but those were my first ponderings.
     
    awesomedata likes this.
  48. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    It was in no way an attack on you. Just a way of acknowledging what you said and having fun with writing. Actually, I have a very pointed writing style. It annoys a lot of people, just so you know. Obviously I'm just here for expounding on ideas I am interested in, and to give me motivation and a little direction as time passes, and I'm not too concerned about other people, or being formal, or anything.

    I do however think you are reading too much into my words, and I can imagine why, because your posts are very long and detailed, and so you inferred that I was attacking that. No way. I'm a bit of a writer myself (insert Osborn JPEG), so I appreciate the amount of effort you put in, and I honestly can't understand all of the things you write in one go, so I do come back and read over and try to see how far I can get before getting lost again. And it's good to be able to refer to someone who is more knowledgeable about these things, and knows more of the history of these systems, and can perhaps see the flaws and ways ahead.

    I just thought it was time I wrote more, and like I say, I almost always confuse people because I have such a haphazard way of writing. It's mostly nonsense. Basically, I pretend to order it in a way that is related to some subject, but it's really just nonsense, mixed with Unity-related topics, just to give it some semblance of being constructive, or valid.
     
    NotaNaN likes this.
  49. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    Transience.

    Thank you for that! -- I totally forgot that was how I described it before! :)



    I think what I called 'stateless' this time was previously described more as a 'SubState' -- which is more accurate.

    (Thank you for keeping up with my posts btw -- I know I miss things I've said more clearly before.
    This is a great example.
    Honestly, my wording is like trying to find the right color of pixel in JPEG decompression when pulling out the 'correct' words. The overall idea is more important, so the specific details sometimes get lost to compression in my brain.)

    Regarding your question about more specific animations:

    To be accurate, 'Transience' is a bit of a balancing act between full-on "States" and 'Substates" in practice.
    Sensors/Animations are managed by the AnimController, whereas physics and actual 'State' is managed by the PlayerController.

    Think of the AnimController as the search results + search algorithm -- and the PlayerController as the searchbox + search input. You initially search with the PlayerController, but the AnimController may not return the results you like, so you must refine your search input and search again with the searchbox. The 'Sensors' are more like the search algorithm(s) that determine the results (Animations / Other Sensors / etc.) that show up.
    Just like googling something, you rarely get lucky with the first result -- You sometimes need to peek at a few things to get to the information you are really looking for.

    In the case of walking + climbing animations, your PlayerController handles a check that sees if you are A) OnGround, B) Some Climbable Surface is Nearby, and C) Climbing is (Generally) Okay. If A) and B) are true, you can 1A) check for a ladder or other climbable object in front of you, then 2A) check if pressing toward that climbable ladder. None of these other checks happen in the PlayerController if you are already considered in a climbing "State" by the PlayerController (because the climbing defies gravity and therefore needs special physics behavior compared to any other normal movement state).
    Now, if you are to bust out of the climbing "State" to enter a shooting "State", you might fall off the ladder. Therefore, you need to consider both the 'climbing' and 'shooting' as a "state" in some way. But whether it is a climb-shoot, or a shoot-climb state is difficult to decide, as some sort of hierarchy seems to be necessary. However, this is not always true.
    This is where the AnimController steps in. You press the 'shoot' button, and the sensors should easily tell the AnimController you are on the ladder (or other climbable object). Instead of the PlayerController putting you in a traditional 'shoot' State (and the AnimController queuing up that Animation), the 'shoot' button registers itself alongside the AnimController's sensor input, keeps you in a 'climb' state / Animation, processes the upper-body twist constraint, and executes the start of the upper-body portion of the 'shoot' Animation.
    To the PlayerController, you are still in a 'climb' state, but to the AnimController, you are more in a 'shoot' state than a 'climb' one (as more logic is executing in the AnimController at this point in time than in the PlayerController). The AnimController would also send an event to the PlayerController to spawn the projectile from the gun arm after a very particular timeframe. This keeps the visuals and state-mechanics separate (and therefore consistently transient).

    This is as much an art as it is a science, but the ultimate result is that logic is separated into visualizing and execution logic -- AnimController and PlayerController logic respectively.

    If a player 'jumps' while 'shooting' while 'climbing' on the ladder, the PlayerController ultimately -- and more importantly, Independently -- decides whether the 'jump' button is valid on its side (i.e. to change to a 'jump' state) and the AnimController decides whether the 'jump' button is valid on its side as well (i.e. does a jump/climb/shoot state make any sense when the "climbing" sensor is still active when the 'jump' button was pressed?)
    Since the player is not 'OnGround', the Physics never shoots the player upward from the PlayerController. Since the player is still considered in the 'climb' state by the PlayerController, the Physics continues to defy gravity as well (since the AnimController climb sensor still says we're climbing -- possibly until we press the jump button and it says we're now 'falling', since there is no 'ground' sensor beneath our feet).
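    A hedged sketch of that dual-validation idea in plain C# -- every name here is hypothetical, invented purely to show the shape of it. The same 'jump' press is vetted independently by each side:

    ```csharp
    // Hypothetical sketch: one button press, two independent gatekeepers.
    public class ClimbShootExample
    {
        public enum PhysicsState { Ground, Climb, Fall }

        // PlayerController-side state (physics).
        public PhysicsState physicsState = PhysicsState.Climb;

        // AnimController-side sensor (visuals).
        public bool climbSensorActive = true;

        // PlayerController side: only a grounded body gets an upward impulse.
        public bool PhysicsAllowsJump() => physicsState == PhysicsState.Ground;

        // AnimController side: while the climb sensor is active, 'jump' visually
        // means "let go and start falling", not a jump animation.
        public bool AnimAllowsJump() => !climbSensorActive;

        public string OnJumpPressed()
        {
            if (PhysicsAllowsJump()) { /* apply upward impulse here */ }

            if (AnimAllowsJump())
                return "jump-anim";          // normal jump animation
            else
                return "release-to-fall";    // climb sensor wins: transition to 'fall'
        }
    }
    ```

    Neither side asks the other for permission; each applies its own rules to the same input, which is what keeps the visuals and the state mechanics separate (and transient).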

    We may provide an 'Override' state at any time to ensure a player enters a GetHit Animation state in the AnimController -- if that's okay with the PlayerController (which would need to define its own action for what happens when the 'Override' state fires). This sort of state could be thought of as a true "SubState" -- whereas the other sorts of tiny transitions wouldn't really be considered a full-on "sub" anything -- just a point where certain logic or animations are necessary.

    Keeping the two types of controllers separate also keeps them legible as well. :)


    Hopefully that helps a bit! -- ^__^



    -----

    That's fine by me. Just checking is all. After all, I share my thoughts freely -- I have no right to judge others for doing the same.


    Thanks for the clarification. Honestly, my posts are only long/detailed because I was once where you guys were and had nothing but short quips and scattered posts to work from that held nothing of real value. I never intend to do that to anyone. If I can help -- I will give very detailed explanations wherever I think to, so I can make it clear what I know I can help with -- and what I can't.

    To each his own. I don't mind sharing my thoughts because sometimes they help others. Even with that aspect, I know my writing style isn't always well-received, so I can't knock you for that if I don't instantly 'get' what you're saying. That's the point of asking for clarification -- or at least saying "Hey, this is how I'm thinking I understand it -- Is that correct?"

    So feel free to write more. More is always good in some way. :)
     
    sinjinn and NotaNaN like this.
  50. sinjinn

    sinjinn

    Joined:
    Mar 31, 2019
    Posts:
    149
    So what do you guys think of geometry nodes? Is that stuff replicable in Unity?
     
    awesomedata likes this.