
AI Based on a Finite State Machine in Unity

Discussion in 'General Discussion' started by EugeneVlasov, Apr 14, 2019.

  1. EugeneVlasov


    Joined:
    Dec 23, 2013
    Posts:
    20
    Hi guys!

    I want to implement AI based on a finite state machine.

    I came to the conclusion that it would be best to single out several levels of abstraction (if you can call it that).

    First level of abstraction:
    State machine for one agent (raider).

    The second level of abstraction:
    State machine for a group of agents.

    The third level of abstraction:
    State machine for the raider squad.

    The higher the level of abstraction, the more specific the task it implements.
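    As a sketch of the tiered idea above (all class and state names here are hypothetical, not from any actual project), each higher machine could hand the one below it a goal rather than micromanaging it:

```python
# Hypothetical sketch: each tier is its own state machine, and a higher
# tier drives the lower one by handing it a goal instead of its exact state.

class AgentFSM:
    """First tier: one raider. States are plain strings for brevity."""
    def __init__(self):
        self.state = "Idle"

    def set_goal(self, goal):
        # The group tier calls this; the agent picks its own concrete state.
        self.state = "Run" if goal == "attack" else "Idle"

class GroupFSM:
    """Second tier: a group of agents. Relays the squad's order downward."""
    def __init__(self, agents):
        self.agents = agents
        self.state = "Waiting"

    def set_goal(self, goal):
        self.state = "Engaging" if goal == "attack" else "Waiting"
        for agent in self.agents:
            agent.set_goal(goal)

class SquadFSM:
    """Third tier: the whole raider squad. Issues high-level orders."""
    def __init__(self, groups):
        self.groups = groups

    def order(self, goal):
        for group in self.groups:
            group.set_goal(goal)

# One order at the top cascades down through the tiers.
agents = [AgentFSM(), AgentFSM()]
squad = SquadFSM([GroupFSM(agents)])
squad.order("attack")
print(agents[0].state)  # each raider ends up in its "Run" state
```

    Each tier stays a plain state machine; only the goal passed downward changes.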

    Do you think my assumptions are correct? Am I thinking in the right direction?

    [Attached image: upload_2019-4-17_16-26-27.png]

    I would be very happy to hear all your advice.

    Thanks for your time and your answers!
     
    Last edited: Apr 17, 2019
  2. EugeneVlasov

    Does anyone have any thoughts?
     
  3. angrypenguin


    Joined:
    Dec 29, 2011
    Posts:
    15,614
    Well, I for one don't know what your assumptions are, and I'm not sure what your second question means.

    To me, the first thing I notice is that your Raider level looks like it's a control state machine rather than an AI (decision making) state machine. It makes sense that you can transition between all of those states, but it doesn't describe how those states relate to achieving goals.

    For instance, in "Run" and "Walk" how does the agent know where it's going? Is all running and walking the same? Personally, I would instead have states for things like "ApproachTarget" and "ApproachLocation" and "Retreat". These are all movement related states, but the agent may well be moving differently in each case. When running towards a target is the AI doing the same transition checks as when it is retreating from a location? What about when it's searching an area?
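    To illustrate the distinction (a hypothetical sketch, not anyone's actual code): goal-level states like "ApproachTarget" and "Retreat" can reuse the same low-level movement while each runs its own transition checks:

```python
# Hypothetical sketch: decision-level states ("ApproachTarget", "Retreat")
# reuse the same low-level movement, but each runs its own transition checks.

def move_towards(pos, goal, speed):
    # Shared control-level behaviour: step once towards goal on a 1-D line.
    direction = 1 if goal > pos else -1
    return pos + direction * min(speed, abs(goal - pos))

def approach_target(agent, target_pos):
    agent["pos"] = move_towards(agent["pos"], target_pos, speed=2)
    # Transition check specific to approaching: close enough to attack?
    if abs(agent["pos"] - target_pos) < 1:
        agent["state"] = "Attack"

def retreat(agent, threat_pos):
    # Retreating moves *away* from the threat, with a different check.
    direction = -1 if threat_pos > agent["pos"] else 1
    agent["pos"] += direction * 3
    if abs(agent["pos"] - threat_pos) > 10:
        agent["state"] = "Idle"

agent = {"pos": 0, "state": "ApproachTarget"}
while agent["state"] == "ApproachTarget":
    approach_target(agent, target_pos=6)
print(agent["state"], agent["pos"])  # reaches the target, then switches state
```

    The walking/running animation would be the same in both states; what differs is the goal each state is checking against.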
     
  4. EugeneVlasov

    @angrypenguin
    Thanks for your reply.

    I understand what you meant. I will try to accommodate your comments.

    Can I turn to you for advice again later?
     
  5. newjerseyrunner


    Joined:
    Jul 20, 2017
    Posts:
    966
    It sounds like you are trying to figure out a way to introduce group dynamics to AI. I wanted to do the same and I think I've come up with a very good solution that provides the illusion of serious group behavior even though it's entirely smoke and mirrors.

    I created my AI to work individually first. I had a number of states, but for simplicity's sake let's say: idle, aggressively fighting, defensively fighting, retreating, searching for target, moving towards objective.

    I also wanted the behavior of an individual enemy to be consistent at any given time, regardless of whether or not it was part of a larger group. I didn't want to have to figure out special cases like when it's allowed to fight if it's doing an objective.

    I realized that people will assume characters are working together if they are simply near each other and behaving in similar ways. The way I approached this was to introduce a leader variable and, in the decision tree, check how far from the leader my AI currently is. If it's close, it just acts normally; if it's too far, it sets the leader as its objective and goes into move-towards-objective mode. This actually created a lot of emergent behavior that I did not explicitly program. Here are some examples:

    Coordinated pushes: When I want a group to move to a new objective, I only have to switch the objective for the leader of the group. It'll then move that way, and the rest will follow the next time their state is checked. This means they don't all go at the exact same time, which looks more natural, as if they're finishing up their current fight before moving on.

    Cover fire:
    Whenever an AI detects a new threat (or finds the one it was searching for), it yells to its leader, which may then change its own target to the new one. The leader also relays that information down to anyone beneath it, making it seem like they're backing up their friends.

    Scouts / traps: A leader doesn't have to actually be near its subordinates if the settings allow it, but can still communicate with them. I often put a single scout in a hallway with an entire platoon somewhere else waiting to ambush you. You walk into the hallway and the scout sees you; you think it'll be an easy kill and dispatch it quickly, but while you were doing so, a big group was closing in all around you, called there by the scout.

    Tight groups: If I set all members of a group to have the same leader, they will stay in one cohesive group and never move away from it too much.

    Loose groups: I can also set it up so that the leadership is a hierarchy. This way the lowest-level grunt doesn't really care where the highest-ranked leader is; it only cares where its own leader is, which in turn stays close to the big leader.

    Panic: In the case where there is one leader and lots of grunts, if you kill the leader, the lower-ranked enemies no longer have anything keeping them together, so they scatter and it becomes every man for himself. This produces some interesting situations.

    Recoverability: In the case where I have a complete hierarchy, killing the top leadership no longer causes complete panic. Instead, only the AIs who were directly under the slain leader become detached, but they continue to command their own troops. This allows groups to splinter and flank if the general is dead.
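    The leader check described above could be sketched roughly like this (my own minimal reconstruction; the names and the leash distance are made up):

```python
# Hypothetical reconstruction of the leader-distance check: if an agent
# strays too far from its leader, it overrides its own objective and
# regroups; if its leader dies, nothing holds it in formation any more.

import math

LEASH = 5.0  # how far an agent may wander from its leader (made-up value)

class Agent:
    def __init__(self, pos, leader=None):
        self.pos = pos          # (x, y)
        self.leader = leader    # another Agent, or None for the top leader
        self.alive = True
        self.state = "fight_normally"
        self.objective = None

    def think(self):
        # Each agent only tracks its *own* leader, which is what makes
        # loose hierarchies (and panic on a leader's death) fall out free.
        if self.leader is None:
            return
        if not self.leader.alive:
            self.leader = None
            self.state = "panic"
        elif math.dist(self.pos, self.leader.pos) > LEASH:
            self.objective = self.leader.pos
            self.state = "move_towards_objective"
        else:
            self.state = "fight_normally"

leader = Agent(pos=(0.0, 0.0))
grunt = Agent(pos=(12.0, 0.0), leader=leader)
grunt.think()
print(grunt.state)  # far from the leader, so it moves to regroup

leader.alive = False
grunt.think()
print(grunt.state)  # leader slain: the grunt scatters
```

    Because the check only ever looks one level up, chaining leaders together gives the loose-hierarchy and recoverability behaviours with no extra code.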



    I actually based this approach on a GDC presentation by Halo programmer Jaime Griesemer: http://downloads.bungie.net/presentations/gdc02_jaime_griesemer.ppt
     
  6. newjerseyrunner

    You also get the benefit that you can use the same AI for both enemies and allies. For allies, simply set the "leader" to be the player and they'll reliably follow you around but be able to fight their own battles too.
     
  7. kdgalla


    Joined:
    Mar 15, 2013
    Posts:
    4,612
    Thanks for posting this. I'm also interested in this topic.
     
  8. Roni92pl


    Joined:
    Jun 2, 2015
    Posts:
    396
    I've used FSMs in my game just for handling orders (it's kind of an RTS), and even though they were only supposed to handle simple single orders and fighting, I already ran into their limitations doing just that. Generally:
    Use FSMs only for very simple behaviours (literally, if your agents both move and fight, any kind of fighting, it's already too complex for an FSM). Behaviour trees are better, but if you want a really scalable solution, then utility AI is the answer.
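    For reference, the core of a utility-AI decision step (my own minimal sketch; the scoring functions are made up) just scores every available action and picks the best, which is what makes it scale: adding a behaviour is one more scorer, with no transition graph to rewire:

```python
# Minimal utility-AI sketch: score each action from the current context
# and pick the highest. The context keys and scorers here are made up.

def score_attack(ctx):
    return ctx["enemy_near"] * ctx["health"]

def score_flee(ctx):
    return ctx["enemy_near"] * (1.0 - ctx["health"])

def score_idle(ctx):
    return 0.1  # small constant fallback so the agent always does something

SCORERS = {"attack": score_attack, "flee": score_flee, "idle": score_idle}

def choose_action(ctx):
    return max(SCORERS, key=lambda name: SCORERS[name](ctx))

print(choose_action({"enemy_near": 1.0, "health": 0.9}))  # attack
print(choose_action({"enemy_near": 1.0, "health": 0.2}))  # flee
print(choose_action({"enemy_near": 0.0, "health": 1.0}))  # idle
```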
     
  9. EugeneVlasov

    Thanks for your reply. I'll look into the approaches you're talking about.

    Although at the moment I'm quite taken with my theory, and so far I haven't run into any unsolvable problems. I want to see what happens when there are more states.

    And if my theory breaks down, I'll know what to use as the basis for the next prototype, thanks to you!
     
  10. angrypenguin

    For me the choice between FSMs and Behaviour Trees and other things is at least as much about workflow as it is functionality.

    Personally, I find FSMs great for simple things, but unwieldy to implement and test when it comes to large or complex things. BTs are somewhat the opposite, in that they're over-complicated for simple things, but complex things are easier to manage in comparison.
     
  11. neoshaman


    Joined:
    Feb 11, 2011
    Posts:
    6,492
    To be frank, what the OP is doing is not an FSM, it's an HFSM, i.e. a hierarchical state machine, and those start to look similar to BTs at that point. A BT is basically the next jump in complexity before hierarchical planning. lol
     
  12. EugeneVlasov

    Could you tell in more detail what you mean? And what does "OP" mean?
     
  13. neoshaman

    OP stands for "original poster", that is, you. An HFSM is basically an FSM within an FSM: you're describing a tiered system where one FSM controls lower-level FSMs. It's an evolution of the FSM that deals with some limitations of a regular FSM.
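    Concretely, "an FSM within an FSM" means a state can itself own a sub-machine (a hypothetical sketch; the states and events are invented for illustration):

```python
# Sketch of an HFSM: the top-level "Combat" state owns its own sub-machine,
# so entering/leaving Combat implicitly manages the combat sub-states.

class SubFSM:
    def __init__(self, transitions, initial):
        self.transitions = transitions  # (state, event) -> next state
        self.state = initial

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)

class AgentHFSM:
    def __init__(self):
        self.state = "Patrol"
        # Sub-machine is only active while the outer state is "Combat".
        self.combat = SubFSM(
            {("Approach", "in_range"): "Attack",
             ("Attack", "hurt"): "Defend"},
            initial="Approach")

    def step(self, event):
        if self.state == "Patrol" and event == "enemy_seen":
            self.state = "Combat"
        elif self.state == "Combat":
            if event == "enemy_dead":
                self.state = "Patrol"
                self.combat.state = "Approach"  # reset sub-machine on exit
            else:
                self.combat.step(event)

agent = AgentHFSM()
agent.step("enemy_seen")
agent.step("in_range")
print(agent.state, agent.combat.state)  # Combat Attack
```

    The outer machine never needs to know the combat details, which is the limitation of flat FSMs this structure removes.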

    The truth is that in AI you generally sandwich multiple ideas together at once, since it's more about procedural choreography/scenography than intelligence. Myself, I prefer using a kind of perception -> memory -> decision -> action architecture, which is implicit in most models, but I use it explicitly to help keep things under control. Generally I use a "utility tree" to evaluate perceptions and store the results in a "blackboard" memory, basically the set of variables that represent key concepts. Then I use a BT for decisions, where it selects the final action, which can itself be represented as a simple state machine. But that's experimental; I haven't fully implemented it because other tasks have gotten priority.
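    As I read that description, the pipeline could look roughly like this (entirely my own hypothetical sketch, with a plain if-chain standing in for the BT):

```python
# Hypothetical sketch of a perception -> memory -> decision -> action
# pipeline: perception writes scored facts to a blackboard, and the
# decision step reads only the blackboard, never raw sensor data.

def perceive(world, blackboard):
    # "Utility tree" stand-in: turn raw world data into scored key concepts.
    blackboard["threat"] = min(1.0, world["enemies_visible"] / 3.0)
    blackboard["exposed"] = 1.0 if world["in_open_ground"] else 0.0

def decide(blackboard):
    # BT stand-in: first matching branch wins, reading only the blackboard.
    if blackboard["threat"] > 0.5 and blackboard["exposed"] > 0.5:
        return "take_cover"
    if blackboard["threat"] > 0.5:
        return "engage"
    return "patrol"

def act(action, agent):
    # The chosen action is a simple state the agent's FSM can then run.
    agent["state"] = action

agent = {"state": "patrol"}
blackboard = {}
perceive({"enemies_visible": 3, "in_open_ground": True}, blackboard)
act(decide(blackboard), agent)
print(agent["state"])  # take_cover
```

    Keeping the decision step blind to anything outside the blackboard is what makes each stage swappable on its own.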
     
  14. EugeneVlasov

    @neoshaman
    Very interesting! I haven't heard about this before. I need to learn more about it. Thanks for the advice!