
Understanding GOAP AI

Discussion in 'AI & Navigation Previews' started by adamsc11, Mar 13, 2021.

adamsc11

    Joined:
    Feb 22, 2021
    Posts:
    3
    Hello everyone,

    I'm learning how to create a basic AI in Unity and I found the GOAP system, following this tutorial: https://learn.unity.com/project/goal-driven-behaviour?uv=2019.4&courseId=5dd851beedbc2a1bf7b72bed

To understand the concept, I started a basic project with an agent who can do the following: eat, sleep, supply a building, and work at a building.
Basically, the AI has 3 goals: eat when hungry, sleep when tired, and be useful.

However, there are some issues that don't seem solvable with this basic design.

    Separating ActionExecutors and Actions

In the presented GOAP system, each agent has a list of Action components. Each component executes an action and also contains preconditions / effects.

As the world in this small example contains multiple buildings offering the same type of action but with different parameters, I think it would be better to separate the "action executor", which determines what the agent can do, from the "action / task", which corresponds to an actual task to achieve.

In this case, a building task, for instance, would only generate the "requireSupply" precondition when it is actually required, depending on the building's current stock.
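
To make the idea concrete, here is a rough sketch of the split I have in mind (all class and member names are placeholders I invented, not from the tutorial):

Code (CSharp):
using System.Collections.Generic;
using UnityEngine;

// A placeholder building with a single resource, just for illustration.
public class Building : MonoBehaviour
{
    public int WoodStock;
    public int WoodRequired;
}

// A concrete task: "supply THIS building", carrying its own preconditions / effects.
public class AgentTask
{
    public Building Target;
    public Dictionary<string, int> Preconditions = new Dictionary<string, int>();
    public Dictionary<string, int> Effects = new Dictionary<string, int>();
}

// What the agent can do in general, independent of any particular building.
public abstract class ActionExecutor : MonoBehaviour
{
    public abstract IEnumerable<AgentTask> GenerateTasks(IEnumerable<Building> buildings);
    public abstract void Execute(AgentTask task);
}

// One executor can generate many tasks, one per matching building.
public class SupplyExecutor : ActionExecutor
{
    public override IEnumerable<AgentTask> GenerateTasks(IEnumerable<Building> buildings)
    {
        foreach (var building in buildings)
        {
            // Only emit a supply task when the building actually lacks stock.
            if (building.WoodStock < building.WoodRequired)
            {
                var task = new AgentTask { Target = building };
                task.Preconditions["hasWood"] = 1;
                task.Effects["buildingSupplied"] = 1;
                yield return task;
            }
        }
    }

    public override void Execute(AgentTask task)
    {
        // Move to task.Target and deliver the wood here.
    }
}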

My question here is: am I correct, and is this a common design for GOAP, or should I go for another AI concept that I don't know about?

    Parameterized predicates

Another issue, linked to the previous one: a Dict<string, int> worldstate / belief system seems insufficient, because some predicates are parameterized by a gameobject (for instance: RequireSupply(wood, 250), InventoryContain(wood, 250), IsAvailable(building)).

My plan is to use a more complex predicate system (with inheritance from a base class) to describe those preconditions and effects.
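
Roughly, the hierarchy I'm imagining looks like this (again just a sketch with invented names; WorldState here is a stand-in for whatever the belief store becomes):

Code (CSharp):
using System.Collections.Generic;
using UnityEngine;

// Stand-in belief store; richer than a flat Dictionary<string, int>.
public class WorldState
{
    private readonly Dictionary<string, int> facts = new Dictionary<string, int>();
    private readonly HashSet<GameObject> available = new HashSet<GameObject>();

    public bool GetFlag(string key) => facts.TryGetValue(key, out var v) && v > 0;
    public int GetCount(string resource) => facts.TryGetValue(resource, out var v) ? v : 0;
    public bool IsAvailable(GameObject target) => available.Contains(target);
}

public abstract class Predicate
{
    // Does the given world state satisfy this precondition / effect?
    public abstract bool IsSatisfied(WorldState state);
}

// Equivalent to a plain <string, int> entry, e.g. "hungry".
public class FlagPredicate : Predicate
{
    public string Key;
    public override bool IsSatisfied(WorldState state) => state.GetFlag(Key);
}

// Parameterized by a resource and an amount, e.g. InventoryContain(wood, 250).
public class InventoryContainPredicate : Predicate
{
    public string Resource;
    public int Amount;
    public override bool IsSatisfied(WorldState state) => state.GetCount(Resource) >= Amount;
}

// Parameterized by a gameobject, e.g. IsAvailable(building).
public class IsAvailablePredicate : Predicate
{
    public GameObject Target;
    public override bool IsSatisfied(WorldState state) => state.IsAvailable(Target);
}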

However, I'm wondering if I'm going the wrong way. For instance, I first thought that I needed a "HungerLower(0.5)" predicate for my agent to know it was hungry. But in fact, I can compute the agent's hunger in another component and simply set the "hungry" belief when needed.
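
What I mean by computing it elsewhere is something like this (a sketch; the Beliefs dictionary is a placeholder for however the GOAP agent stores its world state):

Code (CSharp):
using System.Collections.Generic;
using UnityEngine;

public class HungerSensor : MonoBehaviour
{
    [Range(0f, 1f)] public float Hunger;
    public float HungerThreshold = 0.5f;
    public float HungerPerSecond = 0.01f;

    // Placeholder for the agent's belief store.
    public Dictionary<string, int> Beliefs = new Dictionary<string, int>();

    void Update()
    {
        Hunger = Mathf.Clamp01(Hunger + HungerPerSecond * Time.deltaTime);

        // The planner never sees the raw float, only the flattened boolean fact,
        // so a plain <string, int> world state is enough for this case.
        Beliefs["hungry"] = Hunger >= HungerThreshold ? 1 : 0;
    }
}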

My question here is thus: is it a good idea to use more complex predicates, or is there a way to model parameterized predicates (on buildings, objects, ...) simply with a <string, int> worldstate system?

And again, should I go for a design other than GOAP for this kind of AI?

I hope I've been clear enough, and thank you for your help =).