
Learning Opponents (or, "Intelligent" Agents?)

Discussion in 'Game Design' started by AndrewGrayGames, Nov 7, 2014.

Thread Status:
Not open for further replies.
  1. AndrewGrayGames

    AndrewGrayGames

    Joined:
    Nov 19, 2009
    Posts:
    3,821
    Good AI is necessary to create entities that give players a decent, but beatable challenge, and also to immerse them in our works.

    However, one thing I'm interested in is the creation of more intelligent agents. By necessity these can't be every enemy in a game - that'd be ridiculous. But, some could be.

    An idea I recently had was a sort of 'Genetic AI Script', where intelligent entities start with a pseudo-script that tells them what things to do. Each enemy, however, applies some mutation to that list of actions. If an enemy succeeds in doing something (say, defeating a player character, or achieving a key objective), the mutation gets applied to the prototype level, such that other entities of that type use those tactics by default, thus leading to a learning opponent.

    This has a problem, though - this presupposes that we want our enemies to win. We don't, we want them to lose in a fun and challenging way.

    What are some design considerations for growing a hypothetical system like this, such that it leads to procedurally setting up increasingly fun intelligent agents for our players to interact with?

    Note: no code! I'm concerned more with the mathematical constraints that would need to be considered for such a system.
     
  2. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    This is a very interesting topic that I've thought about a lot too.

    What you've described is a genetic algorithm (GA). GAs can be very effective at exploring a solution space and finding better solutions. But they tend to be slow. As in, it usually takes hundreds to thousands of generations before you see much improvement. So they can be a great approach to offline learning, i.e., during development of the game; but for online learning — while the player is playing the game — it's likely to be too slow for the player to even notice.
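
    Just to make the shape of it concrete, here's a toy sketch of that GA loop in Python (the action list and the fitness function are placeholders you'd replace with real data from your game, e.g. results of simulated fights):

    Code (Python):
        import random

        ACTIONS = ["attack", "defend", "flank", "retreat", "heal"]

        def random_script(length=6):
            return [random.choice(ACTIONS) for _ in range(length)]

        def mutate(script, rate=0.2):
            # Copy the script, randomly swapping out a few actions.
            return [random.choice(ACTIONS) if random.random() < rate else a for a in script]

        def fitness(script):
            # Stand-in for "how well did an enemy running this script do against the player?"
            # In a real game this would come from simulated or recorded fights.
            return script.count("attack") + 0.5 * script.count("flank")

        def evolve(generations=200, pop_size=30):
            population = [random_script() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                survivors = population[: pop_size // 2]               # keep the best half
                children = [mutate(random.choice(survivors)) for _ in survivors]
                population = survivors + children
            return max(population, key=fitness)

        print(evolve())

    Even with a toy fitness function like that, you can see why it takes hundreds of generations to get anywhere, which is exactly why I'd keep it offline.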

    There are other learning algorithms that can learn more quickly, though no one single algorithm that works well in all situations. Two of my favorites are decision trees and neural networks (using a fairly recent refinement of the classic learning algorithm that makes these learn much faster). And of course it depends on what your problem space looks like. Learning to fight well in a fighting game is much, much easier than (say) learning to effectively command an army in an RTS game.

    So, the problem you point out assumes that the AIs are actually smart — generally they're not. Humans are the best learning machines around, and we're pretty dang hard to beat. The cases where computers have beaten humans are in environments (e.g. chess) that we're naturally bad at, or in cases (also like chess, nowadays) where you can apply brute force to the problem. But brute force doesn't require learning. So, the idea that a learning AI is going to get so hard the human can't beat it, unless it's given special knowledge or inhuman reflexes or something, well... it could happen, but I wouldn't worry about it until you're actually running into that problem.

    But for the sake of argument, let's suppose you do. Maybe it's a fighting game, and despite putting in a reflex delay comparable to human reflexes, your learning AI quickly learns optimal moves that beat the player into the ground every time.

    On the one hand, we could say "Hooray!" because fighting games are dominated these days by hard-core gamers who love a challenge. Especially if the opponent had adapted specifically to their fighting style, forcing them to adapt in return, I can imagine hardcore gamers would be all over this.

    But maybe you're making something like Super Smash Bros, where you want to attract gamers who don't enjoy getting pounded into the ground over and over. In this case, I think you could probably adjust your reward function (for the learning algorithm) to make it more entertaining. Punish the algorithm if it loses quickly, AND if it wins quickly. Reward it when both the player and the AI's hitpoints are low. Reward it even more if it loses — in a middling amount of time — in some spectacular way. And make sure it doesn't have access to a clock, so it can't just wait in defensive mode for 45 seconds and then drop its guard entirely; it will instead need to adjust its difficulty so that, on average, it loses after 45 seconds or so of consistent fighting.
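
    In sketch form, that reward function might be as blunt as this (every number is a placeholder to tune, and "spectacular_finish" is whatever your game decides counts as going out in style):

    Code (Python):
        def reward(ai_hp, player_hp, fight_length, ai_lost, spectacular_finish):
            # hp values are fractions of max (0.0 to 1.0); fight_length is in seconds.
            r = 0.0
            if fight_length < 20:
                r -= 10.0                      # punish quick endings, win or lose
            if ai_hp < 0.2 and player_hp < 0.2:
                r += 5.0                       # reward close fights where both sides are hurting
            if ai_lost and 30 <= fight_length <= 60:
                r += 5.0                       # reward losing in a middling amount of time...
                if spectacular_finish:
                    r += 5.0                   # ...and losing in style most of all
            return r

        print(reward(ai_hp=0.1, player_hp=0.15, fight_length=45, ai_lost=True, spectacular_finish=True))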

    Dang, this sounds interesting... if I were in school I'd propose this as a semester-long independent study project!
     
    Ryiah and AndrewGrayGames like this.
  3. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    The problem I see is how you measure 'fun' in such a way that it works as a metric for ideal behaviors. Procedurally generated, evolutionary behavior isn't an unheard-of concept, but evolution can only happen when you can clearly state that this version worked better and that one didn't.
     
  4. slay_mithos

    slay_mithos

    Joined:
    Nov 5, 2014
    Posts:
    130
    There are two ways to go about intelligent AI.

    The first one, relatively easy to implement, is an AI that flat-out cheats, by reading data about its opponent that the player can't access.
    It can take many forms, like looking at the map without fog of war in an RTS, or reading the player's inputs as they are entered and reacting to them directly.

    The second one is to have the AI learn through both successes and failures, and adapt from that.
    If your player has access to, say, 10 different patterns, each quite different from the others, and the enemy knows how to react properly to each of them (and would, if it were cheating), a simple way to go about it is to weight actions.
    Against a player who usually goes defensive and only counters, the AI should favour actions that don't leave it too open to counters: try to surround the player, use attacks from out of range, or ignore the player (depending on the game's type, obviously).

    The ways and means to do this are pretty different, depending on if you want it for each character, or if you want it to be more global (be it "all players" for a single player game, or the whole server/world for multiplayer ones).

    All in all, it usually ends up mostly being a weight attached to each choice for the AI, and if the scales are balanced right, it should be harder and harder for the player to continue to "win" using the same strategy, forcing him to switch and adapt too.
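
    As a very rough sketch (Python just for readability, numbers invented), the weighting really can stay this simple:

    Code (Python):
        import random

        # One weight per option, nudged up when it worked and down when it was countered.
        weights = {"rush": 1.0, "ranged_poke": 1.0, "flank": 1.0, "wait_out": 1.0}

        def pick_action():
            total = sum(weights.values())
            roll = random.uniform(0, total)
            for action, w in weights.items():
                roll -= w
                if roll <= 0:
                    return action
            return action  # fallback for float rounding

        def report_result(action, succeeded):
            # Clamp so no option ever vanishes completely or dominates forever.
            factor = 1.2 if succeeded else 0.8
            weights[action] = min(5.0, max(0.2, weights[action] * factor))

        # Example: the player keeps countering frontal rushes, so "rush" fades out.
        for _ in range(10):
            report_result("rush", succeeded=False)
        print(weights)
        print(pick_action())

    The clamping is what keeps the scales from tipping all the way over, so the player always has something left to exploit.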


    Now, the main caveat is that your whole game has to be designed around that idea.
    In an RPG, it would mean that no enemy should have native immunities; they should only acquire them as a result of the route they choose against the player.
    Otherwise it leads to a potential wall, where the only way for the player to advance is to keep using useless or sub-par attacks that get through those immunities, because the enemy counters everything that should in theory be working.


    This whole question is pretty interesting, but it is also really hard to set up in most cases, because the player is usually not limited to specific actions, and deeper strategies are harder to identify and compute against.
    Also, if your system is too rigid, you would see players mixing in just a bit of another strategy in order to rig what the AI chooses in their favour.


    All in all, it really ties into a lot of other questions, like balancing, but if you go with a learning AI, your aim should be to beat down the player, if he doesn't adapt by using all the tools at his disposal.
     
  5. AndrewGrayGames

    AndrewGrayGames

    Joined:
    Nov 19, 2009
    Posts:
    3,821
    I used 'fun' when I meant 'challenging but beatable'...because, in part, such challenge is inherently fun (for many people.)

    I used 'a player is defeated' or 'a key objective is achieved' as sort of the fitness function in this case, but I agree there could be others. In a JRPG, did you reduce a player's HP to some percentage of max? If so, this is a working genotype; apply it to all intelligent enemies who follow this template. Or, did the player take longer than some semi-arbitrary amount of time to defeat you? If so, apply the genotype to the template.

    I think in games, particularly RPGs, everything sort of circles back to, 'how long does it take the player to win?' A fight is challenging if there isn't much margin between how long it takes you to defeat an enemy and how long it takes an enemy to defeat you. Obviously, the player's number should be bigger so that they win...but, it's more engaging if it's close. But, I'm not sure the time-to-defeat (TTD) metric applies to all genres equally, either.

    ...of course, striving for a universal solution is something of an exercise in folly in and of itself. Maybe you just helped me figure out a key part of a way to set up learning opponents: TTD for the enemy should be less than the player's, within a given 'close enough' threshold, and tighter than the previous 'close enough' threshold.
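
    In rough pseudo-Python (names and thresholds made up, with TTD measured or estimated in seconds per encounter), the check I'm picturing is something like:

    Code (Python):
        def is_close_fight(enemy_ttd, player_ttd, close_enough=0.8):
            # enemy_ttd:  seconds the player needs to bring this enemy down.
            # player_ttd: seconds this enemy would need to bring the player down.
            if enemy_ttd >= player_ttd:
                return False                       # the enemy would win first; too hard
            margin = enemy_ttd / player_ttd        # 1.0 would be a dead heat
            return margin >= close_enough          # a player win, but only just

        print(is_close_fight(enemy_ttd=40, player_ttd=45))   # close game -> True
        print(is_close_fight(enemy_ttd=10, player_ttd=60))   # steamroll  -> False

    Each new generation of the enemy template would then be required to push close_enough a notch higher than the last one managed.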
     
    Last edited: Nov 7, 2014
  6. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Well, artificial intelligence can't learn. Its IQ is 0. It can artificially simulate learning, but that will actually result in it being easier to beat. Let me explain. I go left every time, the computer "learns" that I always go left and takes measures. So I go right and win the final battle like it's a joke with 1 unit.

    Furthermore, the hardest enemies are not the smartest, but the most aggressive and capable. If I make a first-person shooter game and my AI always gets headshots, period, because I set its aim vector to your face before each update... that's hard. So I add random values to the vector, and that's how I balance the difficulty.
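
    In sketch form (Python standing in for whatever your engine actually uses), the whole difficulty dial is one number:

    Code (Python):
        import random

        def aim_at(target_x, target_y, spread):
            # Perfect aim, then deliberately smeared; bigger spread = easier enemy.
            return (target_x + random.uniform(-spread, spread),
                    target_y + random.uniform(-spread, spread))

        print(aim_at(10.0, 2.0, spread=0.0))   # aimbot: every shot lands on the head
        print(aim_at(10.0, 2.0, spread=1.5))   # beatable: shots scatter around the player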

    Think about it. Take a human being of slightly below-average intelligence; he doesn't stand a chance in chess against a genius-level IQ.

    That's 85 IQ.
    30 IQ would have difficulty doing everyday things.
    1 IQ may not be considered an intelligent being by some.

    Therefore, 0 IQ is a world conquering, unstoppable death machine?
     
  7. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    The way to approach AI is to think small. How does the AI react to something extremely simple, say a ledge? Does it walk off? Does it try to jump? The most you can hope for is to prepare your artificial intelligence for a very large number of different possibilities. The best way to make your artificial intelligence seem smarter is to limit the number of possible things that the player can do, as in chess or Pokemon, for example.

    What I'm trying to say is that artificial intelligence can only react in a very static way; it cannot plan, only react.

    When you try to simulate learning with AI, what you are actually going to do is have the AI assume what the player is going to do and ignore the possibility that the player might not do that, so you are actually limiting the number of things that the AI is prepared to handle. And as you're developing your AI, remember that it is going to have to go up against a quantum learning machine without limits or boundaries.

    This is where the extreme high level helps us to understand the high level. Without a higher understanding, your view is partial at best.

    If you want to quote to argue with me it would be this:

    By trying to make it smarter you're going to make it dumber.
     
    Last edited: Nov 7, 2014
  8. slay_mithos

    slay_mithos

    Joined:
    Nov 5, 2014
    Posts:
    130
    Why do I feel that Misterselmo isn't in favour of enemies adapting to the player's habits?

    I understand the arguments, but we are now at a point where we have a significant amount of computing power, and the AI is a part of many games that can definitely benefit from getting improved.

    Nowadays, you see a lot more FPS games where the AI will try to corner the player, surround him, and show a bit more self-preservation...
    Granted, that's just how they were coded, and they will mostly act the same, even if it leads them into a trap that was set up with knowledge of their behaviour.
    Nevertheless, it makes the average player think a bit more about how to approach certain fights, when they can't be certain that they will have cover to regen their health, for example.

    It often results in gameplay that feels more alive.


    An "intelligent" AI won't be intelligent, not before we can have enough computing speed to rival a living brain, dedicated to analyse and "learn" via trial and error.

    What you need to do when you build such an AI is to try to identify how people usually play when given those tools, what strategies they are likely to employ, and prepare your AI to react properly to them.
    It's a real challenge in itself to plan ahead for how players might use your various tools, but I really think that those who manage to constantly challenge the player, not necessarily by screwing with the odds but by making their enemies adapt, will have a pretty powerful tool for their game.

    The whole game could be based on pretty different things than the usual tropes too, having the common mobs actually be a threat if you don't vary your patterns, being closer to the player in terms of choices.

    Hell, you might even build a whole world where enemies literally adapt to the player's actions, trying to mitigate the growing threat he/she represents.
     
  9. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Actually, I believe the entire game should adapt to the player's habits, not just a few enemies. Not to outsmart the player, though. That's impossible.
     
  10. AndrewGrayGames

    AndrewGrayGames

    Joined:
    Nov 19, 2009
    Posts:
    3,821
    I think that in a game where you face wholly human opposition, that'd be an incredibly useful thing. Let's say the PC's party hails from Country A. They're fighting the armies of Country B. It would make sense that, over the course of the game, as Country B's army keeps getting beaten by this ragtag bunch of prepubescents, they would slowly change their strategies to mitigate the threat said prepubescents pose: both in the strategy Country B uses in the overarching plot (the game's plot starts off one way, the realization that a small squad is wrecking their war effort gets baked in, and the game progresses from there), and in the tactics Country B troopers use in their encounter formations (holy crap, it's those commandos! Quick, spam Blizzaga, that helped the last guys survive a little longer!)

    As I said above, too, this can't always hold true. If you're grinding unintelligent slimes in the Noob Cave, it makes no sense that Slimes would learn from and adapt to the player - they're an unintelligent species. It makes no sense for this setup to apply to them. Also, the variation helps to set unintelligent enemies apart from intelligent ones, and create a more dynamic level of challenge to stimulate the player with.
     
  11. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,697
    I'm quoting this not to argue but to suggest another angle. There's an entire, historied field of robotics and AI research dedicated to planning. In the professional games industry, FEAR famously popularized 1970s-era goal-oriented action planning (GOAP) a few years ago. Killzone, and several strategy games, did the same with hierarchical task network (HTN) planning more recently. Nowadays the big thing is Monte Carlo tree search planning. (Here are some papers on Monte Carlos in RTSes and general game environments if you need a quick overview.)

    Planning AIs presuppose certain goals, but those goals can be as general as "survive" or "eat food." As long as they're provided ways to obtain game world information relevant to their goals, and sufficient actions to be able to achieve those goals, they're good at handling situations that the designer didn't anticipate. In fact, I'd claim that they do better in sandbox shooters and RPGs than in linearly-scripted games.

    Does an AI character really need to learn or evolve? Is it sufficient that they can develop plans based on the current state of the game world as affected by the player? To bring it around to Asvarduil's original topic, what if characters just gradually gain additional ways to obtain information about the game world and additional actions that they can incorporate into plans to achieve their goals?
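
    To give a feel for what a planner is doing (this isn't GOAP or HTN proper, just the bare idea of actions with preconditions and effects searched toward a goal, with invented action names):

    Code (Python):
        from collections import deque

        # Actions as (preconditions, effects); plan() searches until the goal facts hold.
        ACTIONS = {
            "scout":      (set(),                   {"knows_food_location"}),
            "go_to_food": ({"knows_food_location"}, {"at_food"}),
            "eat":        ({"at_food"},             {"fed"}),
        }

        def plan(start, goal):
            frontier = deque([(frozenset(start), [])])
            seen = {frozenset(start)}
            while frontier:
                state, steps = frontier.popleft()
                if goal <= state:
                    return steps
                for name, (pre, effect) in ACTIONS.items():
                    if pre <= state:
                        new_state = frozenset(state | effect)
                        if new_state not in seen:
                            seen.add(new_state)
                            frontier.append((new_state, steps + [name]))
            return None  # no plan possible with the actions available

        print(plan(start=set(), goal={"fed"}))   # -> ['scout', 'go_to_food', 'eat']

    Give the character more actions or more ways to sense the world, and the same search simply starts finding plans the designer never wrote by hand.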
     
    Last edited: Nov 7, 2014
  12. slay_mithos

    slay_mithos

    Joined:
    Nov 5, 2014
    Posts:
    130
    I think if your game wants to include an AI that "learns", even basic enemies should have a very basic level of this.

    I'm not saying that the slimes should seek out an Ice wizard to make the floor slippery and thus prevent the heavy attack that the player abuses and that requires solid grounding, but after getting hunted a lot, they might decide that running away and hiding is a more viable strategy than fighting head-on.

    As for the Country B example, yeah, that is an example of what I meant.
    You see it a lot in storylines: preparing a costly ritual or weapon to seal off the very special power that the player (or a party member) holds and that kept tipping the scales.

    But it can also be a bit more subtle than that.
    For example, let's take the Elder Scrolls series, because it offers many ways to play it, and a lot of opponents are intelligent.
    If the player mostly plays as an archer, you could imagine that the various factions that are openly hostile to him might start to include shields as standard equipment, and the enemies might stop running directly at the player, instead putting their shields up and maybe taking a slightly non-linear path to make aiming harder.

    The player could still try to stay mostly an archer, but he might also start to use fire arrows/spells, use other characters (summons or companions) to disrupt the guard, or switch to longer-range attacks so that he can kill unnoticed...

    When the player becomes the worst threat, they might start not walking around in plain sight, put in place ways to detect the player's approach, or even train some elite troops that are specialized at fighting against him/her.


    I definitely agree that an AI shouldn't aim to outsmart a player, because the most it will usually do is cause surprise through the unnatural way it acts.
    But when any enemy that is a bit organized could become a potential threat if you just play mechanically, then it starts to become a bit more interesting.

    If there are no major flaws to exploit, it could make grinding quite a bit more interesting too, because after slaughtering a hundred grunts of the Country B in the fields using mostly a single attack type over and over (let's face it, that's what grinding often becomes), those grunts should start to try and either counter or evade that attack, meaning that you would have to start changing your strategy once in a while.
    The system might even take into account multiple levels of counters, making it harder and harder to use this specific strategy or combo.

    I know, it's very theoretical, and the potential for flaws is very present. It also should change how you approach designing enemies and factions that the player might (or will) face, so it definitely is not an easy goal to attain, and maybe the investment in time and resources to account for such a deep change isn't something that most studios can afford.
     
  13. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    I don't have to argue the infinite details of the subject, any more than I have to pay attention to all the people out there who have claimed to have created a perpetual motion machine.

    It is impossible.
    Machines cannot think.

    You seem like a very intelligent person, so being surprised when a computer program does something unexpected is not a testament to the computer program but to the computer programmer. Likewise, it's somewhat embarrassing to say that a computer did something you didn't expect and that you were impressed by its ability to learn and think or "plan".

    It's like a toaster burning the Mona Lisa onto a piece of bread and being called an artist.
     
  14. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    I think making an enemy appear intelligent and able to easily kill off the player is not that much of a challenge.

    It is just a simple matter of modeling their behavior to react to various pieces of information.

    Like in my little platform game, for example, the snakes have some intelligence. It may not appear to be much but it is there and would be noticeable for sure if they were completely mindless drones.

    When the player is in front of them within a certain distance, they charge at the player. When the player is further away but not too far, they spit venom at the player. When they reach the edge of a ledge they stop, pause a bit and turn around patrolling back the other way. If you throw a rock at a snake facing you and the rock slides into them the snake will charge in the direction the rock came from.

    I think @Misterselmo hit the nail on the head. What people generally see as intelligence in games is basically just aggression. If I wanted to make a special enemy, let's say another snake. Maybe the.... RED SNAKE. OMG the Red Snake just came out!

    If I wanted to get that kind of reaction from the player I would give the red snake the same basic behaviors as the normal snakes. However, instead of reaching the edge of a ledge, stopping and turning around... I would make this red snake look around and if the player is in front of them within a certain range (say standing on another ledge), the snake would not turn around but instead it would spit venom at the player and then immediately jump to the other ledge. The player would end up with a snake literally chasing them around the playfield occasionally spitting some venom at them just to add a bit more pressure. Since jumping does not fit the expected behavior of snakes I did not implement that in the game.
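
    Stripped to its bones, all of that snake logic (the plain one and the red one) is just a handful of ordered checks; something like this sketch (distances and names made up, Python only for readability):

    Code (Python):
        from dataclasses import dataclass

        @dataclass
        class Snake:
            x: float
            direction: int          # +1 facing right, -1 facing left
            is_red: bool = False
            at_ledge: bool = False

        @dataclass
        class Player:
            x: float

        def snake_update(snake, player, rock_hit_from=None):
            # Ordered checks; the first rule that applies wins this frame.
            dist = abs(player.x - snake.x)
            facing_player = (player.x - snake.x) * snake.direction > 0

            if rock_hit_from is not None:
                snake.direction = rock_hit_from      # charge back along the rock's path
                return "charge"
            if facing_player and dist < 3:
                return "charge"
            if facing_player and dist < 8:
                return "spit_venom"
            if snake.at_ledge:
                if snake.is_red and facing_player:
                    return "spit_then_jump"          # the red snake keeps up the pressure
                snake.direction *= -1                # a plain snake pauses and patrols back
                return "pause"
            return "patrol"

        print(snake_update(Snake(x=0, direction=1), Player(x=5)))   # -> spit_venom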

    Other enemies, such as the bat and the spider will be more of a threat. Some increased aggression combined with the ability to crawl up walls, jump or fly can make them much more challenging.

    That is on an individual level meaning each enemy acting completely on its own.

    Another way to liven up the game and show some additional intelligence is by allowing the enemies to communicate.
    If when the snakes saw the player they alerted the other snakes in the area (maybe doing a Sssss! Sssss! Sssss! sound a few times) it would seem like they were much smarter. And the game would be more challenging.
    Each alerted snake would go into an Alerted state and for a certain amount of time would increase its patrol speed, decrease the time it spends in pause mode at the edge of a ledge, would drop more egg bombs, etc.

    Self-preservation is another thing rarely seen in games. Most enemies are basically cannon fodder. If the snakes have a certain amount of life, say 100 units. Instead of them doing the same thing all the time til death it would make sense if their life is down to 20 units for them to change their behavior to represent their goal changing from stopping the player to preserving their own life. Maybe instead of confronting the player, they now try to stay away, find a place to hide so they can heal their wounds.
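
    In the same sketchy style, the alert call and the self-preservation check are just a couple more rules layered on top (ranges, timers and thresholds invented):

    Code (Python):
        from dataclasses import dataclass

        ALERT_RANGE = 12      # how far the Sssss! carries
        ALERT_TIME = 10.0     # seconds an alerted snake stays keyed up
        FLEE_BELOW = 20       # hit points at which self-preservation takes over

        @dataclass
        class Snake:
            x: float
            hp: int = 100
            alert_timer: float = 0.0

        def on_player_spotted(spotter, all_snakes):
            # The spotter hisses; it and every snake in earshot go into the Alerted state.
            for snake in all_snakes:
                if snake is spotter or abs(snake.x - spotter.x) < ALERT_RANGE:
                    snake.alert_timer = ALERT_TIME   # faster patrol, shorter pauses, more egg bombs

        def choose_goal(snake):
            # Self-preservation overrides everything once health drops low enough.
            if snake.hp <= FLEE_BELOW:
                return "flee_and_hide"
            if snake.alert_timer > 0:
                return "hunt_player"
            return "patrol"

        snakes = [Snake(x=0), Snake(x=5, hp=15), Snake(x=40)]
        on_player_spotted(snakes[0], snakes)
        print([choose_goal(s) for s in snakes])   # -> ['hunt_player', 'flee_and_hide', 'patrol']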

    I think most games are so full of mindless opponents not much is needed to implement some intelligent behaviors. I definitely think you don't need to implement complex AI that actually tries to learn. Instead just implement some basic patterns of intelligence and you will be far ahead of most games. Throw in some communication to other enemies and you kick it up to another level. Basically, what I do is try to put myself into the game as that enemy. Given its abilities, what would I do? The more complex the enemy, the more options you have available.
     
  15. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    You still haven't solved the issue of quantifying a qualitative property.

    Defining challenging for most RPGs also means playing conservatively (read: boring). A challenging fight usually causes the player's strategy to boil down to one character heals, one attacks, and maybe one is on general support each turn. That's it. That is what the player will do from the beginning to the end of the fight. Statistically, this was a challenging fight that was beatable. Was it interesting or engaging? Hell no.

    This is the classic issue with procedural generation. Often the best you can do is define what the process can't do and clamp it from there. Take any procedurally generated platformer and chances are you are going to be acutely aware of what the maximum jump distance is within the first ten minutes.
     
  16. AndrewGrayGames

    AndrewGrayGames

    Joined:
    Nov 19, 2009
    Posts:
    3,821
    I disagree. In a JRPG context, a fight is considered challenging when the amount of time to defeat a player is close to, but still greater than, the amount of time to defeat an enemy (e.g. a 'close game'.) The exact numbers for what constitutes a 'close game' must be discovered by iteratively tuning the game.

    This is actually the reason I want to create learning opponents in the first place - to prevent a stratification into the MMORPG 'trinity' that is making its way into offline games (for instance, the more recent Final Fantasy titles have moves for managing aggro, or even combat roles dedicated to actually tanking enemies.) As valuable a tool as that is to a player, you're right - that's not challenging gameplay where the end result always seems to be in question (even if it mathematically is not...but, the players don't need to know that.)

    This I also disagree with - not only do your remarks not support the given conclusion of some classic flaw in procedural generation, but the supporting observation that procedurally generated platformers make you aware of a key value early on isn't describing a bad thing. I have one other serious problem with the supporting argument, though: if the player takes 10 minutes to figure out they can jump two and a half blocks, you've done something seriously wrong; they need that information as early as possible. Level 1-1 in Super Mario Bros. uses the maximum jump distance before you're a third of the way through the level. And, it's the first level!

    Reading into your comment, though, what I think you intended to say, was that procedurally-generated levels apply the wrong features to levels, such that you get an early level that asks you to perform feats that are extremely difficult if not impossible, or late levels that fail to provide sufficiently challenging obstacles. That has been my experience with them, especially in games like Terraria. Thus, the true problem of procedural generation is that it produces things at incorrect/jarring/inappropriate times. Have I picked up on what you really meant?
     
  17. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Misterselmo, respectfully, this is rubbish.

    AIs certainly can learn. On some problems, they can learn better than humans can. Your example of doing something repeatedly to get the AI used to it, then fooling it by switching to a different pattern at the last minute — that works on humans too.

    I don't want to overstate it — in general, computers are indeed pretty stupid — but to say they can't learn at all is just plain wrong. Learning should be a valuable tool in any AI programmer's toolbox.
     
  18. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    And oh yeah, this is nonsense too. :) Planning is an even more valuable (and certainly more common) tool in the AI toolbox than learning.

    See here for an overview.

    If that happens, it means you simply haven't done a good job.
     
  19. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Of course this sort of self-preservation, whether it's based on learning or simple heuristics, raises an important design issue: does it make the game more fun, or is it just annoying?

    I think it depends on the type of game and the overall objectives. If the core reward loop requires killing lots of baddies, then having them flee from you could get really annoying. But if it's something like a sneaker where killing stuff is not the primary goal, then having enemies flee doesn't get in the way — and maybe it could even contribute in some way. Then it's not annoying, it's interesting.

    Well yeah, it's true that (1) most games are full of mindless opponents and (2) mindless opponents don't need much AI. But I'm not sure that demonstrates much; it could just be (probably is) that good AI is hard, so most games are designed without it.

    What interests me is: suppose learning AI weren't so hard — say, because I get around to publishing that learning AI toolkit I keep thinking about — and you designed your game with that in mind. What could you do that would make it not like most games?
     
  20. slay_mithos

    slay_mithos

    Joined:
    Nov 5, 2014
    Posts:
    130
    It depends; one of my favorite JRPGs (Tales of Graces) implemented multiple dodge moves, as well as a system in which you were rewarded for dodging with the right move at the right time.

    In the normal difficulties, it didn't make that much of a difference; dodging early or just blocking was definitely not hard, and it still left the enemy open to counters.
    In the hardest difficulty, the enemies had a much higher aggression rating and stats, making even the common enemies an actual threat if you didn't use all your attacks and dodges correctly.
    If you went conservative, you would usually end up having some of your allies killed, leaving you the choice between using valuable items to keep them alive or getting into drawn-out fights to try to earn a few levels before proceeding (which often didn't make as much difference as you'd hope).
    It made the game more "challenging" because it forced the player to actually use strategy and to play to the best of their character's abilities, as well as "harder" because of the enemies' increased threat.

    Now, I agree that many people will say "challenging" whether it actually challenges them as a player to think of ways to use everything efficiently or whether it's just enemies that have more HP and hit harder, which makes the whole discussion about "what is challenge?" pretty hard to cover.


    EDIT:
    For starters, you could make the whole game not so much about gaining enough raw power to beat mob A and boss B, and more about using a wider array of skills, not all of them directly offensive.
    You could put much more emphasis on player choice (in fights as well as in other places), and tailor the experience for that.

    It could also technically apply to things other than combat, giving the player more ability to make allies, or to prevent potential future conflicts, but this is more of a plot consideration than an AI one.

    Short answer would be that you could create interesting enemies, different patterns, and give much more freedom to the player, without it having to lead to an open world game or a D&D-based game.
     
    Last edited: Nov 7, 2014
  21. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    The problem is defining challenge as always being on death's door, which is about as quantitative a goal to solve as you can give a bot. The optimal strategy for both player and bot is to do 90% damage across the party, heal everything back up, whittle away a bit at its health, and then end the turn: lather, rinse, repeat. Statistically, the party was always about to die, but never in danger, which is the behavior you're selecting for when choosing its behaviors.

    How can you easily develop an AI that is actually trying to break the player out of optimal strategies, to make for enjoyable encounters? How would you go about weighting behaviors that could surprise the player, and yet not so much that it's actually a surprise? What conditions would have to have been met to generate the one dino boss in Chrono Trigger where lightning makes it weak, but also charges an attack?

    As far as I remember, you just need to perform the longest jump, not be pinpoint precise with it, which is what just about every procedurally generated platformer requires. The point I was trying to make, though, is that procedural generation is ungodly difficult to make interesting. Making it challenging, on the other hand, is ungodly simple, since it starts at eleven and the knob is broken.
     
    AndrewGrayGames likes this.
  22. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Very true. For my platform game the snakes will likely be the simplest enemies all around. The spider and bat will be a little more challenging, a little smarter and more capable due to having more skills available. The few times I have played a game where the enemies ran off, I personally think it added to the challenge in a fun way. Yes, it is frustrating at the same time, but for me the frustration was the fun kind. It made it more meaningful when I finally tracked them down. D3 has enemies that do that.


    I don't think it would change my platform game. Possibly it could. It is difficult to say really. I already have certain goals in mind for the enemy behaviors and one of the key things I am doing is to not make them just mindless cannon fodder. I don't hate enemies that simply patrol an area non-stop never reacting to the player or enemies that simply move toward the player like a log heading into the saw blade. I think those sort of mindless behaviors can still be fun. It's just that I know they can easily do more than that. I think there is a lot of room between the normal kind of mindless behaviors we see in so many games and implementing a learning machine intelligence for them. Not saying the learning machine approach is flawed just that I think it definitely depends more on the game and the specific design goals.

    My main goal with the enemies is just to make them a little more interesting by giving them some basic intelligent behaviors. My overall goal is still to create a rather simple platform game. If I was doing an RTS I could see much more value in exploring a learning machine. Even in that case, I think a lot could be done by modeling human behaviors.
     
  23. AndrewGrayGames

    AndrewGrayGames

    Joined:
    Nov 19, 2009
    Posts:
    3,821
    That's the thing - to maintain optimal flow, we need to alternate between periods of high tension (on death's door, only small units of time between the AI winning and the player winning), and periods of low tension (larger margins between who can possibly win, in favor of the player character.) It would be a supremely bad idea to have constantly high challenge, because we risk psychologically burning out our player. Much like in music, sometimes a rest is worth 1,000 notes.

    I think that's part of the whole 'genetic' thing - semi-random mutations on an enemy, but not so much that you never know what you're doing (dinosaurs in Chrono Trigger have a non-negotiable weakness to thunder, otherwise the cavewoman's advice is completely useless, which devalues her as an NPC, and compromises the player's willing suspension of disbelief since she now outright lied about a key mechanic.) Using the CT example: does it make sense for a Tyranno Warrior to be able to cast Firaga? Absolutely not, they're not a magic-themed enemy at all. Does it make sense for two Tyranno Warriors to use X-Slash to deal spike damage to a player? Yes! X-Slash would be a reasonable 'sideboard' move that can be included in some mutations on the basic 'Tyranno Warrior' AI script.
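
    To put the 'sideboard' idea in concrete terms (a sketch with made-up move pools, not actual Chrono Trigger data), the mutation operator only ever draws from a whitelist attached to each enemy template:

    Code (Python):
        import random

        # Per-template move pools: core moves never change; only sideboard moves may be mutated in.
        TEMPLATES = {
            "Tyranno Warrior": {
                "core": ["Bite", "Tail Swipe"],
                "sideboard": ["X-Slash", "Roar", "Guard Break"],   # no Firaga in here, ever
            },
        }

        def mutate_script(template_name, script, add_chance=0.3):
            template = TEMPLATES[template_name]
            new_script = list(script)
            if random.random() < add_chance and template["sideboard"]:
                new_script.append(random.choice(template["sideboard"]))
            # Anything outside core + sideboard gets stripped, so a mutation can never
            # hand this enemy a move that breaks its theme.
            allowed = set(template["core"]) | set(template["sideboard"])
            return [move for move in new_script if move in allowed]

        print(mutate_script("Tyranno Warrior", ["Bite", "Tail Swipe"]))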

    I think that when we're talking about procedural techniques it's easy to go overboard, though, and that's a good point not only for design but also for the conversation. We are not talking about procedurally-generated enemies. We're talking about enemies that are coded to change behaviors to keep the player on their toes, while still losing competitively. Various aspects of these hypothetical enemies are quite static!

    This only backs up my assertion that "you never go full procedural!"
     
  24. slay_mithos

    slay_mithos

    Joined:
    Nov 5, 2014
    Posts:
    130
    It all depends in what you mean by "learn", I guess.

    Trying something out, possibly hundreds of times, having it fail above a defined threshold, marking it as "not working", and not using it again in the same situation is basically what we do too.

    Programs are limited by both the hardware and the software, and both are usually inferior to what nature was able to produce through millions of years of evolution.

    If the software is done properly, with enough inputs to be able to "assess" the situation, a program can definitely decide what "works" and what does not.

    Higher forms of learning, like learning to follow a completely different thought process can't be done by a program without it being specifically programmed, and in that case, it's not quite "learning" either.

    So, in the end, it mostly comes down to how far you extend the requirements to qualify as "learning".

    If you go too far, it will also mean that you can't say animals "learn", because all they do is "trial and error" and "copying", and they have not been proven capable of thought processes involved enough to "create" or "change".

    I think we can agree that even the most advanced program that tries to simulate learning, on the most powerful machine, with access to all knowledge (and means to "understand" it, somehow) will still end up trying to copy what it found.
    It will only do things it was made for, and won't be able to evolve into a higher form of intelligence (unlike most Sci-Fi AIs) without being specially designed to change itself for that, and even then, it would be limited to what it can find in its knowledge base.

    Maybe, one day, when we know the human brain inside out, we will be able to replicate it in the form of machines, but that's for a somewhat distant future, I think.
     
  25. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    I forgot to add, learning is volitional. Computers have no will and therefore lack the capacity for choice. It's been nice having this conversation with you. :)
     
  26. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    I just want to boil your argument down to: computers can't learn because they don't have a soul, and computer intelligence is bastardized human intelligence, which is bastardized godly intelligence (as in, human intelligence is special and can therefore never be duplicated).

    You might want to open a psych book sometime. You might just be surprised to find out that most of what you define as intelligence is just responding to stimuli and pattern recognition, both of which are things computers can do fairly well, along with every single cell in your body.
     
    JoeStrout likes this.
  27. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Yes. You asked how to make enemies learn, I said there's no such thing. Blah, blah, blah... so here we are.

    I agree, AI has to be good. But let's remember that it's "artificial" intelligence. Then we can talk about how to simulate the effect you want, unencumbered by psychobabble, science fiction or theories.

    In particular, what would you like the A.I. to accomplish in terms of player experience?

    EDIT: Let's try to speak in terms of tangible, real things that can be of use to people. This isn't Gossip. Also, posts less than 5-7 paragraphs might encourage a response. Try asking an open ended question at the end, also.
     
    Last edited: Nov 8, 2014
  28. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,697
    Or by iteratively tuning something like an AI Director. The appeal of an AI Director is that it programmatically watches the game and makes adjustments to make it "more fun." This could be adding pauses to allow players to catch their breath, or increasing the spawn rate to give them more challenge.
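
    A toy Director might be no more than this (thresholds invented; the real work is deciding how to estimate "intensity" for your particular game):

    Code (Python):
        def direct(intensity, seconds_since_last_break):
            # intensity: a rough 0..1 estimate of how hard the player is being pressed,
            # e.g. from recent damage taken and how many enemies are on screen.
            if intensity > 0.8 and seconds_since_last_break > 90:
                return "stop_spawning"        # give the player a breather
            if intensity < 0.3:
                return "increase_spawn_rate"  # too quiet; turn the pressure back up
            return "hold_steady"

        print(direct(intensity=0.9, seconds_since_last_break=120))   # -> stop_spawning
        print(direct(intensity=0.1, seconds_since_last_break=10))    # -> increase_spawn_rate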

    I agree this works well. It also lends nice consistency to the player's experience. Whenever he sees the red snake, he knows he's in for some trouble. But, at the same time, since we're talking high level AI ideas here, new techniques like Monte Carlo Tree Search (MCTS) might let designers add a little something different to their enemies. The rough idea is that the AI semi-randomly tries different things, keeping a record of how each plan worked out, and gradually arriving at statistically good plans. I guess you could think of it as the Groundhog Day or All You Need Is Kill approach. :) Imagine if the player easily defeated the red snake in level 1, but by level 10 that snake has hit upon some patterns that allow it to survive longer. It's a different approach to design because the designer gives up a lot of hard-coded control. I think the industry as a whole is navigating this right now, looking for good balances between design-time control of behavior and letting the AI find its own way.
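
    The record-keeping side is easy to sketch; this is a bandit-style simplification rather than the full tree search, with invented plan names and odds:

    Code (Python):
        import math
        import random

        # Running record per plan: [times tried, times it worked out].
        stats = {"ambush": [0, 0], "swarm": [0, 0], "hit_and_run": [0, 0]}

        def pick_plan(exploration=1.4):
            total = sum(tries for tries, _ in stats.values()) + 1
            def score(plan):
                tries, wins = stats[plan]
                if tries == 0:
                    return float("inf")       # try everything at least once
                return wins / tries + exploration * math.sqrt(math.log(total) / tries)
            return max(stats, key=score)

        def record(plan, worked):
            stats[plan][0] += 1
            stats[plan][1] += int(worked)

        # The enemy keeps semi-randomly trying plans and drifts toward whatever survives longest.
        true_odds = {"ambush": 0.6, "swarm": 0.3, "hit_and_run": 0.4}
        for _ in range(200):
            plan = pick_plan()
            record(plan, worked=random.random() < true_odds[plan])
        print(stats)   # "ambush" should end up tried the most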
     
  29. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    When the AI acts differently each time, what does this do to the player's experience? How do you get good at a game where core mechanics change on the fly?
     
  30. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    Your entire argument is that human intelligence is magical and can't be replicated by machines in any way, shape, or form.
    Did you actually read the initial post, or did you immediately post saying machines can't learn? The stuff Asvarduil started with is actually pretty well established and used. The essence of it is to take an algorithm and tweak random variables, keeping the version that's closer to the desired output. I just said it couldn't be done because you can't measure fun and interesting as desired outputs.
     
  31. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,697
    That's a good question, and an argument for hard-coding predictable behaviors so the player has something concrete to measure skill acquisition against.

    On the flip side, though, how do you get good at a game against other human players? Their mechanics can certainly change on the fly....
     
  32. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    In a nutshell, that's the point of competitive games and matchmaking. There's always a challenge that is right for you.

    Should every game be that way?
     
  33. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    That makes sense. I can see some value in the approach. Basically I'd see this as a sort of Dungeon Master. Monitoring the player and the enemies and watching the outcomes of encounters. Looking for patterns in the player wins to learn what to guard against and looking for patterns in player losses to find possible weaknesses in the player's strategy.

    I get the idea behind it; I just don't get wrapped up in all of the technobabble terminology, "Monte Carlo Tree Search" and so forth. ;)
     
    RJ-MacReady likes this.
  34. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    I definitely agree with the terms thing; also, all this name-dropping and linking is probably intimidating people out of posting. It should be a little more accessible.
     
  35. AndrewGrayGames

    AndrewGrayGames

    Joined:
    Nov 19, 2009
    Posts:
    3,821
    The reason it's considered a tree search is that you can represent both operations and values as nodes in a tree. Various types of tree searches (red/black, Monte Carlo, etc.) just use different algorithms for traversing the tree.

    The literature is intimidating, because a great deal of it is written for those of a more academic background; it's focused more towards theory and explaining the underlying principles of how these mathematical systems work, as opposed to 'Predictive Artificial Intelligence for Dummies'. (Not calling anyone a dummy, just referring to the series of books.)

    As far as 'it's like a dungeon master', or 'AI Director', that appears to be an effective way of doing things, if Left4Dead 2 is anything to go by. Personally, though, my preference is more towards a Metroid: Prime setup, where maybe individual enemies don't have the optimal setup to use against the player, but each school of intelligent enemy has their own distinctive behaviors that change over the course of the game. There's some cases where an AI Director would be overkill, really.
     
  36. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    LOL... I never read any literature on it, just based my reply on what the core concept had to be. I don't get excited about tree searches, period. I mean, sure, I find the algorithm stuff somewhat interesting at times, but in my experience these academic types often use a power hammer where a plastic toy hammer is all that is needed. I just focus on the goal, not the techno mumbo jumbo. I don't need to read it to know what it looks like and sounds like, because it is always basically the same. Lol
     
    RJ-MacReady likes this.
  37. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    From the number of times I asked my teachers for an in-depth explanation of various subjects, I discovered that most human knowledge is surface deep. People can regurgitate, but rarely expound on or apply knowledge in a realistic setting.
     
    Last edited: Nov 8, 2014
  38. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    You hit the nail on the head! This is exactly how I view it and is probably one of the big reasons why I don't get into all of the terminology and so forth. Honestly, I think many of these people are so immersed into the subject at hand they kind of get lost in it.

    I like exploring different AI and have looked at some of the work done. Every few years I take another look. My opinion is spending so much time studying the stuff is not very helpful in making a game. I think glancing over the material and using it as "food for thought" can be beneficial. There are many games that have been released over the years that have used (at least in part) the latest & greatest advances of AI and machine learning. Yet when a person plays the game I would say all of that effort is not noticeable and the same experience could have been delivered using much simpler methods.

    Again, I think it all comes down to modeling behaviors. Look at the enemies you have, figure out how you want them to react and just use a state system, proximity visual/audio detection, perhaps some communication between enemies etc. While it can be a fun challenge in itself, game enemies don't need to truly learn. That is more a personal challenge the game developer is going after. Focus on the game. Map out enemy behaviors. Implement them. It is easy to make enemies challenging. The trick is making them challenging, interesting and also balancing them so the game is fun to play.

    However, from a purely personal challenge and learning viewpoint I can understand the interest in such things. I am just suggesting this kind of stuff is in no way needed to make a great game.
     
    Last edited: Nov 8, 2014
  39. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    I kind of think it's like this: if the character acts like it doesn't know where you are until it sees you... what difference does it make how that is implemented?

    It's going to look the same to the user whether you're using raycasting and distance checks and all that, or building an entire artificial intelligence system, complete with motivation, emotion and backstory for that character, where you're actually checking a separate viewport for an image of your character based on what that enemy would be able to see, using the Count of Monte Cristo algorithm.

    All that matters is the user experience.
     
    GarBenjamin likes this.
  40. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    If you program your character to be so smart that it actually acts like a real person and makes weird mistakes, it's more likely to appear to be a glitch than some advanced feature.
     
  41. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    That's exactly what I mean by the same experience could be delivered using much simpler methods.

    No difference to the user but a big difference to the developer. Some approaches will take 5 to 10 times longer to implement, be much harder to debug and be more prone to glitches due to their complexity.

    How I view these things is first.... what exactly is the problem I am trying to solve? It certainly makes no sense to immediately jump to complex methods of doing something that most likely can be achieved much simpler. That's classic over engineering. Simplify and set yourself free! ;)
     
    Last edited: Nov 8, 2014
  42. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Well, that's because you're a software engineer. You're interested in the result. Academics would rather focus on the problem.
     
    GarBenjamin likes this.
  43. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    Something that is overlooked in this argument is, can the player play with it? If they can, then why wouldn't you be interested in it?
     
  44. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Why do you say that? I am not sure I understand the question, so if you do, then certainly contribute. My first thought was "the player can play the game regardless of which method is used to simulate intelligent behavior"... but that is based on "it" referring to "a game".
     
  45. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    I thought it was addressed to you.

    I think by "it" he is talking about this "learning" A.I.

    "If they can [play with it], then why wouldn't you be interested in it?"

    Why am I not interested in overly complex, error prone artificial intelligence? That's what this is asking, to me.

    Because it's overly complex and error prone. I want my player to enjoy the experience. Not focus on interacting with a fake entity, studying its realism, marveling at its lifelike qualities.

    We're talking about games, not just virtual museums where we can sit and marvel and play with complex artifacts.

    How is this coming through on your end?
     
    GarBenjamin likes this.
  46. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    I guess I wasn't that articulate. My thought was about treating the AI as another component that the player can mess around with. It's easy to say that a simpler method is going to give us similar results and is therefore better, but this presupposes that adding complexity isn't providing additional depth.

    If we were talking about a 2D sidescrolling platformer, I would agree that it's pretty idiotic to make behaviors that aren't simple. For something like a stealth game though, an entire genre that has a unified central conflict of player versus AI, I would love for it to be as deep as you can make it.
     
    RJ-MacReady likes this.
  47. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Can you give an example of a complex AI?
     
    GarBenjamin likes this.
  48. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    I think you could produce a very good simulation of enemy intelligence in any genre. Obviously, if we're dealing with bees, snakes, spiders, bears and such we would be limited by what we are modeling. But even with these simpler enemies in games... how many times do we see them behaving in basically the same manner unique only in appearance due to graphics and animations and possibly movement patterns? I think there is a lot of value to be gained from focusing on improving these lower forms.

    I'm not knocking attempting to add depth to the enemy's intelligence simulation. I think it is a great goal as long as doing so continues to add value to the game experience. Mainly I just wonder how do you want them to act? What is the behavior you are trying to produce? Like in the stealth game, what are the enemies and how should they behave?

    I am just suggesting that maybe you could produce those behaviors using simpler methods than trying to basically build neural learning systems and so forth.
     
  49. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    If all you care about are making specific behaviors, then it absolutely doesn't make sense to try and get some kind of learning system working. At the same time, you could say the same thing about all procedural content though. If you want to make anything specific, then you never turn to procedural generation.

    If you only use it for one game, then it's money down the drain, but if it's built intelligently (the catch I know), then you're going to be able to use it for the next ten years. In a lot of respects, these kinds of systems would/could/will/do look more like development tools, that might pre-compute behavior models, than being instances of SHODAN running in the game.
     
    AndrewGrayGames likes this.
  50. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Yes, but can you think of even one hard example of a learning A.I. being used in a game design?
     