Learning Opponents (or, "Intelligent" Agents?)

Discussion in 'Game Design' started by AndrewGrayGames, Nov 7, 2014.

Thread Status:
Not open for further replies.
  1. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    It's at least being used as marketing spiel all the time (the generic "AI responds to how you play"). I never hear or find out enough about the actual techniques being used to know. Whether it fits your idea of a learning AI, the answer is no, because the Pentagon still hasn't declassified their HAL 9000 that just plays WoW.
     
    RJ-MacReady likes this.
  2. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    That's exactly what I was getting at. :) This kind of thing is more of an initiative in and of itself rather than a focus on a specific game issue. Sure, it would ultimately result in a development tool (assuming it worked) that developers could use in a variety of situations to implement simulated intelligence. Still... how does one know the end goal could not be achieved in another (probably easier) way if all of the focus is placed on a generic super system capable of learning?

    I can see the value of having a plug n play AI system. It just seems like people are focusing on the end before the beginning. What exactly are the problems this super system would solve? By starting with the specific problems it may be much more likely to end up with a system that works in the real world. How do we know the people designing the AI systems that have been discussed are even on the right path if there is no criteria to measure against?
     
  3. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    I can't think of any instances. Everything I can imagine can be solved by just programming in certain behaviors and responses within the A.I. to different stimuli. It doesn't need to be self-aware to shoot a fireball at you. We're using artifice to simulate intelligence. That's all.
     
  4. brilliantgames

    brilliantgames

    Joined:
    Jan 7, 2012
    Posts:
    1,937

    You would be the worst person to party with. :p
     
    0tacun likes this.
  5. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    Thing is, most of these systems aren't that complicated. One part is usually a tree or FSM, however it's modeled, which isn't any different from what you would typically make for AI. Then you have another part that picks up input from the player or world and tweaks the underlying model, possibly on the fly. Most of the theories are pretty simple, so naturally they become a nightmare in execution; usually the memory or CPU cycles they need can't easily be hidden.

    I think this conversation would benefit from just throwing away 'learning' as a term. Instead, our uber AI should be thought of more along the lines of procedurally generating behaviors with conditional testing. The model to compare it to is the scientific method, minus the whole peer review process, because F*** those guys.
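    To put rough code to that (every name here is invented; this is just a sketch of the shape, not any particular engine's API): one part is a plain weighted FSM, and the other part is a separate tuner that nudges the weights based on what the player or world feeds back.

        import random

        # Sketch only: a state machine whose transition weights get tweaked
        # on the fly by a separate "tuner" watching outcomes.
        class WeightedFSM:
            def __init__(self):
                # state -> {next_state: weight}
                self.transitions = {
                    "patrol": {"chase": 1.0, "hide": 1.0},
                    "chase":  {"attack": 1.0, "retreat": 1.0},
                }
                self.state = "patrol"

            def step(self):
                options = self.transitions.get(self.state)
                if not options:
                    return self.state  # terminal state, nothing to pick
                states, weights = zip(*options.items())
                self.state = random.choices(states, weights=weights)[0]
                return self.state

        class Tuner:
            """The 'other part': observes results and tweaks the model."""
            def reward(self, fsm, prev_state, chosen, amount=0.2):
                fsm.transitions[prev_state][chosen] += amount

            def punish(self, fsm, prev_state, chosen, amount=0.2):
                current = fsm.transitions[prev_state][chosen]
                fsm.transitions[prev_state][chosen] = max(0.1, current - amount)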
     
    GarBenjamin likes this.
  6. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,670
    I mentioned Left 4 Dead and FEAR earlier because they're good examples of AI that learns, or at least reasons and plans dynamically, rather than following hard-coded patterns. This allows them to do things the designers didn't anticipate. Here's an example that FEAR's AI designer cited:
    In addition, designing the AI this way means that you just need to define goals and abilities. You don't have to write individual scripts for how to apply an NPC's abilities to accomplish each goal in every situation. In a medium to large game, this could be dozens of individual scripts to write and debug, with no guarantee that you covered every situation. Instead, planning systems like FEAR's write their own scripts on the fly, based on the current situation.

    (Jeff Orkin's whole presentation is really good, BTW.)
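    To make the "define goals and abilities" point concrete, here's a heavily simplified toy planner in the same general spirit (nothing like FEAR's actual code, and the action names are invented). Each action declares preconditions and effects, and the planner searches for a chain of actions that turns the current world state into the goal state, so no per-situation script is needed.

        from collections import deque

        # Toy GOAP-style planner (illustrative only): actions declare
        # preconditions and effects; plan() searches for a chain of actions
        # that turns the current world state into the goal state.
        ACTIONS = {
            "open_door":   {"pre": {"door_blocked": False}, "eff": {"door_open": True}},
            "kick_door":   {"pre": {},                      "eff": {"door_blocked": False}},
            "shoot_enemy": {"pre": {"door_open": True},     "eff": {"enemy_dead": True}},
        }

        def satisfied(conditions, state):
            return all(state.get(k, False) == v for k, v in conditions.items())

        def plan(state, goal):
            # Breadth-first search over world states; fine for tiny examples.
            queue = deque([(dict(state), [])])
            seen = set()
            while queue:
                current, steps = queue.popleft()
                if satisfied(goal, current):
                    return steps
                key = tuple(sorted(current.items()))
                if key in seen:
                    continue
                seen.add(key)
                for name, action in ACTIONS.items():
                    if satisfied(action["pre"], current):
                        nxt = dict(current)
                        nxt.update(action["eff"])
                        queue.append((nxt, steps + [name]))
            return None  # no plan found with the given abilities

        print(plan({"door_blocked": True}, {"enemy_dead": True}))
        # -> ['kick_door', 'open_door', 'shoot_enemy']

    The point is the same as above: you add a new ability by adding one entry, not by writing a new script for every situation it could matter in.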
     
    AndrewGrayGames likes this.
  7. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    I'm all for playing around with the AI to make it better, but I think the way they are describing it is probably just a red herring. I doubt the enemy is really learning anything. At least if I were making this, I would handle it like this... examine door state (we can see this on screen)... if not blocked, then open it. If blocked, then ram shoulder into it. If still blocked, then shoot at door, etc. This kind of learning I get. It makes perfect sense. But I think people are reading that and their imagination is making much more out of it than is really there. Not some advanced learning machine, but a simple matter of states. The state of the door. An immediate memory of the NPC (a fancy way of saying a flag).

    It would be very easy to make the NPC perform a chain of actions against an obstacle door or otherwise: Attempt normal way past (opening door in this case). Kicking door. Ramming entire body into door. Firing bullets at door (if wooden, glass etc). Determining new entry point. Dive through window.
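    In rough pseudocode (the try_* calls are just stand-ins for whatever the real game checks would be):

        # A plain fallback chain for the blocked-door case described above.
        def get_past_door(npc, door):
            attempts = [
                npc.try_open,             # the normal way past
                npc.try_kick,
                npc.try_shoulder_ram,
                npc.try_shoot,            # only sensible for wood/glass doors
                npc.try_alternate_entry,  # e.g. dive through a window
            ]
            for attempt in attempts:
                if attempt(door):
                    return True           # we got through; stop escalating
            return False                  # give up or pick another target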
     
  8. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Sounds like "procedurally generating behaviors with conditional testing". I mean, we're talking about what is essentially keeping track of things that work vs. things that don't work and trending toward the things that work. Is that a fair read on this?

    @brilliantgames - Really? You're going to play the cool kid on an internet game design discussion forum?
     
    GarBenjamin likes this.
  9. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    And that makes perfect sense. I mean that is what they should be doing. Keeping track of state. Updating state. In this case, enemy was in Open Door. Failed. State changed to Kick Door. Failed. State changed to Find Alternate Entrance. Window found. State changed to Dive Through Window.

    Maybe I am the one who has the wrong view. When I read about these "advanced AI learning systems" I get the impression they are trying to make them truly intelligent. To truly learn, until your computer soon crashes from lack of memory.

    But the examples from games are straightforward. It is what a person should be doing when writing the AI for enemies. It's just simple actions / states strung together in a logical sequence. And yes, we can make it remember, so the next time it comes to a closed, blocked door it goes straight to diving through the window. Just reorder the sequence based on the result. When the enemy dove through the window and entered the room: success! So renumber the sequence for dealing with a door so that dive through window is now done immediately after failing to open the door. If it succeeds again, you could renumber the steps again so that next time the enemy does not even try to open the door and dives straight through the window. You don't need advanced data tracking routines and so forth.
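    The reordering I mean is as simple as something like this (action names are just placeholders):

        # Whichever action finally worked gets promoted so it is tried
        # earlier next time; after enough successes it ends up first.
        door_actions = ["open", "kick", "ram", "shoot", "dive_through_window"]

        def attempt_entry(npc, door):
            for i, action in enumerate(door_actions):
                if npc.perform(action, door):  # assume True on success
                    promote(i)
                    return True
            return False

        def promote(index):
            if index > 0:
                door_actions[index - 1], door_actions[index] = (
                    door_actions[index], door_actions[index - 1])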

    Now... if you want something that really is a challenge... it's creating all of the graphics and animations for the door, the enemy, the window and so forth. That is the nightmare part, at least to me.
     
  10. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Now I'm remembering how fun it was watching guards check the lockers in MGS2. I love the idea of smart AI, but ultimately it's a con. The lockers are just an array of interactive objects, everything is algorithms... see the code, man.
     
    GarBenjamin likes this.
  11. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    What about music? Games are a multi-discipline art form. Sucks, but that's why Indies get respect.
     
  12. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Yeah, music and sounds too. The audio/visual stuff is the difficult part. Like you said, the other stuff is just code. States, lists, flags, condition checking and reacting.
     
  13. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Musicians are very different, you can find them and they're always looking for exposure.
     
  14. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Sorry, I wasn't attacking you — I generally find your posts well-informed and insightful. It's only this particular claim that is nonsense. Machines can and do learn. They've been doing so for decades. I have books on the topic on my shelf, and personally know a fair number of machine learning researchers who would be quite surprised to learn that what they've been doing for years doesn't actually happen.

    I'm sorry, I still think you rock on a personal level, but this is also incorrect. The term "think" is too vague (unlike learning, which is well defined) to have any meaningful discussion about it. But whatever thinking is, there is certainly no basis for the claim that a device made of lipids executing genetic instructions is more capable of it than a device made of silicon executing binary ones.

    I could say the same about you. (Or me.) In fact when I was in graduate school studying neuroscience, our whole purpose was to study exactly how human behavior arises from the relatively simple functions of neurons (which can be simulated like this, for example).

    In some sense, I guess that's true, because we created it. Though in some cases (e.g. GAs), we didn't do very much; we just set up the conditions and picked a fitness function, and then let evolution take over.

    I didn't mean to be rude, but I am, in fact, sure I'm 100% correct on this point. Machines can and do learn.
     
  15. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    But wait — that would be a different conversation. It started out asking about learning AIs, and I think that's a very important topic to consider, because as we've amply demonstrated here, a lot of designers still don't understand learning algorithms and what they can do. And even with that understanding, it may not be obvious where/how to best apply it to game design.

    A few messages up, somebody asked for a concrete example of learning used in a popular game. I'll ignore the obvious ones like Black & White, which was based entirely on learning, and instead point out the "ghost" feature in Tekken 5. The algorithm learns to imitate your play style, and then you can send your "ghost" (which plays similarly to you) to your friends to fight against, and vice versa. That's a pretty creative application, I think.
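    I have no idea how Namco actually implemented it, but the core of a feature like that can be sketched very simply: tally which move the player picks in each coarse situation, then have the ghost sample its own moves from the same distribution (all names below are made up).

        import random
        from collections import defaultdict

        # situation -> move -> how often the player chose it
        move_counts = defaultdict(lambda: defaultdict(int))

        def observe_player(situation, move):
            move_counts[situation][move] += 1

        def ghost_move(situation, fallback="block"):
            counts = move_counts[situation]
            if not counts:
                return fallback
            moves, weights = zip(*counts.items())
            return random.choices(moves, weights=weights)[0]

        # Situations can be as coarse as a couple of flags:
        observe_player(("close_range",), "throw")
        observe_player(("close_range",), "throw")
        observe_player(("close_range",), "low_kick")
        print(ghost_move(("close_range",)))  # usually "throw"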

    Now, when you're talking about enemies in a shooter or platformer, I just don't think it makes sense there. The life expectancy of such an enemy is generally a few seconds. It's just not enough time to learn anything useful, nor for the player to notice that the enemy has learned anything. You could apply it at the squad or species level or something, I suppose, but even then — they're basically moving targets, and I don't see much opportunity there.

    But what about a companion NPC? These are with you for the long haul; there's plenty of opportunity to notice their artificial intelligence (or lack thereof). And plenty of things for them to learn, too.

    Or, what about predicting the player's choices in a resource management game, in order to shift some of the busywork onto an AI manager?

    But I guess the thread title is learning opponents, so... hmm. I can see it in something like a fighting game, where you face the same enemy for an extended time, and probably multiple times. Being able to predict the player's moves would allow you to start countering those moves, forcing the player to mix things up a bit, just as they would have to do when fighting a human opponent.

    Also, while you certainly can make a very hard fighting game AI through traditional methods, it's a lot of work, and easy to screw up in a way that leaves a fatal flaw — some combo that always works against the AI, every time. Once that gets out, your game is solved, and serious players will lose interest. So a learning AI, assuming you have a decent toolkit to build it on, may well be easier and cheaper to produce (you just do the initial training in the lab, before you ship).

    Are there other cases where a learning opponent would be advantageous? Or, if we could widen the topic slightly, other good applications of learning algorithms (apart from enemy AI) in games?
     
  16. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    So we're debating semantics? Learn/think. I'll cede the point because you wouldn't post all of this without a purpose; on some level you must know something I don't, or you're seeing this from a perspective I'm not.

    I don't quite know how this applies to A.I. though, which is why I keep asking for examples to discuss.
     
  17. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    Kinda missed the point I was trying to go for there. I was opting to throw out a term that has been mystified in people's minds. Not really trying to change the conversation so much as rip open the underlying logic.
     
  18. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    That's what I have been thinking. In most games, opponents are nothing but cannon fodder. They spawn, they encounter the player, they die. Very limited lifetime to actually learn or use what little they learned. If their experiences were monitored by that AI Director, who tweaked each enemy's strategy slightly, that could be a way to change overall behavior over time.

    I am not knocking the idea of working on better AI for enemies or even for changing their behavior to simulate learning. I think there may be some misunderstanding about that. My point has simply been that focusing on these so called advanced AI learning systems (the ones being listed and name bombed) is most likely not the key.

    The way forward, I believe, is by first identifying exactly what we want to achieve. What are the behaviors we want to see?

    Again, when I say behaviors I think people may be using a very narrow view. A behavior is not simply Patrolling, Avoiding and so forth. Behaviors also encompass probing, analyzing and so on.

    We do this kind of thing already to some degree. When an enemy performs a raycast or whatever to determine if there is a wall or other obstacle ahead that is a probing behavior. It simply means they are gathering information. If an enemy is to take the information gathered through probing (or other feedback) and use it to make better decisions we can call that a reactive behavior. And some may say it is a learning behavior. But everything these enemies should do can be expressed in terms of behaviors. And I think once people make that switch to view it this way this will all seem much simpler.

    So... if we can identify exactly what the behaviors are that we are after... that is the prerequisite to being able to improve enemy AI.
     
    Last edited: Nov 10, 2014
  19. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,670
    Machine learning has a clear definition in the field of artificial intelligence. I used some terms earlier in this thread that some had issues with. (And I'll admit that Monte Carlo Tree Search got too deep into the weeds.) I used those terms because, when we're talking about artificial intelligence, we all need to sprechen al mismo langue (speak the same language). That language is already clearly defined, and has been for decades.

    But, to bring this back to @Asvarduil 's original question:
    I have a design-related question that I'd really like to get input on.

    When NPCs do their own learning or planning, designers have less direct control over them. It's harder to script their activity. Instead, you give them a set of abilities and hope that they can navigate the situation properly.

    What are the design implications? This seems like it would work better in more-isolated situations, such as short-term tactical combat, than in broader situations, such as the NPC's long-term behavior over the course of the whole game.

    It also seems like you need to give more thought to engineering situations where NPCs can learn (or plan) to do things in new and interesting ways.

    My motivation for this question is selfish. I've written a procedural quest generation add-on for the Dialogue System. It's inspired by Richard Bartle's articles on using planning to allow NPCs to dynamically generate new quests based on their goals and the state of the world. (For example: If orcs have invaded the farmer's fields, the farmer might generate a "kill 5 orcs" quest.)

    I haven't released it yet (apart from a private beta) because the hard part turns out to be on the designer's shoulders. With short-term tactical combat, the goals are simple (kill the player), and the abilities are simple (shoot, reload, open door, etc.).

    In the wider game world, however, how do you effectively identify what every NPC's goals and abilities are? How much is enough to be able to continuously generate new, interesting quests?

    And can these quests ever be as interesting as a hand-written quest? I suspect not, but there's a certain appeal to them. For example, if the player drives the orcs out of their cave, rather than simply killing them all, the orcs might find themselves wandering the farmer's field. The game writer might not have anticipated this scenario. But a procedural quest generator could take it in stride and generate a new quest -- one that exists in response to actions that the player took.
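    As a toy illustration (this is not the add-on's actual code, and the facts and templates are invented), the very simplest version is just matching world-state facts against quest templates; the planning approach Bartle describes is richer, but the input problem is the same.

        # World state as simple (subject, verb, object) facts.
        WORLD = {("orcs", "occupy", "farmer_fields")}

        QUEST_RULES = [
            {"when": ("orcs", "occupy", "farmer_fields"),
             "giver": "farmer", "quest": "Kill 5 orcs in my fields"},
            {"when": ("wolves", "threaten", "village"),
             "giver": "mayor", "quest": "Cull the wolf pack"},
        ]

        def generate_quests(world):
            return [(rule["giver"], rule["quest"])
                    for rule in QUEST_RULES if rule["when"] in world]

        print(generate_quests(WORLD))  # [('farmer', 'Kill 5 orcs in my fields')]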

    And, from a practical standpoint, you can't record voiceover if you don't know at design-time what the NPC is going to say. Any ideas for this dilemma, too?
     
    AndrewGrayGames likes this.
  20. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Well they might, but they would be abusing the language. :) Learning isn't a behavior; learning is an adaptive change in behavior due to experience.

    I think this is a useful distinction. An agent without learning, when put in the exact same situation, will either react the same way or with random differences (so if you tried it a billion times, on average you could collect statistics about the reaction that would apply just as well to the first trial as the last one).

    A learning agent, on the other hand, will change its behavior over time in a way that improves its success (generally, as measured by some reward function). So it won't react the same way, and even if you tried to collect aggregate statistics, you'd find these changing over time.
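    As a toy illustration of that difference (names and numbers are arbitrary):

        import random

        class NonLearningAgent:
            def act(self):
                # Same odds on trial one as on trial one billion.
                return random.choice(["attack", "flee"])

        class LearningAgent:
            def __init__(self):
                self.weights = {"attack": 1.0, "flee": 1.0}
                self.last = None

            def act(self):
                actions, weights = zip(*self.weights.items())
                self.last = random.choices(actions, weights=weights)[0]
                return self.last

            def feedback(self, reward):
                # Simple reinforcement: positive reward strengthens the last
                # choice, negative reward weakens it, so the aggregate
                # statistics drift over time.
                self.weights[self.last] = max(0.1, self.weights[self.last] + reward)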

    Trying to lump learning in as just another behavior only muddies the water, I think. Most techniques in common use in game AI — including the ones you've sketched out here, if I understand them correctly — don't involve any learning. This can become pretty obvious to the player, and it breaks the illusion of intelligence (in a game that attempts to create such an illusion at all).

    I never much cared that Koopa Troopas always had the same behavior; they were just moving obstacles, and I was happy to jump on their heads. But in Oblivion, I got to a point where I could make myself completely invisible and still attack, through a combination of tricks. And enemies would say "Hm? What? What's that?" but, because they couldn't detect me, stand there like stupid bumps on logs while I toasted them over and over. I even tried switching to a low-damage technique and basically smacking them repeatedly. They never ceased to act surprised, and they never decided that whatever was going on, this was no longer a healthy place to be. The illusion was completely broken.

    Now, you could resolve this failure with just more complicated behavior trees... but it's hard, because you (the designer) have to basically think of every possible case. The real promise of a learning algorithm is that, if you do it right, you don't have to think of every possible case. The agent could learn "I'm taking damage when I'm in such-and-so a situation, so let's change the situation" without having to understand who/how that damage is occurring. It could then change the cosmetic response (instead of acting surprised, it should say something like "Ow! Again! Argh!" and act frustrated), and also do something about it (leave the area, for example).

    As another example, a common trick in Oblivion is to curse (say) a helmet, then reverse-pickpocket it into somebody's inventory, and then smack them. They're programmed to don any armor they have when they go into fight mode, so they put on this burning helmet and just say "Ow! Ooh! Ow!" while it slowly burns them to death. A learning algorithm ought to be able to correlate putting on this new piece of gear with suddenly taking damage, and try removing it. (Though this one is a bit harder, since it may also happen frequently that they take damage just from getting attacked in this situation.)

    In any case, I think the chief danger with any non-learning AI is that your game will be solvable: some combo or technique you didn't think of (like the two examples above) will enable players to beat your AI, every time. Learning makes this much less likely (or should).

    Well, I agree with this general idea... but the behavior we're after may be something quite nebulous like "doesn't have any exploit that makes it solvable, including exploits we haven't thought of yet."
     
    GarBenjamin likes this.
  21. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    I see what you are saying and respect your views. You provided concrete examples of the undesired behavior and desired behavior. Thanks.

    I still believe this can be broken down into tiny elements. In the Oblivion example, it seems like a simple feedback mechanism should be in place. The enemy knows it is being hit (damage is being applied), so the response should be to move away and perhaps try to locate what is causing the damage. After a bit of time, if it cannot determine where the damage came from, perhaps it could attack wildly around itself or simply run away. To implement this I would not need anything beyond a behavior for receiving damage and probing to locate the attacker. It would not matter if the attacker was a few feet away and invisible, or a good distance away shooting arrows at the enemy.
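    Something like this sketch is all I have in mind (every name here is hypothetical, not any particular engine's API):

        import random

        PROBE_TIMEOUT = 3.0  # seconds spent searching before giving up

        def on_damage_received(npc, hit):
            npc.move_away_from(hit.direction)
            npc.probe_timer = PROBE_TIMEOUT
            npc.state = "probing"

        def update_probing(npc, dt):
            attacker = npc.look_for_attacker()  # raycasts, hearing, etc.
            if attacker is not None:
                npc.state = "attacking"
                npc.target = attacker
                return
            npc.probe_timer -= dt
            if npc.probe_timer <= 0:
                # Couldn't locate the source: swing wildly or just leave.
                npc.state = random.choice(["flailing", "fleeing"])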

    I am really not out just to argue. Just trying to get a deeper discussion going on the "how to": examining the issues with current AI and exploring whether there are simple ways of resolving them.
     
  22. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    So, I make a game of Rock, Paper, Scissors.

    Each move, I look at the last 3 moves the player has made.

    I create a counter for each of these combinations of moves. Each time one appears, I increment the counter for that combo.

    When I detect the first two moves of a pattern, I can use my counters to predict the third move.
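    In rough code (a sketch, assuming three moves and a counter table):

        import random
        from collections import defaultdict

        BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

        history = []               # player's past moves
        counts = defaultdict(int)  # (m1, m2, m3) -> times that sequence appeared

        def record_player_move(move):
            history.append(move)
            if len(history) >= 3:
                counts[tuple(history[-3:])] += 1

        def ai_move():
            if len(history) < 2:
                return random.choice(list(BEATS))
            m1, m2 = history[-2:]
            # Predict the third move as the most frequent completion of (m1, m2),
            # then throw whatever beats the prediction.
            predicted = max(BEATS, key=lambda m: counts[(m1, m2, m)])
            return BEATS[predicted]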

    Is my understanding sufficient?
     
    GarBenjamin likes this.
  23. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Yeah, I get that. And you're right, for any particular example, there's often an easier/simpler way to accomplish it. Heck, a simple look-up table is often all you need to keep track of what's happened under different circumstances in the past, and decide what to do about it. That's a machine learning algorithm too, but a much simpler one than what's usually kicked about in the literature. And if it gets the job done, I'm all for it!
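    For instance (just a sketch, with made-up names), a look-up table learner can be as small as this:

        from collections import defaultdict

        # situation -> action -> track record
        table = defaultdict(lambda: defaultdict(lambda: {"tries": 0, "wins": 0}))

        def record(situation, action, succeeded):
            cell = table[situation][action]
            cell["tries"] += 1
            cell["wins"] += int(succeeded)

        def best_action(situation, actions):
            def win_rate(action):
                cell = table[situation][action]
                # Unseen actions get an optimistic 0.5 so they still get tried.
                return cell["wins"] / cell["tries"] if cell["tries"] else 0.5
            return max(actions, key=win_rate)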
     
    GarBenjamin likes this.
  24. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Yeah, that's certainly one way to do it (you've just described an N-gram predictor, with N=3, also sometimes referred to as a trigram predictor). And this algorithm works surprisingly well in some circumstances. If I were going to make a game play rock-paper-scissors, it'd certainly be the first approach I would try.
     
    GarBenjamin likes this.
  25. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,670
    BTW, my question isn't rhetorical. I really would appreciate some input if anyone has any ideas. Is my dream of procedural RPG quest generation practical?
     
  26. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Exactly. I have read a lot of AI work throughout the years, but admittedly less in recent years. In my opinion they always overcomplicate things. Focusing so much on modeling a human brain and animal learning seems to distract from the purpose. Some of it is good and most is interesting. But I like to focus on simple, applicable solutions addressing the root problems.
     
  27. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    It's an interesting question, for sure. I think it could be doable, but whether it would be fun, I don't know. I've never been all that interested in being every NPC's errand-boy, even when those errands are hand-written. So it's hard to get too excited about running automatically-generated errands. But that's just me — I know a lot of people do enjoy that sort of thing.

    What I would enjoy is having a world of agents with real lives: jobs, property, loved ones, habits, etc. If I burn down the tavern, all the standard tavern-goers should get upset about it, possibly finding somewhere else to drink, and possibly (if they know I did it) running me out of town with torches and pitchforks. If a resident loses his house, he should find friends or relatives to live with — and again, if that sort of thing happens a lot, townspeople should start working together to do something about it. If a parent's only child or breathless yoot's one-true-love is slain by orcs, they should be so distraught that, just maybe, they fling themselves off yon cliff. And so on.

    This would be far more interesting to me than the typical fare, where NPCs go about their scripted schedules every day, regardless of the homicidal kleptomaniac (i.e. the player) rampaging through their town.

    I'm not sure this requires a whole lot in the way of learning, but it does require a pretty sophisticated knowledge and communication system, as well as a decent planner, and probably some sort of drive-reduction system for prioritizing goals.
     
    Teila, GarBenjamin and TonyLi like this.
  28. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    So let's say I'm making a brawler, Joe.

    And let's say that in my brawler people are abusing the jump kick mechanic, because it circumvents the normal attack patterns of the game and hits the enemy from an angle that they're virtually defenseless against.

    Let's say I let this player do this for the first three stages and he thinks he's so smart.

    And after gathering hundreds of enemies' worth of combat data, I have my enemies start jump kicking at him. When he comes at an enemy, they immediately do a ranged attack in the air, even before he jumps.

    What is this called?
     
  29. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    I'm not sure I understand. Are you gathering this data, and then releasing a new version of the game with altered AI logic? Or is the game doing this, on the fly, on each player's machine as he plays?
     
  30. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    The game is keeping track of the player's tendencies to score hits with different attacks. Then when he's matched against a foe, that enemy "knows" what the player is most likely to attack with and launches a defensive measure just in time to baffle him.
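    Roughly this, in sketch form (the counter table and attack names are invented):

        from collections import defaultdict

        hit_counts = defaultdict(int)  # attack -> hits the player landed with it

        COUNTERS = {"jump_kick": "anti_air_ranged", "sweep": "jump", "punch": "block"}

        def on_player_hit_landed(attack):
            hit_counts[attack] += 1

        def choose_enemy_opening_move(default="approach"):
            if not hit_counts:
                return default
            favourite = max(hit_counts, key=hit_counts.get)
            return COUNTERS.get(favourite, default)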
     
  31. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    Oh yeah, that's a good application of a very simple learning algorithm. (Many fighting games are fundamentally rock-paper-scissors in disguise.)
     
    RJ-MacReady likes this.
  32. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    O.K. so the original question is how to use this "learning" stuff and still control the behavior of the AI opponent?

    ...it's a non-question; you're just modifying existing behavior. The OP has a lack of understanding of the subject. He presented this as actual learning, thinking AI that is autonomous and therefore uncontrollable. No, this is a behavior-following, patterned AI that modifies how it weights certain behaviours based on some predetermined criteria of efficacy, user enjoyment, etc.

    If you don't want a certain behaviour, weight against it or just don't program it.
     
  33. RockoDyne

    RockoDyne

    Joined:
    Apr 10, 2014
    Posts:
    2,234
    Would it surprise you to know that computers can actually carry out a lot of scientific research autonomously? Material sciences (metallurgy being particularly straightforward) can easily be streamlined enough so that computers can run experiments from beginning to end. They can create the samples, run the tests, analyze the data, and arrange for the next series of tests just as easily as any person.

    In case it isn't clear why it's pertinent that computers can run the entire scientific method: the scientific method is a rigid and highly structured system for learning. So, long story short, sentience has ABSOLUTELY NOTHING to do with learning.
     
    Ryiah likes this.
  34. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Hmm. I'll put my understanding of computer algorithms up against yours, if you'd like. I can't guarantee I know more, but I'll chance it.

    Everything you've described... Everything... is exactly an example of a basic recursive algorithm. Text book. "Over and over again", "small changes", it's always in regards to some sequence of items, events, it's all going through data and creating new datasets from that data.

    If you use "learn" wherever "store in memory" is found in computer science, then you're absolutely correct.

    Go through a list of x and find n where y is z.... etc. Store results in abc and order them by xyz.

    It's following instructions. The computer can't write its own instructions, but if it could... it would still be following instructions.

    So man invented the abacus... nobody thought it was going to rule the world. Then came machines.

    "IT MOVES ON ITS OWN WE'RE ALL GONNA BE ENSLAVED" did ensue.

    One guy makes a set of computer instructions that allows a computer to win at chess for some state fair and people lose their damn minds.
     
  35. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    What's the point of bickering? There is a class of algorithms which standard terminology classifies as "learning algorithms," and they really are different in what they can do from the non-learning algorithms. Whether you call this "thinking" or not, I really don't see why anyone should care. They exist. We could list them out: decision trees, neural networks, Bayesian classifiers, N-gram predictors, etc. (Listing them all would be hard, because there are so many, with new ones being developed all the time, but it's fairly easy to see what they have in common.)

    Faced with a problem, a programmer's job is to select the best algorithm to solve it. If you deny the existence of this class of algorithms, then you're denying an entire shelf of tools in my toolbox. That's good for me, since it means I can do things you can't (or won't) do. Or maybe you'll do them but just refuse the standard terminology (i.e. refuse to call them learning algorithms), but what's the point of that? Terminology is useful exactly when it is used in the standard way. Instead of saying, "How would you apply algorithms that gather some sort of statistical data from multiple trials and use this collected data to modify the probability of certain responses in a way that increases the value of some reward function or predicts outputs associated with inputs, including but not limited to decision trees, neural networks, Bayesian classifiers, and N-gram predictors, to game design," we can say "How would you apply learning algorithms to game design?"

    Honestly, that's the topic. If you want to deny such algorithms exist, please stop replying, because I don't think it advances the discussion some of us are trying to have about them. If you believe they exist but don't want to use the standard terminology for them, again, I don't see how that's contributing.

    To me, the interesting game design questions are:
    1. What do learning algorithms enable us to do that we couldn't do with traditional approaches?
    2. How do we ensure that a learning agent produces behavior that's fun for the player, rather than just skillful?
    3. How do we control learning agents in a game that has certain plot lines we want to convey?
    And quite likely there are other game-design related questions I'm not thinking of. But, "there's no such thing!" does not seem like one of these to me.
     
  36. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,954
    It seems you are the only one losing your mind. Is it so difficult to understand that a computer can be designed or programmed to adapt and learn? The only difference between the human brain and a computer is the level of complexity.
     
    JoeStrout likes this.
  37. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    The point is... this is pointless.

    Here's what I mean:

    Discussion breakdown thus far:

    Monte Carlo, Trees and other nonsense - 25%
    Humans do not have souls - 5%
    Uses for non game computer learning - 25%
    Computers cannot truly learn - 35%
    Examples of computer learning in games - 5%***
    Ladies, ladies please - 5%

    ***of which I posted at least half of
     
  38. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Spoken like a real trekkie.
     
  39. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    "That's good for me, since it means I can do things you can't (or won't) do."

    Yeah. Sure.
     
  40. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Most of what you're describing has occurred to me organically as a result of problem solving, Joe. Unlike a computer, I can make mistakes. I can learn from these accidents by evaluating what things do to benefit me. Sometimes it takes years for something to become useful, etc. My brain is not an advanced computer; it's more like the entire internet, with distributed terminals all over my body sending and receiving data. Nobody instructs me to learn, I learn. I program the computer because I see a problem and solve it, not always in the same way or in a direct method.

    But all this mumbo jumbo is so far from practical that I can't even see its real relevance. If your goal is to learn everything and discuss everything imaginable and then be a master developer without the headaches, the swearing under your breath and, oh yes, the organic process of learning that humans are uniquely capable of... it won't happen.

    You want to believe, perhaps, that all of this computer learning talk is accomplishing something but nobody but me and a couple of others are even relating it to real examples.
     
  41. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,954
    You know it. Amusingly enough though I don't agree with the concept of evolution. I do view our brains as being designed and implemented by someone. Whether by an omnipotent God or a college student playing a highly advanced Dwarf Fortress is irrelevant to this discussion.

    Though it might explain the flood in the Bible - "Crap, I flooded my fortress AGAIN!"
     
    RJ-MacReady and AndrewGrayGames like this.
  42. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
  43. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    This thread has devolved into bickering. It looks like I might have to keep my eye on the Game Design forum - which is a shame. It should be a forum dedicated to discussing how to design things, and game concepts. It should never be about trying to change someone else's point of view.

    Locked because of bickering. In future this will lead to warnings, and possibly an eventual ban for people who persist in attacking others or belittling their opinions. Attack the design, not the person, and provide proof if doing so, or at the very least - a very good reason.

    I don't want to see "because I know best" because most people really don't. But a good reason or some facts will go a long way to keeping a thread clean and ultimately, useful.
     