
What is conscious AI?

Discussion in 'General Discussion' started by yoonitee, Jan 5, 2015.

  1. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
    Often you want this for the game to be fun.
E.g. in Wolfenstein: The New Order, enemies ignore it if you kill a buddy near them, or if dead buddies are lying around them (if you do it silently). Stealth play would be much harder otherwise. For me this also breaks immersion a little, but I'm not sure it would be as fun otherwise. In many games, more realistic enemies would not be fun.

Or watch the video about the Hearthstone AI; they want to create one that is easy to beat, not one that crushes most humans:
    http://www.gamasutra.com/view/news/224101/Video_Building_the_AI_for_Hearthstone.php
     
  2. derf

    derf

    Joined:
    Aug 14, 2011
    Posts:
    354
Again though, that depends on what gameplay you're going for. The AI of the enemies can be as smart or as dumb as you want. My original point still stands: if you want your enemies to behave "smarter", it takes a large amount of planning and development of the logic and behavior, whether you want a basic AI system or an elite one.
     
    GarBenjamin likes this.
  3. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
Perhaps we underestimate the importance of culture in AI. For example, some tribes are perfectly happy with 2 or 3 names for colours. A baby might call any small animal on 4 legs a "cat". It is our culture that makes us correct the child and tell it the difference between a cat and a dog. Or we might learn the difference ourselves, but only if it is important to know that cats scratch and dogs bite. So trying to get AIs to learn everything themselves is probably wrong. We really do need to teach AI a lot of things, either through many years of training and reinforcement, or by programming in the knowledge.
A good AI will probably consist of a lot of pre-programmed knowledge and the ability to disregard this knowledge if it learns new knowledge that is contradictory.
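As a rough sketch of what I mean (the facts, rates and thresholds here are all invented, just to illustrate):

Code (JavaScript):
// A toy store of pre-programmed knowledge that observation can overrule.
var knowledge = {
    "cats scratch": { belief: 0.9, innate: true },
    "dogs bite":    { belief: 0.9, innate: true }
};

function observe(fact, supports) {
    var k = knowledge[fact] || (knowledge[fact] = { belief: 0.5, innate: false });
    // Contradictory evidence erodes even innate knowledge, just more slowly.
    var rate = k.innate ? 0.02 : 0.1;
    k.belief = Math.min(1, Math.max(0, k.belief + (supports ? rate : -rate)));
}

observe("cats scratch", false);  // enough of these and the "innate" rule fades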

    Without the words for "cat" and "dog", it is probably hard for an AI to group all things cat-like (claws, meowing) and all things dog-like (woofing, wet nose), into separate concepts. The AI needs something to "pin" these abstract ideas on.

    Can an ape know the difference between a cat and a dog? What if you showed it a poodle and a hairless cat?
     
  4. BFGames

    BFGames

    Joined:
    Oct 2, 2012
    Posts:
    1,543
    You forget the most important thing, especially in games! How will your AI learn to trash talk other players?
     
    Kiwasi likes this.
  5. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
If you add more info like "claws, meowing" and "woofing, wet nose", finding that these are different concepts becomes a piece of cake compared to doing it from just looking at pixels.

Google's image recognition DNN learned by itself what a cat looks like, just from getting millions of pictures (without annotations, so it does not know which are cat images; see http://www.dailymail.co.uk/sciencet...--immediately-starts-watching-cat-videos.html, a little old (2012), this research is fast moving).
It just tries to find different things in images and comes up with the category "cat" by itself (of course without a name for it).
You can show that it developed a specific neuron that "glows" if it sees a cat. You can also ask it what a prototypical cat looks like.
For some tasks like these, DNNs beat the average human; e.g. could you differentiate an Eskimo dog from a Siberian husky? (But for this example you probably need some labelled images too.)
[Attached image: huskey_vs_eskimo_dog.jpg]
     
    Last edited: Jan 22, 2015
  6. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
What I wonder, though, is if you fed one of these systems human language, would it come up with the concept that an 'r' sound is different from an 'l' sound? For example, in Japanese there is no distinction. Or is it necessary for a human to say to a child "That's not a labbit, that's a rabbit."? Or the mother would respond favourably when a child says "red" but look bemused if the child said "led", hence reinforcing the two concepts.
Or perhaps the system could go the other way and think that there are 10 different types of 's', all slightly different.

They say that children who watch TV but have no human interaction find it harder to learn to speak. I think if an AI is fed nothing but Google Images but has no human interaction, it might make similar mistakes.
     
  7. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
I think for many things where a child needs feedback, a computer does not need that feedback online, because you can define in advance what is desired (for the simple stuff of today; maybe very different tomorrow). E.g. the computer can compare its audio output to that of a human. But you have to tell it in advance if it should speak like an average American or like a specific person. Then it can compare and learn by itself.

    Slightly OT but still related:
I don't know of examples involving sound (except speech recognition, but that is only a sub-problem of what we are talking about).

I just know the example where they fed a DNN 100-character-long English strings from Wikipedia. It gets a feeling for the language.

So it learns more or less how grammar works: it picks up the concepts of nouns and verbs and sentence construction, all from just getting strings. The sentences it produces mostly make no sense, but the grammar looks OK.

The system gets some characters and should predict the next one: if it says the next char is 'x' with p=0.05 and 'e' with p=0.6, then with p=0.05 you select 'x', tell it you selected 'x', and ask what the next char is now. With such a simple setup it seems not to learn what the text means, but it learns language basics. If you want "understanding" you have to look at stuff like IBM Watson.
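A minimal sketch of that sampling loop (not their code; the "model" here is a stub standing in for the trained network):

Code (JavaScript):
// Sample the next character in proportion to the predicted probabilities.
function sampleNextChar(probs) {          // probs: { e: 0.6, x: 0.05, ... }
    var r = Math.random();
    var cumulative = 0;
    for (var ch in probs) {
        cumulative += probs[ch];
        if (r < cumulative) return ch;    // 'x' gets picked with p = 0.05
    }
    return " ";                           // fallback for rounding error
}

// Stub standing in for the trained DNN's next-character distribution.
var model = {
    predict: function (text) { return { e: 0.6, x: 0.05, " ": 0.35 }; }
};

// Feed each sampled character back in and ask for the next one.
var text = "The ";
for (var i = 0; i < 20; i++) {
    text += sampleNextChar(model.predict(text));
}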

And if it comes up with "new" words, they look like plausible English words. E.g. they look like a verb if a verb is needed, or a noun if a noun is needed.
     
    Last edited: Jan 22, 2015
  8. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
I get these probability things: given 5 words, what is the most likely 6th word, etc. (Or given the first 30 letters, what is the next one.) But I think this misses out on things like German, where you put your verb right at the end; the verb doesn't really depend on the previous 10 words but only on the 11th one before. Therefore the system has to have some way to have a wildcard like "Ich habe * gesehen.", where * could be any long sentence of what you've seen.

I'm not really sure how a probabilistic machine can come up with the rules of grammar. Indeed, Noam Chomsky says that we must have an inbuilt grammar because learning grammar is impossible. (Although some people disagree nowadays.) If an AI learns enough sentences it could predict the next word in 99% of sentences, but unless it knows grammar it would get completely new sentences wrong.

For example, it may never see a sentence concerning turquoise donkeys. So if it sees the words "turquoise" and "donkey" together, it would assume the sentence is ungrammatical? But because humans know that turquoise is a colour and a donkey is an animal, it is fine. And we know that "turquoise thoughts" is grammatical but doesn't make sense.

I had a go at making one of these systems that read "Treasure Island". Then I made it store, for the previous x letters, the probability of each next letter. If x is small you just get gobbledegook. If x is about 10 you get some sentences which look realistic but don't make sense. And if x is about 100 you just get back Treasure Island.

Here is the JavaScript for it:

Code (JavaScript):
<input type="file" id="files" name="files[]" multiple />
<output id="list"></output>
Depth:<input id="depthValue" value="10"/>
<div id="info"></div><br/>
<div id="message"></div>
<button onclick="createtext()">Create some text</button>
<button onclick="reset()">Reset</button>
<script>
var fr;
var mainText = "";
var dict = {count: 0};  // prefix tree of character counts, depth+1 levels deep
var depth = 10;
var newText = "";
var myinterval;

var message = document.getElementById("message");

function receivedText() {
    mainText = fr.result;
    depth = document.getElementById("depthValue").value * 1;
    message.innerHTML += "reading file..";
    AIread();
}

function handleFileSelect(evt) {
    var files = evt.target.files; // FileList object
    fr = new FileReader();
    fr.onload = receivedText;
    fr.readAsText(files[0]);
}

document.getElementById('files').addEventListener('change', handleFileSelect, false);

// Build the prefix tree: for every position in the text, count the
// (depth+1)-character sequence that starts there.
function AIread() {
    var n = 0;  // number of (depth+1)-character sequences seen exactly once
    for (var i = 0; i < mainText.length - depth; i++) {
        var d = dict;
        var c = mainText[i];
        if (!d[c]) d[c] = {count: 0};
        d[c].count++;

        for (var j = 0; j < depth; j++) {
            d = d[c];
            c = mainText[i + j + 1];
            if (!d[c]) {
                d[c] = {count: 1};
                if (j == depth - 1) n++;
            } else {
                d[c].count++;
                if (j == depth - 1 && d[c].count == 2) n--;
            }
        }
    }
    var p = n * 1.0 / (mainText.length - depth);
    document.getElementById("info").innerHTML = "" + 100 * (1 - p) + "% randomness (" + n + " records/" + mainText.length + ") change every " + 1 / (1 - p) + " letters";
    message.innerHTML += "finished";
}

function reset() {
    newText = "";
    clearInterval(myinterval);
}

// Seed the output with the first `depth` characters of the source text,
// then emit one character per timer tick.
function createtext() {
    newText = mainText.substring(0, depth);
    message.innerHTML = newText;
    myinterval = setInterval(dotext, 1);
}

// Walk the tree using the last `depth` characters as context, then pick
// the next character with probability proportional to its count.
function dotext() {
    var d = dict;
    for (var i = depth; i >= 1; i--) {
        var c = newText[newText.length - i];
        d = d[c];
        if (!d) return;
    }
    var R = Math.floor(Math.random() * d.count);
    var N = 0;
    var found = false;
    for (var ele in d) {
        if (ele != "count") {
            N += d[ele].count;
            if (R < N) {
                newText += ele;
                found = true;
                message.innerText = newText;
                break;
            }
        }
    }
    if (!found) {
        newText += "?";
    }
}
</script>
It takes about 10 seconds to read Treasure Island as a txt file in Chrome. If you load in, say, Treasure Island AND Alice in Wonderland (looking at the previous 10 letters) you get some strange sentences such as:

     
    Last edited: Jan 22, 2015
  9. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
It predicts the next character, and by that builds words.
As said, what it says does not make sense. But it learned some basic grammar and words. It does not recreate sentences that it has seen; it creates random new ones (without sense).

E.g., if you start with "The meaning of life is":
"The meaning of life is the tradition of the ancient human reproduction: it is less favorable to the good boy for when to remove her bigger."
You get a different result each time because you do not always select the most probable character, but any character with its given probability.

Yes, this example has errors. But much of what it outputs has locally correct grammar. All they did was feed some Wikipedia to a DNN to predict the next character after some given characters. If you want it perfect, or to make sense, you have to do something smarter.
It does not reproduce Wikipedia (then it would say 42). It does not have much storage capacity, so it has to learn some rules; storing the raw text would not be possible.

But yes, if by bad luck you select multiple very unlikely characters in a row, you probably get a non-word. But you need very bad luck for that to happen for multiple characters in a row.
Most non-words they showed look like English words to a non-native speaker:
     
    Last edited: Jan 22, 2015
  10. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
I'm intrigued by the claims in this paper, which says the neural network correctly closed brackets (...) and quote marks "...". But the details are very vague, hence I'm a bit suspicious of it. If that were really true, surely it would be an amazing breakthrough! But as I can't find the details anywhere, I think maybe they are attributing intelligence to a machine which does not really have a rule to close brackets. If I could find the details of this I really would be amazed!

    Also, surely to train a neural network don't you also have to feed it things which aren't proper sentences so it knows the difference?
     
    Last edited: Jan 23, 2015
  11. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
    Yes, this paper is about the experiment I mean.

At a presentation I saw, they said it closes single brackets; if it opens 2 it mostly closes the inner one fast. But if there are more than 2 open it does not always close the correct amount.

But for me the closing-brackets thing is much less impressive than the fact that it learns basic grammar (concepts like verbs and nouns, and how to use them) just from looking at strings, without any logic to do anything.

No, they did not do that. For certain things it would probably do better then (e.g. the bracket thing could get much better if you showed it what you want and what you don't want). But this was just an experiment to see what happens if you just feed it Wikipedia strings.

E.g. you could train one ANN just to check that brackets and quotes are closed. That is very simple and it would always get it right (they use a recurrent neural network (RNN); these have "memory", so it would just learn to count the brackets). Then one that knows nouns and verbs. And one that knows grammar. And then a final one that learns how to combine those other ones.
Such a system would do much better than that stupid one, but again, they mostly just wanted to see what happens if you do such a stupid thing.

    So just a simple experiment to show the power of ANNs. The Atari game playing DNN blew my mind much more than this.

    Edit:
Also if you like such things, watch a video about IBM Watson, e.g. the third and final match against human Jeopardy champions. I also remember one interesting discussion explaining which puzzles are very hard for such AI and why; I don't remember where. One example is:
(Taken from here http://www.nytimes.com/2012/03/17/t...with-humans-in-crossword-tournament.html?_r=0, did not read it. It's about DOCTOR FILL, but Watson seems cooler.
Watson: http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html)
     
    Last edited: Jan 23, 2015
  12. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    2,991
    There is a huge difference between "game AI" and "academic AI". Game AI is about giving the illusion of acting like a living being. Academic AI is about developing ways to actually think like a living being. So the real question I think the OP is asking is "When will games benefit from academic AI instead of game AI?"

Games using linear storytelling and/or branched storytelling would probably seldom benefit from academic AI. Games that use modular storytelling might some day benefit from academic AI, but only as long as the academic AI did not impede performance. In a game with modular storytelling, perhaps academic AI could be used to programmatically generate content (levels, story content, NPCs, etc.) for gamers based on the content specific gamers seem to prefer.

    Additionally, academic AI could theoretically be used to tweak game AI to keep the player's interest level maximized. For example, maybe academic AI could be used to tweak the game AI to constantly generate an epic battle that the specific gamer could just barely win. Not just to control the number of enemy NPCs spawned, but to come up with strategies that keep the player engaged without totally overwhelming the player. There are already game AI methods (like utility and planning) that address these things up to a point. Perhaps academic AI could be applied as a tool for dynamically generating new planning code based on previous gameplay with the player.
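For example, a crude sketch of that difficulty dial (all names and thresholds here are invented for illustration):

Code (JavaScript):
// Nudge a 0..1 challenge value toward "barely winnable" after each battle.
var challenge = 0.5;                       // 0 = trivial, 1 = overwhelming

function onBattleEnd(playerWon, healthLeft) {
    if (playerWon && healthLeft > 0.5) challenge += 0.05;   // too easy
    else if (!playerWon)               challenge -= 0.10;   // too hard
    challenge = Math.min(1, Math.max(0, challenge));
}

// Turn the dial into concrete decisions: strategies as well as counts.
function planNextWave() {
    return {
        enemyCount:  Math.round(2 + challenge * 8),
        useFlanking: challenge > 0.6,
        accuracy:    0.3 + challenge * 0.5
    };
}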
     
    GarBenjamin likes this.
  13. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
I have been thinking about it some more. Clearly, since things like crows can problem-solve without language, the ability to think of actions and consequences must have evolved before language. The structure in the brain in which these actions and consequences are stored became translated into the "grammar" of language. Thus in effect we had grammar even before we had language; language must have come afterwards to express what was going on in our brain to other humans.

    So in our brain we have structures like:

    ($1 attacks $2 and $1>$2) ==> ($2 gets hurt)

a lot of which will be innate, since it is probably hard-wired that we don't attack other humans or things bigger than us.

    Other things might be physical rules like

    ($1 push away $2)==>($2 goes backwards)

If evolution hasn't supplied us with that rule in our head then it isn't doing its job properly! We probably have some coarse way of ordering things around us in 3D space in our brains too.

The variables such as $1 might refer to particular things like "the big lion that was here yesterday". So we take this little memory and insert it into our "attack" rule to decide if we should attack the lion. All done without language. But then language comes along. Because we already had the "grammar", which is really the way the brain represents actions and consequences, language is a thin layer on top that translates these into speech (or sign language).

    Some people say that what makes us human is the ability to have two concepts $1 and $2 and form a third concept $1+$2. e.g. $1=sheep , $2=red things, $1+$2=red sheep. So with this rule and the thin layer of language on top should be enough to give human-like AI. Once our brain can organise things in hierarchies of smaller concepts this can be extended indefinitely. And the small layer of language on top can create hierarchical grammar in the same way: "He said that she said that he saw...." etc.

    So I think what we would need for human-level AI that separates us from the apes is:
    • rule like structures in the brain for common sense things mainly Actions-->Consequences
    • ability to form and store new rules from smaller rules.
    • ability to translate these rules into language.
    • The ability to empathise (apply rules about yourself to others)
    And what is dreaming? That is just taking a rule like ($1 attacks $2 and $1>$2) ==> ($2 gets hurt) and applying it to things you may have experienced that day. And taking the consequence and seeing where it leads. Visualising these rules and working out the consequences.
    In a similar way thinking is just applying these rules in order. Consequences that are uncertain lead to stress which leads to you thinking more about that rule.

I don't think pure neural networks are enough to have these rule-like structures in the brain. Otherwise why can't apes speak? There must be some special brain architecture involved, and we can deduce this architecture by examining human grammar. Therefore grammar can't be learned by a neural network; it must be programmed in. The special structures for grammar and logic in our brain evolved thousands of years ago. These structures in the brain also impact the structures of pleasing sounds we call music. A neural network with the added rules of grammar could probably learn, say, French grammar and vocabulary by matching natural language against its inbuilt grammar structure and giving its best guess as a probability. That's my guess anyway.

How would it work? I think when you think about a particular rule, it must pass into the working and visual memory, and at the same time the language part of the brain, which is also connected to the working and visual memory, finds a close match with a sequence of words, respecting also the cultural differences in word order. There is probably also some ancient part of the brain which divides concepts into male and female, which would explain languages with gender.

Also, don't forget we don't just say exactly what we're thinking! Since we can also think about the action-->consequence scenario applied to the words we say, we can use deception, small talk and all manner of other things. But all these things are governed by the hard-wired grammatical structures. For example, to lie we have to imagine a false situation and then describe it. So imagination comes before language, which is only a thin layer on top. (Also, when lying we often look up or away or stare, because we are engaging our visual system imagining a false idea!)

    Programmatically, perhaps, we could model this using some kind of formal language to model the thought processes in the brain with a natural language on top in which a neural network translates the formal language into natural language. Then passing this back through the listening mechanism to imagine the consequences of hearing these words! Complicated!
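A toy version of those action-->consequence rules, with everything invented just to illustrate (a real system would need far more):

Code (JavaScript):
// ($1 attacks $2 and $1 > $2) ==> ($2 gets hurt), as data plus functions.
var rules = [
    {
        action: "attacks",
        when: function (a, b) { return a.strength > b.strength; },
        then: function (a, b) { b.hurt = true; return b.name + " gets hurt"; }
    },
    {
        action: "pushes away",
        when: function (a, b) { return true; },
        then: function (a, b) { b.x -= 1; return b.name + " goes backwards"; }
    }
];

// "Dreaming": bind remembered things into random rules and see where they lead.
function dream(memory) {
    var a = memory[Math.floor(Math.random() * memory.length)];
    var b = memory[Math.floor(Math.random() * memory.length)];
    var rule = rules[Math.floor(Math.random() * rules.length)];
    if (a !== b && rule.when(a, b)) {
        return a.name + " " + rule.action + " " + b.name + ": " + rule.then(a, b);
    }
    return null;  // this dream led nowhere, try again
}

var memory = [
    { name: "me", strength: 5, x: 0 },
    { name: "the big lion from yesterday", strength: 9, x: 3 }
];
for (var i = 0; i < 5; i++) console.log(dream(memory));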

    I would like to find (or set up) a forum for AI where interested people (esp. programmers) could talk about and contribute to this area.

    PS

    It would be fun to create some kind of system that dreams. A bit like the Sims. So that it would have a set of actions like walk, attack, etc. and then dream of what would happen if it did these things! And you could watch it dreaming as a 3D movie. It would dream of lots of different situations, then you could choose which dream it would carry out. It could be interesting for two players. It would learn by experience so the next time it dreamt it would have more realistic dreams.
     
    Last edited: Jan 25, 2015
  14. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Saw an interesting article where they realised that even after years of training chimps to do sign language the chimps never asked a question.

    They speculated that maybe the need/ability to ask questions was one of the defining things between human level intelligence and our nearest cousins.

    Maybe the simple act of asking what consciousness is or questioning the Universe is where you pass the human level of consciousness.

    So if your AI routine starts asking questions you didn't program into it to expand its knowledge base then maybe you have a conscious AI.
     
    yoonitee likes this.
  15. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
Interesting. You could be right. I think the first question humans ask is: "Where mummy?", which seems to imply that they have some kind of model of the world. Although we could be reading too much into it. They could just be copying what you say when you play "peek-a-boo": "Where daddy?" "There he is!"
     
  16. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    It is now officially on the suggestion board for 'things to do 6 weeks from now' :) It might not survive, but it's an option.
     
    ChrisSch likes this.
  17. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
Here. I put my sentence generator online. It proves that you can make something that seems intelligent without it having any intelligence at all. Try loading in a txt file. You can get lots of free txt files of stories from Project Gutenberg.
    Feel free to look at the source.
     
  18. ChrisSch

    ChrisSch

    Joined:
    Feb 15, 2013
    Posts:
    763
    Fingers crossed it survives. :D
     
  19. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718


    Conscious = Aware of one's own existence, sensations, thoughts, surroundings, etc.

To be conscious, we would say that a thing is aware of itself or "self-aware". This means that a human realizes that he is a human, and that he can stop doing human things and instead study his own behavior as an objective observer, learning from his own tendencies to create an evolved, enlightened perspective and a new set of behaviors for himself. Most people aren't capable of doing that; they are entirely wrapped up in the "here, now, this" of their lives and stuck in a state of action-reaction-feedback with the world around them, desperately trying to fulfill their needs as they arise.

    A machine is a human invention that consists of metal, plastic, etc. and is essentially a very high abstraction built upon the concept of electronic switches that are either "open" or "closed" (electronic terminology) and can be used in conjunction with logic gates and storage mediums to create a persistent method of storing and processing data.

    Intelligence = capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.

Most significant to intelligence is the gathering of new information and assimilating it into knowledge. Computers are, again, highly sophisticated calculators with a lot of memory; however, they lack the capacity for reasoning, comprehending or finding any significance in the data they process because, again, they are literally chunks of metal and plastic with electricity flowing through them.

Artificial intelligence is a magic trick, an illusion. We use the computer to make it seem like there's intelligence there. These algorithms can be pretty damn sophisticated. However, the machine cannot truly be conscious relative to a human being who is, of course, actually conscious.

    What is conscious AI? An oxymoron.
     
  20. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
What if consciousness is a user illusion, your brain's way of keeping 'you' busy while it does more important stuff?

Take any game that you enjoy playing: it's just pixels and sound, but you trick yourself into believing that you are taking part in that world. What if your consciousness is a game that your brain is playing?

Therefore conscious AI would be any AI that thinks it's playing your game in your world, or, if it's really clever, realises that the world it's in may not be real but a game you made.

    But then are you even conscious and are you in a game?
     
  21. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
I think when a robot stops in mid-sentence because it realises what it is saying is hurtful, or is about to uncover its web of lies, or simply because it has thought of a better way of putting things, then that kind of robot I would class as conscious.

    Question is how would you build such a thing?
     
  22. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    2,991
Well, you would start by studying "academic AI" instead of "game AI". But even in the field of academic AI, I don't think anybody has completely solved how to create a fully conscious computer-based intelligence system. You can read about the Turing test for some interesting background on human-like communication with an AI, but even that test is criticized as imperfect.
    http://en.wikipedia.org/wiki/Turing_test

    As for game AI, there are plenty of hybrid game AI designs that can make it feel like you are playing against (or with) a human, but the goal in game AI is the illusion rather than actual consciousness. In fact, if somebody did manage to design a conscious AI in a game, then the AI would figure out that it is part of a game and then the game would break the fourth wall in storytelling. That would basically ruin the game for the person playing it. Imagine if an NPC looked directly at the camera and said "I refuse to continue playing because I know my character dies in the next scene."
     
  23. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
Consciousness isn't magic and life isn't special. I doubt I'll live to see conscious AI, but I do think it'll happen. The human brain is a mess and is made of many small, terrible processors. We see the benefit of multiple small systems vs systems of fewer, higher quality units. The iPhone lens improved dramatically when Apple replaced a single high quality lens with multiple low quality lenses. I think the answer to AI will come from this trend too. It would even let the AI's brain be closer in structure to ours :D

If a robot has a pressure sensor in its arm and recoils and says 'ow' when an applied pressure exceeds the threshold predetermined to damage the materials that make up the arm, is it any different from when you do the same?

    It's not up to me sadly, but the only thing that'll change is whether it's started a few weeks from now or a few months from now.
     
    ChrisSch likes this.
  24. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    I've got a new idea how the brain works. Prepare to be astounded. ;)

The brain (at least the linguistic part) works in the same way as a random access computer. The difference is that whereas addresses in a CPU are stored as a binary number 01010101001 or a hex number 0x213A132 if you please, the "address" of a neuron or group of neurons is given by a sequence of phonemes, e.g. the cucumber neuron is referenced by kyu-kum-ber. Each neuron has this address imprinted in it. When you see a cucumber your visual system finds a match and sends the "address" kyu-kum-ber to the working memory. The working memory is just like the registers in a CPU. The working memory can manipulate these addresses just like a CPU. So since there are about 40 phonemes, the brain can be modelled as a base-64 computer. (It's not really important how these numbers are represented as electrical signals.)

When the working memory calls an address, it first sends out "kyu", which activates all neurons starting with this, then like punch cards it narrows down the search until one neuron is left. We can think of these addresses much like IP addresses, and the brain just like the internet.

    Each neuron can also store addresses of other neurons to make virtual links. Thus the cucumber neuron might also store the address of the gr-een neuron and the fr-uit neuron.

    This explains why human language is made of phonemes and not just a random sound for each word.

    The actual physical links between neurons are irrelevant just as the physical links in the internet are irrelevant to how it works. Most of the brain is just a random collection of neurons which can store new words.
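A toy version of that addressing scheme (all data invented, just to illustrate):

Code (JavaScript):
// Concepts stored in a trie keyed by phoneme sequence. Sending "kyu"
// activates every kyu-* "neuron"; each further phoneme narrows the set.
var brain = {};

function store(phonemes, concept) {
    var node = brain;
    phonemes.forEach(function (p) { node = node[p] || (node[p] = {}); });
    node.concept = concept;
}

function recall(phonemes) {
    var node = brain;
    for (var i = 0; i < phonemes.length && node; i++) {
        node = node[phonemes[i]];
    }
    return node && node.concept;
}

// The cucumber "neuron" also stores the addresses of gr-een and fr-uit.
store(["kyu", "kum", "ber"], { links: [["gr", "een"], ["fr", "uit"]] });
recall(["kyu", "kum", "ber"]);   // narrows kyu -> kyu-kum -> kyu-kum-ber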
     
  25. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
If you read how artificial neural nets work and what they achieve, you will see that the opposite is true.

These connections and their "weighting" are the most important thing; they are what stores information, and what makes the brain do something meaningful instead of something random.
     
  26. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
Yes, the neural network model works OK for things like image recognition - a limited part of the brain. (Neural networks are simply fancy ways to do curve fitting to data.) But it doesn't explain how we can manipulate information in our working memory. We need to transfer the addresses of different concepts into the working memory in order to manipulate them. I think language tells us that this "code", which is universal to all humans, is based on a limited number of phonemes. And this is how we store and manipulate concepts. We must also have an inbuilt internal grammar and knowledge of logical concepts such as AND, OR, NOT, ALL, NONE.

    Adding to this that the only successful known model of computation is the random access computer surely tells us something.

    If there is another model of working memory available I'd like to see it.

I think we'll find that there is nothing magical about the brain and that its basic hardware is a random access computer with a lot of memory and a few neural networks bolted on for parallel processing (which we can think of as the graphics cards of the brain). And then a few specific systems that are good at modelling 3D scenes to simulate our environment. The brain is then just like a chess computer searching for the next best move.

Even animals like crows may encode concepts in this phoneme code but just lack the ability to translate the code into speech. Can you prove that a crow doesn't think in words?

Perhaps the key to creating AI is to design a natural-language, self-referential programming language.
     
    Last edited: Feb 1, 2015
  27. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @yoonitee maybe the difference is that computers can only do things sequentially but in our brains tons of neurons activate simultaneously as we respond to different inputs?

As for the language bit you mentioned there, who's that for? Very likely, that's for us. Computers should start with what they're good at: numbers! If a computer can reference quantities, express them, and relate them to other quantities, it'll be more meaningful than having it mimic some kind of human-readable speech. That's actually one of the focuses for my cruel little pet :D Instead of piling on artificial layers to make it speak and interact with the user with too much fake intelligence, it will express basic quantities for the user to interpret.

    You see mood decrease as hunger increases. You provide food. You see hunger decrease and mood increase. What's that translate to? A natural reaction of being uncomfortable and hungry and then an expression of values that essentially is gratitude.

    I know I've just described the sims lol, but that's not how things will turn out exactly :p
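Roughly the loop I have in mind (a sketch, numbers invented):

Code (JavaScript):
// The agent exposes raw quantities; the user reads meaning into them.
var pet = { hunger: 0.2, mood: 0.8 };

function tick() {
    pet.hunger = Math.min(1, pet.hunger + 0.01);          // hunger creeps up
    pet.mood   = Math.max(0, pet.mood - pet.hunger * 0.02);
}

function feed() {
    pet.hunger = Math.max(0, pet.hunger - 0.5);
    pet.mood   = Math.min(1, pet.mood + 0.3);             // reads as "gratitude"
}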
     
  28. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    Do we think in parallel?? I'm not so sure. When I imagine going to the shops, I don't think of several routes at once. I think about one at a time. Or if I plan a route through a maze. Or if I imagine what I am going to say to my friend. These higher level things seem quite linear to me.

True, when I look at an image I take in many pixels at once to determine it's a cat. But after that concept has been turned into a word, I think about these words linearly. Just like we have a webcam or graphics card which does things in parallel, but ultimately the data is fed into a CPU which follows instructions sequentially.

    I think AI at the moment is too concerned with image processing which is like thinking the most important thing in a computer is the graphics card.

    I think we need language even if a computer just thinks about math. It needs words like "plus", "sum", "vector field", "dot product", "integral", and needs to create new words as it develops new concepts.
     
  29. galent

    galent

    Joined:
    Jan 7, 2008
    Posts:
    1,078
    Using just that description, I wonder how long it'd take a virus to excel as a lawyer?

    but I digress.

    To the OP, there are quite a variety of AI types used in games, some very sophisticated. AI designs that are implemented to create emergent behavior have been observed in the wild as stepping "out of role/character" in response to learned behavior (See the Radiant AI in Elder Scrolls 4: Oblivion as an example).

In games, intelligent-looking AI with serious mental flaws is generally considered best (I'm going to leave the thousands of potential jokes relating to the real world inferred by that last sentence alone... if for no other reason than that I can't pick a "best one" :) ). Aside from the ethics issues... which the real world has demonstrated can be... lax at times. Games are entertainment. It is likely a conscious AI in most FPS games would avoid the bloodthirsty player that doesn't seem to die like "normal things should", particularly after witnessing a couple thousand other "beings" like itself or stronger get ground up. Enemies that beg for mercy (not scripted) are less fun, and more ethically/morally challenging to play against, than mindless hordes that act smart.

Even non-conscious but very efficient AIs are not much fun to play against either. Even if they are limited, by both design and code, to "realistic" environmental input, they are fast to learn, faster to respond, and much more efficient than humans at ... well ... pretty much anything. Players like to win (eventually).

As a side note, given a Terminator scenario, without human script writers (and bad acting), even with only a single conscious AI and multitudes of reasonably smart AIs (enough to track and hunt humans with the skill of some video game characters), the machines will clear up that "human infestation" issue mentioned earlier in a frighteningly short period of time. What'd you expect? Like games, movies and books are entertainment... humans don't like stories that accurately paint our swift demise in the face of, oh, pretty much any of A) a self-sustaining intelligence capable of storing and processing the sum of human knowledge in real time, B) any alien race that can reach us here on earth, or C) an airborne virus with an R-nought north of 50 and a survival rate of 0.

    Cheers,

    Galen
     
  30. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
I wasn't thinking of that. For high-level things like that, they are sequential because it makes sense to us that way. I would argue that the underlying processes, which create the illusion that those higher-level thoughts are simple, are multiple systems running in parallel. Sometimes unintentionally / more than necessary, because of signal leaks between neurons :p

AI research is probably focused on image recognition because it's something that is so easy for humans to do. Consider scenarios like: if you and a computer were looking for something in an image, you'd find it and the computer would eventually find it. Then, when looking at the same photo from a slightly different angle, you'd find the shape again instantly since it didn't really move, while the computer would need to process the entire image again.

    Maybe the hope is that having computers recognize things as easily as we do will give rise to smart cameras that can replace security guards lol.
     
  31. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Let me try to introduce a concept. Underlying all perceived complexity is utter simplicity, once you fully understand the system. High level abstraction can produce new systems that incorporate existing systems, i.e. the car couldn't be invented until the wheel was fully understood and could just be drawn into a diagram as "a wheel". Then the engineer could move on with his life.

    Here, you're trying to design a car based on the design of the wheel itself, and you're wrestling with how to expand the design of the wheel so that it can act as a car...

    Your high level concept is what is flawed, and that's where the work needs to be done.

    Can a computer act as a human brain acts?

    No. Because the human brain would have to be able to completely understand the human brain in order to replicate its own function. As it stands... we still rely on computers to help us keep lines straight on paper.

    It's not that computers are incapable of functioning like we do.

    It's that we can't understand how we work in the first place.

    Rule #1 of software engineering.... know what you're trying to build.

    So this is still a neuroscience issue, gonna have to wait for those guys to finish their work before you start coding up a solution in JavaScript.
     
  32. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
The problem with all this vision stuff is precisely that it doesn't do it the way humans do.

    This is how I recognise things:

    I'm in town, I see something red in the distance moving... it's probably a bus.
    I'm in the forest, I see something green.. it's probably a leaf. I see something shiny, I scan the area with my eyes moving my eyes along the shiny thing... it could be a snake! I prod it with a stick. No it's just an old rope.

I see someone I think I know. But they have different coloured hair. Is my friend likely to dye their hair? Not likely. It probably isn't him.

    I see a faded letter on a wall, I move my eyes around the letter seeing if I can trace a path. It seems to be the letter S.

I try to find my car. I don't remember the scene by looking, but I remember parking next to a green tractor. There's something big and green. It has big wheels. It must be the tractor. Look, there's my car.

    I see an object that I've never seen before. It is on someone's head. I deduce it is a new type of hat.

    We use all different ways to deduce what something is. Constantly moving our eyes around, checking for features, thinking about the context, etc. It is an interactive process.
     
  33. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
What you're describing is a database that dynamically creates, restructures and re-prioritizes associations between tables based on shape, color, sound... a multidimensional input array coupled with a heuristic pre-processor that filters said input based on its likely relevance, due to a limited capacity to access the internal database and make useful decisions based on the data within. On top of all of this there sits a little tyrant known as consciousness... which dictates what to do with this information.

    A human brain is more like the entire internet than one computer.

    So, your AI is unlikely to achieve consciousness anytime soon.

    You would be better off just trying random experiments to see if anything neat results from it.
     
    Last edited: Feb 3, 2015
  34. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
Agreed. I wish I understood hardware better so I could do the obvious thing and build a computer with a bunch of low-quality CPUs. The iPhone camera improved drastically by going from a single good lens to multiple bad lenses. The PS3 owned its generation in raw power (and even functioned as a supercomputer for Linux users) because of the Cell architecture and having multiple processors. We need to follow this trend and build a computer out of a bunch of interconnected computers!

    But they all need to be bad, so we can simulate the brain :D

    The internet is a great example because of all of the junk going on at once. Twitter and other news spam constantly feed into and update google's search suggestions. There was even that funny "she invented" being suggested as "he invented" issue some time back.
     
  35. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
You probably don't need to know the hardware that much. I'm sure with 100 iPhones, each with their own IP address, you could build something that distributed data amongst themselves. That's what the internet of things is presumably about: each device acting as a sensor to some global supercomputer. :O

@MacReady Yeah, unfortunately, that "little bit of consciousness" that controls everything is the bit that no-one knows how to do! (As far as I know!) If we knew that bit we could just add more modules and the consciousness bit could use them as it pleased.

I have a feeling that this consciousness must only make very few choices. These choices are provided for it unconsciously. All the consciousness has to do is pick one out of, say, 10 choices based on its emotional state. The subconscious then searches out new choices. Or maybe we have several layers of consciousness, like a binary tree. Or consciousness is like the prime minister: he is responsible for listening to all the ideas of the ministers, then making a decision based on which ministers are most reliable, etc.

When we think, perhaps the unconscious is presenting the consciousness with several suggestions of sentences and then we pick one which seems reasonable. I'm sure by the end of this thread we will have cracked it!
     
    Last edited: Feb 3, 2015
  36. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    Again, not to belabor the point, we may not be able to create consciousness at all, ever. It's beyond our abilities. Just like, you can't create the game Minecraft in Minecraft. People have created calculators in it. Why not computers? Why not programming languages? Why not compilers, etc... at some point it's not achievable. You'd need God-like knowledge to create conscious AI. And by that point, btw, it's not artificial anymore... just intelligence.
     
  37. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
But it IS possible to recreate Minecraft in Minecraft. You can create a universal Turing machine in redstone, and for the screen you can lay out an 800x600 array of blocks. With a fast enough computer and enough memory you could program a graphics engine and the Minecraft rules into the Turing machine.

I mean, it would be a pointless thing to do and very slow on today's computers, but I've no doubt someone will do it in a few years' time!

    You don't need God-like knowledge to create AI. You just need to do experiments.
    For example, how fast do nerves work? That is easy to find out. Just wiggle your finger backwards and forwards as fast as you can. That gives a minimum. Same with language. We figured out nouns, verbs, adjectives etc. without being Gods.
     
    Last edited: Feb 4, 2015
  38. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    See? It's the same.
    You CAN'T do it. But you THINK it's simple.
     
  39. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
You can. You can prove that you can create a full computer in Minecraft. Nobody wants to do it, but it is possible for sure. Actually it would not even be that hard; e.g. you could create a VHDL-to-Minecraft compiler to translate arbitrary hardware designs into Minecraft.
     
  40. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    See, again... It's not about being smart enough to do it. Okay, so say you create (2)Minecraft in Minecraft. Can you create (3)Minecraft in (2)Minecraft? If not, you haven't recreated Minecraft at all.

    I'm just here to save you time.

    It's like when the alchemists of antiquity were trying to transmute lead to gold. Or how even today people try to build perpetual motion machines.

    Never works. Why? Something doesn't come from nothing, and things are all subject to entropy (and chaos... just for S***es and gigs) so you can't build the Starship Enterprise from the spare parts from the building of another Starship Enterprise.

    Again, it's nothing against you or anyone, it's just the nature of the beast.
     
    Last edited: Feb 5, 2015
  41. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
    If you can create (2)Minecraft, then you can create (3)Minecraft.

Some physicists disagree with you. E.g. with quantum physics you can create something from nothing; you can even explain the universe from "nothing" (meaning no time, space, matter or our concrete instantiation of physical laws (or causality, which would require time)).
    E.g. watch Lawrence M. Krauss "Universe from NOTHING!" (jump to 53:40 if you are lazy, no laws at 59:40):
     
    Last edited: Feb 5, 2015
  42. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
I think we're getting off topic, but if the Universe began at time T=0, I don't think it's correct to say "it came from nothing", since this implies there was a Universe at T<0 with nothing in it. However, just like there are no words before the first word of a book, there is no Universe before the first event at T=0. Well, in my opinion anyway!

If you start reading a book from the middle you might think that every word has a preceding word. But if you go back to the beginning of the book you will see that, no, there is a first word. But does that mean the book came from nothing?
     
  43. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
Modern physics is far from being intuitively understandable. T<0 makes no sense; it would mean before time exists, and there is no "before" before time exists. Watch the video to get some idea of what these guys deal with.
     
  44. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
I read an interesting article this morning, Rise of the Robots. It mentioned a bank (maybe banks) in Japan now has robots staffing their front counters. Bank tellers. It also mentioned that Google has stated robots will reach human levels of intelligence by 2029. I don't know how they arrived at that year other than someone came back in time and told them. I guess they may be looking at the data on progress toward that level and extrapolating enough information to come to that conclusion. But that reason is not nearly as cool as my time machine theory. ;)

    Anyway I thought someone here might find that interesting. Rise of the Robots.
     
  45. R-Lindsay

    R-Lindsay

    Joined:
    Aug 9, 2014
    Posts:
    287
I suppose one way to make such predictions will simply be to ask "given the rate of growth in processing power, how long before a computer can simulate an entire brain in real time?" Then add on a few extra years for good luck. And there you have it: the evolution of a new species. If we take the standard model of memory (psychology) we have sensory memory, working memory, and long-term memory. Working memory is particularly interesting: we can hold ~7 items in thought, around 2 seconds of sound in a phonological loop, and a certain amount of imagery in a visual-spatial "sketchpad". Now tweak those values by multiplying them by, I don't know, 1 million? 1 billion? Those robots are gonna leave us in the dust.
     
    GarBenjamin likes this.
  46. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    This is true until you realize it, then you start to examine your choices more closely and consider where they came from. And then you acknowledge the origin of those things and continually go deeper until you have an existential crisis :D

    It's more of a design problem. The human brain is running on 10-20 watts of power. I think the greatest benefit of these studies will be one day designing machines so efficiently (or inefficiently / messy, in the case of our brains) that they consume under 1% of the power they use now.

@GarBenjamin could be the same issue we had on the first page of this forum topic lol. There are a lot of definitions and ideas for what AI is, and the one people with a PhD in AI use is image recognition & pathfinding. Maybe those same people told Google that by 2029, AI will have the same potential as people [where the capabilities being considered are recognizing images in pictures and figuring out basic things from those pictures].
     
    GarBenjamin likes this.