What if I told you that an AI is learning battle tactics from a game, would you be afraid?

Discussion in 'General Discussion' started by Arowx, Nov 5, 2016.

  1. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    If this had happened 20 years ago it would have been a paradigm shift. All neural net training processes are similar to what the DeepMind developers did here. The only thing that is generalized is the training routine -- the "prior knowledge" (which is specific to each game) comes from that observation and trial-and-error process.

    Nobody tells us how to beat a certain boss because humans do have the ability to generalize based on experience. DeepMind does not. Only the learning routine itself is generalized. The node weights or whatever DeepMind uses that correspond to expertise in Breakout are going to be completely worthless for playing Pong, whereas a human will instantly recognize the similarities.

    The learning routine is the program; the observations create the data. Assuming a generalized program implies generalized data is like saying you're the next Norman Mailer because you both use MS Word.

    In AI circles this is part of a whole field of study called Knowledge Representation (KR). Bayesian probability is pretty popular right now because it helps evaluate the reliability of imperfect or uncertain information. In the AGI world, reconciliation is presented as being more about the truthfulness or trustworthiness of a given body of (supposed) facts.
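
    As a rough illustration of that last point, here is a minimal Bayes-rule sketch of weighing a claim from an imperfect source (all the numbers are made up for illustration, not taken from any real system):
    Code (Python):
    # Hedged sketch: Bayes' rule for weighing a report from an unreliable source.
    # All probabilities below are invented for illustration.
    prior = 0.50                  # prior belief that the claim is true
    p_report_given_true = 0.70    # source reports the claim when it is true
    p_report_given_false = 0.20   # source reports the claim when it is false

    evidence = prior * p_report_given_true + (1 - prior) * p_report_given_false
    posterior = prior * p_report_given_true / evidence
    print(round(posterior, 2))    # 0.78 -- the report raises, but does not settle, our confidence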
     
  2. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,019
    Maybe 'reconcile' wasn't the right term, but I meant it as something like the ability to carry a coherent impression of oneself and the world around oneself. Like HAL's inability to reconcile two different kinds of 'truths'. The idea is: would an AI reach a point where it had to unconsciously censor its own data - create a self-imposed bias, or perhaps even deceive itself - in order to function correctly or optimally? This is what humans do. It would shake things up a bit if such an AI were put in charge of law and order.

    Not to mention, what would happen if you wanted to put an AI judge (which presumably thinks logically) in charge of a court case that is affected by political correctness? How would an AI deal with information that it knew to be accurate but that was not socially acceptable? Maybe @Arowx should have put up a video where the carnage the AI produced was not of human bodies but of hitherto functional social norms and rules. That would be much more interesting.
     
  3. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,023
    Arowx, I strongly suggest that you read a book about machine learning before jumping to wild conclusions about what could be done with it. All modern machine learning solutions are based on the concept of taking a sample set of data with known answers (input vectors in a training data set) and training on that data to produce a long-term memory vector. That vector is effectively just a complex equation written by brute force, so that the system can later quickly generate output vectors from new input vectors.

    You can think of it kind of like a way to create an equation based on a graph of some data. If you saw a simple graph, you could probably guess what equation would fit it, especially if it only had a couple of variables. With many variables (dozens, hundreds, even thousands), a brute-force training solution such as machine learning can be an effective way to create a complex equation that fits the data in the training set.
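
    As a toy illustration of that curve-fitting view (this is just plain gradient descent on made-up numbers, not what DeepMind does), something like the following fits y = w*x + b to a handful of points:
    Code (Python):
    # Hedged sketch: fit y = w*x + b to toy data by gradient descent.
    # The data points and learning rate are made up for illustration.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.0, 3.1, 4.9, 7.2, 8.8]   # roughly y = 2x + 1 with some noise

    w, b = 0.0, 0.0
    learning_rate = 0.01
    for _ in range(5000):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

    print(w, b)          # close to 2 and 1: the "long-term memory" is just these numbers
    print(w * 5.0 + b)   # an "output vector" for a new input, roughly 11

    With two parameters you could have eyeballed the line yourself; with thousands of parameters the same brute-force loop is basically what the training step in a neural net is doing.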

    Machine learning is not currently a generalized type of intelligence. In fact, machine learning requires human help to come up with training data sets and to properly interpret the output vectors. By contrast, humans can apply lessons from one concept to another seemingly unrelated concept.

    At some point, humans will figure out how to design an AI solution that can handle generalized learning. At that point, machines will be able to learn everything, but we are not there yet. In fact, we are not even close. There is a huge difference between AI and AGI.
     
  4. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Actually, your brain rewrites what you see. In real time.
    See:
    https://en.wikipedia.org/wiki/Blind_spot_(vision)
    Also related to Bloody Mary.
     
    Billy4184 likes this.
  5. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,019
    @neginfinity, also, according to some David Eagleman books I've read, there's strong evidence that what you see in front of you is constructed more from your memory of it than from actual new information. According to MRI scans, when you look at something familiar, more information flows from the rest of your brain down into your visual cortex than flows up from your visual cortex into the rest of your brain. So effectively, in order to be efficient, it seems that your brain pre-processes visual information by only letting through the information that does not match your memory of the scene.

    So not only is visual information itself a heavy abstraction of reality (in that things like color don't actually exist), but what you do see of that abstracted information is mostly a reconstruction from past memory.
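
    A toy sketch of that "only pass on what doesn't match the prediction" idea (my own illustration of predictive coding, not anything from Eagleman's books; the values and threshold are invented):
    Code (Python):
    # Hedged sketch of predictive coding: the brain keeps a prediction of the
    # scene and only the mismatch (the prediction error) is passed forward.
    # All values below are made up for illustration.
    memory_of_scene = [0.5, 0.25, 0.5, 1.0]   # what the brain expects to see
    incoming_signal = [0.5, 0.25, 0.75, 1.0]  # what the eyes actually report
    threshold = 0.1

    # Forward only the patches that disagree with the prediction.
    errors = [(i, new - old)
              for i, (old, new) in enumerate(zip(memory_of_scene, incoming_signal))
              if abs(new - old) > threshold]
    print(errors)  # [(2, 0.25)] -- only the surprising patch gets reported

    # The expectation is then nudged toward what was actually seen.
    for i, err in errors:
        memory_of_scene[i] += 0.5 * err
    print(memory_of_scene)  # [0.5, 0.25, 0.625, 1.0]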
     
    angrypenguin likes this.
  6. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    That's going to be a rude awakening for the people who make those paint-color-matching machines at the hardware store. That, or they've been secretly harvesting human brains to drive their infernal paint-matching empire!
     
  7. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,019
    Well, what color represents is real (the wavelength of light), but color itself doesn't exist outside of the brain. The experience of color is a purely subjective one - it's simply the way that we reconstruct the wavelength information in our heads. It's funny because it's kind of impossible to experience anything at all except as a reconstruction. Even the terms that we use to describe what our brains are reconstructing are themselves abstractions.
     
  8. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    And on an unrelated note, there was a fun question:
    "Is your color red the same as mine?"

    Afaik the answer to that is "it is unknown".
     
    Kiwasi and Billy4184 like this.