Is Life A Simulation? Inside the Mind of Elon Musk

Discussion in 'Game Design' started by Gigiwoo, Aug 30, 2016.

  1. ToshoDaimos

    Joined: Jan 30, 2013
    Posts: 679
    You all vastly overestimate what AI can do. AI is just a tool, useful for various things. Saying that an AI virus will take over the world is like saying that MS Office will take over the world because a very advanced Office assistant will quit its job to plan world domination.

    The very notion of "taking over the world" is 100% human-made. It's extremely anthropocentric. AI is nothing like the human mind. It's just a machine.

    A lot of problems are simply not computable: they can't be solved by any computer, no matter how powerful. The halting problem is the classic example. All AI programs are constrained by this as well. The human mind will, in the general case, always be vastly superior to any AI.

    Comparing general-case strategic planning to chess programs is ridiculous. Chess programs use search algorithms that are about as intelligent as a binary search. General-case strategic planning has no meaningful mapping to computing AT ALL. Not even in theory.
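    To make that concrete, here is a minimal minimax sketch in Python (a toy illustration of the kind of exhaustive tree search a chess engine is built on, not any real engine's code). Two players alternately add 1 or 2 to a running total, and the "engine" just enumerates every line of play and scores the leaves:

        def minimax(total, depth, maximizing):
            # Exhaustive game-tree search: no "understanding" involved,
            # just enumeration plus a scoring rule at the leaves.
            if depth == 0:
                return total                    # leaf score: the running total
            results = [minimax(total + move, depth - 1, not maximizing)
                       for move in (1, 2)]      # each player may add 1 or 2
            return max(results) if maximizing else min(results)

        print(minimax(0, 4, True))              # -> 6 after four alternating plies

    Swap the toy scoring rule for a board evaluation and add pruning, and you have the skeleton of a chess engine - which is exactly my point: it's brute enumeration, not general-purpose reasoning.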
     
    Last edited: Sep 25, 2016
  2. Billy4184

    Joined: Jul 7, 2014
    Posts: 5,984
    I'm not comparing an AI threat to playing chess, except to show that a computer's ability to store information and use it to model outcomes is limited only by its computing power and memory. That is to say, if an AI threat developed, it would develop inside a 'body' of computing resources far more powerful than any human brain, which would put a human at a severe disadvantage in reading and reacting to a threatening situation.

    You're the one anthropomorphising - I'm talking about a machine. One that doesn't even need to be intelligent or conscious to cause a huge amount of damage.

    Let's say that an AI was weaponised by the military to destroy the infrastructure of another country (computer viruses are already weaponised this way all the time). What if it was designed to analyse and learn about its environment - the world wide web, for instance - to resist any attempts to destroy its data, to find weaknesses in the network in order to destroy essential data and interrupt communication, and, if possible, to cause physical damage by taking control of drones or any other network-connected devices that could be used for this purpose?

    Now what if it was not programmed correctly and misidentified everything as a target? Suddenly here is a machine - not conscious, without any of the anthropomorphic 'world domination' tendencies you talk about - simply programmed to destroy and to resist its own destruction.

    This sort of programming could easily go into a military AI weapons platform. Attempts to hack drones happen all the time; no doubt drones already have some ability to sort through signals and determine what appears to be hacking and so on. Now link this up with powerful machine learning algorithms and you would have an AI that is insanely good at resisting attempts to control it.

    I don't know what to say, that's a pretty egoistic statement. Not to mention that it's questionable whether any of these 'un-computable' problems are even relevant to a threat situation.

    Lastly, every biological organism is programmed at its core to protect itself against danger and ensure its own survival, except in rare and specific cases. It's fairly basic stuff; without it, you wouldn't know that walking off a cliff is a bad idea. Robots need the same thing, and so does software (against network attacks, corruption and so on), if only to continue being useful to their owners. All it takes is a naively coded directive, interpreted in a way the programmers didn't envision, for a robot to classify a human being as a threat to its own survival and respond to the best of whatever ability it has been given.
     
    larku likes this.
  3. MV10

    Joined: Nov 6, 2015
    Posts: 1,889
    Huh, this turned into a killer-AI thread after I stopped paying attention?

    If you guys are really interested in this topic, two good sites are LessWrong and OvercomingBias, which often have related discussions. Both sites feature regular participation by actual AI heavy-hitters. In particular, I've followed Eliezer Yudkowsky for quite a few years. He developed the concept of "Friendly AI" (in the sense that he has written actual research papers on the topic; it's not as trite as the name leads most to assume), and today he heads up MIRI, the Machine Intelligence Research Institute. Friendly AI is meant to define provably non-hostile AI, versus either outright hostile or indifferent AI, both of which can easily be shown to represent true existential threats.

    Here is one of his first papers on the topic. It's an easy read. (I think there is a newer, better revision to this, not sure, the paper is 15 years old so it's hard to find online.)

    https://intelligence.org/files/CFAI.pdf

    It's also worth reading about the AI box experiments. There are better writeups than this (old) page but I don't have time to search right now:

    http://www.yudkowsky.net/singularity/aibox/

    This is one of the few topics I've found truly disturbing in recent years. Statistically speaking, the first self-modifying artificial general intelligence is very likely to "foom" -- the term they've applied to a "hard-takeoff intelligence explosion" -- where the AI is able to understand and modify itself until it reaches superintelligence, by which point we'd all better hope it's friendly, because we're unlikely to be able to stop it. This is like you versus an ant colony. There is too much to summarize in a forum post, and I guess this is not game-related in any way, but the real research is pretty interesting and pretty nerve-wracking once you digest all the material and the implications sink in.
     
    larku and Billy4184 like this.
  4. Gigiwoo

    Joined: Mar 16, 2011
    Posts: 2,981
    Rate of change is exponential:

    1620s: the slide rule was invented.
    1945: the ENIAC weighed 30 tons.
    1969: ten of the first PCs helped NASA to the Moon.
    1998: Google was born.
    2007: smartphones.
    2016: global communication, 4M pixels, quad-core processors, long battery life, in your pocket, for ~$800.
    2030: a $1,000 computer will do 20 quadrillion calculations per second, roughly the estimated capacity of the human brain.

    Extrapolate another 70 years. Now go 1000 more!
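    As a back-of-the-envelope sketch in Python (assuming a doubling every two years, the classic Moore's-law cadence - purely illustrative, not a prediction):

        brain_cps = 2e16            # the 20-quadrillion-calcs/sec figure above
        doubling_years = 2          # assumed doubling cadence

        for years in (70, 1000):
            factor = 2 ** (years / doubling_years)
            print(f"+{years} years: {brain_cps * factor:.2g} calcs/sec")

        # +70 years:   ~6.9e+26 calcs/sec (about 34 billion brains)
        # +1000 years: ~6.5e+166 calcs/sec (the numbers stop meaning anything,
        # which is the point)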

    Gigi
     
    Kiwasi, larku and Billy4184 like this.
  5. Billy4184

    Joined: Jul 7, 2014
    Posts: 5,984
    Well I hope that it's okay to keep this train off the tracks ...

    Here's an article I thought was really interesting: rather than talking about AI 'becoming bad' (which I think is the least important threat in the shorter term), it covers the sorts of weaknesses weaponized AI could exploit to damage information and property. In short, there are a lot, and the opportunities will only increase as we link more and more things to accessible networks.
     
    GarBenjamin likes this.
  6. GarBenjamin

    Joined: Dec 26, 2013
    Posts: 7,441
    Of course, if humans weren't such asses... if we didn't have so many people (governments, corporations, individuals) always wanting to snoop on and monitor everyone else... we'd likely build and see AIs in a very different light. Unfortunately, they will all likely end up mimicking their creators, more or less. That is what does not bode well for us. And I think perhaps this is the biggest reason people fear highly capable AIs: they see them in the context of what humans would do if they never had to eat or sleep.
     
  7. Billy4184

    Joined: Jul 7, 2014
    Posts: 5,984
    At the moment, I don't think of a potential threat in terms of AI so much as in terms of a sophisticated 'mechanical device'. AI implies some kind of actual intelligence, which suggests agency and self-direction, and I think such a thing would not be very useful to the kind of people who would create something for destructive purposes.

    The real problem comes from something which operates at roughly the level of consciousness of an insect or an animal, i.e. almost purely on instinctive and unquestioned impulses, while possessing very powerful capabilities of analysis and calculation. There would be no way to 'reason' with it, since it is not intelligent or conscious in a human sense, yet it would be far more powerful in its ability to gather and process information related to its instincts.

    Although I definitely think a superintelligence explosion is a danger, I think people forget that we haven't yet created anything that operates at the level of autonomy of a fruit fly, let alone something with the ability to define its own 'political agenda'. Yet that doesn't mean the brain of a fruit fly inside the body of a drone would not be an immensely dangerous thing (if it had an instinct slightly more destructive than eating fruit!).
     
  8. hopeful

    Joined: Nov 20, 2013
    Posts: 5,647
    "Once Zhuang Zhou dreamed he was a butterfly, a fluttering butterfly. What fun he had, doing as he pleased! He did not know he was Zhou. Suddenly he woke up and found himself to be Zhou. He did not know whether Zhou had dreamed he was a butterfly or a butterfly had dreamed he was Zhou." - c. 300 BC

    Or maybe he was a butterfly playing in a VR where he was Zhou. ;)


    "How do I know that enjoying life is not a delusion? How do I know that in hating death we are not like people who got lost in early childhood and do not know the way home?

    Lady Li was the child of a border guard in Ai. When first captured by the state of Jin, she wept so much her clothes were soaked. But after she entered the palace, shared the king's bed, and dined on the finest meats, she regretted her tears. How do I know that the dead do not regret their previous longing for life? One who dreams of drinking wine may in the morning weep; one who dreams of weeping may in the morning go out to hunt. During our dreams we do not know we are dreaming. We may even dream of interpreting a dream. Only on waking do we know it was a dream. Only after the great awakening will we realize that this is the great dream. And yet fools think they are awake, presuming to know that they are rulers or herdsmen. How dense! You and Confucius are both dreaming, and I who say you are a dream am also a dream. Such is my tale. It will probably be called preposterous, but after ten thousand generations there may be a great sage who will be able to explain it, a trivial interval equivalent to the passage from morning to night." - Zhuangzi
     
    GarBenjamin and Billy4184 like this.