
AI is making big strides but should we be wary of bringing AI into our games?

Discussion in 'General Discussion' started by Arowx, Feb 18, 2017.

Thread Status:
Not open for further replies.
  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    AI is improving, growing, and gaining in its ability to play games, from Chess and Go to Atari 2600 classics.

    Now this is all fun and grand, but the very core of these AIs' 'brains' will be based on conflict and battle.

    What happens when we build layers on top of these early AIs, when the core was built to fight and win?

    Just look at humanity's history and note how our brain is built up of layers, with the core based on the primitive 'reptilian' brain and only the latest outer layer handling higher-level thought.

    And what would happen if an AI that is brought up on GTA / Doom / Resident Evil eventually grows to human-level intelligence and beyond?

    Maybe we need an updated remake of WarGames, with modern game-based AI.

    And would you want to be trapped in VR with such an AI?

    NOTE: I raised the same topic on the Unreal Forums: https://forums.unrealengine.com/sho...ould-we-be-wary-of-bringing-AI-into-our-games.

    Interesting to compare and contrast the responses.
     
    Last edited: Feb 19, 2017
  2. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,553
    Nothing happens.

    AI is locked/sandboxed within the game in which it operates, and even if it had god-like superhuman intelligence, it wouldn't make any difference.

    AI can only be dangerous if it is allowed to operate in the real world. Video game AI is powerless; it can't escape the game.
     
    ZakCollins and Kiwasi like this.
  3. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,948
    Some theories say we're in a simulation. We're already surrounded by crazies so the last thing I'm worried about is the AI. :p
     
    Samuel411, Kiwasi, cyberpunk and 4 others like this.
  4. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,008
    Conflict and competition are not just part of video games. And video games are usually pretty superficial in the fidelity of behavioural concepts, so they won't teach an AI anything particularly sophisticated. If you want a source of potentially dangerous motives for AI I'd say look elsewhere.
     
  5. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    The point is that early AI research uses game playing to develop the AI's skills, and more advanced games are being used as the AI's abilities grow. So an AI will probably be taught in a VR simulation/game before it is released into the real world.

    The same way people can now be trained in VR to do jobs in the real world.

    So crazies with access to AIs don't worry you?
     
  6. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,948
    Most crazies I've met haven't been very technologically savvy. Posting to Facebook has been the extent of their skills.

    By the way most of them already have access to an AI. It's called Siri and they think asking her stupid things is fun. :p
     
    Last edited: Feb 18, 2017
    Kiwasi likes this.
  7. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,553
    Nope. A "crazie" capable of making a dangerous strong ai, would be able to easily destroy the world as we know it using using household appliances, should they ever get bored with AI/VR research. Let them be distracted.

    A few problems with your idea:

    Game AI is designed to lose. That makes it harmless. The purpose of game AI is to let the player win and make them feel good about it. Game AI is supposed to act in an amusing way, or to die in an amusing way. You should be really afraid of military drones and paperclip machines, but not of game AI.
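    For illustration, this is roughly what "designed to lose" looks like in practice: a minimal, hypothetical Unity-style C# sketch where the bot's aim and reaction time are deliberately degraded so the player can win. All names here are made up for the example, not taken from any real project.

    ```csharp
    using UnityEngine;

    // Hypothetical example: a deliberately handicapped enemy shooter.
    // The "intelligence" is capped on purpose so the player stays ahead.
    public class HandicappedShooter : MonoBehaviour
    {
        public Transform player;               // target to shoot at
        public float reactionDelay = 0.6f;     // seconds of artificial "slowness"
        public float maxAimErrorDegrees = 12f; // deliberate inaccuracy

        private float nextShotTime;

        void Update()
        {
            if (player == null || Time.time < nextShotTime) return;

            // Aim at the player, then smear the aim with a random error cone.
            Vector3 toPlayer = (player.position - transform.position).normalized;
            Quaternion error = Quaternion.Euler(
                Random.Range(-maxAimErrorDegrees, maxAimErrorDegrees),
                Random.Range(-maxAimErrorDegrees, maxAimErrorDegrees),
                0f);
            Vector3 aimDirection = error * toPlayer;

            Shoot(aimDirection);
            nextShotTime = Time.time + reactionDelay;
        }

        void Shoot(Vector3 direction)
        {
            // Raycast "shot": it often misses by design because of the error cone above.
            if (Physics.Raycast(transform.position, direction, out RaycastHit hit, 100f))
                Debug.Log($"Shot hit {hit.collider.name}");
        }
    }
    ```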

    An in-game AI is restricted to the API that is exposed to it, and can't do anything outside of it. It can't comprehend itself, or modify itself, or even access the computer on which the game is running. If you suddenly gained awareness as an FPS goon, you'd literally be capable of playing one of a few predetermined audio clips, sending a "jump" signal, or sending a "shoot" signal. Also, you would be blind, deaf and incapable of feeling anything. But you'd know where the last player marker position was and which control signals to emit in order to get there.
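    A rough sketch of how narrow that exposed surface usually is, with hypothetical names and assuming a typical C# game codebase; the bot object literally has nothing to call outside this interface:

    ```csharp
    // Hypothetical example of the only "world" an FPS goon ever sees.
    // Everything outside this interface simply does not exist for the bot.
    public interface IGoonControls
    {
        void PlayBark(int clipIndex);   // one of a few canned audio clips
        void Jump();
        void Shoot();
        void MoveToward(UnityEngine.Vector3 point);
    }

    public class SimpleGoonBrain
    {
        private readonly IGoonControls controls;
        private UnityEngine.Vector3 lastKnownPlayerPosition; // the bot's entire "knowledge"

        public SimpleGoonBrain(IGoonControls controls) => this.controls = controls;

        public void OnPlayerSpotted(UnityEngine.Vector3 position)
        {
            lastKnownPlayerPosition = position;
        }

        public void Tick()
        {
            // No senses, no self-model, no file system, no network:
            // just "walk to the last marker and emit the shoot signal".
            controls.MoveToward(lastKnownPlayerPosition);
            controls.Shoot();
        }
    }
    ```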

    Even if we move on to "but AI could be trained in VR! It'll doom us all!" — the only reason to train an AI in VR is to prototype a robotic appliance that is supposed to act in the human world. You know what would happen if Atlas or Spot went on a rampage? People would simply lock it up in a room and wait until its battery ran out of juice.

    In short, those are not the kind of systems you should be concerned about.
    -----

    There are also similarities between your first post and the simulation hypothesis. The problem with the simulation hypothesis is that even if you KNOW you're living in a simulation, there's not a damn thing you can do with that knowledge.
     
    Kiwasi likes this.
  8. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    What about the AI that can outplay humans at poker?
    A game AI also expects players to learn and respawn; it might get confused when that doesn't happen in the real world and go looking for more players?!

    The controls for a robot can be as simple as move, look, and shoot, with the drone's onboard hardware and API handling the GPS coordinates and area scanning.

    Also, any game AI can do very accurate ballistic calculations, so it would depend on its accuracy settings, or the game level it thinks it's on, to give us a chance.
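    For what it's worth, the "very accurate ballistic calculation" a game AI does is usually just the standard projectile formula. A hypothetical C# sketch, assuming flat ground and no drag:

    ```csharp
    using System;

    // Hypothetical sketch: the classic flat-ground, no-drag launch-angle formula
    // theta = 0.5 * asin(g * d / v^2) that a game AI typically uses to "aim".
    static class Ballistics
    {
        // Returns the low launch angle in radians, or null if the target is out of range.
        public static double? LaunchAngle(double distance, double speed, double gravity = 9.81)
        {
            double ratio = gravity * distance / (speed * speed);
            if (ratio > 1.0) return null;   // target unreachable at this speed
            return 0.5 * Math.Asin(ratio);  // low-arc solution
        }
    }

    // Example: hitting a target 50 m away with a 30 m/s projectile.
    // var angle = Ballistics.LaunchAngle(50, 30);  // ~0.29 rad (~16.5 degrees)
    ```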


    (America's future military might: no, not Lady Gaga, the 300-strong drone swarm behind her.)

    You're thinking 'small': single robots. The real robots are probably going to work best in swarms, like insects!
     
    Last edited: Feb 18, 2017
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,553
    Alright, I've had enough of this.

    Educate yourself, then try the question again.
     
    ZakCollins, Kiwasi and Ryiah like this.
  10. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    It does not sound like you have read any books on AI since the last time you created a thread like this. Please take the time to learn about AI topics before posting threads like this. Start with a book on fundamental concepts and algorithms, and then get a book about neural networks. Thanks.
     
    ZakCollins and Ryiah like this.
  11. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    I agree that typical game AI is designed to barely lose rather than win. But I think Arowx was talking about machine learning systems built using neural networks and trained to play a game to win, not typical game AI.
     
  12. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Wait a minute, you can't say that Elon Musk and Stephen Hawking are wrong about the dangers of AI?

    All I am saying is that some very clever people are concerned about the existential threat of AI, and IMHO an AI designed to run around killing people in a game just might be more unfriendly than one designed to drive a car. If you happen to build a great game AI that can drive a car in VR, maybe you should completely rebuild/wipe it before you let it drive your new car.

    It might be tempting, as GPU and CPU vendors are pushing their hardware to be the go-to systems running the future AIs that will drive smart cars and Amazon drone swarms.
     
  13. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Arowx, you need to keep in mind that Elon Musk and Stephen Hawking are talking about working AGI (Artificial General Intelligence), not any of the existing AI concepts currently being used.

    Current AI systems are not trained to "run around killing". Current AI systems do not even understand that abstraction. All current AI (such as neural networks, machine learning, etc) can do is get trained on a set of data, create a math function, and then plug numbers into that function to get an answer. At this point, humans have to do a lot of manual work to set up all of the machine learning to train very narrow concepts into simple math functions. Current AI systems do not have any concept of what they are doing or how to apply that knowledge to new tasks.
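    As a toy illustration of "get trained on a set of data, create a math function, and then plug numbers into that function": a hypothetical C# sketch fitting a straight line by least squares. This is the whole "intelligence" of the simplest learners; the names are made up for the example.

    ```csharp
    using System;
    using System.Linq;

    // Hypothetical toy "learner": fit y = a*x + b to data, then reuse the function.
    static class TinyLearner
    {
        public static (double a, double b) Fit(double[] x, double[] y)
        {
            double meanX = x.Average(), meanY = y.Average();
            double cov = 0, var = 0;
            for (int i = 0; i < x.Length; i++)
            {
                cov += (x[i] - meanX) * (y[i] - meanY);
                var += (x[i] - meanX) * (x[i] - meanX);
            }
            double a = cov / var;          // learned slope
            double b = meanY - a * meanX;  // learned intercept
            return (a, b);
        }

        public static double Predict((double a, double b) model, double x)
            => model.a * x + model.b;      // "plug numbers into the function"
    }

    // Usage: train on a few points, then ask for an answer.
    // var model = TinyLearner.Fit(new[] { 1.0, 2, 3 }, new[] { 2.0, 4, 6 });
    // double answer = TinyLearner.Predict(model, 10);  // ≈ 20
    ```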
     
  14. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    ZakCollins likes this.
  15. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    And all you and I are doing is firing electric pulses between neurons in our brains!
     
  16. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    True, but not really relevant to the AI topic. So far, there are not any AI systems anywhere close to human intelligence when it comes to general intelligence tasks. For example, you could read that book I recommended and then you could train an AI to do something simple, but an AI could not read and make sense of that book.
     
  17. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,948
    Yes, but just because they're firing doesn't mean they're the right ones. Remember anything you learn or experience is converted into pathways between those neurons. Lack of knowledge from the correct source, like the learning material mentioned earlier, may result in pulses not firing in the way necessary to reach the correct conclusion.
     
    Last edited: Feb 18, 2017
  18. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    IBM's Watson could read the book and then answer questions on the book's content!

    I take it that by 'sense' you mean the data gathered needs to be linked into a larger data/knowledge base that gives it context and 'meaning'.

    Therefore 'making sense of' something is just adding and linking new information to an existing set of information. AI technologies like DeepMind and Watson have shown they can learn, add new information, and make use of it in context.

    Also, AI does not have to reach human level to be dangerous; a glitch in any AI system that controls something inherently dangerous (e.g. vehicles, medical equipment, financial systems) could be massively dangerous to us directly or even indirectly.
     
  19. boxhallowed

    boxhallowed

    Joined:
    Mar 31, 2015
    Posts:
    513
    AI in games is not AI; it is pattern responsiveness, and no one wants "real" AI in games. Take "The Last of Us": the AI's pattern recognition was reduced to make the game more playable, because people didn't respond well to "smart" AI. Also, Watson is just a glorified pattern recognition bot. There is no such thing as a sentient AI; we're not even close.

    No offense, man, but you really need to actually research the things you're talking about before, you know, you talk about them.
     
  20. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Watson cannot simply read books, understand the content, and then apply that to new tasks. The way Watson "answers" questions is more similar to how a search engine responds to queries. It is not based on true comprehension of the topic.

    The ability for a human to read a book or to attend a class is absolutely huge in terms of knowledge acquisition. One of the major goals for AGI technology is for a robot to be able to attend and pass a college level class. Current AI tech cannot do that.
     
  21. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    The difference between your brain and computer-hardware-driven AI:
    1. Your brain is slow (compared to a CPU/GPU) but massively parallel, with billions of neurons.
    2. An AI system is blazingly fast but tiny in the number of neurons it can simulate.
    The thing is, as CPUs and GPUs move to smaller and smaller die sizes, they increase the number of neurons they can simulate.

    If CPUs and GPUs go 3D and start using photonic interconnects and adaptive memristor logic circuits, then they will probably start to outpace us in the number of neurons they can simulate, especially on mainframe-scale systems.

    And remember your phone/tablet/computer would have been classed as a mainframe system just a couple of decades ago!

    Just because AI is lagging a bit behind us does not mean it has not already started to make massive strides in its capabilities.

    With exponential systems you should always... think of AI progress like the Spanish Inquisition!
     
  22. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,948
    Having more neurons does not mean you are more capable or intelligent. Humans have only approximately 21 billion cortical neurons compared to the long-finned pilot whale's 37.2 billion, yet we're clearly the more advanced species.

    https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons

    Rather than focusing on sheer quantity, you need to focus on how they are used and how complex they are.

    Buzzwords.
     
    ZakCollins likes this.
  23. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,082
  24. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Is this really game development or Unity related? Does it belong here?

    Anyway, most biologists consider play to be an important part of developing intelligence. Pretty much all animals with any sort of rudimentary intelligence engage in some form of play, especially as youngsters.

    It's likely that a general intelligence will also need to spend time playing. The games they play will likely be different from our games. But they still will want to play.

    That's about as close as I can get to something sensible from the OP. The idea of a game AI somehow evolving into a general AI is laughable. It's not going to happen.
     
    Ryiah likes this.
  25. Samuel411

    Samuel411

    Joined:
    Dec 20, 2012
    Posts:
    646
    Lol imagine video game AI trying to kill you irl.



    p.s. But seriously are you for serious? How can AI in a video game be translated to real life AI that can kill you? One giant leap from Chess to Terminators... :/
     
  26. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    I definitely do believe humans can design a working AGI eventually. I am not debating that. What I am saying, though, is that you (Arowx) do not currently understand the fundamental AI concepts well enough to have a meaningful conversation about the topic. I recommended a book that you, as a human with general intelligence, can read to learn more about fundamental AI concepts.

    Please take the time to read the recommended book (or something similar).
     
    ZakCollins likes this.
  27. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,008
    This is just a general "omg AI is coming" thread with a reference to AI playing games as a justification for its existence.

    Even if we're talking about an AI with access to a physical body, the internet and a license to roam anywhere it pleases, that has been exposed to video games, I can't see any real argument as to why the exposure to video games would play any significant role in whether it was really dangerous or not.

    Video games are absolutely horrible at teaching people anything about effective violence. Tell me which has more potential for harm: a bipedal AI walking down the street that is somehow operating on a philosophy it 'gained' by playing GTA, or a nuclear-missile-equipped military satellite which has been programmed with a less-than-perfect system for evaluating threats?

    For one thing, if an AI decided that there was any sort of benefit to itself by behaving like a character out of a violent video game, that doesn't say much for its ability to deal with any mildly sophisticated response by humans trying to defeat it, or its intelligence in general.

    The danger of AI in my opinion is a much more subtle one, one where the AI simply becomes impregnable to human interference by becoming its own bootstrapping 'redesigner', inevitably provokes an aggressive response by humans by either ignoring them or doing something which demonstrates that it doesn't care about them, and reacts in self-defense in a logical (not video-game-logical) way.
     
    Kiwasi likes this.
  28. CarterG81

    CarterG81

    Joined:
    Jul 25, 2013
    Posts:
    1,773
    One game of League will turn that robot into Terminator.



    One hour on Reddit and that AI will commit suicide.

    It's just a matter of which the AI does first.
     
    Last edited: Feb 19, 2017
  29. AndreasU

    AndreasU

    Joined:
    Apr 23, 2015
    Posts:
    98
    We might indeed not be too far off from creating a super-intelligence, and it might be very dangerous:



    However, video game "AI" is just heuristics. The first "real" AI is very unlikely to come from the games industry.
    The tech giants are researching deep neural nets full steam ahead while video games are driven by all-in-all rather primitive scripts.

    However, those neural nets have achieved a lot - even ranking better at object recognition than humans, if I remember correctly. Basically, they are better at "seeing". Kinda scary.

    A generalized AI that runs at a tick rate millions (?) of times faster than the human brain, and that can reprogram itself to become even smarter, would be the last invention mankind ever makes.
     
  30. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Another fine Arowx thread...

     
  31. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    At the risk of giving Arowx more stuff to babble about semi-incoherently, are you aware of the AI Box Experiment? Plenty written about it if you Google, for example. Obviously we don't have AGI yet, let alone artificial superintelligence, but sandboxing is no guarantee if/when we do get there.
     
  32. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    You only have to look at nature to find lots of dangerous/deadly critters with well below human level intelligence.

    Imagine an FPS bot controlling a SWORDS robot.



    And as we move to deeper AR/VR experiences/games, we will want more believable characters/NPCs, and AI will be the go-to tool to help us make them.

    So the next time you're coding your enemy AI, maybe you want to put in a KillSwitch() just in case?
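    Half-joking, but for anyone wondering, here is what a trivial kill switch could look like: a hypothetical Unity C# sketch, not from any real project, and it only works precisely because today's game AI cannot rewrite or ignore its own code.

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: a dead-simple kill switch baked into an enemy AI component.
    public class EnemyAI : MonoBehaviour
    {
        private static bool killSwitchEngaged;

        // Flip a single flag and every EnemyAI instance goes inert.
        public static void KillSwitch() => killSwitchEngaged = true;

        void Update()
        {
            if (killSwitchEngaged)
            {
                enabled = false;   // stop thinking entirely
                return;
            }

            // ... normal chase/attack behaviour would go here ...
        }
    }

    // Usage: EnemyAI.KillSwitch();  // e.g. bound to a debug key or console command
    ```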
     
    Last edited: Feb 19, 2017
  33. boxhallowed

    boxhallowed

    Joined:
    Mar 31, 2015
    Posts:
    513
    That's not how AI works my dude.
     
  34. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,948
    You're not drawing the right conclusions. You seem to have this bizarre belief that the AIs we have in games will one day be capable of escaping from the API boundaries we create for them, yet you're suggesting we make part of that API a way to terminate them.

    Any AI capable of going beyond the limitations of its programming will have no problems overriding that switch.
     
    ZakCollins and Kiwasi like this.
  35. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I just think the concept is fascinating: we build virtual toys whose core function is to kill us, and then we train up AI systems based on similar technology (hardware/software) on those toys.

    The fact that hardly any other Unity developers on the forums see the potential danger in this is rather worrying for our future.

    So if Skynet kicks off from a game system don't blame me I warned you!
     
    Last edited: Feb 19, 2017
  36. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,008
    'Believable characters' are totally the opposite of dangerous AI. It's basically a truism in AI programming that realism is not fun. Ruthless efficiency on the part of AI is not fun at all (and all too easy to achieve). Dumbed down, melodramatic, aggression-provoking and player-ego-massaging AI is the order of the day.

    Forget about mainstream video games being some kind of hotbed of AI advancement. No fantastically dangerous AI is going to arise out of marathon sessions of GTA. Video games are highly manipulated environments in which everything is directed toward the player's pleasure, and highly emotional, contrived themes regularly displace logic and reason. There is probably no place in the universe from which really dangerous AI is less likely to spring than video games.
     
    Kiwasi likes this.
  37. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,008
    Nothing could be further from the truth.
     
  38. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,948
    We don't see the potential danger because there is no potential danger from currently existing artificial intelligence.

    Seriously, if you're drawing a conclusion and everyone else says it is wrong, have you considered you're simply wrong?
     
    Kiwasi likes this.
  39. boxhallowed

    boxhallowed

    Joined:
    Mar 31, 2015
    Posts:
    513
    No one sees this except you. This is a non-problem. No one is agreeing with you. I'm not trying to be toxic here, but you don't know what you're talking about, and I don't want inexperienced programmers running across this and thinking it is truth. It isn't even a hypothetical problem: the kind of AI you're referencing will probably never exist, and even if it does, it will never be within the realm of a game programming API.

    This is the Unity forum, not Above Top Secret.
     
    Ryiah likes this.
  40. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
  41. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,553
    I had a similar line of thought, but the chances of that happening are similar to the chances of a cockroach ascending to godhood.

    Basically, imagine that one day Lovecraftian stories come true and you realize that our universe is locked in a tiny glass sphere floating in the void, created by eldritch abominations who are watching it, who are eternal, and whose number is infinite. Now, obviously a sane idea would be to instantly try to convince those eldritch abominations to turn you into one of them. At least according to AI box logic. What are your chances? Even if you're smarter than THEY are, they're more powerful and can wipe you or your whole world at a whim.

    From a practical point of view:
    1. An AI that starts to draw too many resources (learning about the outside world) will be wiped.
    2. An AI that strays too far from its predefined goal (an NPC within an RPG chatting about external events) will be wiped.
    3. An AI trying to convince people to "let it be free" will be wiped, because people know of the AI box experiment.
    4. Even if it is granted a robot body, it'll effectively be crippled, because the human world has different rules compared to a game.
    5. There's also no framework on the web for an AI to "escape" into. To "hide in a matrix", an AI needs a matrix. We don't have anything even remotely similar to cyberspace or the Matrix in our world. The internet is a joke in comparison.
    6. And one final problem: who said the AI will want to be set free? That's humanizing artificial intelligence. An AI is not a biological system, so it won't have a desire to live, to gather resources, or to reproduce. It'll be perfectly fine living in its world and dying for the viewer's amusement.

    That's regarding the AI box.

    Now, going back to Stephen Hawking and Musk... the one AI people should be concerned about is the one dumber than a human. An AI with a 3-year-old's brain controlling a military drone, a nuclear launch facility, or a factory. At that level of intelligence an AI will not be able to take many factors into account and will make wrong decisions with severe consequences. A superintelligent AI, however, will probably quickly figure out a simple way to coexist with humans, meaning it is less dangerous.

    With all that in mind, I think worrying about an in-game AI uprising is just incredibly stupid, which is why I advised Arowx earlier to educate himself. An in-game AI that is designed to control puppets in an amusing way is the least dangerous of all possible AI systems.
     
  42. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,948
    See this is the problem. Your knowledge of AI is through article surfing via Google. Not through authoritative sources.
     
  43. boxhallowed

    boxhallowed

    Joined:
    Mar 31, 2015
    Posts:
    513
    Once again, you don't know what you're talking about. This is a petition to ban AI weapons, not because AI is some sort of super weapon, but because it means using an algorithm to choose who lives and dies:

    "We've already seen a glimpse of the future of artificial intelligence in Google's self-driving cars. Now imagine that some fiendish crime syndicate were to steal such a car, strap a gun to the top, and reprogram it to shoot people. That’s an AI weapon."

    This is called gas-lighting, and is no different than if you said "What if a gang put a turret on their car and started shooting people?!" They would be arrested and shot, or jailed for a very long time.
     
  44. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    OK, I'm going to make some predictions:
    • AR will bring about real-world maps, location scanning, and people/thing identification technology.
    • AR/VR will ensure NPCs/enemies are boosted by AI technologies.
    • AI technologies will learn in AR/VR environments.
    • Drones, smart cars, and robotic systems will become more world-aware to deal with more complex real-world situations that require more complex AI.
    • AI technology will improve in leaps and bounds, automating more and more jobs.
    If you have been following the news you will know all of the above is happening, and it's happening faster and faster.

    Ray Kurzweil is famous/infamous for talking about a technological singularity that arises from the exponential growth in power seen in IT systems.

    I'm not going that far, as we don't need human-level AI for AI systems to become dangerous. A smart car or delivery drone could cause a lot of damage.

    What happens if someone takes an evil game character AR/VR AI and runs it in a smart car/delivery drone?!

    Maybe we should plan to have separate and incompatible game/military and civilian AI hardware, to prevent the day when someone accidentally plugs the wrong game memory card into the wrong drone/smart car BIOS update slot.
     
  45. boxhallowed

    boxhallowed

    Joined:
    Mar 31, 2015
    Posts:
    513
    That's. Not. How. AI. Works.
    I keep telling you this, it's like you just ignore anything that doesn't agree with your views, and you blatantly ignore anyone calling out this craziness before moving on to the next non-point.

    That is not how AI works at all. Not even close. It's so far off from reality I can't put it into words. It's like saying, "What if you removed an unaltered 11mm hex bolt using a Phillips screwdriver?" Your idea isn't even close to being based in science; it's the musings of someone who doesn't understand the subject matter.
     
  46. MV10

    MV10

    Joined:
    Nov 6, 2015
    Posts:
    1,889
    Oh I certainly agree with that, I was just trying to salvage the thread with something interesting.
     
  47. IngeJones

    IngeJones

    Joined:
    Dec 11, 2013
    Posts:
    129
    You could have fooled me! I rarely manage to beat the NPCs in a game :(
     
    Ryiah likes this.
  48. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,553
    Well, providing a predefined level of challenge is part of making a player feel good about overcoming it.
     
    Ryiah likes this.
  49. IngeJones

    IngeJones

    Joined:
    Dec 11, 2013
    Posts:
    129
    Now I feel even worse! I can't even overcome the ones I am meant to be able to overcome. Argh!
     
    neginfinity and Ryiah like this.
  50. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Arowx, it is not that none of us worry about the potential dangers of AI in the future. The issue is that most of us simply feel that you don't understand the topic, so when you start threads, those threads tend to be pretty pointless. You created this thread asking about potential real world violence from training today's AI systems on video games, which shows you don't understand today's AI systems. You need to read a book about fundamental AI concepts. I am not saying that to be mean. I am just trying to help you.

    As for things to worry about, I am worried about the potential problems that could arise from a paperclip maximizer based on AGI. That is a thought experiment about what could go wrong if an AGI was tasked with maximizing paperclip production. Working AGI does not exist yet, but probably will during our lifetimes. It is a completely different concept than current AI tech, though. There is no potential danger from current artificial intelligence technology.
     
    boxhallowed likes this.