
Beware, pokemon go has ties to the CIA, allegedly

Discussion in 'General Discussion' started by imaginaryhuman, Jul 22, 2016.

  1. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Well, the solution is the matrix then :D, where we can assert dominance over NPCs, the infinite video game life tailored to your needs!
     
  2. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    That's an inefficient way to deal with it.

    Think of the world as a puzzle with 7.5 billion pieces, whose interactions are governed by very complex rules described in a book that has one billion pages. By correctly positioning the pieces on a 510 million square kilometer board, you can achieve an optimal situation where everybody will be happy.

    However, thinking on this kind of scale is beyond the ability of any human or any government. A machine, however, might be able to get to this level.
     
    Ryiah likes this.
  3. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    And we can start simulating it in our games. We don't have an AI capable of playing StarCraft yet, but once that's done (notably the building part) we're good to go for testing it, lol!
     
  4. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    @neginfinity again, I don't think that you've got the whole picture. When you say that by correctly positioning the pieces on the board, you can make everyone happy, in my opinion that fails to take into account that one person's happiness is another's unhappiness - or more rightly, one person's satisfaction is another's dissatisfaction.

    Take a second to really imagine: what would you feel spontaneously if those you consider not to be on your level, in whatever terms you value, were suddenly given the same benefits as you? It would be like the Olympic commission handing out a medal to everyone on earth for the 50m sprint at the same time that they gave one to Usain Bolt! It's only by the inequality of our achievements - and the subsequent inequality of the benefits - that we measure ourselves.

    Inequality is the driver of progress. I'm all for a basic standard of living, healthcare and all that kind of thing, and even more importantly, equality of opportunity (inasmuch as such a thing can be given), but I would hate to live in a society that tried to regulate equality beyond such basic human dignities, and I'm quite sure it would destroy itself as surely as communism did - except that, despite being nine-tenths of the way gone and totally comatose, it would probably end up on perpetual life support through the fact that it was governed by an AI.

    Have you read "The Humanoids" by Jack Williamson? I read it as a kid and to this day it has a strong effect on me. As that book correctly pointed out, the only way to create a robot/AI governed society that functioned smoothly would be to drug its population to the point where they do not even understand what it means to struggle for what they believe in. I would have given a lot to read a sequel set 100 years later, and find out what the humans were capable of achieving or experiencing, if anything.

    And so, since an AI would not be able to govern unfairly in a way that satisfies everyone, I don't think it's really possible. An AI dictator that disregarded 'impractical' measures of government might work, but that obviously comes with its own set of problems, since what some people think is practical differs from what others think, and what an AI thinks is practical might differ from what humans at large consider practical.

    I simply don't think that anyone or anything is fit to govern humans except for humans themselves, since people intuitively understand and accept the failings of other human beings, whereas they would not be able to relate to an AI. Everything the AI did would be viewed with suspicion and fear, the same way that people react with suspicion and fear to others that they cannot comprehend. So while it's a noble idea, in the end I think it would be totally unworkable.
     
  5. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Stretch yourself out. There are plenty of other sci-fi writers who examine what being governed by an AI would mean. In Asimov's universe you have benevolent machines that guide humanity. You also have a robot president. At the other end of the scale you have Frank Herbert's universe, where humans are almost wiped out by the governing AI and lead a violent revolution to win freedom. This leads to a universe where the primary commandment is "Thou shalt not make a machine in the likeness of a human mind".

    Point is there are writers who have been able to conceive of both ends of the spectrum. And plenty who have sat comfortably in the middle ground. Putting up one option as 'the correct and only way' kind of defeats the purpose of science fiction.

    You are asking a deeper philosophical question here. Is happiness a zero sum game? In general I don't think it is. There is no particular reason why my happiness has to be tied to your miserableness.
     
  6. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    You guys need to meet bullies and racists. o_O
     
  7. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    @BoredMormon, first, as to your last point: it's not about happiness being tied to miserableness, but more like satisfaction being tied to dissatisfaction. If you win a race and I lose, I might be dissatisfied but hardly miserable - I knew that someone had to win, and by all accounts you ran a better race that day. It's not an attack on my human dignity - but what constitutes human dignity may not be such an easy question for an AI to answer.

    In fact, there are so many things that we deal with by degrees, things that are judged right or wrong merely by degrees, that it would be very difficult for an AI to govern without making errors according to some people. And as soon as an AI is perceived to be in error, the reaction, I imagine, would be much stronger and more violent than any that would have been given to another human being, one whom we could relate to and potentially accept as an authority figure.

    I've thought for a long time that Asimov had an extremely naive view of human nature - in fact I couldn't get through the last book of his I read, due to his mixing of superficial "goody two shoes" characters with knife-fighting skills in a futuristic universe. None of it goes together. He's a great writer of course, and very good at writing on very large timescales, but when it comes to understanding individual character I don't think he had the depth of someone like Arthur C. Clarke.

    I also didn't like Frank Herbert's novels all that much, for exactly the opposite reasons, it was like reading Greek mythology or watching Game of Thrones - they were all about individual egos with little understanding or depth of social politics.

    But anyway, I could be wrong, and it's my own opinion of course - but everything I know about human beings suggests that beyond a certain point, equality is the last thing anyone really wants or needs. Which means that an AI government is going to have its work cut out.
     
    Last edited: Jul 26, 2016
  8. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    @neoshaman let's not get naive here; I'm not talking about inequality in terms of physical or emotional abuse (though I daresay even this, at subtle levels, would trip up an AI government) - basic human dignity is something any government needs to uphold. What I'm talking about are things like inequality of income, inequality of capability, and inequality of opportunity in life (the parts that can't really be controlled), amongst others.
     
  9. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Well social status then?
     
  10. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    Precisely that. Since so much of social status is determined indirectly by the preferences of those around us, how do you regulate it without infringing on people's freedom of choice?

    Also, I was reading an article about methods neuroscience can use to detect people's subconscious reactions to things. Basically, it boils down to measuring the response time for a person to hit either a 'good' button or a 'bad' button when they see something, then switching the buttons around (now you have to hit the other one), and correlating the results. The experimenters found that quite a lot of people are unconsciously racist (and consciously and honestly believe that they are not). Is this an involuntary reaction to appearance that was programmed into us in prehistoric times? Does it affect our responsibility? How does an AI use this information to determine if someone is racist if, for example, they didn't hire a person of a different color - maybe the person was simply incompetent and, coincidentally, the employer was subconsciously racist? This sort of thing is a minefield of legality, and an AI frankly is not going to help us when we are all born 'guilty' of many tendencies and beliefs that color our behaviours in ways both good and bad. We are not 'fit', as it were, for AI rule.
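    (For what it's worth, the response-time comparison described above can be sketched in a few lines. The numbers below are made up for illustration, and the real test uses a more involved scoring procedure than this simple mean difference.)

```python
# Simplified sketch of the button-swap experiment described above:
# compare mean reaction times before and after the 'good'/'bad'
# buttons are switched. A large slowdown after the swap is taken
# as a hint of an implicit association. (Hypothetical data.)

def mean(xs):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(xs) / len(xs)

def iat_effect(congruent_rts_ms, incongruent_rts_ms):
    """Return the mean slowdown (ms) when the button mapping is switched."""
    return mean(incongruent_rts_ms) - mean(congruent_rts_ms)

# Example: responses are ~112 ms slower after the buttons are swapped.
congruent = [610, 580, 645, 600]     # reaction times, original mapping
incongruent = [730, 705, 760, 690]   # reaction times, swapped mapping
print(iat_effect(congruent, incongruent))  # 112.5
```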
     
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Ah, that reminds me of the 'super chicken' study, i.e. that a group of highly competent individuals might actually undermine the group. So competence might not be entirely desirable; diversity, by breaking patterns of habit, generally outperforms the super group. There is also another study showing that difficult collaborations that come to fruition outperform comfortable workplaces (because you have to get around difficulties by constantly innovating and adapting). Those non-intuitive facts might throw a wrench into any optimization of society that doesn't know whether to prioritize performance or comfort, lol.

    Also, not all societies are based on conflictual relations, so mediation of status can be done through harmless rituals. BUT transitioning to such societies might be difficult; culture does have inertia, lol.

    But intelligence is not omnipotence anyway; whatever the AI does, it would have to figure things out. What happens with dramatic failures, or experiments needed to get data that are unethical, or things that work in the short term but carry unrecoverable long-term dangers?

    But if there were a super AI, it wouldn't need humans to work anyway; it could just produce drone extensions to take care of the infrastructure. But would that still be an AI, or an incarnate god? :eek: rofl
     
    Billy4184 likes this.
  12. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    @neoshaman there's also a bit of a theory I've heard that it's actually destructive to a political system to have a leader that is too far above the average citizen in terms of intelligence and ability - those qualities are best left to the hidden 'advisors'. The reason being that the leader then becomes a buffer between the 'masses' and those competent enough to steer a nation's course, two entities that would likely come into pretty severe conflict if they faced off squarely. That's one reason why I think an AI authority figure would not function as well as even someone like Bush.
     
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    That sounds like a WEIRD-biased theory though (like mine, to be frank). By WEIRD I don't mean weird, but Western, Educated, Industrialized, Rich, Democratic ... i.e. a minority of countries where most research is conducted on their own members; it generally fails when applied to other populations, which also affects the diagnosis of mental illness (American schizophrenics tend to have violent thoughts, while Chileans tend to have harmless thoughts about mundane tasks). It still applies, since such an AI is more likely to appear in a WEIRD country first. The question is: would it be more accepted in non-WEIRD countries? What if the AI became a weapon to colonize those countries further? :eek:
     
  14. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    @neoshaman it's hardly unreasonable to think that the average person would not be able to relate to the sort of person with the ability and desire to control a nation, someone who could make a decision to sacrifice the lives of thousands of people.

    And this is what an AI would have to do. And everything that was perceived as wrong about its judgement would be ruthlessly attacked by those who suffered the consequences, people who felt that a cold, unemotional machine had seen them or people they loved as less worthy. And not just lives but careers, opportunities, everything that makes up human life.

    Forget it: until we become fully logical and rational ourselves, which we are little closer to than we were thousands of years ago, an AI government is going to be totally unworkable. And, minor detail: logic and rationality do not often equate to your own wellbeing.
     
  15. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    I'm quite certain that I get the whole picture. You concentrate too much on equality, inequality and fairness.
    I would expect the system to take "one person's happiness is another's unhappiness" into account and still solve the puzzle.

    As long as my needs are met, I wouldn't give a damn about benefits someone else gets.

    I simply believe that humans are unfit to govern other humans, because in the end they will almost never intuitively understand and accept the failings of others, and instead will concentrate on either improving their own well-being or promoting their own beliefs. Meaning they will almost never work towards making society better.

    An AI would be devoid of the human flaws that ruin human government; because of that, I would expect it to do a much better job.

    I also suspect that equality, inequality and fairness are close to being a non-factor.
     
    Billy4184 likes this.
  16. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    @neginfinity, fair enough, I see we simply have a different take on this. All I can say is that I think the idea is essentially a good one, i.e., like communism, it would be great if only it worked. Instead I think that even if it went according to plan, it would be viewed by most as a dystopia and treated as such.

    It's tempting to think that an AI could simply solve all of our problems through raw 'intelligence', but an intelligent answer is not always the most palatable one. And human life is simply too full of functional contradictions and injustices for this approach to work.

    Who knows, maybe we'll find out eh? :)
     
  17. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    I think you're imagining some sort of picture in your mind that involves an evil AI emperor with human qualities.

    Instead, think of an AI-driven traffic control system that spans the entire globe. The system watches every road, deals with roadblocks, dispatches police and fire departments, and clears roads for ambulances. Do you perceive the judgement of a traffic light as wrong when everybody gets to their destination on time?

    Now, apply the same thing to human society and you might get an idea of what an AI governor would be like.
    In my opinion, it would be similar to this:
    You wake up, check your email, and see a message informing you about a new job position being offered to you, which conveniently pays more than you currently earn, involves a topic you like, and also lists options for housing, schools/facilities for your children, and the like - all taken care of down to the last detail. You're free to accept or reject it.
    And that is it. An unseen hand making offers, trying to move pieces around the board. The same world as we live in, except with fewer wars and more happiness in it.

    Now, which part of this is bad?
     
  18. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    I think you misunderstood; I definitely don't think of AI as necessarily evil. In fact, in the picture I'm trying to paint, the AI is the good guy! The idea is that the AI would not be able to set up and rule a functioning human society, one that involves - necessarily - certain levels of what many would perceive as injustice. And because the AI would have to work with these 'limitations', everything that people perceived as unjust would be blamed on some perceived negative quality usually attributed to it, such as being too 'cold and rational'. It would become a target for any rage against the 'system', whether or not this rage had a logical basis.

    The idea is not that the AI would rule badly as such, but simply that it would expose the 'weaknesses' of human beings in a dysfunctional way.
     
  19. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    I think you greatly overestimate the effect of this.
    No matter what any government does, there will always be people who perceive its actions as unjust.
    Another thing is that justice is a fictional concept to begin with.

    Basically, I think this is not a problem, and complaints about injustice are a non-factor. People will complain for a generation, then they'll shut up when things work out.
     
  20. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    Try making that the basis of a presidential campaign!

    Unfortunately, that first generation is what will determine whether it survives infancy, and my guess is that it won't. Communism also relied on "Just deal with it now, it'll get better later" and look where that went.

    But anyway, it's very hard to make a strong argument when neither of us knows just how this AI would function. It's likely that some things that appear difficult it will solve easily, and vice versa. I just can't imagine that, on the whole, people would be able to be comfortable with having it there.
     
  21. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    Not sure why this is supposed to be a problem, though. Even without justice, you can propose measures people will enjoy. Trying to base a presidential campaign on "justice" would probably be a very bad idea to begin with.

    The USSR survived WWII, went into space and existed for 70 years, while the ideology still lives on in China. Could've gone worse.
     
  22. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,025
    All of them, at the same time? If anyone knew how to propose measures everyone would enjoy, there would be no reason for politics. There's a lot of frustration in society against politicians and government, and I'm afraid it would find an outlet if an AI was put there, something that no one has any reason to love or care about.

    I'm pretty sure the USSR was not a great place to live. And as soon as they lost their sense of collective identity (when the Cold War was over), the cracks in the system showed very fast. And Chinese communism would not have survived if it had not become the foundation for modern capitalism. It relies on customers living in wealthy capitalist societies for its own survival.

    But anyway, I hope to see at least some example of AI governance. Judging by ROSS we're not too far away!
     
    Last edited: Jul 26, 2016
  23. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,572
    A presidential campaign doesn't need "everyone". It needs a "majority". And only for the duration of the election campaign.

    "A perfect president for you - one that is BETTER than a human! Vote for ai governor today. (sponsored by Cyberdyne corporation)".

    Due to the classical flaws of the democratic system, I think it wouldn't be impossible to put even a frog into the presidential seat, if anyone really wanted to do that.
     
    Billy4184 likes this.
  24. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
    Just a collateral note here. A site asked users to take pics of their hotel or motel rooms. It turns out that database is now in use by law enforcement tracking child porn and trafficking kidnappers, who use it to compare room interiors in films/videos. The point here is that the eventual use of data and images is often not immediately apparent.

    By having folks roam all over with cameras using AR, they can get geospatial data on local ground objects, building entrances, windows, etcetera, to build an entire environment from just data points. Seems the dev of Pokemon GO was into geospatial data.
     
    Ryiah likes this.
  25. schmosef

    schmosef

    Joined:
    Mar 6, 2012
    Posts:
    852
    I hope the CIA gets involved in the forum upgrade and "takes care" of all the spam posters.
     
  26. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,203
    Can't get much worse than Lithium. :p