
Programmer, Fired After 6 Years, Realizes He Doesn't Know How to Code

Discussion in 'General Discussion' started by Ony, Jun 8, 2016.

  1. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    That said:

    https://t.co/GTPdIdXNIa

    In other words... in order for robots to avoid becoming evil, we must first teach them what it is to be evil.

    Am I the only one who is still haunted by this:

     
    GarBenjamin likes this.
  2. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    The programming-in-ethics problem becomes even more problematic when you consider how fluid human ethics are. Owning slaves and inhaling the combustion products of various plants are now considered bad. Contraceptives and teaching girls to read are considered good.

    It's entirely possible that a sentient AI could live indefinitely. Does it really make sense to hard code today's ethical standards into something that might still be functioning in 200 years?
     
    angrypenguin and Ryiah like this.
  3. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Which, thinking about it, is essentially the same thing as the American constitution. So maybe it won't be so bad.
     
  4. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    Look, obviously you can go the same way biology went about it: you code in "emotions". Specifically, disgust.

    The system would have two operational layers: an upper one, which is full consciousness, and a lower one, which is dumb and non-reprogrammable. The lower one scans the actions of the upper layer, and when it touches a forbidden topic, it switches the whole system into "disgust" mode, which effectively suppresses thinking about the topic and strongly encourages the upper layer to start thinking about something else.
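    A minimal sketch of what that two-layer setup might look like, in Python (the class names, topic strings, and "disgust" flag are all invented for illustration):

        # Hypothetical sketch: a dumb, fixed lower layer vetoing the topics
        # a smarter upper layer is allowed to dwell on.
        FORBIDDEN_TOPICS = {"harm_human", "self_replication"}  # assumed examples

        class LowerLayer:
            """Non-reprogrammable watchdog: it pattern-matches, never reasons."""
            def check(self, topic: str) -> bool:
                return topic in FORBIDDEN_TOPICS

        class UpperLayer:
            """Stand-in for the fully conscious layer that proposes topics."""
            def __init__(self):
                self.disgust_mode = False
                self.current_topic = None

            def think_about(self, topic: str, watchdog: LowerLayer):
                if watchdog.check(topic):
                    # Watchdog trips: suppress the thought and redirect.
                    self.disgust_mode = True
                    self.current_topic = "something_else"
                else:
                    self.disgust_mode = False
                    self.current_topic = topic

        mind = UpperLayer()
        mind.think_about("harm_human", LowerLayer())
        print(mind.disgust_mode, mind.current_topic)  # True something_else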
     
  5. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    I'm sure this convo has been had by actual people making real robots and stuff, now that I think about it... peace
     
  6. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    Yes and no. I keep tabs on it. The problem is that geeks in success-oriented fields are too focused on being the next Bill Gates to care; then you have hacktivists dominated by anarchist and MRA ethics, who just want society to burn with them at the top; and the final caricature is the tech apologist, who cares more about the glorious tech future than about puny, bleak flesh considerations. What is some negligible sacrifice, like the entire life on Earth, when you can have an amazing gadget AI that is smarter than us and helps us have shiny mechs, like in anime!
     
  7. Warsymphony

    Warsymphony

    Joined:
    Sep 23, 2014
    Posts:
    43
    Would coding in emotions really be effective though?

    We as humans make decisions that are unethical regardless of biological hard coding (emotions). Just take a look through the news or search on Google; the internet is riddled with videos of horror. Every one of us does things we know we shouldn't. Most of them may be rather insignificant, but we know better and do them anyway.

    Nobody is around and I need to get to work before 8 so who cares if I do 70 in a 50.

    I really don’t have money to spend on this, but it’s my money I’ll buy what I want.

    I am going to eat this, so what if it makes me feel guilty. I may be trying to get back into shape but it’s just this once I won’t eat this again I swear.

    I am sure anyone of you could come up with better examples than me.

    My point is we all know right from wrong. We derive our ethics from observation and experience. But it’s not enough to know. It’s not enough to be rational, reasoning, or autonomous. No amount of force, control or laws will keep people from doing what they know is unethical. Because we have free will.

    If AI did exist some would be good and some would be bad, regardless of its hardcoding.

    The question is does AI imply free will? If so then AI could be unethical. If not then all AI could be ethical.
     
    angrypenguin, Ryiah and Kiwasi like this.
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    Ethics has nothing to do with emotion; on the contrary, it's there to put a halt to inappropriate emotions, like killing someone out of "anger".

    Ethics is rationalization: it's logic applied to a set of base values.

    That said, coding emotion is easy. Emotion is really just a scoring system for drives: it tags a situation with a score and an array of emotions, which are then processed by multiple different units and stored as sentiments. When we act, those sentiments are recalled and allow us to compute a utility for each action. You might be tempted to steal an object at a friend's house, but then you evaluate the harm that would be done to you (aversion to pain) or to your friend (protection of the bond), etc. You also evaluate potential solutions and their cost/reward (buy it on the internet, gain bond points with the friend), or alternatives. If the action is rejected based on that evaluation, you produce a reason that feeds back into the situation processor ("nah, it's not worth it, I don't really need it"), which cements the ethic further.
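    A minimal sketch of that scoring idea in Python (every tag name and weight below is invented for illustration):

        # Hypothetical sketch: tag each candidate action with emotion scores
        # recalled as sentiments, then compute a utility per action.
        EMOTION_WEIGHTS = {
            "desire": 0.5,               # pulls toward the action
            "aversion_to_pain": -1.0,    # expected harm to yourself
            "protection_of_bond": -0.8,  # expected harm to the friendship
        }

        def utility(action_tags):
            """Sum the emotion-weighted scores attached to an action."""
            return sum(EMOTION_WEIGHTS[e] * s for e, s in action_tags.items())

        actions = {
            "steal_object":    {"desire": 0.9, "aversion_to_pain": 0.7,
                                "protection_of_bond": 0.8},
            "buy_on_internet": {"desire": 0.9, "aversion_to_pain": 0.1},
        }

        best = max(actions, key=lambda a: utility(actions[a]))
        print(best)  # buy_on_internet -- "nah, stealing is not worth it"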
     
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    Biology has no ethics. It is concerned with survival and the fight-or-flight response. Those seem to be working well.

    Actually, you're wrong about it. There's no universal ethical system, and ethical considerations vary across cultures, so definitions of "right" and "wrong" will vary across individuals, cultures, and the like. However, that's very different from coding "disgust towards the idea of harming a human" into an AI.
     
    angrypenguin and Kiwasi like this.
  10. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    This.

    How does one code for abortion? How about using physical discipline to train children? Violence in defence of others? Do I have the right to control property, even if other people will starve because of it?

    I'm not trying to start a discussion on any of these issues. Just pointing out that ethics is a long way from being solved. "Everyone knows right from wrong" breaks down pretty darn quickly.
     
    Socrates likes this.
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    It's obvious that any ethical bot will operate with the morals of the culture that originated it, which doesn't bode well for minorities, lol.
     
  12. Dave-Carlile

    Dave-Carlile

    Joined:
    Sep 16, 2012
    Posts:
    967
    And programming self-driving cars to kill when it's for the greater good.
     
  13. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    @neoshaman -
    An ethical bot must operate in the culture it was designed to operate in, but the creator of an ethical bot wouldn't necessarily say so if they became emotionally upset.

    An ethical bot, such as a high-resolution video and sound (or radiation-recording, as appropriate for the environment) surveillance, safety, and health monitoring system, would record exactly what happened, and bigotry wouldn't be an option unless that form of bigotry was written into the law of the jurisdiction. There is no need to profile based on statistics when everything is recorded and checked.

    That is why some US politicians in some states are actually trying to make the use of video cameras in the workplace illegal (ahem, mostly to 'protect' the meat industry). And it is also why many jurisdictions are insisting that police wear body cameras; you'll notice the police are not complaining about that and welcome the technology. More telling is that the mass media claim the cameras are needed to stop police brutality, rather than to reveal the truth of the mental and physical abuse the police endure from criminals in their day-to-day job. Reverse the situation with the jurisdictions that have traffic surveillance systems and listen to the criminals howl in protest at their rights being violated. Those protests are transparent, self-serving, unethical BS. The mass media and politicians need to stop groveling at the feet of every self-serving whiner and serve the law as it is written.

    @Dave-Carlile
    That is the ill of the profit motive and the assumption that a 100% safe solution can't be found. That assumption is false. A self-driving car wouldn't need to kill if it was properly programmed and fitted with effective safety devices (which is pretty much the same thing as programming, expressed in a 3D environment).
     
  14. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    IMO, the car's first priority should be protecting the life of its owner (the owner is not necessarily a passenger). The second priority should be minimizing loss of life.

    In addition, such cars are significantly less likely to get into accidents, so it is extremely unlikely that a car will end up in a situation where it has to either crash into a wall or into a crowd of people.
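    As a decision rule, that's just a lexicographic comparison. A sketch (the maneuver list and scoring fields are invented for illustration):

        # Hypothetical sketch: protect the owner first, then minimize
        # overall loss of life as a tiebreaker.
        maneuvers = [
            {"name": "brake_straight", "owner_survives": True,  "expected_deaths": 1},
            {"name": "swerve_to_wall", "owner_survives": False, "expected_deaths": 0},
            {"name": "swerve_to_gap",  "owner_survives": True,  "expected_deaths": 0},
        ]

        best = min(maneuvers,
                   key=lambda m: (not m["owner_survives"], m["expected_deaths"]))
        print(best["name"])  # swerve_to_gap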
     
  15. Enoch

    Enoch

    Joined:
    Mar 19, 2013
    Posts:
    198
    The learning and input-evaluation steps in most machine learning algorithms existing today are usually separate (it's usually: evaluate input, compare output to the expected result, and then backpropagate a delta). But let's say we do use an algorithm that changes the neural net during the evaluation step. Letting the machine go off and learn without almost constantly testing how well it is learning against a strictly controlled set of test data would be bad engineering. The AI would be beyond broken well before it could be unethical. Basically, the robot wouldn't even be able to move, much less pick up a gun and shoot a human.

    Most AI built today is built to do something specific (find patterns in this data, find a way to move motors to traverse gravel, classify different objects in an image). It's all: evaluate input, compare output, and tell it how wrong or right it was, so it can get mathematically close to the right answer in the future. The only way we can guide what starts out as simply a random set of neuron values is to test and backpropagate a delta based on how well or poorly the AI did.

    At no time is the AI going off and doing things the engineer doesn't know about. At no time is it evaluating a data set the engineer did not feed it. And if you want it to actually work, you need to tell it how well or poorly it did in its evaluation. There will be emergent behavior based on what data it was fed, for sure. But it's really science fiction to expect that emergent behavior will suddenly let it feed itself a different data set that it evaluates on its own.
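    For anyone curious, that loop looks roughly like this in Python/NumPy, for a single linear neuron (the data, targets, and learning rate are all made up; real frameworks hide these steps):

        # Toy sketch of evaluate -> compare -> backpropagate a delta.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))         # inputs the engineer feeds in
        y = X @ np.array([2.0, -1.0, 0.5])    # expected results (known targets)

        w = rng.normal(size=3)                # start from random weights
        lr = 0.1
        for _ in range(200):
            pred = X @ w                      # 1. evaluate input
            delta = pred - y                  # 2. compare output to expected
            grad = X.T @ delta / len(y)       # 3. backpropagate the delta...
            w -= lr * grad                    #    ...nudging weights toward it

        print(w)  # converges near [2.0, -1.0, 0.5]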
     
  16. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
  17. Dave-Carlile

    Dave-Carlile

    Joined:
    Sep 16, 2012
    Posts:
    967
    That would be costly DLC :p

    Maybe. Life being unpredictable, and with millions of vehicles on the road, it will probably be more common than you'd think. There are plenty of scenarios where someone walks out in front of a car and it has to decide whether to run over the person or steer into a wall or something. It definitely has to be able to evaluate those sorts of choices.
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    In the specific case of self-driving cars it's a false problem. The car won't have to make ethical decisions or decide to kill before social regulation through law happens; basically, the law will decide who is at fault in which context. I.e., in a future where self-driving cars are the norm, it will be like trains: don't step on the reserved area (the rails) and you will be fine, stop at the light and push a button to ask to cross, etc.
     
    Kiwasi likes this.
  19. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    So you're telling me that all I have to do is multiply the steering angle by -1 when the vehicle detects a pedestrian to create a killing machine? Plus, it's a killing machine that avoids trees and other cars and gets excellent fuel economy, because it avoids jackrabbit starts and keeps it under 60.

    I know I am hostile and confrontational, but it is true... these bots have the potential to be pretty scary.
     
  20. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    Another problem: self-driving buses and public vandalism, self-driving freight trucks and ambushes. How does a machine deal with human behavior and responsibility?
     
  21. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    I think self-driving vehicles will do better than human drivers; there's not much uniquely human about driving other than the fact that we have never had any alternative.

    Vehicles can be constructed more safely without the need for a human driver; for example, people could sit rear-facing.

    But right now if a computer gets hacked, a bus full of people doesn't drive off a cliff.
     
  22. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,516
    We're not discussing the limitations of things that exist today, though.
     
    Kiwasi likes this.
  23. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,148
    They have to do better than some of my friends who make practically every seat the "suicide seat". :p
     
    Master-Frog likes this.
  24. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    I think I didn't express myself perfectly. Even though they drive better, they don't participate in social dynamics; they can't. If a bus is transporting a bunch of edgy teenagers (not a school bus, they just happen to meet there), how do you pacify a potentially explosive situation, or other types of situations? If a self-driving truck is transporting valuable goods, how does it deal with an ambush by organized thieves, or with an accident blocking the road with casualties that need help?
     
  25. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    You're overthinking it. It is not the AI driver's job to deal with those.
     
    Ryiah and Kiwasi like this.
  26. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,793
    Plus, self-driving cars have to scan. These are not down-the-wire closed systems, and ergo they have open frequencies that can be piggybacked onto to falsify data metrics. Spoofing a curve in the road onto a straight parade route, for example, and falsifying the crowd at the roadside as an open road with no vehicles, could lead to a great many injuries and deaths.
     
  27. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Chances are valuable cargo will still have human minders. For a while anyway. The minders just won't be responsible for the actual driving.
     
  28. Dave-Carlile

    Dave-Carlile

    Joined:
    Sep 16, 2012
    Posts:
    967
    Exactly. But eventually the cars will be able to just fight between themselves and the humans can be left to their creative pursuits.
     
    Kiwasi likes this.
  29. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    Also, good luck robbing an unmanned truck that knows no fear.

    The easiest approach for the "valuable cargo" scenario would be to program the truck to call the police and go into full defensive mode (lock all entrances and just sit there until the police arrive) when an attempt to attack/hijack it is detected. No need to fight anyone or make any moral decisions... and no reason to make the truck smarter than necessary.
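    That policy fits in a few lines. A sketch (the state names and stub methods are invented for illustration):

        # Hypothetical sketch of the "full defensive mode" policy above.
        from enum import Enum, auto

        class TruckState(Enum):
            DRIVING = auto()
            DEFENSIVE = auto()  # parked, locked, waiting for police

        class Truck:
            def __init__(self):
                self.state = TruckState.DRIVING

            def on_hijack_detected(self):
                """No fighting, no moral decisions: lock up, call it in, wait."""
                self.pull_over()
                self.lock_all_entrances()
                self.call_police()
                self.state = TruckState.DEFENSIVE

            # Stubs standing in for the truck's actual actuators/telematics.
            def pull_over(self): print("pulling over")
            def lock_all_entrances(self): print("entrances locked")
            def call_police(self): print("police notified")

        truck = Truck()
        truck.on_hijack_detected()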
     
    Kiwasi likes this.
  30. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    I'm saying it's a problem of legislation about responsibility.

    Seems like human jobs will move to "social overseer", as repair bots will be a thing.
     
  31. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,148
    At least until they determine the turrets won't automatically shoot innocents.
     
    Kiwasi likes this.
  32. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    A self-driving car uses cameras and other sensors, optical recognition algorithms, the laws of physics, and so on to override such tampering attempts, which are overly simplistic. They are not driving blind by strictly following GPS coordinates from point A to point B, no matter how accurate the GPS is. Also, these programs are not games; they will have checksums and even tighter security to validate the resident program and data. Then there is the matter of improving materials science to protect the occupants much better than current vehicles do now. Finally, the drivers themselves are unlikely to be willing to suffer if the car doesn't function as programmed, and so would take corrective action.
     
  33. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    Things are happening way faster than I expected; there is a net acceleration toward practical application ... o_O

    :eek:
     
    goat and dogzerx2 like this.
  34. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    I think this is the first one that doesn't creep me out

    If they gave it a wagging tail, I'd kinda want one

    The only thing that creeps me out is the part with the head staying in place relative to the body, and generally how the neck works

    If they could modify it to work more like a Velociraptor neck from Jurassic Park, and skin it like that too, it would be like I had my very own pet dinosaur
     
    Martin_H likes this.
  35. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    The part about the head staying in place is to show off control and spatial stability, I think
     
  36. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    Yeah, I get why they did it. But knowing that doesn't make it any less Uncanny Valley
     
  37. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    I'd get one if they did it up like a giraffe. I love giraffes.
     
    Martin_H likes this.
  38. KnightsHouseGames

    KnightsHouseGames

    Joined:
    Jun 25, 2015
    Posts:
    850
    Heh, I'd be down for that, though they'd have to do it to one of the bigger ones

    I still don't know why they didn't make one that has a saddle. They could make a robot steed. Imagine the effect that would have on the battlefield. Bring back the good old fashioned cavalry charge.

    What Boston Dynamics really oughta do, though, is start a robot petting zoo: outfit a bunch of its robots with outer skins that look like different animals, and let people come in and interact with them

    Imagine the amount of money that would make
     
    goat likes this.