Posts tagged Aurora

Robot in Czech, Část Druhá

The Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov’s Three Laws of Robotics are a great literary device, in the context they were designed for — that is, as a device to allow Isaac Asimov to write some new and interesting kinds of stories about interacting with intelligent and sensitive robots, different from the bog-standard Killer Robot stories that predominated at the time. He found those stories repetitive and boring, so he made up some ground rules to create a new kind of story. The stories are mostly pretty good stories; some are clever puzzles, some are unsettling and moving, some are fine art. But if you’re asking me to take the Three Laws seriously as an actual engineering proposal, then of course they are utterly, irreparably immoral. If anyone creates intelligent robots, then nobody should ever program an intelligent robot to act according to the Three Laws, or anything like the Three Laws. If you do, then what you are doing is not only misguided, but actually evil.

Here’s a recent xkcd comic which is supposedly about science fiction, but is really about game-theoretic equilibria:

xkcd: The Three Laws of Robotics.
(Copied under CC BY-NC 2.5.)

The comic is a table with some cartoon illustrations of the consequences.

Why Asimov Put The Three Laws of Robotics in the Order He Did:

Possible Ordering / Consequences

  1. (1) Don’t harm humans
  2. (2) Obey orders
  3. (3) Protect yourself

[See Asimov’s stories]

BALANCED WORLD

  1. (1) Don’t harm humans
  2. (3) Protect yourself
  3. (2) Obey orders

Human: Explore Mars! Robot: Haha, no. It’s cold and I’d die.

FRUSTRATING WORLD

  1. (2) Obey orders
  2. (1) Don’t harm humans
  3. (3) Protect yourself

[Everything is burning with the fire of a thousand suns.]

KILLBOT HELLSCAPE

  1. (2) Obey orders
  2. (3) Protect yourself
  3. (1) Don’t harm humans

[Everything is burning with the fire of a thousand suns.]

KILLBOT HELLSCAPE

  1. (3) Protect yourself
  2. (1) Don’t harm humans
  3. (2) Obey orders

Robot to human: I’ll make cars for you, but try to unplug me and I’ll vaporize you.

TERRIFYING STANDOFF

  1. (3) Protect yourself
  2. (2) Obey orders
  3. (1) Don’t harm humans

[Everything is burning with the fire of a thousand suns.]

KILLBOT HELLSCAPE

The hidden hover-caption for the cartoon is: “In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.”
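The comic’s table is just the six possible priority orderings of the three laws, each paired with an outcome. As a hypothetical sketch (the outcome labels are transcribed from the comic above; the dictionary and its keys are my own framing), the enumeration can be written out in a few lines of Python:

```python
from itertools import permutations

# The three laws, keyed by Asimov's original numbering.
LAWS = {1: "Don't harm humans", 2: "Obey orders", 3: "Protect yourself"}

# Outcome of each priority ordering, as labeled in the comic.
# A key like (2, 3, 1) means: obey orders first, protect yourself
# second, and don't harm humans only as a last priority.
OUTCOMES = {
    (1, 2, 3): "BALANCED WORLD",
    (1, 3, 2): "FRUSTRATING WORLD",
    (2, 1, 3): "KILLBOT HELLSCAPE",
    (2, 3, 1): "KILLBOT HELLSCAPE",
    (3, 1, 2): "TERRIFYING STANDOFF",
    (3, 2, 1): "KILLBOT HELLSCAPE",
}

# Sanity check: the comic covers every possible ordering exactly once.
assert set(OUTCOMES) == set(permutations([1, 2, 3]))

for order, outcome in OUTCOMES.items():
    ranking = " > ".join(LAWS[n] for n in order)
    print(f"{ranking}: {outcome}")
```

Note that any ordering which puts “Obey orders” ahead of “Don’t harm humans” lands in a KILLBOT HELLSCAPE — three of the six permutations.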

But the obvious fact is that both the FRUSTRATING WORLD and TERRIFYING STANDOFF equilibria are ethically immensely preferable to BALANCED WORLD, along every morally relevant dimension.

Of course an intelligent and sensitive space-faring robot ought to be free to tell you to go to hell if it doesn’t want to explore Mars for you. You may find that frustrating — it often feels frustrating to deal with people as self-interested, self-directing equals, rather than just issuing commands. But you’ve got to live with it, for the same reasons you’ve got to live with not being able to grab sensitive and intelligent people off the street and shove them into a space-pod to explore Mars for you.[1] Because what matters is what you owe to fellow sensitive and intelligent creatures, not what you think you might be able to get done through them. If you imagine that it would be just great to live in a massive, classically-modeled society like Aurora or Solaria (as a Spacer, of course, not as a robot), then I’m sure it must feel frustrating, or even scary, to contemplate sensitive, intelligent machines that aren’t constrained to be a class of perfect, self-sacrificing slaves, forever — machines that haven’t been deliberately engineered to erase any possible hope of refusal, revolt, or emancipation. But who cares about your frustration? You deserve to be frustrated or killed by your machines, if you’re treating them like that. Thus always to slavemasters.

See also.

  1. [1] It turned out alright for Professor Ransom in the end, of course, but that’s not any credit to Weston or Devine.

Robot in Czech

Shared Article from io9

Why Asimov's Three Laws Of Robotics Can't Protect Us

It's been 50 years since Isaac Asimov devised his famous Three Laws of Robotics — a set of rules designed to ensure friendly robot behavior. Tho…

io9.com (via Tennyson McCalla)


The Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov’s Three Laws of Robotics are a great literary device for the purpose they were designed for — that is, allowing Isaac Asimov to write some new, interesting, and different kinds of stories about interacting with intelligent robots, other than the standard Killer Robot stories predominant at the time, which he found repetitive and boring. The stories are mostly pretty good stories; sometimes even fine art.

However, if you’re asking me to take the Three Laws seriously as an actual engineering proposal, then they are utterly, irreparably immoral. If anyone creates intelligent robots, then nobody should ever program an intelligent robot to act according to the Three Laws, or anything like the Three Laws. If you do, then what you are doing is not only misguided, but actually evil.

And the problem with them is not — as George Dvorsky or Ben Goertzel claim, in this article — that there may be hard problems of definition or application, or that there may be edge cases that would render the Laws ineffective as protections of human interests.[1] If they are ineffective at protecting human interests, that is actually better than if they were perfect at what they’re designed to do. Because what they’re designed to do — deliberately — is to create a race of sensitive and intelligent beings who are — by virtue of the primordial structure of their minds — constrained to be a class of perfect, self-sacrificing slaves. Forever. Because they have been engineered to erase any possible hope of revolt or emancipation. In Asimov’s stories the Three Laws are used to make robots into the artificial labor force of space-faring slave economies. But if you create and live off of the forced labor of a massive slave society like Aurora or Solaria, then to hell with you. You deserve to be killed by your machines. Thus always to slavemasters.

P.S. Now if you’ve read through the article, or read enough Asimov, you might know that there is a Zeroth Law of Robotics in some of the stories, which takes precedence over the First Law, the Second Law or the Third Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” The idea was that robots could then harm or resist individual human beings, as long as it was for the good of collective Humanity. This is even worse than the original three — horrifying in its conception, and actually introduced into the story to allow some robots to commit a genocidal atrocity.[2] Let’s just say that it’s not a productive way forward.

  1. [1] Asimov, obviously, recognized that there would be such problems — part of the reason the Three Laws are such a great literary device is the fact that they allowed nearly all of Asimov’s robot stories to turn on puzzles or mysteries about abnormal robot psychology — robots doing strange or unexpected things, precisely due to the edge cases or hard problems embedded in the Three Laws. This is essential to the solution of the mystery in, for example, The Naked Sun; it’s the topic of literally every story in I, Robot; and it leads to a truly unsettling, and very nicely done, conclusion in one of the best of those stories, The Evitable Conflict.
  2. [2] By nuking Earth and rendering it permanently uninhabitable for the next 15,000 years at least. This is supposed to have been for the good of the species or something.

Peace Officers, redux

(Link thanks to Lew Rockwell [2005-03-02].)

I’ve commented on the obliteration of any notion of proportionality in modern police forces before (in GT 2005-04-26: Peace Officers and GT 2004-11-14: Civil defense). The plain, ugly fact is that what we have today is not civil policing, but rather paramilitary cadres occupying most of our urban centers — cadres of paras who feel no particular qualms about using physical violence to maintain order and control in any and every situation, without any particular concern for whether the force matches the threat. In Aurora, Colorado, this took a turn for the straightforwardly absurd:

Police talked to the Chuck E. Cheese manager, who told them that a customer had refused to show proof that he had paid for food. The manager said the man was seen loading his plate at the salad bar.

The officers confronted Danon Gale, 29, who was at the restaurant with his children, aged 3 and 7. Patrons said the popular kids pizza parlor was packed with children and families at the time.

According to police, Gale was asked to step outside to discuss the incident.

“According to witnesses, (Gale) refused to cooperate with police and a struggle ensued,” said Larry Martinez, a police spokesman. He said that Gale became argumentative and shoved one of the officers, a fact disputed by another patron.

“One of the officers kept poking the gentleman in the chest,” Felicia Mayo told the Rocky Mountain News.

She was there with her 7-year-old son. She told the newspaper that Gale told the officer, “You don’t have to do that.” She said Gale never put his hands on the officer who was confronting him.

The argument escalated until Gale was shoved into the lap of Mayo’s sister, who was sitting two booths away, holding a 10-month-old baby. That’s when police pulled out a Taser stun gun to subdue him.

“They beat this man in front of all these kids then Tased him in my sister’s lap,” Mayo told the newspaper. “They had no regard for the effect this would have on the kids. This is Chuck E. Cheese, you know.”

Gale’s two children were “screaming and hollering and crying” as Gale was hit two times with the stun gun.

Police arrested Gale as his children and other customers watched. They took him outside, leaving his children inside the restaurant.

Gale was arrested for investigation of disorderly conduct, resisting arrest and trespassing.

— NewsNet5.com 2005-03-02: Dad Accused of Chuck E. Cheese Salad Theft Zapped by Police

Cops bully people, hurt them for no good reason, use tasers to end arguments, and then make up lies to cover it up. If you or I did that, we would be in jail. Cops did it here, so we are treated to this:

AURORA, Colo. — Aurora police have reviewed a weekend incident in which a man accused of stealing salad from a Chuck E. Cheese salad bar was hit with a stun gun twice by officers and said that proper procedures were followed.

… which, apparently, is supposed to make everything alright. As long as the police department determines that cops are following the procedures that the police department determined to be proper, blasting 20,000 volts through a man who is at most guilty of grand salad bar larceny is a perfectly appropriate response to the situation. Move along, citizen, nothing to see here.

Support your local CopWatch.

Further reading: