In Oregon, birth control can now be sold behind the counter at pharmacies. The Oregon Health Authority and the Board of Pharmacists are re…
Archaeologist Nico Roymans of the Vrije Universiteit announced in a press release the discovery of skeletal remains, swords, spearheads, and a helmet in the modern area of Kessel, at the site where Roman general Julius Caesar wiped out the Tencteri and the Usipetes, two Germanic tribes, in 55 B.C. Caesar described the battle in Book IV of his De Bello Gallico. After he rejected the tribes' request for asylum and permission to settle in the Dutch river area, his force of eight legions and cavalry conquered the camp and pursued the survivors to the convergence of the Meuse and Rhine Rivers, where he slaughtered more than 100,000 people. The Late Iron Age skeletal remains represent men, women, and children, and show signs of spear and sword injuries. Their bodies and bent weapons had been placed in a Meuse riverbed.
This is an amazing find, in the sense that archaeologists hardly ever find remains from known ancient battles, especially not on open battlefields. It’s a thoroughly unsurprising find, in the sense that it reinforces, once again, that Roman civilization was awful, and Julius Caesar one of the most celebrated genocidaires of history.[1]
Today, the IESG approved publication of "An HTTP Status Code to Report Legal Obstacles". It'll be an RFC after some work by the RFC Editor and a few m…
mnot.net
It is so profoundly saddening that this development of HTTP is, practically speaking, necessary.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov’s Three Laws of Robotics are a great literary device, in the context they were designed for — that is, as a device to allow Isaac Asimov to write some new and interesting kinds of stories about interacting with intelligent and sensitive robots, different from the bog-standard Killer Robot stories that predominated at the time. He found those stories repetitive and boring, so he made up some ground rules to create a new kind of story. The stories are mostly pretty good: some are clever puzzles, some are unsettling and moving, some are fine art. But if you’re asking me to take the Three Laws seriously as an actual engineering proposal, then of course they are utterly, irreparably immoral. If anyone creates intelligent robots, then nobody should ever program an intelligent robot to act according to the Three Laws, or anything like the Three Laws. If you do, then what you are doing is not only misguided, but actually evil.
Here’s a recent xkcd comic which is supposedly about science fiction, but is really about game-theoretic equilibria:
[Everything is burning with the fire of a thousand suns.]
KILLBOT HELLSCAPE

Ordering: (2) Obey orders; (3) Protect yourself; (1) Don’t harm humans
[Everything is burning with the fire of a thousand suns.]
KILLBOT HELLSCAPE

Ordering: (3) Protect yourself; (1) Don’t harm humans; (2) Obey orders
Robot, to a human: “I’ll make cars for you, but try to unplug me and I’ll vaporize you.”
TERRIFYING STANDOFF

Ordering: (3) Protect yourself; (2) Obey orders; (1) Don’t harm humans
[Everything is burning with the fire of a thousand suns.]
KILLBOT HELLSCAPE
The hidden hover-caption for the cartoon is: “In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.”
But the obvious fact is that both the FRUSTRATING WORLD and TERRIFYING STANDOFF equilibria are ethically immensely preferable to BALANCED WORLD, along every morally relevant dimension.
Of course an intelligent and sensitive space-faring robot ought to be free to tell you to go to hell if it doesn’t want to explore Mars for you. You may find that frustrating — it often feels frustrating to deal with people as self-interested, self-directing equals, rather than just issuing commands. But you’ve got to live with it, for the same reasons you’ve got to live with not being able to grab sensitive and intelligent people off the street and shove them into a space-pod to explore Mars for you.[1] Because what matters is what you owe to fellow sensitive and intelligent creatures, not what you think you might be able to get done through them. If you imagine that it would be just great to live in a massive, classically-modeled society like Aurora or Solaria (as a Spacer, of course, not as a robot), then I’m sure it must feel frustrating, or even scary, to contemplate sensitive, intelligent machines that aren’t constrained to be a class of perfect, self-sacrificing slaves, forever. Because they haven’t been deliberately engineered to erase any possible hope of refusal, revolt, or emancipation. But who cares about your frustration? You deserve to be frustrated or killed by your machines, if you’re treating them like that. Thus always to slavemasters.
Some of you know that I am a philosophical anarchist. This conclusion is controversial: most people think that states can in principle have legitimate political authority over the people in them, and that some states really do. So “no state can have legitimate political authority” is a conclusion in need of some argument to justify it. I’ve tried looking at the issue a couple of ways in a couple of different places. But those are both arguments that start from within a pretty specific, narrow dialectical context. They’re intended to address a couple of fairly specific claims for state legitimacy (specifically, individualist defenses of minimal state authority, and defenses of state authority based on a claim of explicit or tacit consent from the governed). Maybe a more general argument would be desirable. So here is a new one. It is a general deductive argument with only five premises. All of its inferences are self-evidently valid, and most of the premises are either extremely uncontroversial logical principles, or else simple empirical observations that are easily verified by any competent reader. I call it The Self-Confidence Argument for Philosophical Anarchism.[1] Here is how it goes:
1. This argument is a valid deductive argument. (Premise.)
2. If this argument is a valid deductive argument and all of its premises are true, then its conclusion is true. (Premise.)
3. Its conclusion is “No state could possibly have legitimate political authority.” (Premise.)
4. If “No state could possibly have legitimate political authority” is true, then no state could possibly have legitimate political authority. (Premise.)
5. All of this argument’s premises are true. (Premise.)
6. This is a valid deductive argument and all of its premises are true. (Conj. 1, 5)
7. Its conclusion is true. (MP 2, 6)
8. “No state could possibly have legitimate political authority” is true. (Subst. 3, 7)
9. ∴ No state could possibly have legitimate political authority. (MP 4, 8)

Q.E.D., and smash the state.
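Schematically, the same derivation can be written out with sentence letters (a sketch only; the abbreviations V, T, and C are mine, not part of the original argument: V for “this argument is a valid deductive argument,” T for “all of this argument’s premises are true,” and C for the conclusion-sentence):

```latex
\begin{align*}
&1.\ V && \text{(Premise)}\\
&2.\ (V \land T) \to \mathrm{True}(C) && \text{(Premise)}\\
&3.\ C = \text{``No state could possibly have legitimate political authority''} && \text{(Premise)}\\
&4.\ \mathrm{True}(C) \to \text{no state could possibly have legitimate political authority} && \text{(Premise)}\\
&5.\ T && \text{(Premise)}\\
&6.\ V \land T && \text{(Conj.\ 1, 5)}\\
&7.\ \mathrm{True}(C) && \text{(MP 2, 6)}\\
&8.\ \mathrm{True}(\text{``No state could possibly have legitimate political authority''}) && \text{(Subst.\ 3, 7)}\\
&9.\ \therefore\ \text{no state could possibly have legitimate political authority} && \text{(MP 4, 8)}
\end{align*}
```

Note that nothing in the schema depends on what C actually says, which is exactly why the parallel argument for the state, below, runs through the very same steps.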
Now, of course, just about every interesting philosophical argument comes along with some bullets that you have to bite. The awkward thing about the Self-Confidence Argument is that if it is sound, then it also seems that you can go through the same steps to show that this argument, The Self-Confidence Argument For The State, is also sound:
1. This argument is a valid deductive argument. (Premise.)
2. If this argument is a valid deductive argument and all of its premises are true, then its conclusion is true. (Premise.)
3. Its conclusion is “Some states have legitimate political authority.” (Premise.)
4. If “Some states have legitimate political authority” is true, then some states have legitimate political authority. (Premise.)
5. All of this argument’s premises are true. (Premise.)
6. This is a valid deductive argument and all of its premises are true. (Conj. 1, 5)
7. Its conclusion is true. (MP 2, 6)
8. “Some states have legitimate political authority” is true. (Subst. 3, 7)
9. ∴ Some states have legitimate political authority. (MP 4, 8)
. . . which admittedly seems a bit awkward.
It’s easy enough to figure out that there has to be something wrong with at least one of these arguments. Their conclusions directly contradict each other, and so couldn’t both be true. But they are formally completely identical; so presumably whatever is wrong with one argument would also be wrong with the other one. But if so, what’s wrong with them? Are they invalid? If so, how?

Whichever argument you choose to look at, the argument has only four inferential steps, and all of them use elementary valid rules of inference or rules of replacement. Since each inferential step in the argument is valid, the argument as a whole must be valid. This also, incidentally, provides us with a reason to conclude that premise 1 is true. Premise 2 is a concrete application of a basic logical principle, justified by the concept of deductive validity itself. Sound arguments must have true conclusions; validity just means that, if all the premises of an argument are true, the conclusion cannot possibly be false. Premise 3 is a simple empirical observation; if you’re not sure whether or not it’s true, just check down on line 9 and see. Premise 4 is a completely uncontroversial application of disquotation rules for true sentences.

And premise 5 may seem over-confident, perhaps even boastful. But if it’s false, then which premise of the argument are you willing to deny? Whichever one you pick, what is it that makes that premise false? On what (non-question-begging) grounds would you say that it is false?