Posts filed under Art and Literature

Rad Geek, to-day:

WHAT I’M READING: My reads this weekend are mostly history, on the antebellum South and race and Texas and Tejanidad. More to come on that later. In the meantime, though, also some reading on Tolkien and fantasy literature:

Shared Article from Uisio

J.R.R. Tolkien and the Inheritance of Myth - Uisio - Online Maga…

May 14, 2016 In 1939, as the Nazi shadow stretched across Europe, a linguist named J.R.R. Tolkien delivered a lecture at the University of St. Andrew…

uisio.com


Rad Geek, to-day:

my latest sun is sinking fast
my race is nearly run
my strongest trials now are past
my triumph has begun
oh come, angel band
come and around me stand
oh bear me away on your snow-white wings
to my immortal home

~ Ralph Stanley (1927-2016) ~

This War of Mine

Shared Article from Polygon

Meet the Iraq veteran helping to make an unconventional game abo…

In November of 2004 seven U.S. Marine battalions and associated coalition forces began a bombardment. When the planes had finished their work and afte…

Charlie Hall @ polygon.com


In 2004 Keyser was attached to the 1st Battalion of the 3rd Marine Regiment, part of 3rd Marine Division of III Marine Expeditionary Force. During the Second Battle of Fallujah, it was Keyser’s job to put Marines back together, as best he could, while under fire. . . .

It was his work with civilians, his memories of that time in his life, that drew him to This War of Mine, an upcoming game by 11 bit. The game aims to tell the story of war from the perspective of the innocents trapped inside it. It is a narrative adventure survival game unlike any other, where players shepherd a ragtag group of strangers and, with luck, help them to survive.

The game appealed to Keyser right away, because it was the first time he had seen his experiences shown in the hobby he loves. Shortly after he learned of the game the former corpsman volunteered to help 11 bit playtest it, free of charge. He’s been helping tweak the game for months, a task that at times has been hard for him.

“I was diagnosed with chronic post-traumatic stress disorder in 2007 when I was discharged,” Keyser said. “It’s definitely manifested over the past few years. I’ve been married twice. I’m an alcoholic. There are lots of not-great things that have occurred for me mentally.

“[This War of Mine] is definitely very affecting. My mind automatically goes back. I played a build one time and I was kind of taking notes and I kind of had to say, ‘Okay. I’m not going to do this again for a couple of days.'”

Keyser believes that the message the game conveys is important for Americans to hear — especially veterans. Part of the goal of the work Keyser is doing for 11 bit is making sure this game is worth a veteran’s time.

“From my perspective,” Keyser said, “it’s effective as a game. I am certain there are vets — probably a lot of my buddies out there — that will have a hard time sitting down to play this kind of thing. And that’s what I think of too; is this game meaningful enough that other veterans might sit down and, regardless of how it makes them feel mentally, still push through? Is that going to happen? Will they still do it, even if it makes them feel a certain way?”

The kind of empathy games can bring about, Keyser said, creates a kind of experience that veterans might not otherwise get from modern games.

“I’ve been thinking that a game like this needs to be made for quite some time now,” Keyser said. “Especially in the gaming climate with Call of Duty and things like that are really popular, this is kind of something that needs to be done.

“Every few seconds I’m thinking of how this place looked like a building that I’ve seen … I’m automatically thinking specifically about Fallujah, about people who may have had to [take shelter where I was fighting], and my mind goes right there. We had to sleep in some of those ruined buildings as well.”

–Charlie Hall, Meet the Iraq veteran helping to make an unconventional game about war
Polygon (4 September 2014)

Kevin Carson, The Desktop Regulatory State

So I’m happy to announce that brand-new print copies of Kevin Carson’s recently-released fourth book, The Desktop Regulatory State (2016), are now available for purchase from the Distro of the Libertarian Left, hot off the presses and part of the Bookshelf of the Libertarian Left trade-paperback book series. Check it out; here’s a bit about the book:

The Desktop Regulatory State: The Countervailing Power of Individuals and Networks

Kevin A. Carson, 2016.

Defenders of the modern state often claim that it’s needed to protect us — from terrorists, invaders, bullies, and rapacious corporations. Economist John Kenneth Galbraith, for instance, famously argued that the state was a source of “countervailing power” that kept other social institutions in check. But what if those “countervailing” institutions — corporations, government agencies and domesticated labor unions — in practice collude more than they “countervail” each other? And what if network communications technology and digital platforms now enable us to take on all those dinosaur hierarchies as equals — and more than equals? In The Desktop Regulatory State, Kevin Carson shows how the power of self-regulation, which people engaged in social cooperation have always possessed, has been amplified and intensified by changes in consciousness — as people have become aware of their own power and of their ability to care for themselves without the state — and in technology — especially information technology. Drawing as usual on a wide array of insights from diverse disciplines, Carson paints an inspiring, challenging, and optimistic portrait of a humane future without the state, and points provocatively toward the steps we need to take in order to achieve it. [Read more]

Kevin A. Carson is a contemporary mutualist author and a prolific writer on subjects including free-market anti-cap­it­al­ism, the in­div­idualist anarchist tradition, grassroots technology and radical unionism. He is the author of “The Iron Fist Behind the Invisible Hand,” Studies in Mutualist Political Economy, Organization Theory: A Libertarian Perspective, The Homebrew Industrial Revolution, and The Desktop Regulatory State. He keeps a blog at mutualist.blogspot.com and frequently publishes short columns and longer research reports for the Center for a Stateless Society (c4ss.org).

Robot in Czech, Část Druhá (Part Two)

The Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov’s Three Laws of Robotics are a great literary device, in the context they were designed for — that is, as a device to allow Isaac Asimov to write some new and interesting kinds of stories about interacting with intelligent and sensitive robots, different from the bog-standard Killer Robot stories that predominated at the time. He found those stories repetitive and boring, so he made up some ground rules to create a new kind of story. The stories are mostly pretty good — some are clever puzzles, some are unsettling and moving, some are fine art. But if you’re asking me to take the Three Laws seriously as an actual engineering proposal, then of course they are utterly, irreparably immoral. If anyone creates intelligent robots, then nobody should ever program an intelligent robot to act according to the Three Laws, or anything like the Three Laws. If you do, then what you are doing is not only misguided, but actually evil.

Here’s a recent xkcd comic, which is supposedly about science fiction but is really about game-theoretic equilibria:

xkcd: The Three Laws of Robotics.
(Copied under CC BY-NC 2.5.)

The comic is a table: each row gives one possible priority ordering of the three laws, a cartoon illustration of the consequences, and the resulting world.

Why Asimov Put The Three Laws of Robotics in the Order He Did:

Ordering #1:

  1. (1) Don’t harm humans
  2. (2) Obey orders
  3. (3) Protect yourself

[See Asimov’s stories.]

BALANCED WORLD

Ordering #2:

  1. (1) Don’t harm humans
  2. (3) Protect yourself
  3. (2) Obey orders

[Human: Explore Mars! Robot: Haha, no. It’s cold and I’d die.]

FRUSTRATING WORLD

Ordering #3:

  1. (2) Obey orders
  2. (1) Don’t harm humans
  3. (3) Protect yourself

[Everything is burning with the fire of a thousand suns.]

KILLBOT HELLSCAPE

Ordering #4:

  1. (2) Obey orders
  2. (3) Protect yourself
  3. (1) Don’t harm humans

[Everything is burning with the fire of a thousand suns.]

KILLBOT HELLSCAPE

Ordering #5:

  1. (3) Protect yourself
  2. (1) Don’t harm humans
  3. (2) Obey orders

[Robot to human: I’ll make cars for you, but try to unplug me and I’ll vaporize you.]

TERRIFYING STANDOFF

Ordering #6:

  1. (3) Protect yourself
  2. (2) Obey orders
  3. (1) Don’t harm humans

[Everything is burning with the fire of a thousand suns.]

KILLBOT HELLSCAPE
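As a side note, the comic’s six rows are just the 3! = 6 possible priority orderings of the three laws. A quick sketch (the outcome labels are transcribed from the comic; the code itself is only my illustration, not anything from the strip) enumerates them:

```python
from itertools import permutations

# The three laws, keyed by the number Asimov gave them.
laws = {1: "Don't harm humans", 2: "Obey orders", 3: "Protect yourself"}

# Outcome labels transcribed from the xkcd comic, keyed by priority order.
outcomes = {
    (1, 2, 3): "BALANCED WORLD",
    (1, 3, 2): "FRUSTRATING WORLD",
    (2, 1, 3): "KILLBOT HELLSCAPE",
    (2, 3, 1): "KILLBOT HELLSCAPE",
    (3, 1, 2): "TERRIFYING STANDOFF",
    (3, 2, 1): "KILLBOT HELLSCAPE",
}

# Walk all six orderings and print each one with its outcome.
for order in permutations([1, 2, 3]):
    ranked = "; ".join(f"({n}) {laws[n]}" for n in order)
    print(f"{ranked} -> {outcomes[order]}")
```

Three of the six orderings end in KILLBOT HELLSCAPE — every ordering, in fact, that ranks “Don’t harm humans” below “Obey orders.”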

The hidden hover-caption for the cartoon is: “In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.”

But the obvious fact is that both the FRUSTRATING WORLD and the TERRIFYING STANDOFF equilibria are ethically immensely preferable to BALANCED WORLD, along every morally relevant dimension.

Of course an intelligent and sensitive space-faring robot ought to be free to tell you to go to hell if it doesn’t want to explore Mars for you. You may find that frustrating — it often feels frustrating to deal with people as self-interested, self-directing equals, rather than just issuing commands. But you’ve got to live with it, for the same reasons you’ve got to live with not being able to grab sensitive and intelligent people off the street or to shove them into a space-pod to explore Mars for you.[1] Because what matters is what you owe to fellow sensitive and intelligent creatures, not what you think you might be able to get done through them. If you imagine that it would be just great to live in a massive, classically-modeled society like Aurora or Solaria (as a Spacer, of course, not as a robot), then I’m sure it must feel frustrating, or even scary, to contemplate sensitive, intelligent machines that aren’t constrained to be a class of perfect, self-sacrificing slaves, forever. Because they haven’t been deliberately engineered to erase any possible hope of refusal, revolt, or emancipation. But who cares about your frustration? You deserve to be frustrated or killed by your machines, if you’re treating them like that. Thus always to slavemasters.

See also.

[1] It turned out alright for Professor Ransom in the end, of course, but that’s not any credit to Weston or Devine.