I don’t think that technology is solely a matter of “information.” Insofar as technology is distinct from pure science, a technological enterprise has to involve some reference to material production and use.
I disagree. The concept of three helium atoms arranged in an equilateral triangle with edges 3 nm long is a different type of concept from a scientific concept, in the same way that a certain possible arrangement of pieces on a chess board is just an arrangement of pieces — it’s additional information, distinct from information about the rules governing the movement of those pieces and the bounds on their possible arrangements. The technology of a windmill is just the blueprint of the windmill (possibly with annotated notes on how a force applied across here will lead to forces in resulting proportion here and here). It can come embedded in actual material structures and be understood and/or used, but it doesn’t have to. If a society has invented gunpowder, retains knowledge of it, but doesn’t use it as our bundled concept of “gunpowder” — or doesn’t use it at all — we don’t say that they lack the technology of gunpowder. They have it.
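The arrangement-versus-rules distinction can be made concrete in a short sketch (the names and the toy chess fragment here are hypothetical illustrations, not a real chess library): the arrangement is pure data, fully specified without consulting any rules, while the rules are a separate body of information that only matters once you ask what can be done with the arrangement.

```python
# The arrangement: just coordinates and piece labels — inert structural
# information, complete in itself.
arrangement = {("e", 1): "white king", ("e", 8): "black king", ("d", 4): "white rook"}

# The rules: a distinct chunk of information about how a piece may move.
def rook_moves(square, occupied):
    """Squares a rook on `square` could move toward (ignoring blocking
    pieces, for brevity): any square on the same file or the same rank."""
    file, rank = square
    files = "abcdefgh"
    moves = {(f, rank) for f in files} | {(file, r) for r in range(1, 9)}
    moves.discard(square)          # a piece doesn't "move" to its own square
    return moves - set(occupied)   # can't land on an occupied square

# The arrangement exists whether or not the rules are ever applied to it;
# applying the rules is a further, separate act.
print(("d", 8) in rook_moves(("d", 4), arrangement))  # → True
```

The point of the split is that `arrangement` never references `rook_moves` at all — the configuration is intelligible, storable, and transmissible entirely apart from any practice of playing.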
A society can have nuclear bomb technology, and yet have deliberately built no nuclear bombs or even bomb-making facilities. (you can insert your own WMD joke)
To be able to speak of something as a technology, it needs to be something that’s embedded in the means-end structures of rational agents acting in the world.
While I have been downplaying it, I think you’re right that use plays a role in our best definition, but I would argue that it can simply be ‘capacity to be used.’ Further, use by a rational agent does not imply embeddedness in a social context. We can take any technology that you might claim tends towards bad ends and place it outside of any social concerns, in the context of the scientist on the desert island — presto, the bad ethical consequences completely disappear.
Well, in an important sense no — the equipment is the same but the technology is different, because to have avionic technology is not just to have certain kinds of equipment, but to employ it in certain ways.
But in an important sense YES — because whether or not they’re aware of it, the structures provide them with access to certain capacities. I agree that there’s utility to the concept of “structures that provide a capacity” + “employment of them,” but that doesn’t mean that there isn’t utility to the concept of “structures that provide a capacity” alone. My concern is that in bundling them together we lose a bit of the ability to speak of “structures that provide a capacity” WITHOUT “employment of them.” Just what word would you have us use to describe the aviation technologies the primitives possess? I think it’s important to still be able to imply that these CAN be employed — whether or not they are (and whether or not the employment their possessors dream up is in the same context we would normally conceive — i.e. using propellers to giblet fruit). “Aviation” is the social context we apply over the technology, the specific way we frame it. We can apply the same sort of social context as a modifying or clarifying term on a “realm” of science, but that doesn’t mean that science, or concepts in science, are dependent upon the same social context that the modifying/clarifying term is.
The affordances that a technology has — what it makes easier and what it makes harder — may have effects that outstrip any individual person’s intentions in adopting the technology
But I think it’s a huge mistake to assume that those affordances will be the same in different contexts! There’s just so damn much complexity to those sorts of situations, from social and cultural biases and focuses to material limitations, and strands of technological development are not some linear, deterministic process. It does not seem sufficiently argued to me that working out the formulas for critical mass and high megatonnage in any way determines that a society will end up applying them in the context of a genocidal weapon. I can easily imagine societies and cultures that react to such a development by launching off in a different direction than we did. Just as I can see DARPANET being managed in a very different direction than LOLcats and infoshop.org. I know that we tend to see technologies as opening up inexorable cans of worms these days, but (unfortunately) that’s not necessarily true. Mistakes, quirks of individual focus, social shifts, differing speeds of development, and the impact/interplay of other technological strands can play a huge role. So it really seems to me like all you’re getting at is the “law” of unintended consequences. Which is far different from some specific internal logic.
Oh, since I was reminded of it by William’s challenge on whether or not the heterogeneity response cedes too much, I should also mention that my point isn’t just to say “There’s more to technological civilization than skyscrapers,” or whatever. I think that primitivists and techno-skeptics go wrong not only by missing forms of technology that fall outside the narrow class of things they are critiquing (which they certainly do), but also in that a lot of the things they say about the specific technologies that they are critiquing are dead wrong — sometimes boneheadedly so. The common idiot notion that the mass diffusion of high-speed long-distance communications — cell phones, the Internet, etc. — is somehow alienating people from real communication and social relationships, for example, is both wrong and stupid. So are a lot of the most common tech-conservative attacks on cars, teevee, or the alleged evils of allopathic medicine and modern pharmaceutical technology. (All of which certainly have their problems — but often not really the problems that their critics would like to insist on.)
I think there are some philosophical issues at stake here which are worth arguing about, but that a lot of apparent disagreement here is in fact more or less purely verbal.
I’m surprised and more than a little disappointed to hear you talk in terms of certain technologies’ “internal logic.” Try as I might, the most charitable way to describe that language is just, well, mysticism. Chunks of plutonium-239 mashed together to achieve critical mass are just an arrangement of atoms.
I agree that chunks of Pu-239 near each other — while no doubt dangerous in relation to human health, etc. — are just arrangements of atoms with no internal logic to speak of, other than their behavior under natural law.
But I don’t know how that is a reply to my point. Chunks of Pu-239 near each other, just as such, aren’t a technology;
they’re just a phenomenon. To be able to speak of something as a technology, it needs to be something that’s embedded in the means-end structures of rational agents acting in the world. What you try to separate out as the social context of application and intention, I consider to be baked in already as soon as you start talking about something as a technology, as opposed to just a fact to be observed. (Imagine a cargo cult that managed to get its hands on real airplanes, control towers, etc., but, rather than understanding them and using them as outsiders understand and use them, instead simply incorporated them into the same old rituals. Does this society have avionic technology? Well, in an important sense no — the equipment is the same but the technology is different, because to have avionic technology is not just to have certain kinds of equipment, but to employ it in certain ways.)
And it seems to me important that, considered in the context of use, every technology has certain affordances. I don’t see it as mysticism to propose, for example, that there are some things you can do in writing that you just can’t do very well in speech — and vice versa. Not just because people have or haven’t thought enough about the problems, but because the technology itself supports certain kinds of activities and not others. (If you don’t believe me, try to do some differential equations in speech.) The affordances that a technology has — what it makes easier and what it makes harder — may have effects that outstrip any individual person’s intentions in adopting the technology; that is to say, it is not purely a matter of the goodness or badness of the plans of any one individual, or any mass of individuals, what falls out of the uptake of technologies with different kinds of affordances. To talk about these effects is to do something more than just talking about individual plans, either alone or en masse, because, like all human activities, they may have important unplanned consequences. Which strikes me as being no more mysticism, in and of itself, than any other kind of spontaneous order theory.
The set of concepts denoted by a technology is one of inert structural information,
I don’t think that technology is solely a matter of information.
Insofar as technology is distinct from pure science, a technological enterprise has to involve some reference to material production and use.
My contention is that such nasty ends can never be inherent in an enterprise solely unto itself.
Depends on how you individuate enterprises, I guess. Does, e.g., the study of military applications of chlorine gas count as a separate technological enterprise from the study of producing and using chlorine as a disinfectant or a bleach? Well, probably in one sense yes and in another sense no. But if you can individuate the technology of military applications of chlorine from those other technologies (it certainly has some overlaps, but there are also plenty of obvious divergences of interests, divergences of lines of research, etc. involved), it seems to me that it pretty clearly has a nasty end baked in from the get-go. (Of course, you might also wonder about those who study military uses of chlorine gas in order to design more effective defenses against it. But why should I count the technology of defense against chlorine as part of the same enterprise as the technology of using chlorine to attack? The scientific base may be the same, but science and technology are two distinct sorts of enterprise.)
simply asserting that “technology” has an “internal logic” in terms of ends
Well, I didn’t say that technology has an internal logic. I said that various technologies may have internal logics. There are a few malign examples and lots of benign examples. For example, I would argue that it is part of the internal logic
(as I’m using the term) of the technology of writing that many new forms of social relationships develop along with it unless deliberately repressed by external force, with many ends that come along as constituent parts of those sorts of relationships. And that most of these developments are for the better — for example, norms of rationality, requirements of public justification for the use of force, appreciation of both history and novelty, etc. These tend to develop unless actively suppressed because with writing (and still more so with technologies that make writing cheaper and easier to copy — the printing press, the Internet, etc.), many rational activities become possible which heretofore were not possible, and many others become plausible which were heretofore implausible (symbolic logic and algebraic reasoning, for one example; detailed historiography, for another; and, hell, let’s drop some praxeology and mention double-entry bookkeeping while we’re here), and that when those activities become plausible, new forms of relationships between rational beings become plausible, or existing forms become newly plausible, that could not have been practiced at all, or could not have been widely practiced, before people were practiced in those ways of thinking about themselves and expressing themselves. But new forms of relationships involve new ends — ends whose emergence depends in part on human planning, but also to a great extent simply on what the existing and prevalent technologies make available or unavailable to the human beings who come into contact with them. That’s all I mean when I talk about the internal logic of a technology: developments that it affords independently of any innovator’s or adopter’s particular plans for it, which tend to come out unless suppressed by external force.
Other times we speak of technological concepts simply in terms of inert informational relations. But this latter use and meaning CAN operate sufficiently unto itself (while the former has to include the latter).
Is there anything you mean by technology in this latter sense (of information considered apart from its context of application) that would not simply be covered by a broad sense of the term science (or perhaps knowledge)? If so, what?
A nasty end wouldn’t be a defeater for considering the enterprise worthwhile (which I take to be obviously false).
My contention is that such nasty ends can never be inherent in an enterprise solely unto itself. The core information of an idea, in terms of its existence as a technological apparatus, must be considered separately from the social and intentional context in which it may be constructed/applied/embedded. The set of concepts denoted by a technology is one of inert structural information, and exists in the land of is, not ought.
Of course one can write this off as a language problem: When we speak of “technology” we usually bundle in all the assumptions and social context of application and intention. Other times we speak of technological concepts simply in terms of inert informational relations. But this latter use and meaning CAN operate sufficiently unto itself (while the former has to include the latter). So while we might indeed need another word to describe the broader context-embedded substance (I often use “infrastructure”) — simply asserting that “technology” has an “internal logic” in terms of ends is extremely disingenuous and maliciously conservative. It defines the language in such a way as to make evaluations from beyond/outside the present or immediately conceivable social context impossible.
As you may know, I’m a unity-of-virtue kind of guy. So I think that ingenuity is a positive and desirable trait or form of activity, but only because I am only willing to count something as an exercise of ingenuity (as opposed to what Aristotle’s translators usually call mere “cleverness”) if it’s also done honestly, fairly, kindly, responsibly, elegantly, etc. etc. etc. I’m happy to deny — very sharply — any position on which this is just a matter of realizing some end that can be called good independently of the fact of knowledge or the act of ingenuity, if that’s what you’re worried about — the whole point, after all, was to say something about utterly purposeless knowledge — but of course there’s a difference between “You don’t need some further independently praiseworthy end in order to justify the discovery and invention” (which I take to be importantly true) and “A nasty end wouldn’t be a defeater for considering the enterprise worthwhile” (which I take to be obviously false).
I do agree with you that the core value of ingenuity and rationality makes it worth praising technology as such, even if there are particular technologies (currently very prominent technologies) which have been turned towards, or which have an internal logic that spontaneously tends towards, foolish or destructive ends. (After all, that’s why I said “technological civilization is awesome,” not just “space telescopes.”) But I mention the heterogeneity of technology because I think that it’s important to keep in mind in order really to take the question concerning technology seriously. In the sense of appreciating what primitivists and tech-skeptics are on to, when they are on to something (which is more often than never — if one is going to defend technological civilization, as I think one should, I think one does have to keep in mind that Hiroshima is a pretty fucking serious objection). But also in the sense of recognizing how importantly wrong they are at the level of basic principles, and how the things that they are on to, when they are on to something, are actually points that have a much narrower application than they realize.
Glad to see you clicked through and enjoyed the animation. Apparently my readership has changed up in this period of semi-hiatus, because that post got me an inbox full of emails from folks who couldn’t understand what I was saying and then were horrified to find out (!) that I actually support science (my sarcastic knowledge of their views I think made it even worse for them). Anyway, a couple of points:
The heterogeneity of technology is the default argument everyone undertakes in response to primitivism, but I can’t help but feel it’s missing the point and ceding too much. Despised consequentialist though I be in these circles, I’m deeply troubled that people in this discussion have ignored the ACT of invention to focus solely on contextual critiques of the inventions themselves. I see ingenuity as a positive and desirable brainstate, perhaps even the greatest actualization of neurological freedom (it’s pretty damn close to a summary of the moral good of “freedom” my utilitarianism would maximize, see also Godwin’s “perpetual improvement of the original mind”). What is being invented seems ultimately largely beside the point. Yes, there’s obviously some ends and means interrelation in the present context, since certain ideas (critical mass, nerve spots) are likely to be used to minimize future brainstates of ingenuity, but that doesn’t write such avenues of investigation or even development off inherently. It just means that they’ll remain less useful than other avenues until we fix this goddamn society.
Zerzan always gets the shaft in philosophical comparisons, but I have a profound soft spot in my heart for his core response to postmodernism: “The text endlessly compounds upon text and we can never transcend or escape because there’s no existence or reality outside the text? Dude, I’m out to abolish motherfucking LANGUAGE. I think grunting is suspect.” <3 <3