Pandora's box
I frequently read The Stone, a New York Times blog devoted to contemporary philosophy. Today’s [column](http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/?hp) showed how far the field is from dealing with reality, something philosophy is supposed to do well. The column was written by Huw Price, the Bertrand Russell Professor of Philosophy at the University of Cambridge. With Martin Rees and Jaan Tallinn, he is a co-founder of a project to establish the Centre for the Study of Existential Risk.
Existential risk, if I understand the article, is the risk that we will self-destruct at the hands of some technology that has decided it is superior to humans and will wipe us out. That sounds more like science fiction than philosophy. Here’s a key paragraph.
> By “existential risks” (E.R.) we mean, roughly, catastrophic risks to our species that are “our fault,” in the sense that they arise from human technologies. These are not the only catastrophic risks we humans face, of course: asteroid impacts and extreme volcanic events could wipe us out, for example. But in comparison with possible technological risks, these natural risks are comparatively well studied and, arguably, comparatively minor (the major source of uncertainty being on the technological side). So the greatest need, in our view, is to pay a lot more attention to these technological risks. That’s why we chose to make them the explicit focus of our center.
I say these guys have missed the point almost entirely. We are flirting with existential risk, but not at the hands of some exotic technology; rather, it comes from technology in general, from automobiles to drones to lipstick. Price and his colleagues are sensitive to criticism of their claim.
> Objections to this claim come from several directions. Some contest it based on the (claimed) poor record of A.I. so far; others on the basis of some claimed fundamental difference between human minds and computers; yet others, perhaps, on the grounds that the claim is simply unclear – it isn’t clear what intelligence is, for example.
> To arguments of the last kind, I’m inclined to give a pragmatist’s answer: Don’t think about what intelligence is, think about what it does. Putting it rather crudely, the distinctive thing about our peak in the present biological landscape is that we tend to be much better at controlling our environment than any other species. In these terms, the question is then whether machines might at some point do an even better job (perhaps a vastly better job). If so, then all the above concerns seem to be back on the table, even though we haven’t mentioned the word “intelligence,” let alone tried to say what it means. (You might try to resurrect the objection by focusing on the word “control,” but here I think you’d be on thin ice: it’s clear that machines already control things, in some sense – they drive cars, for example.)
The threat today has nothing at all to do with the intelligence of machines and the possibility that they will outdo our systems of controlling ourselves and the Earth. One root lies in the mismatch between our models and the realities of the world. The fact that we tend to be better than other species at controlling our environment is completely irrelevant. We control only a bit of the world we live in, and tend to do it badly. We have been mindlessly using technology, both intelligent and dumb tools, to solve every problem without recognizing the unintended consequences of what we have done. That’s the real risk we should be thinking and talking about. It’s not likely that automobiles, even equipped with intelligent controllers, will rise up against us, but their use is upsetting the world’s climate at an ever-increasing rate. The results of temperature rise may not exterminate us, but they will surely throw the world into a tizzy.
It would be more useful by far to get Huw Price and his hyper-intelligent colleagues at Cambridge to put their minds to grappling with the real issues that technology poses. It is dehumanizing. It promotes a consumer economy that needs to be fed at the rate of more than one planet’s worth of resources, and that rate is growing. But this is only the result of the privileged status of scientific thinking that drives all modern political economies. Yes, we are a source of risk to ourselves, serious risk, but to worry about hypothetical problems when real ones are right in front of us is the epitome of academic arrogance and the poverty of philosophy.
The real risks lie at the level of the beliefs that drive modernity. As long as our scientists and philosophers think they can know all there is to know about the world and use that knowledge toward humanity’s progress, we are in deep doodoo. The world is complex and will always keep critical knowledge about the future secret. Trying to open Pandora’s box was a bad thing to do in mythical times and still is. The evils hidden in her box are equivalent to the unintended consequences we unleash on the world by operating with a faulty set of primary assumptions.
I am not going to repeat all my arguments about what we should be paying attention to here; I did that in a [recent post](http://www.johnehrenfeld.com/2013/01/i-have-just-finished-proofread.html). It seems to me that Price is repeating the apocryphal search for one’s keys under the streetlamp, because that’s where the light (one’s familiar expertise) is, rather than searching where they were dropped. If he is interested in preventing, not merely describing, the tragedy he philosophizes about, he needs to turn to the real problem of existential risk, not his version of it.
