dominoes
A few weeks ago, I [responded](http://www.johnehrenfeld.com/2013/03/solutionism.html) to a NYTimes opinion piece by Evgeny Morozov on the subject of “solutionism.” He has published a few more on the same technological theme, the latest being an article on March 31, 2013. His current [piece](http://www.nytimes.com/2013/03/31/opinion/sunday/morozov-machines-of-laughter-and-forgetting.html?partner=rss&emc=rss) criticizes technology for its tendency to support mindless actions.
> . . . “Civilization,” wrote the philosopher and mathematician Alfred North Whitehead in 1911, “advances by extending the number of important operations which we can perform without thinking about them.” Whitehead was writing about mathematics, but technology, with its reliance on formulas and algorithms, easily fits his dictum as well. . . On this account, technology can save us a lot of cognitive effort, for “thinking” needs to happen only once, at the design stage. We’ll surround ourselves with gadgets and artifacts that will do exactly what they are meant to do — and they’ll do it in a frictionless, invisible way. “The ideal system so buries the technology that the user is not even aware of its presence,” announced the design guru Donald Norman in his landmark 1998 book, “The Invisible Computer.” But is that what we really want?
If that were all that our use of technology did, I would answer with a very strong yes. After all, a satisfying life is one where we are successful in achieving our intentions. Or is it? We pick up and use tools (technology) that we are familiar with and “know” will do whatever we want them to. These tools are ready-to-hand; they are meaningful only in use. In our myopic, narcissistic way of assessing our actions, we fail to observe and think about the effects our actions have on the world outside of their immediate context. But except on rare occasions, tool use creates unintended consequences beyond that immediate context. Such outcomes are unavoidable because the design of the technology is inevitably limited to the world of the designer, not that of the users. If we and the world were simple machines, the tools we use might operate without such unintended consequences, but we are not.
In many cases these unintended consequences are insignificant or can be easily remedied. (I never call them side effects, except when I slip up, because they are as much a result of applying the technology as the primary purposes are.) When they are significant, for example, when they contribute to climate change or when, as in Morozov’s example, social media leads us to make public personal information we would rather not have released, efforts to avoid them are impeded by our unconscious use of the technologies, just as the designers hoped we would use them. The old story about being unable to program a VCR made bad design the butt of many of our jokes.
Morozov and I (and others) ask how users can become conscious of all the outcomes of their use of technology and make corrections when they get more than they wished for. His example is the loss of privacy due to the way browsers are intentionally designed. But he does understand that the multiplication of small undesired outcomes can accumulate into big problems like climate change. That’s why I have argued for a long time that unsustainability is an unintended consequence of modernity: the accumulation of a zillion isolated applications of less-than-perfect technology. More than technology is involved; the fundamental beliefs we have adopted since Descartes and Bacon underlie our technology. Our persistent use of technology in spite of increasing awareness of these unintended consequences is a form of addiction: habitual use that produces outcomes that negatively affect the user.
Morozov provides a [link](http://www.create-conference.org/storage/create11papersposters/Things%20with%20attitude.pdf) to an article about “transformational products.”
> Recently, designers in Germany built devices — “transformational products,” they call them — that engage users in “conversations without words.” My favorite is a caterpillar-shaped extension cord. If any of the devices plugged into it are left in standby mode, the “caterpillar” starts twisting as if it were in pain. . . Does it do what normal extension cords do? Yes. But it also awakens users to the fact that the cord is simply the endpoint of a complex socio-technical system with its own politics and ethics. Before, designers have tried to conceal that system. In the future, designers will be obliged to make it visible.
These are not as new as he might be suggesting. I wrote about such artifacts in *Sustainability by Design*. I first learned about them over ten years ago while I was teaching in the Netherlands. A Dutch researcher, Jaap Jelsma, wrote about “behavior-steering design,” using as an example the two-button toilet, which was quite novel back then. The unfamiliarity of the design stops users momentarily and “talks” to them. Speed bumps are perhaps a more familiar instance. Morozov asks whether such technology, which makes us think about what we are doing, is a good thing. He quotes those, including Whitehead, who claim that interruptions that make us think are impediments to our progress toward some “perfect” state.
> While devices-as-problem-solvers seek to avoid friction, devices-as-troublemakers seek to create an “aesthetic of friction” that engages users in new ways. Will such extra seconds of thought — nay, contemplation — slow down civilization? They well might. But who said that stopping to catch a breath on our way to the abyss is not a sensible strategy?
I am more certain that we need to recover our consciousness of the mess we are making in the name of technological progress. Stronger drugs are not the solution to narcotic addiction; merely catching a breath is not enough. We have to couple the interruption with a conscious examination of the change in the world, not just in the scene in front of us, maybe a computer screen. It’s the thinking that is essential, not merely the breakdown.
Those who design our technologies are not conventionally regarded as moral actors, but they may have to take a more active role in the future. If they are responsible in any degree for both the intended and unintended outcomes of the use of the artifacts they create, they should be accountable to some degree. This side of technology raises some new, imponderable questions, but it needs to be seriously considered before we unconsciously push ourselves into the abyss Morozov alludes to. But then we, the users, should also be involved, as it is our unconscious actions that are part of the chain pushing the world farther and farther from sustainability.
