Central to flourishing is the recovery of being–what it means to be human. In many ways we are the same as other animals. We need food and shelter to survive. We have offspring to keep the species going. But the comparison soon stops, because we have language, which allows us to coordinate our actions with others of our species. Language enables us to act intentionally as the result of cognitive processes more sophisticated than those of almost any other species. Whales and dolphins may share some of this capability with us, but it is very rare among animals. We can invent words to describe what we encounter through our senses. Over time, we have developed a huge vocabulary that grew from the limited perceptions of early hominids to encompass what we have observed through the technological devices that extend our limited vision.
The words we have invented start with observations of distinctions–phenomena that stand out against a familiar context and come to possess meanings through their application in practice. Our language represents the phylogeny of our species, its history of experience. New words come into play while others disappear. Parallel to the evolution of our species, individual human beings acquire their own language, out of that shared by all, from their particular lived history (ontogeny). Since every being has a distinct life history, we are unique creatures; no two of us are alike, either at the genetic level or at the practical level, the domain in which we act out our lives. We share the meanings of the language we all acquire through our development in a shared culture. It is that set of shared meanings that allows us to coordinate our actions with others. Part of being human is to hold these two opposing distinctions. One is the recognition that we share the world with others, that is, we are social animals, acting out of a common pool of words and meanings. The other is our sense that we are unique, not like any other individual. Even identical twins show differences.
I have been very concerned about the tendency of contemporary practices to blur that distinctiveness. Our well-being has come to be measured by numbers, created by economists. And now I see a much more ominous invention coming: big data. I read a chilling (for me) article in the latest issue of The Atlantic telling how big data are going to be used to decide who gets hired and who gets fired. First used to build baseball teams, big data analysis is spreading to the larger corporate world. Companies that create and employ algorithms to pick out “winners” are sprouting up everywhere. This innovation merely extends the methods that have been used to select employees for decades, but the extension is huge. The article says,
> The potential power of this data-rich approach is obvious. What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management. In theory, this approach enables companies to fast-track workers for promotion based on their statistical profiles; to assess managers more scientifically; even to match workers and supervisors who are likely to perform well together, based on the mix of their competencies and personalities. Transcom plans to do all these things, as its data set grows ever richer. This is the real promise—or perhaps the hubris—of the new people analytics. Making better hires turns out to be not an end but just a beginning. Once all the data are in place, new vistas open up.
I think it is hubris, reducing people to some score. I think it is always hubris, whether it be the new “people analytics,” as the article identifies this new technology, or the older, cruder testing methods. Choices do have to be made, and these methods presumably bypass the biases we all have and use as we filter a slate of candidates through our thought processes. But they also reduce those making the choices to mere tools for a system based on somebody’s guesses at what is important in the life of the organization. More and more, what is important is also just a set of numbers: profits, productivity, market share, and so on. Workplaces are not just mechanical systems; they employ people, real people, to make them go. Perhaps fewer people all the time as “robots” and automation take over, but there will always be people.
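To make the reduction I am objecting to concrete, here is a deliberately crude sketch of the kind of scoring such systems rest on. The trait names and weights are my own invented placeholders, not anything taken from the article or any real product; they stand in for "somebody's guesses at what is important":

```python
# Hypothetical illustration: each candidate is collapsed into a vector of
# measured traits, and a single weighted sum decides who rises to the top.
# The traits and weights below are invented for this sketch.

def score_candidate(traits, weights):
    """Collapse a candidate's measured traits into one number."""
    return sum(weights[k] * traits.get(k, 0.0) for k in weights)

def rank_candidates(candidates, weights):
    """Return candidate names ordered best score first."""
    return sorted(candidates,
                  key=lambda name: score_candidate(candidates[name], weights),
                  reverse=True)

# Somebody's guess at what matters in the life of the organization:
weights = {"test_score": 0.5, "tenure_years": 0.3, "referrals": 0.2}

candidates = {
    "A": {"test_score": 0.9, "tenure_years": 0.2, "referrals": 0.1},
    "B": {"test_score": 0.4, "tenure_years": 0.9, "referrals": 0.8},
}

print(rank_candidates(candidates, weights))
```

Everything the weighted sum leaves out, which is to say nearly everything that makes a person unique, simply vanishes from the decision.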
People are essential because the world is complex, and new problems will always crop up that haven’t yet been reduced to some algorithm. Lean manufacturing, the latest standard in productivity, rests on the ability of everyone to observe what has been happening. Sometimes it is the lowest person on the totem pole who comes up with a solution. I wonder if these new algorithms would have picked out the “right” assembly-line workers or shop cleaners.
But what I most criticize about this new way of making business ever more efficient and, in theory, more profitable is the further reduction of humans to numbers. Our nation was predicated on the equality of humans, even though it did not walk the talk for quite some time (maybe not yet). We are clearly not a nation of equal human beings. It used to be that we could say, with some honesty, that equality meant equal opportunity to advance up the social ladder. No longer. I can imagine that this new use of big data will make things worse. The article argues otherwise:
> Ultimately, all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic. But most of the people I interviewed for this story—who, I should note, tended to be psychologists and economists rather than philosophers—share that feeling.
It is not surprising that psychologists and economists look on this as positive. They have been in the vanguard of those scientists reducing people to machines. I would like to hear what the humanists among us think. Their input would be most helpful, as this development is neither completely black nor white, and, most importantly, not reducible to some fancy algorithm.