IN HIS FAMOUS case study “Dora,” Freud observes his patient playing with a small drawstring purse as she chatters away, sifting through her catalog of memories, yearnings, and fears. Dora opens her purse, inserts a finger, shuts it, pulls out her finger, over and over. Freud watches intently for several minutes, then interrupts to ask if she has ever heard of a “symptomatic act.”
Apparently, Freud believes he has caught her in just such an act, one that reveals her propensity to masturbate. It’s not hard to uncover the hidden lives of people, Freud declares, if you know what to look for. Affirming the miraculous nature of his new talent, Freud invokes the Gospel: “He that has eyes to see and ears to hear may convince himself that no mortal can keep a secret. If his lips are silent, he chatters with his finger-tips; betrayal oozes out of him at every pore.”
Freud’s hubris is breathtaking. He did not, it turns out, discover some secret handbook for decoding people’s inner lives. Indeed, few therapists and doctors today take seriously his map of the psyche. But his ambition lives on, its newest aspirants claiming they can crack our inner code — and even direct our behavior.
Data analysts believe this code is betrayed by our digital trails. If read properly, these trails reveal our deepest yearnings, entrenched prejudices, and guiding instincts. A vast industry — dubbed “Big Data” — is devoted to developing and applying the increasingly arcane science of digital analysis. Its methods and conclusions are hungrily sought out by powerful interests in the public and private sectors alike, who hope to know us far better than we know ourselves and then, armed with this knowledge, to profit from subtly manipulating or even controlling us. Is this just another iteration of Freud’s overweening ambition? Or are the dangers far more pernicious?
Consider some of the finer details that analysts believe they can glean from our data. One retailer has determined that purchasing “felt pads to keep our furniture from scuffing the floor” is a key indicator of creditworthiness. We betray our unworthiness, however, if we fill out loan applications in all capital letters, or include words like “promise,” “will pay,” and “hospital.” Upon reflection, these indicators make some sense. Harder to fathom is how analysts determine creditworthiness from mining our smartphone behavior: the number of contacts we have, the number of messages we receive, and so on.
Big Data casts its net widely, slipping its tentacles into our lives via a host of products and media designed to snare any and every piece of telltale information — infiltrating parts of our lives that hardly seem so interesting or illuminating. Hence the proliferation of “smart” products, which spy on us during even the most mundane tasks. A “smart bed,” for example, will monitor “our breathing rate, heart rate, how often we toss and turn,” and then give us “a sleep report each morning.” What will analysts learn from this information? They may use it to determine a host of consumer needs and wants — poor sleep will make you prone to snacking, for example, and so a likely target for certain marketers. What about smart toothbrushes? What will analysts deduce upon learning that I don’t brush regularly or carefully? If, by contrast, I brush diligently after every meal, then is this a sign of creditworthiness? There are also smart vacuums, smart coffee mugs, smart forks, smart liquor bottles, smart juicers — even smart egg cartons — which stand ready to betray telltale information.
If we want to make a case for privacy, it doesn’t help that we generally comply with the efforts of spies and data analysts, happily forking over information to avail ourselves of the conveniences of the digital economy. But it turns out that Big Data does not even need our munificence. Analysts believe they can learn an enormous amount about us from very few clues indeed: just the metadata, like the style of our communication and our methods of engagement.
This suggests we have little power, as lone individuals in charge of our privacy protections, to rebuff the sophisticated tools and methods of our spies. The picture becomes more alarming still when we consider that our spies, knowing our vulnerabilities, may seek to mold our behavior and undermine what we often call our free will.
Consider the likely future of beacon technology in the retail sector: beacons will identify you through your cell phone as you enter a store; they will record your location in the aisles, and when you stand near a product that data analysis determines you like — on the basis of an arcane formula — you will be seduced with personalized advertisements. Facebook believes it knows from user information when people are falling in love or breaking up. When you’re flagged as vulnerable, and you’re standing near the ice cream aisle in the grocery store, then voilà: you can be profitably plied with precisely targeted promotions.
Big Data has an important precedent in the work of 20th-century psychologist B. F. Skinner, Shoshana Zuboff tells us. In his controversial theory of behaviorism, Skinner maintained that humans could be read like an open book. “Freedom is merely ignorance waiting to be conquered,” he declared. Which is to say, human behavior that seems free and unpredictable merely awaits the assignment of precise causes. Skinner blithely envisioned a utopian society as the end result of acquiring this knowledge.
The 20th century was punctuated by chilling social experiments that shared Skinner’s presumptions and ambition. Some regimes aimed at “total domination,” as Hannah Arendt put it, premised on the possibility of utterly exposing the inner lives of citizens, knowing them thoroughly, anticipating their aspirations and associations — or at least coercing them under the glare of constant surveillance. These social experiments were doomed to fail, Isaiah Berlin argued, because humans are made of “crooked timber.” We cannot be made to fit into neat packages or precise social plans. We rebel. The failure of these experiments, Berlin believed, is manifested in the obscene violence they wrought: because totalitarian regimes could not utterly know and program people, they resorted to killing a great many of them instead.
It is dangerous folly to think that humans can be wholly known and determined. Why should Big Data succeed where earlier efforts have failed? More importantly, what will the wages of its errors look like? Forcing us “into the neat uniforms demanded by dogmatically believed-in schemes is almost always the road to inhumanity,” Berlin wrote.
When you insist on seeing only a simple and neatly circumscribed list of motives and impulses, you essentially jettison any room for compassion. Philosophers have long argued that compassion is rooted in the ability to understand that human motivations are complex, perhaps infinitely extended, and far beyond easy imagination. The Stoics urged us to see potential offenders as caught up in a vast web of influences and contributing factors that transcend the neat bounds of the individual. This expansive view of humans is no panacea, but it surely helps guard against endless iterations of hubristic abuse.
The ambitions of Big Data experts do the opposite — they make another iteration more likely. Their ever more fine-grained expertise, so avidly sought out by corporations, puts capitalism in a dark light indeed. A “business ethic” becomes a contradiction in terms, if consumers are almost by definition objectified. What sorts of abuses will corporations condone if we are automatons, programmable and predictable? What sorts of omniscient overlords will they think they are — and what dubious experiments will they enact? Thanks to sophisticated data analysis, Facebook has figured out how to nudge our moods slightly up or down; thus emboldened, the social media giant sought to influence our participation in elections — a feat that was subsequently copied by Cambridge Analytica. When Big Data tinkers with elections in one of the biggest democracies on earth, what won’t it dare?
Though data analysts will not dispense with “freedom” per se, their agenda sows an attitude of impatience toward human foibles and unpredictability. Big Data inspires technocracy, the rule by experts who know best, know us best, and expect consistent results. But technocracy is the absence of politics, as Arendt argued. We are excused from being citizens, our input exiled from policy decisions. Why heed the will of democratic voters when their will can be anticipated, prompted, molded? Why wait on a free populace when our freedom is disdained or suspect? Reduced to a collection of data points, we are not even human — and, accordingly, in the eyes of policy-makers, our suffering is that much easier to tolerate. And when we stubbornly defy prediction, betraying the algorithms of starry-eyed analysts, what violence might ensue?