The Black Box Within: Quantified Selves, Self-Directed Surveillance, and the Dark Side of Datification

By Evan Selinger
February 17, 2015

The Formula: How Algorithms Solve All Our Problems — And Create More by Luke Dormehl

WHAT HAPPENS WHEN “life as we know it” becomes a series of occasions to collect, analyze, and use data to determine what’s true, opportune, or even right to do? According to Luke Dormehl, much more than we bargained for.


Philosophers of technology as well as researchers in related fields have had a great deal to say about the dark side of datification, with fierce debate raging in particular over the “threat of algocracy,” or “rule by algorithm.” It’s an important debate. Unfortunately, though, the typical scholarly contributions use rarefied language and aren’t always accessible to broader audiences.


To his credit, Dormehl, a senior writer at Fast Company and a journalist covering the “digital humanities,” attempts precisely to broaden the conversation. This means that less charitable readers may well see The Formula: How Algorithms Solve All Our Problems — And Create More as a watered-down, derivative version of Evgeny Morozov’s take on “solutionism.” But I think Dormehl’s book serves a crucial purpose as a user-friendly primer. The bite-sized bits of high theory (including snippets from Zygmunt Bauman, Gilles Deleuze, Michel Foucault, Bruno Latour, Michel Serres, and others) and law scholar commentary (including brilliant reflections from Danielle Citron and Harry Surden) have the virtue of being painlessly illuminating. And the slim package of four chapters and a conclusion is quite sufficient to help readers better appreciate the subtle societal trade-offs involved in technological innovation.


To Thine Own Algorithms Be True


I’ll be the first to admit that “formula” is a terrible guiding metaphor for inspiring critical conversation about how technology is used, designed, and viewed. The word designates abstractions: mathematical relations and symbolically expressed rules. Indeed, the term seems divorced from the gritty specifics of human labor and social norms.


Dormehl, however, has the opposite agenda from those espousing computational purity. To cut through the “cyberbole,” he coins an idiosyncratic definition: “[I] use The Formula much as the late American political scientist and communications theorist Harold Lasswell used the word ‘technique’: referring […] to ‘the ensemble of practices by which one uses available resources to achieve values.’” The Formula, then, is meant to be an existential, sociological, and, at times, historical investigation into individuals, groups, organizations, and institutions embracing a “particular form of techno-rationality”: the conviction that all problems can be formulated in computational terms and “solved with the right algorithm.”


In the first chapter, “The Quantified Selves,” Dormehl outlines how data-oriented enterprises determine who we are and what makes us tick. Readers have undoubtedly noticed that the so-called Quantified Self movement is escalating in popularity. Broadly speaking, the Quantified Self (QS) draws on “body-hacking” and “somatic surveillance,” practices that, as their names suggest, subject our personal activities and choices to data-driven scrutiny. Typical endeavors include tracking and analyzing exercise regimes, identifying sleep patterns, and pinpointing bad habits that subvert goals. Recently democratized consumer technologies (especially smartphones that run all kinds of QS apps) enable users themselves to obtain and store diagnostic data and perform the requisite calculations.


Of course, there’s nothing new about believing truth is mathematically expressed, and, to be sure, attempts to better understand ourselves will fail if they aren’t backed up by data. Here’s the rub: while individuals self-track because they hope to be empowered by what they learn, companies are studying our data trails to learn more about us and profit from that knowledge. Behaviorally targeted advertising is big business. Call centers aim to improve customer service by pairing callers with agents who specialize in responding to different personality types. Recruiters use new techniques to discover potential employees and assess whether they’ll be good fits, such as having them play games that analyze aptitude and skill. Corporations adopt neo-Taylorist policies of observation and standardization to extract efficient robotic labor from their employees.


By juxtaposing individual with corporate agendas, Dormehl tries to do more than call attention to the naivety of living in what Deleuze calls a “society of control.” He also aims to discern basic dilemmas worthy of more scrutiny. “The filter bubble problem,” for example, suggests that personalization can harm society even when individuals get exactly what they think they want: shortcuts for avoiding putatively irrelevant material can, in fact, promote tunnel vision and even bolster fundamentalism. And the easier it becomes to classify differences on a granular level, the easier it also becomes to engage in discriminatory behavior — especially when so many of the algorithms feeding us personalized information are veiled in black box secrecy.


The predicament I find most intriguing is the one Dormehl describes as the “innate danger in attempting to quantify the unquantifiable.” He most assuredly isn’t arguing for a spiritual reality that inherently defies science. Rather, he’s alluding to epistemological limits — how “taking complex ideas and reducing them to measurable elements” can lead people to mistake incomplete representations and partial proxies for the real thing. A related mistake comes from erroneously believing that the technologies and techniques supporting quantified analysis are untainted by human biases and prejudices. I will return to both of these points later when I discuss Dormehl’s own failure to keep them in mind.


Frictionless Relationships


In the second chapter, “The Match and the Spark,” Dormehl discusses how algorithms shape our personal relationships. While companies like eHarmony and OkCupid boast algorithms that can help lonely hearts find highly compatible partners, others cast a narrower net and cater to niche interests: BeautifulPeople.com focuses on “beautiful men and women”; FitnessSingles.com bills itself as a “fun, private and secure environment to meet fit, athletic singles”; and VeggieDate.com is what you’d expect. And what if your outlook is more in line with a “Eugenicist” orientation? No problem! GenePartner.com uses the tagline “love is no coincidence” and extols its ability to pair people “by analyzing their DNA.”


With so many possibilities to choose from, including technologically facilitated options for fleeting and satisfying hookups, two issues become salient. First, it would appear that “there is a formula for everyone.” Meet and meat markets have become all-inclusive by virtue, somewhat paradoxically, of becoming so specialized, thanks to the commodification of rare tastes and kinky proclivities. Second, the “antirationalist view of love” is going the way of the dodo. The “evidence” supports neither the ineffable mystery of desire or attachment, nor serendipity understood as the chance encounter that hasn’t been jury-rigged.


While these procedures for streamlining how we meet others might seem like improvements over older and clunkier trial-and-error approaches, Dormehl points to disconcerting repercussions. After quoting Bauman on virtual relationships taking “the waiting out of wanting, [the] sweat out of effort and [the] effort out of results,” he asks us to consider whether we’re starting to look at relationships as ephemeral bonds that can be undone almost as easily as pushing the delete key on our computers. After all, fixing stressed relationships requires work. Matchmaking companies nudge us into starting new ones rather than working on old ones. Heck, some apps purport to recognize when you’re in a room with good matches worth talking to!


Once we stop viewing other people, including romantic others, as unique, and instead see them as patterned appearances, then, Dormehl suggests, we will need to face the vexing question of why we should even bother preferring real people to simulated versions. Technology cannot yet present us with compelling doppelgangers, but soon enough the artificial intelligence behind today’s chatbots will be able to come pretty darn close. If robots become idealized automated lovers, would there be good reason to reject their offers to be whatever we want, whenever we want, especially if they don’t impose demands in return? Similarly, what if companies could get beyond hokey schemes for enabling subscribers to send prewritten messages to their loved ones after they die, and instead offered authentic-sounding, original dialogue — say, after getting a sense of a person’s personality by data mining their email, texts, and social networking commentary, much like a less buggy version of the scenario depicted in the Black Mirror episode “Be Right Back”? Would we start obsessively interacting with the equivalent of ghosts?


Although Dormehl recognizes the importance of these questions, he doesn’t try to answer them. That’s okay. A robust response requires discussing normative theories of care and character. The point of reading The Formula is to become aware of the questions, not to find answers.


Algorithmic Futures


The main virtue of the remaining chapters lies in reorienting us to the future. Chapter three, “Do Algorithms Dream of Electric Laws?,” begins by discussing predictive policing. It transitions from there to explaining how automation is disrupting the practice of law by dramatically reducing the manpower and cost required to perform select legal services (e.g., “e-discovery”). Dormehl picks up steam when he gets to the topic of “Ambient Law” — “the idea that […] laws can be embedded within and enforced by our devices and environment.” For example, cars could be programmed to test the amount of alcohol in a driver’s bloodstream, only letting the ignition start if the driver is under the legal limit. Or a “smart office” could be configured to ensure its “own internal temperature” adhered to the levels “stipulated by health and safety regulations.” In these and related cases, surveillance and enforcement are outsourced to machines designed to ensure compliance.
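To make the idea concrete, here is a minimal sketch of what such an embedded rule might look like. It is purely illustrative: the 0.08 threshold, the function name, and the sensor reading are my assumptions, not details from the book.

```python
# Illustrative sketch only: an "Ambient Law" rule embedded in a car's
# ignition logic. The 0.08 g/dL threshold is an assumed, jurisdiction-
# dependent value, not a detail taken from Dormehl.

LEGAL_BAC_LIMIT = 0.08  # blood alcohol concentration, grams per deciliter


def ignition_permitted(bac_reading: float) -> bool:
    """The device itself enforces the statute: under the limit, or no start."""
    return bac_reading < LEGAL_BAC_LIMIT


# The car, not an officer, applies the rule identically, every time.
print(ignition_permitted(0.03))  # True: engine starts
print(ignition_permitted(0.09))  # False: ignition refused
```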


It may indeed seem like a good idea to minimize how often citizens break the law, but Dormehl reminds us that removing choice diminishes autonomy. The crucial questions, then, are: How much does freedom matter? Which circumstances warrant limiting or eliminating it? And when should people be permitted to make decisions that can result in harmful outcomes? Since machines are imperfect, it’s also important to figure out how to minimize the likelihood of their making life-impacting errors while also creating effective processes for rapidly fixing mistakes after they occur.


But the issue I find most fascinating concerns the difference between “rules” and “standards.” Whereas rules involve “hard-line binary decisions with very little in the way of flexibility,” standards allow for “human discretion.” Standards can be problematic — for instance, when they lead to prejudiced behavior. But they can also be beneficial — for instance, when they lead to humane treatment. A police officer could pull over a car going “marginally” faster than the legal speed limit and then, after learning the person is a tourist unfamiliar with the rules, let the driver go with a warning. In short, when should forms of discretion be operationalized in Ambient Law programs, and when should Ambient Law initiatives be avoided because the appropriate discretion can’t be coded?
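A toy contrast may help make the distinction vivid. In the sketch below, the speed limit, the tolerance, and the “excusable” flag are all invented for illustration; Dormehl supplies no code. The rule is a hard binary test, while the standard tries to leave room for discretion.

```python
# A toy contrast between a "rule" and a "standard." The limit, the 5%
# tolerance, and the excusable flag are invented for this illustration.

SPEED_LIMIT = 65  # mph


def rule_violated(speed: float) -> bool:
    """A rule: a hard binary test with no flexibility at all."""
    return speed > SPEED_LIMIT


def standard_violated(speed: float, excusable: bool) -> bool:
    """A standard: tries to leave room for something like discretion."""
    if speed <= SPEED_LIMIT:
        return False
    marginally_over = speed <= SPEED_LIMIT * 1.05  # within ~5% of the limit
    return not (marginally_over and excusable)     # warn rather than ticket


print(rule_violated(67))                      # True: automatic violation
print(standard_violated(67, excusable=True))  # False: the tourist gets a warning
```

The clumsiness of the excusable flag is itself the point: compressing an officer’s judgment into a boolean is precisely the sort of discretion that may not survive being coded.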


In chapter four, “The Machine That Made Art,” Dormehl considers topics that link the production and reception of creative works to algorithms; for instance, he asks whether machines can predict which scripts will yield high-grossing movies; and, to this end, he examines the efficacy of data mining in determining what content will have mass appeal — an issue that’s been hotly discussed since Netflix’s successful remake of House of Cards.


The most intriguing issue, however, concerns personalization. Dormehl asks us to imagine a future where individuals can “dictate the mood they want to achieve.” Aesthetic objects are then created to produce the desired effects: not just customized playlists to help runners sprint energetically along, but “electronic novels” that “monitor the electrical activity of neurons in the brain while they are being read, leading to algorithms” that rewrite the sections ahead “to match the reactions solicited.” If the processes were effective, they’d force us to confront the question of whether art is valued because it is “a creative substitute for mind-altering drugs,” or whether such a utilitarian outlook demeans much of art’s potential to be transformative, let alone transgressive.


The Existential Conundrum of Quantifying Moods


While there is much to praise in Dormehl’s book, the examples are less than stellar. They are marred by three types of omissions: critical analysis gets left out, crucial facts go unstated, and important context gets ignored.


Let’s start with an instance where critical reflection would have been useful.


Dormehl writes:


Consider […] the story of a young female member of the Quantified Self movement, referred to only as “Angela.” Angela was working in what she considered to be her dream job, when she downloaded an app that “pinged” her multiple times each day, asking her to rate her mood each time. As patterns started to emerge in the data, Angela realized that her “mood score” showed she wasn’t very happy at work, after all. When she discovered this, she handed in her notice and quit.


That’s all Dormehl has to say about “Angela”: she used technology to track how she was feeling. The data revealed she wasn’t thriving at a particular workplace; and the newfound discovery proved to be such an eye-opener that she felt justified in making a decision that had the potential to be life-changing. Dormehl doesn’t provide more detail because, ostensibly, these remarks are sufficient to make his point.


While there is explanatory value to discussing “Angela” in this context, the example is too rich to simply be treated as one amongst many other instances where someone is trying to gain self-knowledge by turning to data-rich insights. After all, throughout The Formula, Dormehl signals that he wants us to think carefully about what it means to be human in an age where algorithms increasingly tell us who we are, what we want, and how we’ll come to behave in the future. Indeed, Dormehl goes so far as to declare that our age is marked by a “crisis of self,” where the Enlightenment conception of “autonomous individuals” is challenged by an algorithmic alternative. Individuals have become construed as “one categorizable node in an aggregate mass.”


“Angela’s” case ought to provide an opportunity to explore core questions concerning the existential implications of self-directed surveillance. Can something as subjective and experientially rich as “moods” be translated into quantified terms? Or are attempts to accomplish this bound to miss or misrepresent important details? And, what is the ethical import of people outsourcing their awareness and decision-making to an app? What does it mean that they trust this app more than their own senses to tell them how they really feel?


A good way to start answering these questions is to analyze a host of specifics that Dormehl fails to pursue. Indeed, once he decided to discuss “Angela” in The Formula, he should have been prepared to tell the reader what kind of information she provided when the app pinged her. Did the interface give “Angela” a drop-down list of moods with categories to select from? If so, that type of design might predispose her to mistake subtle feelings for cruder forms; maybe there just weren’t nuanced options to choose from. It’s a safe bet that she wasn’t thinking about the problem of reductionism when using the technology.


Dormehl also should want to know if the app presented “Angela” with a scale to rate the intensity of her moods. If it did, then we might want to ask how she determined the best way to use it. Turning first-person experience into objective-looking mathematical terms requires translation across domains. Poor judgment and limited skill can, obviously, compromise the results. It’s easy to imagine, for example, “Angela” rating a moment of frustration a 10 out of 10 simply because it felt intense, and it’s also easy to imagine that the app recorded her response without offering feedback — by way of questioning, as a human inquirer might, whether the intensity really merited the highest possible rating.
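To see how such unquestioning capture might work, consider a hypothetical mock-up of the app’s logging. The category list and the intensity scale are my inventions, since Dormehl never describes the actual interface:

```python
# Hypothetical mock-up of the kind of logging such an app might do.
# The category list and the 1-10 scale are assumptions; Dormehl never
# describes the interface "Angela" actually used.

from datetime import datetime

MOOD_CHOICES = ["happy", "sad", "stressed", "bored"]  # coarse by design

log: list[dict] = []


def record_ping(mood: str, intensity: int) -> None:
    """Store whatever the user reports; ask no follow-up questions."""
    if mood not in MOOD_CHOICES:
        raise ValueError("subtler feelings must be shoehorned into a category")
    log.append({"at": datetime.now(), "mood": mood, "intensity": intensity})


# A passing moment of frustration goes down as a 10 out of 10, unchallenged.
record_ping("stressed", 10)
```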


Another important factor Dormehl should address is how — if at all — “Angela” tried to rule out confounding variables. Without some sense of how to separate signal from noise, she can’t determine whether being at work per se led to bad moods, or if the negative affect was generated in select situations, like dealing with a particularly annoying coworker or overly demanding boss. This information is crucial for making an informed decision about whether “Angela” should quit her job or just lodge a complaint with Human Resources.
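One simple way to chase down such confounds, sketched below with invented tags and scores, is to stamp each ping with its context and compare averages:

```python
# Separating signal from noise in mood data: tag each ping with its
# context and compare averages. Tags and scores are invented examples.

from collections import defaultdict
from statistics import mean

pings = [
    {"context": "meeting with boss", "score": 2},
    {"context": "solo project work", "score": 7},
    {"context": "meeting with boss", "score": 3},
    {"context": "solo project work", "score": 8},
]

by_context: dict[str, list[int]] = defaultdict(list)
for ping in pings:
    by_context[ping["context"]].append(ping["score"])

for context, scores in by_context.items():
    print(context, round(mean(scores), 1))
# If only "meeting with boss" scores low, the job itself may not be the problem.
```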


Finally, Dormehl should plumb the reasons why “Angela” wasn’t confident enough in her own abilities to determine the source of her misery. When our jobs make us miserable, most of us are well aware of it. We don’t walk around plagued by the mystery of why we’re feeling down. Perhaps, then, the most relevant issue wasn’t that self-tracking enabled “Angela” to learn more about herself. Maybe she already knew how she felt about work but was waiting to quit until she could frame the choice as an evidence-based decision. If so, the question Dormehl should be asking is whether it’s a new type of “bad faith” to turn to data to validate difficult decisions we’re already inclined to make.


Persuasive Privacy Advocacy


In order to rhetorically dramatize some of the examples, Dormehl sometimes leaves out important information. Consider his depiction of Facedeals.


A Nashville-based start-up called Facedeals promises shops the opportunity to equip themselves with facial recognition-enabled cameras. Once installed, these cameras would allow retailers to scan customers and link them to their Facebook profiles, then target them with personalized offers and services based upon the “likes” they have expressed online.


The operative word here is “allow.” If we interpret the word in an active sense, we get the impression that as soon as retailers install the facial recognition cameras they immediately can use biometric data to link every customer who gets scanned to his or her corresponding Facebook profile. But this isn’t how the program actually works. It isn’t a matter of installing the technology, turning it on, and then, boom, a brave new world of commerce begins. Privacy protections, which Dormehl doesn’t mention in this context, do determine, at least to some degree, what actually happens. In order to receive the personalized deals, users need to give their permission by opting in to the program and authorizing the Facedeals app. Consequently, customers remain in control of the situation; neither technology nor businesses are calling the shots.


Dormehl’s discussion of a related surveillance program is also plagued by the problem of omitting detail to create a heightened sense of unease.


In late 2013, UK supermarket giant Tesco announced similar plans to install video screens at its checkouts around the country, using inbuilt cameras equipped with custom algorithms to work out the age and gender of individual customers. Like loyalty cards on steroids, these would then allow customers to be shown tailored advertisements, which can be altered over time, depending on both the date and time of day, along with any extra insights gained from monitoring purchases.


The prospect of surveillance leading to “extra insights” sounds especially ominous if we don’t know about the constraints that limit what data can be collected and analyzed. Since Dormehl doesn’t identify any of them, I’ll highlight the most crucial limitation. When the program was announced, Tesco clarified that it would not record personal information, including customers’ images. Had Dormehl mentioned this commitment, he would have given readers less reason to crinkle their faces in disgust and horror.


By mentioning such conspicuously absent detail, I’m not suggesting that there aren’t good reasons to be concerned about corporate surveillance creep. To the contrary, I believe it’s essential to have robust privacy discussions about the matter before it’s too late; technology develops rapidly, and consumer regulations obviously have a hard time keeping up. But in order for privacy advocates to stand a chance of being taken seriously, they need to express justifiable propositions. They can’t make credible cases for protecting privacy by omitting highly pertinent facts.


Why Universities Embrace Surveillance


At times Dormehl fails to capture the full significance of what makes an example troubling. After introducing us to CourseSmart’s educational technology, which “uses algorithms to track whether students are skipping pages in their textbooks, not highlighting significant passages, hardly bothering to take notes, or even failing to study at all,” Dormehl points to only one worrisome implication: the tool — and others like it — allows people to be judged by ideologically questionable “engagement” metrics. This focus allows Dormehl to quickly segue from discussing colleges using these so-called smart textbooks to the rise of neo-Taylorist corporate practices, such as “Tesco warehouses in the UK” where “workers are made to wear arm-mounted electronic terminals so that managers can grade them on how hard they are working.”
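To make the worry tangible, consider that an “engagement” metric is ultimately a weighted formula someone chose. The sketch below is my own invention, not CourseSmart’s actual formula, but it shows how contestable judgments get baked into a single number:

```python
# A toy "engagement index." The inputs and weights are invented; the
# book does not disclose CourseSmart's formula.

def engagement_index(pages_read: int, highlights: int, notes: int,
                     total_pages: int) -> float:
    """Weighted proxy score; the weighting itself is an ideological choice."""
    coverage = pages_read / total_pages
    highlight_part = min(highlights / 20, 1.0)  # why cap at 20? someone decided
    note_part = min(notes / 10, 1.0)            # why 10? same answer
    return round(100 * (0.6 * coverage
                        + 0.25 * highlight_part
                        + 0.15 * note_part), 1)


# A student who reads every page but never annotates tops out at 60.0.
print(engagement_index(pages_read=300, highlights=0, notes=0, total_pages=300))
```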


Casting a net that unifies student and worker is problematic. Dormehl moves so quickly beyond the domain of education that he doesn’t ask an important question: beyond pedagogy, why are universities aggressively pushing for big-data-driven policies? In other words, why are enhanced data collection and analysis regarded as central to the very survival of brick-and-mortar colleges? Such questions provide the proper context for understanding why administrators may be keen on the large-scale adoption of the type of technology CourseSmart offers, even if professors prefer more old-fashioned approaches to teaching.


Simply put, the engagement issue is a small part of a larger set of concerns — for instance, administrators wanting to increase the odds of students successfully graduating. This has come to mean using predictive analytics and extensive surveillance to help students select the right courses, do well in their classes, and avoid behaviors that compromise mental health. While these goals sound great in the abstract, in practice a host of troubling issues ensue, ranging from profiling to sensitive data being taken out of its intended context of use. In other words, the algorithms powering these operations and the logic sanctioning their use require greater scrutiny. It might even be appropriate to regulate them in light of ideals associated with “fair automation practices.”


While I’ve been critical of how Dormehl presents some of his examples, the issues I raise are ultimately minor in comparison to what he does well. Dormehl packs a lot of content into a succinct text, and while he doesn’t offer his own theories or solutions, The Formula successfully demonstrates why academics shouldn’t be the only people interested in philosophical questions concerning technology.


¤


Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology, where he is also affiliated with the Center for Media, Arts, Games, Interaction, and Creativity (MAGIC).
