It’s Not What You Think: On Orit Halpern and Robert Mitchell’s “The Smartness Mandate”

By Evan Selinger | May 30, 2023


The Smartness Mandate by Orit Halpern and Robert Mitchell

AN ODD BOOK, The Smartness Mandate (2023) could not, one might conjecture, have been written by ChatGPT. Not yet at least. Its authors, Orit Halpern and Robert Mitchell, professors at Technische Universität Dresden and Duke University, respectively, appear to have crafted it to satisfy their singular curiosities. These led them to include evocative but confusing observations stemming from Halpern’s fieldwork in Chile’s Atacama Desert; several pages of esoteric images, including the Ackley Function for two-dimensional space and energy-circuit “language” for modeling ecosystems; a strange comparison between how leaders in the Black Lives Matter movement and smart forest management ecologists think about possible futures; and a great deal more. The Smartness Mandate is unpredictable, idiosyncratic, brain-busting, and vertigo-inducing. It is self-indulgent too. It is worth reading for some combination of these reasons, but it takes readerly stamina and a willingness to suspend judgment. More on this later.


Thinking Critically About Contemporary Smartness

The overall aim of The Smartness Mandate is clear enough—to understand how and why we fetishize data collection and computation. Why, for instance, have we come to insist on viewing people, places, and even planet Earth as sensors? The book thus addresses a contemporary vision of intelligence that is most often associated with “smart technologies” and “smart cities.”

Other books have managed to approach the central issues with “smartness” in more accessible ways—and it is worth starting with them before unpacking this book’s opacities. Dare I first invoke Re-Engineering Humanity (2018), a book I co-authored with Brett Frischmann in which we contend that the fear some of us share over just how smart artificial intelligence will become tends to crowd out an equally compelling concern: are we humans increasingly being programmed to behave like simple machines? After all, we are continually thrust into personal and professional situations where we can outsource cognitive tasks to AI-powered tools and let algorithms shape our judgment, diminish our agency, and even make decisions for us. Flipping the computer science and entrepreneurial script about the race to create machines that match or surpass our general intelligence, Frischmann and I characterize inquiry into the diminution of humanity as the “Reverse Turing Test.”

Our concerns are aptly illustrated by AI-generated writing, a trend in automation that is raising the blood pressure of teachers everywhere. You can ask ChatGPT to write an entire essay about nearly any topic, and it will produce in seconds a confident and perhaps competent-looking result. It’s no wonder that schools are trying to ban students from using it.

To keep up with emerging expectations for using prompt-response natural language processing tools like ChatGPT, Grammarly has gone beyond being a “smart” grammar and style checker. Now, it can generate entire email messages for you—everything from a meeting reminder to a thank-you note or condolence message. And then there’s so much more, including the predictive texting capabilities of iPhones, where software persistently anticipates what you will want to type. Relying extensively on these types of programs can nudge authors to become editors or something far more existentially disturbing: dummies to AI ventriloquists. If an AI system infers your communicative patterns and becomes particularly adept at mimicking your style, it can offer you words that sound like the ones you’d choose if only you were to put in the time and effort to select them. Once this computational mind-meld happens, the space between you and algorithms shrinks considerably. You risk being computationally seduced into thinking less and going on autopilot until you become a “personalized cliché.”

Then there are other books like Shannon Mattern’s A City Is Not a Computer: Other Urban Intelligences (2021). Halpern and Mitchell only briefly mention Mattern’s scholarship, but she provides a powerful perspective on types of intelligence that technocratic visions of smart cities unduly diminish. The technocratic vision wants to create urban environments wired up with cameras and sensors designed to capture and integrate information occurring throughout the city—traffic and transportation data, energy use data, public safety data, economic and demographic data, and healthcare data, to name a few examples. The goal is to use artificial intelligence to analyze this information to recognize patterns, predict future outcomes, optimize complex systems, detect anomalies, and much else in order to enhance governance by making it more efficient and less bureaucratically cumbersome. The more utopian accounts envision this combination of the Internet of Things, AI, and “big data” to be the key to making progress on many pressing problems—for example, public transit, housing policy, public safety, public health, and environmental protection initiatives. Mattern skillfully makes the case that when this type of intelligence is prioritized, even with the best of civic intentions, it can crowd out important alternatives. Specifically, data-driven urban solutionism can displace more diverse, embodied, and Indigenous forms of knowledge that have long made cities vibrant, and indeed smart, places. Such knowledge can be found in “libraries, community archives, grassroots media, oral history projects” and other sites that aren’t pumping data into smart city databases and dashboards.

And, of course, you can’t understand what intelligence means today without understanding its relation to power and capitalism. As Jathan Sadowski argues in Too Smart: How Digital Capitalism Is Extracting Our Data, Controlling Our Lives, and Taking Over the World (2020), the benefits and harms of using smart technologies are unevenly distributed. Those with privilege tend to think about smart technology primarily, if not exclusively, in terms of consumer goods that hold out the prospect of improving their lives. Companies market smart health and fitness trackers as tools that harness surveillance to enhance your health by monitoring everything from daily steps walked and miles run to blood pressure, blood oxygen, and heart rate. Companies hype virtual assistants that make it more convenient for you to get GPS directions or stream music. But beyond “luxury surveillance” gadgets lies what Sadowski calls a “parallel reality” where marginalized people are either excluded from the benefits of smart technologies or else are overpoliced and overexamined by them. For example, the well-off are not worried about smart technology disabling their cars for missing a payment, nor are they worried about being bossed around by an inhumane algorithm while working in an Amazon warehouse.


The Smartness Mandate

The authors of The Smartness Mandate want to understand how and why we have “come to see the planet and its denizens as data-collecting instruments.” In other words, why do we fetishize computation as the key to just about everything that matters? To address this important question, Halpern and Mitchell try to connect as many dots as possible, which is part of what makes their book difficult. They zealously gallop across disciplinary fields and unpack ideas in “multiple and intersecting discourses, including ecology, evolutionary biology, computer science, and economics,” “draw[ing] inspiration” from French philosophers Michel Foucault and Gilles Deleuze, while also doing a deep dive into “technologies, instruments, apparatuses, processes, and architectures.” As if that weren’t enough, they also acknowledge drawing inspiration from “earlier work in the history of science, science and technology studies (STS), media studies, and urban/design studies.”

What, then, does this zealous interdisciplinarity add to the discussions mentioned above? In one sense, the guiding spirit feels familiar: the authors have normative concerns. They are worried about how the so-called “smartness mandate” limits the authority of other approaches to knowledge, profoundly constrains the ways some people receive care, and further exacerbates long-standing inequalities. In short, they want to illuminate the present while also making it appear “less normal, queerer, and different.” At one point, they say—in unusually simple and direct language—that they hope their book can help readers “envision another path for smartness.”

Let’s consider an example. Halpern and Mitchell explain how a guiding vision of some smart cities, like the one behind Hudson Yards—a “city within a city” in New York City—is to leverage cutting-edge infrastructure to protect its residents from threats, including natural disasters, like floods and hurricanes, and terrorist attacks. Even if function meets aspiration and the smart technology works as intended, Halpern and Mitchell emphasize that there’s still a big problem: it’s a pay-to-stay environment.

Hudson Yards is explicitly designed to be restrictive—only to safeguard the privileged people who live in its fortress-like environment. As with other experiments in what Naomi Klein calls disaster capitalism, this is not an egalitarian approach to public safety governance. While the general public is welcome to shop in Hudson Yards, its top-of-the-line protective mechanisms are only meant to help those who can afford to pay for premium private real estate. Hudson Yards is the high-tech version of the gated community, which, like its predecessors, channels fear, exacerbates social and economic division, and reinforces the neoliberal ideology that social problems should be addressed with market-based solutions.

So far, so familiar. What makes The Smartness Mandate unique is that Halpern and Mitchell don’t end the story there. They insist that we live in unprecedented times. “This turn to computation as the infrastructure of salvation,” they write, “is unique […] to our present.” From their perspective, we can’t understand some of the most consequential ways data is collected, interpreted, and used if we only apply familiar neoliberal theories to generate explanations, predictions, and recommendations. Since exceptional times call for unusual theorizing, we should not, they repeatedly contend, believe this is just another moment in the evolution of disaster capitalism. (Presumably, Halpern and Mitchell feel the same way about Shoshana Zuboff’s account of surveillance capitalism. Although, curiously, for a book that references so many names and ideas, the authors only mention Zuboff once, and practically in passing.)

What, then, are the primary features of contemporary smartness? On the one hand, Halpern and Mitchell do offer a straightforward answer. Smartness, they write, has a “deep logic” that is constituted by a combination of “populations, experimental zones, optimization/derivation, and resilience.” Their aim is to clarify what these ideas entail and give a genealogy of their origin and development. For example, when talking about resilience, they don’t mean it in the psychological sense that a person is robust and flexible enough to bounce back quickly from challenging life experiences. Instead, they want to link the idea to a worldview in which crises are “quasi-constant,” and the main unit of analysis is systems, not individual human beings. They trace the outlook to ecologist C. S. Holling’s understanding of resilience as a “state of permanent management” that enables systems to “change” by absorbing disturbances and reorganizing. Holling believed such systems could “persist” better than those aiming for stability. Halpern and Mitchell claim that this idea has been extended to visions of smart cities. Allegedly, Hudson Yards can withstand shocks (natural and social disasters), maintain its fundamental structure and ways of functioning, and adapt by learning to improve how it responds to disruptions.

Unlike the books I mentioned in the opening of this review, The Smartness Mandate is primarily a historical reconstruction of concepts. Approaching the zeitgeist through the lens of intellectual history is potentially informative. However, this heady trajectory proves challenging for Halpern and Mitchell. They focus so intently on ideas that they fail to give adequate examples to illustrate their main concerns. The Hudson Yards analysis, which works as a cautionary tale for many problems with luxury technology, is anomalous. It’s almost as if they believe that simple and relatable material will be a distraction—or worse: it will give the reader a false sense of understanding. In the few instances when they do try to paint a concrete picture, they select unusual examples.

The book opens, for instance, with the imaging of the first black hole—an effort that drew contributions from Chile, a pretext for segueing into a description of conflicts in the Chilean desert over water and lithium extraction. These cases—of collaborative imaging, of background conflict—are supposed to help illuminate the book’s four central ideas. I’ll try my best to explain how.

To image a black hole, researchers and equipment in Chile had to be connected to a global network of collaborating synchronized radio observatories. Since none of the observatories had enough information (“intelligence”) to complete the job on its own, each contributor in the system counts as a “population.” Smartness involves harnessing all the contributors’ partial knowledge.

Furthermore, the smart technologies used to conduct the astronomical research need lithium. The rub here is that the lithium extraction process has created tensions between the Chilean government and a Chilean chemical company—one with Chinese citizens on its board who are allegedly motivated by Chinese interests. This complex governance structure (governmental, corporate, and global) creates interesting power dynamics and is an example of what Halpern and Mitchell call an “experimental zone.”

Finally, because lithium extraction is a water-intensive process, it’s further diminishing the local water supply. This is a major problem because water is scarce in the desert; its depletion negatively impacts the locals. Halpern and Mitchell contend that corporate engineers are emphasizing a bright side, making the proverbial lemonade from lemons. Specifically, the engineers see the problem as a catalyst for refining their expertise in order to create more efficient technology. In Halpern and Mitchell’s language, the engineers’ attitude—seeing the current problem as leading to future value—is an instance of “optimization/derivation.” And the engineers’ confidence in adaptation is a component of “resilience.”

It is hard to tell if these are idiosyncratic rabbit holes—on the order, say, of fieldwork memories that Halpern cherishes a bit too much. Don’t get me wrong—I found the descriptions fascinating. They’re generative enough to make my mind spin with ideas and associations! And yet, the analysis is so compact that I feel like I never quite get what the authors are trying to convey. My summary seems plausible, but I am not sure if they would say that I accurately represented their views.

Additionally, to capture what smartness is and could be, Halpern and Mitchell cover a lot of territory: for instance, diving into Ernst Mayr’s evolutionary biological account of speciation, Friedrich Hayek’s take on markets as information processors, Holling’s views of resilience, Nicholas Negroponte’s understanding of the “demo or die” business model, and the Black-Scholes model of “noise” in markets. Here is a representative passage:

We have thus sought, at various points, to open up smartness to traditional understandings of history, memory, and historical trauma. In some cases this was via an appeal to alternate understandings of natural evolution: for example, Simard’s understanding of the smartness of forests as in part a function of their capacity for extended memory. In other cases we emphasized the long lineages of trauma of those deemed “losers” by a market-oriented vision of smartness […] In addition, we excavated alternative visions of computation and living, such as the Japanese architect Arata Isozaki’s City in the Air project, which was like, yet differed significantly from, Negroponte’s vision of soft architecture. Had these past experiments been embraced, they would have led smartness in different directions. They are in this sense especially important to our project, for they underscore that the smartness mandate was not inevitable but instead resulted from contingent connections and alliances.

The book is like a complex indie film that may turn some readers off while inspiring others to revisit it repeatedly. The repeat readers will be those who are deeply curious about its contention that we are living through a profound historical shift: computation is becoming the common denominator linking nature, technology, and culture.

The idea I found most fascinating is Halpern and Mitchell’s assertion that smartness is not really about the allure of a silver bullet vision of technological solutionism. Frankly, that is often where my mind goes these days when I encounter AI enthusiasts enraptured by the promise of tools like ChatGPT to magically transform everything from healthcare to education. Supposedly, students will excel thanks to bot tutors, and lower-income people will have better health outcomes after getting quality medical advice from chatbots. In this utopian account, AI boosters imagine the future unfolding through simple linear pathways. AI tools produce knowledge quickly and cheaply—and then boom: long-standing systematic problems are fixed.

By contrast, Halpern and Mitchell insist that we are pivoting to a wicked problems worldview. Smartness advocates no longer expect problems to be solved once and for all. Instead, they embrace an ethos predicated on anticipating persistent global threats and disasters that can only be managed through ongoing technologically driven adaptation. The authors keep leaning into this sense of “resilience”—the idea that smartness means acknowledging that knowledge is provisional and finite; even when collectively pooled, it is never omniscient. Therefore, the smartness mandate perhaps comes down to the belief that we must constantly use smart technologies to manage our ever-changing circumstances. Otherwise, given the stakes of issues like climate change, we will “go extinct as a species.”

I just wish, even after reading the book three times, that I had a clearer understanding of how Halpern and Mitchell want us to embrace “learning processes that take place largely within computer systems.” I’m not sure if I should read it again. Or if it’s time to accept my own adaptation away from high theory.


LARB Contributor

Evan Selinger (@evanselinger) is a professor of philosophy at Rochester Institute of Technology.

