What Can Enlightened Coders Really Do?
Evan Selinger reads Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software” with the realities his students face in mind.
By Evan Selinger | April 10, 2025
Fatal Abstraction: Why the Managerial Class Loses Control of Software by Darryl Campbell. W. W. Norton & Company, 2025. 320 pages.
FOR OVER TWO DECADES, I’ve taught philosophy at a technologically oriented university. At the beginning of each term, I brace myself for the moment when an earnest computer science major asks, “Don’t most companies value making money more than doing good?” The profit-driven nature of life outside the classroom makes students wonder why we even bother discussing the ethics of technology in my class. Sure, it’s fun to imagine that enlightened coders can make the world better while being compensated well for their technical expertise. But if this goal is nothing more than pie in the sky, why set students up for disappointment?
Darryl Campbell feels the weight of this question. He spent 15 years in the tech industry, working at “almost every level in the org chart,” and made the familiar disaffected transition from initially being “blinded by utopian zeal” to ultimately feeling “guilty” and “resentful of the person that [he] had become.” Once upon a time, Campbell wanted to believe that tech companies could run on higher values, like “don’t be evil” or “think differently.” But having worked at giants (Amazon, Uber, Expedia, and Tinder) and small start-ups alike, Campbell’s experience left him disgusted by the prevailing corporate ethos of “managerialism”: the imperative to “prioritize profits over safety, or over people, or over human civilization itself.” He contends that managerialism, alongside a misguided sense that software is the solution to all our problems, fuels a “toxic culture of technology” that desperately needs to be stopped.
If dark times like ours call for courage, who should step up? According to Campbell, my students can and should become the change agents they aspire to be. His new book Fatal Abstraction: Why the Managerial Class Loses Control of Software argues that only “everyday tech workers” can make change happen. Unfortunately, his appeal to socially conscious tech workers reads like a Hollywood movie trailer in which a booming voice proclaims, “In a world where corporate greed has gone too far, only one group has what it takes to step up and save us all.”
The Tyranny of Managerialism
The tech disasters of the last 20 years have all “shared a common origin,” according to Campbell. That origin lies in what he calls “managerial software,” which “has warped the decision-making process of executives as well as the internal politics of the companies over which they preside. It has allowed considerations of safety and quality to be ignored in order to balance the almighty financial statements.” Most importantly, Campbell laments, managerial software “has given too much power to those who have the most faith in, but most meager grasp of, software—and it silences those who can clearly see the only safe way through the minefield of technological ignorance.”
The business world, continues Campbell, became “homogenized” by doing two things: (1) embracing “financialization” as a “consistent and universal framework for making strategic decisions,” and (2) centralizing “power” with a “command-and-control layer” that places C-suite executives atop the company’s hierarchy while using middle management to standardize operations all the way down the org chart. Since managers have marching orders to optimize operations to maximize profit, Campbell contends that they have one-track minds. “You can try to […] get them to see anything beyond the financial statement,” he writes. “But I suspect you will have better luck explaining quantum physics to a dog.”
The Managerial Approach to Software
This is well-trodden territory. Indeed, Campbell’s analysis of managerialism and discussions of why governance can’t tame it are both implicated when my students say, “Because #capitalism.” They don’t need Campbell’s hard-won experience to believe Big Tech runs on the same managerial mentality as “Big Oil, Big Pharma, and Big Banks.” Nor is it a surprise to them that boards of directors are biased to preserve rather than challenge managerialism, and companies influence politicians by spending a fortune on lobbying. But Campbell doesn’t just wax theoretical about managerialism. Much of Fatal Abstraction consists of case studies that Campbell presents as evidence that the “managerial class” lacks an incentive to care about how harmful software can be.

Let’s start with the chapters that exemplify quality business-tech reporting. Chapter one recounts the deadly 2018 and 2019 Boeing 737 MAX crashes and builds upon Campbell’s earlier Verge reporting about Boeing’s fall from grace. In themselves, the airplane accidents are tragic, occurring less than half a year apart and killing 346 people total. In bigger-picture terms, Campbell argues that the fundamental business “failures” that led to these fatalities are the same ones that corrupt the tech sector.
Much has been written about why the planes crashed, and to his credit, Campbell provides a detailed account. In broad strokes, Campbell’s view is that to compete with Airbus’s more fuel-efficient A320neo, Boeing prioritized designing a plane for quick market entry and thereby compromised on engineering rigor, which impacted the Maneuvering Characteristics Augmentation System (MCAS) software in particular. Relying on input from a single sensor, MCAS created a dangerous single point of failure.
Campbell attributes the key decisions that led Boeing to change its culture and lower its standards to CEO W. James McNerney Jr.’s leadership. To capture McNerney’s managerial mindset, Campbell briefs us on his background as a former GE employee and Jack Welch protégé. Because McNerney adopted Welch’s approach to business, Campbell insists that he saw airplane production as fundamentally the same as manufacturing light bulbs despite aircraft failure posing “catastrophic risks.” More specifically, Campbell insists that McNerney’s abstract, financials-first outlook (hence the title of Fatal Abstraction) “created overwhelming pressure to overlook quality in favor of budget and timelines.” McNerney’s elevation of “cost discipline” as the “foundational element” of business, combined with his mandate that “all product decisions would require a financial as well as a technical justification,” drove Boeing to outsource some of the software work on MCAS. This outsourcing, Campbell emphasizes, compromised Boeing’s ability to maintain rigorous quality control.
Chapter three covers Campbell’s time at Uber, unpacks its managerial corporate ethos, and reconstructs the key issues and events that led to Elaine Herzberg being killed by a self-driving Uber prototype car in 2018. Campbell recounts why the software had difficulty classifying Herzberg as a pedestrian. He also reviews the factors that prevented the human Uber operator, Rafaela Vasquez, from stopping the car in time. Ultimately, Campbell—predictably—blames Uber’s managerial drive to “push technology beyond its limits” in order to “show its investors that it had a path towards profitability.”
While these chapters have enough supporting detail to captivate a general audience, the one on PowerPoint is disappointing. It begins with Campbell criticizing Bill Gates’s “visual maximalist” vision of PowerPoint—his plan to make a ton of money for Microsoft by democratizing the software’s design so that anyone, not just specialists, could “display any kind of chart in any visual format” and easily add “gimmicky transitions between slides.” For most of the chapter, Campbell complains about people using PowerPoint to “lull the audience into a kind of intellectual stupor with slick computer-generated visuals.” He builds his case by showing how McKinsey consultants weaponize PowerPoint presentations for “egregious” ends, how Sam Bankman-Fried used it as “part of his longer [crypto] con,” how Elizabeth Holmes used it to hype her fraudulent start-up, and how Colin Powell used it “to make the case that Saddam Hussein was making weapons of mass destruction.”
Campbell’s wholesale condemnation of PowerPoint is, in short, absurdly unnuanced. “No matter how it is used,” Campbell argues, “PowerPoint cannot help but create barriers to understanding.” This sweeping claim, which suggests the tool is best suited for obfuscation and manipulation, overlooks its demonstrable educational value. Many teachers know the dangers of lapsing into PowerPointlessness and have learned to skillfully present information to help students grasp complicated information.
Campbell is more sympathetic to the original design for PowerPoint because it included “restraints” like “the prohibition of animations and designs.” Still, the fact remains that thoughtful people can put these bells and whistles to good use. For example, if a teacher wants to help students pay attention, they can strategically use visual elements—including the ones Campbell sees as conducive to peddling snake oil—to ease their cognitive load, much like authors use paragraph breaks and section headings.
By suggesting that “there is almost no limit to what you can make someone believe with a good-enough PowerPoint,” Campbell is ascribing magical powers to the tool; he is conveniently ignoring the fact that Elizabeth Holmes and Sam Bankman-Fried had carefully curated public personas designed to enhance their authority and persuasiveness. Indeed, he has to imbue the tool with fetishistic powers in order to convince us that they won over their audiences by showing one mind-controlling slide after another. Ironically, throughout his book, Campbell laments that too many people “deceive” themselves into believing software is “magic.”
Chapter five starts off in a more promising vein by addressing an important question: should the military use autonomous, lethal AI systems? Thirteen years ago, in the International Review of the Red Cross, Peter Asaro advanced the bold position that such technology should be banned. Asaro, a philosopher, makes a detailed moral argument that has direct policy implications; Campbell tries a different approach.
Campbell begins the chapter with several pages of fiction that describe “what it is like” to be a computer program designed to kill a target without understanding the meaning of its behavior or the moral consequences of following its programming. Clearly, he intends to help the reader think about the gap between the purely mechanical execution of instructions and a human being’s visceral understanding of how those instructions relate to morality and mortality. But instead of describing what it is like for people to live in constant fear of drones or for someone to be targeted and killed, he hopes to shock the reader with descriptions like this: “You have no choice but to believe your programming when it tells you that it is right to detonate your bomb (even though you don’t know what that is) with the intent to kill (even though you don’t know what that means, either).”
Some readers might benefit from this rhetorical approach, and, crucially, Campbell follows it up with a straightforward discussion of substantial issues, including thorny ones concerning moral agency and responsibility. He also makes clear connections to the earlier chapters—for example, how software in Boeing’s MCAS, self-driving cars, and autonomous lethal weapons can all have problems understanding and distinguishing vitally important things. But the analysis falls short when it comes to the matter that should most concern Campbell.
If we skip to the conclusion, where Campbell finally offers guidance, he contends that “programmers of military software should always force their products to obey the international law of armed conflict.” In the abstract, this is great advice! But practically speaking, what does this obedience require? Unfortunately, Campbell leaves this question unanswered. Not providing an answer is a problem because, in the fifth chapter, Campbell observes that countries have varied, not uniform, policies on lethal autonomous weapons. In this complex and conflicting landscape, he notes that only Germany has a “blanket ban” on the military using lethal autonomous systems, with other countries being “content to leave their positions ambiguous.” Since Campbell doesn’t offer precise guidance on what standards coders should follow, how can he expect them to know when to disobey orders and how to correct mistakes? After all, they aren’t lawyers and may face serious professional or legal consequences for making the wrong choice. Simply put, it isn’t enough for Campbell to mention the disconnect between how software and actual people view matters of life and death. He also should be sensitive to the disconnect between his detailed negative criticism and his vague-to-the-point-of-useless call to action.
Finally, in his discussion of generative AI in chapter seven, he credibly reviews some of its limitations, hype, bad applications, and potential future risks—but again, the claims are familiar to anyone who has been following the mainstream conversations, and his interest in the managerial approach to software doesn’t elevate the topic. What does become apparent, however, is that Campbell’s concerns in Fatal Abstraction go far beyond worries about physical safety (plane and car crashes and deadly missiles) and con jobs (PowerPoint). For example, he criticizes the app Replika, which peddles virtual companions, for being emotionally and financially exploitative, and he lambasts generative AI for making “it harder than ever before to be a creator in any field.” The scope is important, for reasons I’ll address next.
Only a Republic of Builders Can Save Us
In the final chapter, Campbell finally offers insights on how tech workers can save us. For example, Campbell claims, “the outsourced avionics shop that Boeing used could have made its own independent assessment of the software’s hazard rating instead of trusting Boeing’s internal one.” Likewise, he contends that the computer vision team at Uber could have “independently tested the decision-making processes for common failure scenarios.”
These are big swings, with few detailed recommendations for how to accomplish such goals. Campbell admits that the change he’s hoping for—coders leading the charge on testing and verification processes, and in preventing harms in general, including “tuning” TikTok’s algorithms to not “serve people an infinite amount of hateful content”—will be “difficult.” After all, he reminds us that managerialism “keeps everyone alone and exploitable” and has made some companies so rich and powerful that they can afford to take risks at our expense. Still, Campbell insists that he isn’t looking for the impossible and certainly isn’t demanding that software workers become heroically self-sacrificing. Unfortunately, Fatal Abstraction isn’t just light on how tech workers can ignite systemic change; it also sets the bar too high.
Campbell points to Timnit Gebru as someone who had the temerity to boldly criticize Google while co-leading its Ethical AI team. But the thing is, she was fired for doing so. He also praises the whistleblower Frances Haugen. But by “smuggling out a trove of internal documents” from Meta, she took a heroic risk that could have resulted in prosecution. These two examples suggest Campbell is indeed asking for heroism. Yes, Campbell concedes that, ideally, coders would not have to resort to such clandestine measures. But, of course, the very premise of Fatal Abstraction is that we don’t live in an ideal world.
The contradiction between Campbell’s call for nonheroic resistance and his reliance on heroic examples points to a deeper issue. Because Campbell has so few things to say about systemic change and collective action, one can’t help but wonder why he didn’t do more to investigate the lessons learned from rebellious and committed tech worker movements. What tactics and strategies have worked? How can their good work be amplified and improved?
Researchers have addressed these questions. For instance, William Boag, Harini Suresh, Bianca Lepe, and Catherine D’Ignazio have examined three “successful tech worker campaigns”—getting major tech companies to publicly commit to not participating in a Muslim registry, ending Google’s AI drone Maven project, and pressuring companies to suspend the sale of facial recognition technology to law enforcement—while analyzing 150 collective actions documented in 139 entries in the Collective Action in Tech (CAIT) archive. Their analysis shows that tech workers aren’t condemned to bend the knee, and, crucially, it covers effective organizational strategies, including proven measures for how tech workers can pressure management and build connections with external allies, including civil society groups and stakeholders like customers. This last point is critical because it suggests that tech workers don’t have to go it alone.
Campbell is right to be frustrated, and asking what coders can do is a pressing question. But to help those in the industry and students who hope to join them, he should have done more to answer his question. Instead, by emphasizing his own personal awakening, Campbell leaves the reader with a vague and aspirational rallying cry.
LARB Contributor
Evan Selinger is a professor of philosophy at Rochester Institute of Technology.