Desperate Efficiency

Bruce Holsinger discusses AI ethics, collective accountability, and his newest novel.

Culpability by Bruce Holsinger. Spiegel & Grau, 2025. 380 pages.


WHO BEARS THE BURDEN of responsibility in the AI era? This question is central to Culpability (2025), Bruce Holsinger’s newest novel, which follows a family in crisis after an accident in their self-driving vehicle kills an elderly couple. When the family decamps to Virginia to recover from their injuries, they encounter a mysterious tech magnate with dark ties to their own lives. What follows is a curious and thrilling story about control and liability in an era when machines have automated decisions we once took for granted as our own. Through the musings of the family’s matriarch, an AI ethicist, Holsinger contemplates AI’s profound moral implications at both a personal and a global scale. Culpability is a human story, but it contains research-informed metatextual inserts that give the novel a chilling, stark realism.


I sat down with Holsinger in December 2025 to talk about Culpability, the moral ambiguity of AI, and his motivation for writing novels on contemporary crises.


¤


TESS POLLOK: Culpability is about the ethics of AI, investigating its moral ambiguity in the American landscape. Were you thinking about AI when you conceived of the novel?


BRUCE HOLSINGER: AI wasn’t a part of the novel when I first imagined it. I was actually thinking about how I wanted to set a novel in the Northern Neck of Virginia, where my family escaped for a week during the pandemic. I had an idea about a family of five getting into a car accident and what the aftermath of that might look like. The initial spark that drove me into thinking about AI was that I wanted their car to be a self-driving car so there would be a nonhuman aspect of responsibility that could be explored. I finished a draft in the fall of 2022, and at that point ChatGPT had just blown up. That’s when I realized the novel was about artificial intelligence, agency, responsibility, and the way those categories interlace with this family going through a calamity.


There are so many things that are perilous about AI. Not so much robots becoming sentient and killing us, but massive risks of environmental degradation and the overuse of energy and water to power data centers, and the way that large language models (LLMs) collapse the habits of concentration that we have around reading and writing. But at the same time, there are so many positives, like advances in medical technology and imaging brought on by machine learning. The automated driving studies that are coming out right now suggest that AI is going to end up saving lives. There’s a knee-jerk tendency to condemn AI in all its forms, but I think it’s a mixed bag; the morality of it is going to be ambiguous for a while longer.


The book reminded me of “megadeath,” the Cold War term coined to describe a million deaths in the context of nuclear war. The novel challenges readers to look at things in that way, to understand the shifting economies of scale that arise when we talk about AI.


That’s exactly right. Those economies involve the proliferation of machines and their capabilities. The darkest part of the book—and I don’t want to give anything away here—has to do with the use of machine learning in drone swarms. The military applications of this technology are truly frightening, and the book imagines the technological concerns that characterize our present moment.


Drone warfare is a great example of where the book becomes interested in the naked efficiency and expediency that’s powering the system you’re talking about.


The word “efficiency” has been droning (speaking of drones) in my ears all week, ever since a lawyer who attended one of my book events asked me about my own use of AI as a writer. He wanted to know if I thought it would be a more efficient way to generate prose. My head almost exploded. The thought of using AI to write prose … one of the pleasures of writing is in the lack of efficiency! I have no interest in efficiency when it comes to writing. I think that’s one of the scariest dangers of AI—that it comes with the notion that everything should be efficient, when a lot of parts of life maybe shouldn’t be efficient.


How did you picture this family? How did their relationships come together?


This novel, like my last few novels, is about contemporary families going through tragedies. I like writing about slightly messed-up families and messy sibling relationships, messy relationships between spouses and between parents and kids. I think this novel, more than any of my others, considers the family in somewhat computational terms. Lorelei, the character who’s an AI ethicist, makes this observation early in the book that she sees her family as an algorithm. It’s a model of thinking about how different parts of a family work together. Her husband, Noah, sees the weaknesses in that analogy, but for Lorelei it’s very comforting. She obsesses over the safety of her kids, the danger all around us, her role in bringing harm to people—it’s her job to think about making AI machines as good as they can possibly be. But the story told in Culpability is ultimately trying to explode this algorithmic thinking, these neat boundaries. In the end, Lorelei comes to see things as more complicated.


How did you go about researching ethics for the book?


A lot of my research for the novel was based on reading ethical philosophy on AI, the history of machine learning, and the ethical dilemmas it has raised over the years. The book is punctuated with Lorelei’s writing about these exact topics—for those sections, I was mimicking the style of ethical philosophy and using that idiom to explore some of the problems that these technologies create. I was also thinking a lot about mythology. There are mythological examples, which I was using to approach the subject from different angles that would broaden people’s understanding. I liked using different genres, mixing it up a bit.


One writer whose work I read extensively was Fei-Fei Li. I also read Empire of AI by Karen Hao, which is about OpenAI and the expenditure of energy on AI systems. Both Li and Hao are more recent writers on AI, but I looked into more historical works as well, like the work of Joanna Bryson on robotic ethics and Josh Cowls’s work on bioethics. I did this research so that Lorelei’s observations in the book would feel plausible.


You mentioned that there was no AI at all in the first draft. How did that work? Did you do a complete rewrite after ChatGPT?


Often I’ll write a short draft first and make little notes about where I need to expand, or provide a synopsis of specific events when I think they should go in a certain place. Somewhere in between the second and third draft of this book, I realized that it was partly about AI, and that’s when I began to imagine the character of Lorelei, someone who had a meta-position over AI within the unfolding of the novel.


I often do this in my books. My last novel, The Displacements, was about the world’s first Category 6 hurricane and how one family survives the aftermath. It originated as the story of one family, but I wanted to enlarge that concept, so I used this metatextual technique of inserting a digital timeline—maps, weather reports, and news broadcasts—of this hurricane to provide context. You only see them in short little bits over the course of the novel, but it gives a sense of the story being larger than the one being told. The main narrative is about this family, but these devices embed it in a larger understanding of the world. I think of myself as writing at the edge of the current moment, writing toward what’s coming—not even just the near future, but literally the next month or two. When I wrote my novel The Gifted School, which was about parents trying to get their kids into a school for exceptionally gifted children, it was right when the Varsity Blues college admissions scandal broke. That book got absorbed into that current event as well.


Many of your novels deal with accountability in contemporary society, even if the topics are different—individual versus collective responsibility, ethics and morality, the ambiguous areas of life.


That’s why “culpability” is such a resonant word. It’s not just about individual guilt; it’s not just about who’s directly responsible for something. Culpability is a collective mode of responsibility that touches all of us. Climate change is nicely set up for a conversation about that, and so is AI.


Based on your research, do you believe in the possibility of LLMs becoming sentient?


The kind of sentience that LLMs or other forms of AI will develop is something we won’t recognize until it’s too late. One dimension of superintelligence is amorality, a complete moral indifference to us. That’s a superpower, in my view, but it’s also extremely scary. A lot of my research into AI was also about the alignment problem: the idea that we can give AI a task and it can execute that task, but we can’t stop it from executing dimensions of the task that we didn’t anticipate when we assigned it its job. Those are the kinds of dilemmas that keep a lot of AI researchers up at night.


What would you hope readers would take away from this novel?


A novel is not an op-ed, and I try to think of my novels as thought experiments. If there’s a philosophical takeaway as it relates to AI, it would be that we need to take a beat, pause, and think about what we’re doing. We should embrace these increasingly elusive moments that are free from technology. The novel doesn’t have an anti-technology bent, but the sum total of it is “be careful what you wish for in your quest for mindless efficiency,” in getting from A to Z as quickly as possible. I wanted to consider what these technologies are doing to our inner lives and to the inner lives of our families.


¤


Bruce Holsinger is the author of Culpability (2025), the 116th selection of Oprah’s Book Club and hailed by Oprah Winfrey as “a must-read for all generations.” His four previous novels include The Gifted School (2019), which won the Colorado Book Award, and The Displacements (2022), the inaugural title in the United Nations Read for Action book club. He has also written many works of nonfiction, most recently On Parchment: Animals, Archives, and the Making of Culture from Herodotus to the Digital Age (2023). His essays and reviews have appeared in The New York Times, Vanity Fair, and many other publications, and he has been profiled on NPR’s Weekend Edition, Here & Now, and Marketplace. He teaches English at the University of Virginia and is the recipient of a Guggenheim Fellowship.

LARB Contributor

Tess Pollok is a writer and the editor of Animal Blood Magazine. She lives in New York City and Los Angeles.
