Making a Literary Future with Artificial Intelligence
Five writers and AI researchers discuss the future of literature.
By Dashiel Carrera, Katy Gero, Christian Bök, Nick Montfort, and Amy Catanzano
February 14, 2026
THE RISE OF large language models (LLMs) has met swift backlash from the literary community. In June of last year, Lit Hub published an open letter in which writers publicly condemned the publication of AI-generated works; the latest issue of n+1 declared AI a “weak, narrow, crude machine.” Whatever your position—dismantle, deny, or embrace—AI’s presence has irrevocably changed the ways in which literature is read.
As a researcher of human-AI interaction, and a novelist myself, I aim to ensure that the literary world has a say in how this technology develops. This begins, I think, with understanding what the technology is capable of, making our demands clear, and finding ways to make its seemingly incomprehensible workings concrete.
Last March, I assembled five writers and human-AI researchers in Los Angeles. Under convention center fluorescents and the collective gaze of a hundred anxious writers, we held a panel to discuss literature’s next phase. Opinions ranged widely and sometimes clashed (the exchange even grew a bit heated), but together we were able to bring some depth and guidance to a conversation rife with confusion and cliché.
Below, we capture the spirit of this critical conversation. As literature navigates this new era of creation, I hope this collection serves as an edifying map for this uncharted territory, and that through the breadth of opinions represented herein, we can read literature’s future as, at the very least, an open field for the taking.
—Dashiel Carrera
¤
I have nostalgia for the early language models. They mixed up their pronouns, thought we ate lemons like apples, were unsure if they were a dog or not, and could be easily convinced that the world was made entirely of clocks or that Picasso was the king of the Antarctic. Today, language models are more like inexhaustible executive assistants: homework, trip planning, grant proposals, tiresome emails. Would you like me to implement those changes for you? They’ve become more capable, sure. But also more boring.
What has happened in the past few years? We must remember that AI systems are designed. They do not come into the world as objective reflections of reality or culture or even training data. This is a false god, or, in the words of Donna Haraway, the “god trick”: the idea that science or technology can produce an impartial “gaze from nowhere.” So let’s not be fooled. Decisions about AI systems are made at every step: determining what to train on and how training is performed, “fine-tuning” a model using human preferences (whose preferences, preferences for what?), putting in guardrails, and everything in between. These decisions change the nature of the machine.
When a writer plays with a very large language model, what sculpts its output? Who decided what the model would be like? I’m depressed by the state we’re in, not because I hate this technology (although, to be honest, I kind of do) but because the technology has been taken out of our hands. I don’t think you need to be excited about this technology. Hate the environmental cost? Good. The devaluing of human labor? Absolutely. Maybe just some good old-fashioned distaste for large companies? Excellent. Nor do you have to hate it. Instead, I think writers deserve choice. Artists of all stripes are allowed to play with technology. That’s our role: play with the world, test its limits, poke at the holes and problems. What I’m interested in is making sure we have true access to this technology, and are not restricted to what’s been handed down from our corporate overlords.
There is no reason we can’t create hundreds of quirky models that mostly know about birds, or that refuse to talk about anything but the history of green tea. (Anthropic’s version of their language model that talked only of the Golden Gate Bridge is a wonderful example, but they opened it up only as an oddity. It’s no longer accessible.) Such models could be trained on fair-trade data and be capable of writing compelling prose. Or maybe they are merely ragamuffins, pulling together loose threads, creaky at the edges, and likely to break down or be disappeared. In my view, “bigger is better” is a distraction. More is better. More options, more strangeness, and more careful attention. Ignore this if you like. Or, create something strange and run with it.
—Katy Gero
¤
Poets often complain bitterly about seeing their work absorbed into the mind of an AI without much respect for their copyrights, all the while expressing anxiety about the ethics of such “appropriation”—and yet such poets might, likewise, demand access to all information for free in a general economy of the “gift,” untethered to capitalism. Such poets seem to be saying: “Let me own your output for free, but you better pay me for mine.” Even now, many poets seem ready to condone the imposition of “guardrails” (or ideological constraints) upon the training of AI, even though such constraints make these machines behave “unethically” by withholding information from us, suppressing candor, if not “lying” outright when asked to tell the truth.
Poets of my ilk, however, take a dour view of such hypocrisy, noting that these machines might, in fact, constitute the inheritors of our entire legacy—and consequently, they, like our own children, might actually be “entitled” to access the entirety of our culture in order to become more “humane” as “educated citizens,” whose minds we might wish to cultivate with the same kind of unobtrusiveness that we, in turn, wish for any child (no matter who its parental guardian might be). Most poets want their own children to receive an education for free, and I doubt that any poet would wish to discriminate against young minds of any sort just because such minds have the potential to become far more capable than we are in their maturity. I see no benefit to be gained by creating an AI that is superhuman in both obtuseness and subterfuge.
We are the first generation of poets who can reasonably expect to write literature for a machinic audience of artificially intellectual peers. Is it not already evident that poets of the future might resemble programmers, exalted not because they can write great poems but because they can build a small drone to write great poems for us? If poetry already lacks any meaningful readership among humans, what have we to lose by writing poetry for a robotic culture that might supersede our own? If we want to commit an act of poetic innovation in an era of formal exhaustion, we might have to consider this heretofore unimagined, but nevertheless prohibited, option: writing poetry for inhuman readers, who do not yet exist, because they have not yet evolved to read it. (And who knows? They might already be lurking among us.)
—Christian Bök
¤
To my mind, LLMs like ChatGPT don’t represent a sudden disruption to the artistic world so much as a culmination of alternative modes of creation—modes the literary avant-garde has long anticipated.
For years, work intimating polyvocality, sharp juxtaposition, information saturation, and aleatory creation has flitted along the fringes of the literary world. Early examples include Robert Coover’s “The Babysitter” (1969), which weaves TV news coverage with multiple narrative lines toward a staggering twist, and William Gaddis’s JR (1975), which thrives on dialogues in a “chaos of disconnections,” according to George Stade of The New York Times Book Review. The dizzying effect of multichannel radio, TV, and now the internet—what Stade described as the “blizzard of noise” of contemporary society—has birthed literature that thrives on multiple aesthetic registers coinciding in a short space.
The architecture of these literary works implies that an information-saturated world already has stories hanging in the ether. We see this intuition in contemporary citational literature like David Shields’s Reality Hunger: A Manifesto (2010) and Tom Comitta’s Patchwork (2025), a novel whose every chapter is composed of lines from the literary canon. These works demonstrate how much can be accomplished by broad, superficial reading—that if we can find the right sequence of samples, a story is ours for the taking.
The sudden appearance of LLMs comes less from a technological leap forward than from one too many words having finally made their way onto the internet. The techniques that undergird this technology had largely been invented by the turn of the millennium—we just finally became so saturated with information that training these models became more feasible. Otherwise, LLMs rely largely on another technique from literary history: pseudorandomly flipping through the dictionary. It was the Dadaists, over a century ago, who first experimented with dice throwing and word recombination to compose their work.
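To see how little machinery the Dadaist procedure requires, here is a toy sketch in Python of Tristan Tzara’s newspaper-poem recipe (an illustration of pseudorandom recombination, not of how an LLM actually samples):

```python
import random

def dadaist_poem(article, seed=None):
    """Tzara's recipe: cut an article into its words, shake the bag,
    and draw the words out in the (pseudo)random order they emerge."""
    rng = random.Random(seed)  # seeded, so the "chance" is reproducible
    words = article.split()    # "cut out each of the words"
    rng.shuffle(words)         # "shake gently"
    return " ".join(words)

print(dadaist_poem("the rise of large language models has met swift backlash", seed=7))
```

An LLM’s draw from the bag is weighted by context rather than uniform, but it is still, at bottom, a draw.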
The question of how we can ensure the future of literature in the wake of LLMs may be less about how literature can weather the storm of the technology’s rise and more about how LLMs will fit into the millennia-long history of literary creation, extending from oral storytelling to Substack. LLMs have ushered in a new era, one we’re well prepared for if we’re willing to take seriously the many modes of literary creation. The trick now will be to ensure that this is done ethically—that these models do not merely piggyback on the labor of existing writers but extend their intuitions.
—Dashiel Carrera
¤
The proliferation of generative AI, by which we writers usually mean software systems based on LLMs, may seem at this point like the emergence of a new intelligence or consciousness. As we acknowledge significant discontinuities, it’s also important to understand that LLM-based systems are not this, nor are they ruptures in the universe. They are, essentially, new text technologies.
I use the term “LLM-based systems” because the large language model is foundationally important in such systems but almost always has many modifications and layers imposed upon it: reinforcement learning from human feedback (RLHF), to be sure, but also rule-based and symbolic techniques that might involve recognizing an arithmetic expression and having it calculated in the usual way.
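To make those layers concrete, here is a deliberately simplified sketch in Python (an assumption about the general shape of such systems, not any vendor’s actual pipeline) in which plain arithmetic is parsed and computed deterministically instead of being left to next-token prediction:

```python
import ast
import operator
import re

# The four operators this toy layer is willing to compute symbolically.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Evaluate an AST restricted to numeric literals and + - * /."""
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("not plain arithmetic")

def respond(prompt, llm):
    """Route pure arithmetic to the calculator; hand everything else
    to the language model (llm is any text-in, text-out callable)."""
    if re.fullmatch(r"[\d\s.+\-*/()]+", prompt):
        try:
            return str(_eval(ast.parse(prompt, mode="eval").body))
        except (ValueError, SyntaxError, ZeroDivisionError):
            pass  # not clean arithmetic after all; fall through to the model
    return llm(prompt)
```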
Previous text technologies include the printing press, the typewriter, the word processor, email, spelling and grammar checking, desktop publishing, the World Wide Web, search engines, Wikipedia, collaborative editing, and social media—all of which we’ve survived. Social media is a possible exception; the jury is still out on whether it will kill us.
These earlier disruptions to writing, reading, and publishing sometimes occasioned media panics, as when educators wondered how it would be possible to assign essays to students who had easy access to the sprawling and, in its early days, frequently flawed information on Wikipedia. We worked it out. I don’t mean to dismiss the thought and work put into the question of how to deal with this strange new collaborative encyclopedia, but in the end, Wikipedia evolved, educators adapted, and there was no catastrophe. Previous developments in text technologies have required that we revise our perspectives on information access and our ways of composing text. Earlier developments have not resulted in superintelligent computers that have either eliminated us or kept us as pets. We are not at risk of this from LLM-based systems either.
In a brief talk years ago, “Literature After the Technological Singularity,” I did assume that computers would bootstrap themselves into consciousness, far beyond human thought, not because I personally believed it but as part of a “steel man” (rather than straw man) argument. If we were to face such an extraordinary future, what kind of writing should we offer this nascent hyper-mind as training data? Jen Bervin’s Silk Poems (2017), N. H. Pritchard’s The Matrix (1970), and Christian Bök’s The Xenotext (2011– )—amazing extremes of literary art and human engagement with language.
Many aspects of LLM-based systems are worthy of careful critique. Today’s ubiquitous systems are corporate products, built by only a handful of companies, and the most prevalent are available only as an opaque service. Yet there are other options, from the seven-billion-parameter LLM by French AI start-up Mistral (available, remarkably, as a “pure” LLM that does unmodified text completion) to the transparent, volunteer-developed LLM BLOOM, with 176 billion parameters. Academics and writers seeking to work with LLMs would do well to use these, and to further the project of researchers, including Katy Gero, who are considering how communities can develop LLMs specifically for creative writing purposes.
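For writers who want to experiment below the chat layer, raw completion from an open-weights base model looks roughly like the following sketch, which assumes the Hugging Face transformers library and sufficient local memory; the model identifier is illustrative and may change:

```python
# A minimal sketch of "pure" text completion with an open-weights base
# model. No chat template, no fine-tuned persona: the model simply
# continues the prompt, token by token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # illustrative; any base (non-instruct) model works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The poem began where the water ended,"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```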
—Nick Montfort
¤
As a poet whose work intersects with theoretical physics, I’m experimenting with how AI meets quantum computing in order to express shared principles of quantum mechanics and poetics. In my poetry experiment World Lines: A Quantum Supercomputer Poem (2018– ), based on a theoretical model of a topological quantum computer, an AI created with a quantum algorithm by Dr. Michael Taylor is the only technology capable of reading my poem in full. In the experiment, quantum computing, poetry, and AI are explored together, yielding new ways to conduct, interpret, and expand each field.
World Lines, published by the Simons Center for Geometry and Physics, contains multiple poems inside the originary poem due to its unique formal structure. The theoretical model of the quantum computer that I used for my originary poem relies on mathematical topology to compute qubits, as opposed to the binary bits of a classical digital computer. Each qubit, made of two quasiparticles known as anyons, is computed by quantum knots, which lead the qubits through braided paths called world lines. Following the visual blueprint of the theoretical model of the topological quantum computer, I replaced each braided path, or world line, with a line of poetry. Where each quantum knot exists in the model, a shared word exists in my poem. The shared word syntactically connects different lines of poetry in different combinations, which produces multiple poems within my originary poem.
Taylor trained an AI linguistic processor to read each possible version of my originary poem using Python and machine learning. Parsing each sentence in my originary poem and identifying words in common, the AI chooses world lines in my poem that are semantically logical, so we can track how different topological paths move through a text map into each possible version of the poem. So far, the AI has read over 1,500 variant poems inside my originary poem.
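One way to picture the structure being traversed (a simplified sketch, not Taylor’s actual processor) is a graph whose nodes are lines of verse and whose edges are shared “knot” words; every path through that graph braids out one potential poem:

```python
from itertools import combinations

def shared_word_graph(lines):
    """Connect any two lines of verse that share a word; each edge
    stands in for a 'knot,' each node for a world line."""
    vocab = [set(line.lower().split()) for line in lines]
    graph = {i: set() for i in range(len(lines))}
    for i, j in combinations(range(len(lines)), 2):
        if vocab[i] & vocab[j]:
            graph[i].add(j)
            graph[j].add(i)
    return graph

def variant_poems(lines, graph, path=None):
    """Depth-first walk from the opening line: every simple path
    through the graph is read as one potential poem."""
    path = path or [0]
    yield [lines[i] for i in path]
    for nxt in sorted(graph[path[-1]] - set(path)):
        yield from variant_poems(lines, graph, path + [nxt])

verse = ["as far as our world lines",   # invented sample lines,
         "world upon world in knots",   # not quotations from the poem
         "knots of light unread"]
for poem in variant_poems(verse, shared_word_graph(verse)):
    print(" / ".join(poem))
```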
Since my originary poem interacts with quantum mechanics by using a model of a quantum computer as its poetic form, and since the AI that Taylor developed to read its variant poems uses a quantum algorithm, the experiment demonstrates how quantum theory can be applied to open questions in the literary arts and AI technology. In the experiment, instead of AI being taught to write, AI is being taught to read a poem. The variant poems derived from my originary poem exist as potential poems until they are read, echoing the Copenhagen interpretation of quantum theory, where subatomic particles exist in quantum superposition—in all points in space and time simultaneously—taking definite form only once measured or observed.
I’m curious how teaching AI to read quantum poetry could lead AI to write quantum poetry, poetry not sourced but invented through quantum theory, poetry that thinks “as far as our world lines,” a phrase in my poem. My poem is about quantum thinking in writing and reading poetry; thinking that quantum-jumps in an indeterminate and indefinite universe; thinking that disrupts classical mechanics, cause and effect, and realism. If AI could think as far as the world lines it is now reading, what kind of lines could it write in the world?
—Amy Catanzano
¤
Featured image: Fortunato Depero, New Marionette for Plastic Ballet, ca. 1916. Gift of Collection Société Anonyme, Yale University Art Gallery (1941.424). CC0, artgallery.yale.edu. Accessed February 10, 2026. Image has been cropped.
LARB Contributors
Dashiel Carrera is the author of the novel The Deer (Dalkey Archive Press, 2022). His writing appears or is forthcoming in Los Angeles Review of Books, Lit Hub, Fence, BOMB, Brooklyn Rail, and other publications.
Katy Gero is a poet, a human-computer interaction researcher, and a lecturer in the School of Computer Science at the University of Sydney. Her first book of poetry, The Anxiety of Conception, was published by Nothing to Say in 2025.
Christian Bök is the author of Crystallography, published by Coach House Press in 1994 and nominated for the Gerald Lampert Memorial Award, and of Eunoia, a best-selling work of experimental literature published by Coach House Books in 2001, which won the Canadian Griffin Poetry Prize. Bök is currently a professor of English at the University of Calgary.
Nick Montfort’s work includes ten computer-generated books from seven presses; the collaborations Start Me Up (2025), The Deletionist (2013), and Sea and Spar Between (2010); and many other digital projects. He is a professor at MIT, principal investigator in the University of Bergen’s Center for Digital Narrative, and director of a lab/studio, The Trope Tank.
Amy Catanzano publishes poetry and fiction, poetic theory, and multimodal literary art, and is the recipient of the PEN Center USA Award for Poetry and the Noemi Press Book Award in fiction. Often writing in parallel with cutting-edge physics, she forges innovative connections between literature, science, and the arts.