Out of the Mouth of Bots: Child Genius and Generative Machines

This is a preview of the LARB Quarterly, no. 41: Truth.

¤


MINOU DROUET WAS a child-poet. Born in 1947, she captivated postwar France with poems that the newspaper Le Figaro, as quoted by Time in 1955, called “sparkling with spontaneous sensations, new tingling images.” At the time, much of the French public was skeptical that Drouet’s poems could have been crafted by a child. Even experts in cultural production were drawn in. André Breton speculated that “[b]etween the physico-mental structure of Minou and what is published under her name there is an incompatibility of structure.” A simple examination of the texts, skeptics argued, revealed a maturity of expression and experience of life unavailable to a child. The girl was forced to prove—through a series of tests by journalists, police, and the French Society of Authors, Composers, and Publishers of Music—that it was she, and not her overbearing mother, who was the “true” author of the poems. Those who admired Drouet’s poetry, or speculated about its authenticity, did so under the presumption that it was of some quality—but by any critical standard, it wasn’t.

Today, Minou Drouet is perhaps best known thanks to an essay in Roland Barthes’s Mythologies (1957), which explores the consumerist mania that surrounded the child-poet as a desirable object of attention. Barthes classifies Drouet’s work as “a docile, sugary poetry, entirely based on the belief that poetry is a matter of metaphor, its content nothing more than a kind of elegiac, bourgeois sentiment.” Her poetry is, in his analysis, predictable fluff and formula. Barthes concludes that Drouet was successful at replicating the signs of poetry, not poetry itself: she gave French culture what it believed poetry to be. By focusing on whether or not Drouet was “genuine,” and thus a “genius,” Barthes said her readers took part in “valorizing simple […] production” and thus obscured the fact that she was essentially regurgitating the clichés of the culture around her—like the postwar fetishization of childhood itself. In the publicity photographs taken of her as a child, Drouet looks like a model for a Norman Rockwell painting: playing the piano, writing poetry, petting a cat.

Drouet’s readers at the time, marveling at the mere fact of her literary production, often neglected to ask: Is this any good? With critical distance, the consensus now matches Barthes’s original assessment: Minou was not, in fact, a good poet. But the focus was rarely on the content of her poems; it was on the mere possibility that a child could create works that seemed to express the feelings of an adult, implying both innate genius and seemingly limitless future development. The idea that a child could become a poet, rather than having poetry imposed upon her, presupposes a conception of literary genius that relies on certain normative notions both of childhood and of poetry. As Barthes put it, “The arguments made about the case of Minou Drouet are by nature tautological, they have no demonstrative value: I cannot prove that the verses shown to me are really a child’s if I do not first know what childhood is and what poetry is.”

In the end, it was not Drouet whom Barthes condemned—it was the bourgeois machine that created her: “It is society which is devouring Minou Drouet, it is of society and society alone that she is the victim,” he writes. “Minou Drouet is the child martyr of adults suffering from poetic luxury, she is the kidnap victim of a conformist order which reduces freedom to prodigy status. […] A little tear for Minou Drouet, a little thrill for poetry, and we are rid of literature.”

As an adult, Drouet wrote children’s novels, one adult novel, and a memoir, Ma vérité (1993), which was described by The New Yorker as “reticent and skimpy.” She did not, as is so often assumed of “genius” children, continue to accelerate for the rest of her life, let alone surpass conceptions of what is possible from a poet. Save for a few counterexamples such as Mozart, child geniuses are nearly always disappointing as adults. They age, like the rest of us, to diminishing returns.

In late November, Google released a video demonstrating its new conversation-bot software, Gemini, fluidly interacting with a user. Not long after its posting, spotters noticed that the video had been carefully edited to support the impression of a more naturalistic give-and-take between user and bot. When Google admitted that the video was edited, the resulting mass indignation recalled multiple such instances from the recent past: supposedly self-driving cars actually remote-controlled by humans, the previous controversy over Google’s Duplex voice assistant software, Meta’s serially confabulating “scientific assistant” Galactica. Given this checkered history, one couldn’t blame society for being skeptical.

But as with Drouet, the impulse to examine the authenticity of a product rather than its quality reveals as much or more about the inquirers as it does about the subject. For Barthes, the myth of the child-poet was reflective of a 1950s French middle class who, by dint of income or vocation, thought of themselves as closer to aristocracy than to the rest of the working class. For these individuals, Barthes argues, it was important for history—social, educational, cultural—to “evaporate,” such that virtuous talents like Minou’s could be seen as inborn. A natural hierarchy of merit justified the greater rewards they were reaping. But embracing this mindset had broader implications: to divorce the product from its production entails a myth not just about the past but also about the future. Society wanted to marvel over the fundamental wonder of Minou creating poetry without dithering over the quality of that poetry; the poems would inevitably improve, as children always do. But with the benefit of perspective, it is clear that, despite (or perhaps because of) its rapid ascent to passable productivity, her work lacked not just quality but most importantly the central impulse of childhood—the urge to identify oneself among the jumble of experiences. Her poetry (first collected in the volume Tree, My Friend in 1957) is perhaps best thought of as a distillation of the idea of a poem, constructed to satisfy rather than express: “Tree / drawn by a clumsy child / a child too poor to buy colour crayons. / Tree, I come to thee. / Console me / for being only me.” Jean Cocteau, though enamored with Minou, gave perhaps the most withering summary: “All children nine years old have genius, except Minou Drouet.”

A similar magical thinking about the relationship between production and progress is evident in the discourse over so-called generative artificial intelligences—in particular, the chatbots that allow their users to simulate a conversation. Such programs have been around since at least the 1960s, when Professor Joseph Weizenbaum created the ELIZA therapy simulator using a set of handcrafted rules. More recent advances have been supercharged by the availability of massive text datasets, encompassing much of the text ever published online. These are fed into so-called “neural networks,” which extract from the trillions of examples a set of statistics summarizing socially transmitted speech—the likelihood that a given word, word part, or phrase might follow from others, in a certain context. After this step, workers repeatedly interact with the program, providing fine-tuning feedback, until the result satisfactorily resembles human discourse. Though even simple ELIZA enraptured its users so much that Professor Weizenbaum became concerned, the current generation of chatbots has taken hold of the popular imagination. Part of the reason is that these systems no longer use scripted rules, meaning that their inner workings are, for the moment at least, largely inscrutable, even to their developers. This lends an air of mystery that encourages our mythmaking sensibilities.
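For readers curious what “a set of statistics” means in practice, here is a minimal, purely illustrative sketch in Python. It builds a toy table of which word tends to follow which in a tiny corpus (the Drouet excerpt quoted above) and then “generates” text by repeatedly sampling from that table. Today’s models learn vastly richer, context-sensitive versions of these statistics with neural networks, and are then fine-tuned by human feedback, but the underlying object is of the same kind: a record of what tends to come next.

```python
# Toy sketch: next-word statistics and sampling, not any production system.
import random
from collections import defaultdict, Counter

corpus = (
    "tree drawn by a clumsy child a child too poor to buy colour crayons "
    "tree i come to thee console me for being only me"
).split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 12) -> str:
    """Walk the next-word statistics, sampling each step by frequency."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # no recorded continuation; stop early
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("tree"))
```

Run it a few times and it produces plausible-sounding rearrangements of Minou’s lines, which is rather the point: fluency of this kind is a property of the statistics, not of an author.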

Over the past several years, the public has engaged these machines in the same way French society did Minou, on the terrain of authenticity. Are they “intelligent” in any meaningful sense? Can such machines be said to “create”? Do their apparent conversational skills signal a genuine “understanding” of self or other? This discourse, too, was anticipated by Barthes, who noted that one of the features of bourgeois myth is that it “economizes intelligence” by “reducing any quality to quantity.” Never mind that we do not have foolproof definitions of these concepts even when applied to humans, that such concepts may resist operationalization into mechanistic forms, that they may more accurately describe the experience and expectations of the observer than the function of the subject of study.

To question authenticity in this context is to take as self-evident the inevitability of progress toward unlimited potential, thus invoking the metaphor of the “child genius.” This mythologizing extends to the software’s makers as well: a recent failed coup over the leadership of the OpenAI corporation, which produces the ChatGPT conversation-bot service, is rumored to have been spurred in part by concerns over the development of a product that, absent the assumption of inevitable, boundless development, might best be understood as a calculator. OpenAI’s chief scientist, as well as the rest of the board, was apparently moved to action by the fact that new software designed to perform arithmetic was now doing so, as reported by Reuters, “on the level of grade-school students,” which to them implied the potential for “escape” towards potentially disastrous “superintelligence.” Others, however, were elated by the same possibility: to their minds “acing such tests made researchers very optimistic about […] future success.”

Both utopian and dystopian fantasies of generative artificial intelligence rely on the presumption of massive acceleration from where we are now, and similarly equate (re)production with invention. Why else would the steady stream of factually incorrect responses generated by chatbots be so frequently categorized as “lying” or “hallucinating”—instead of, simply, failing? Just as the mania over Drouet’s authenticity obscured the question of quality, so the current discussion evades serious thought about the question of “intelligence.” The criteria usually employed in popular discourse to assess this slippery, likely vacuous concept often rely on comparisons to social norms, usually chosen to suit the arguer. While publicly professing to support the fantastical implications of immeasurable intelligence, the corporations producing these machines tend to hold themselves internally to more sober goals—OpenAI explicitly defines “Artificial General Intelligence” as a machine that can “surpass humans at most economically valuable tasks”—in other words, it is an “intelligence” as defined by the demands of the labor market. This may be one reason why the company’s new, explicitly accelerationist, post-coup board of directors includes the economist Larry Summers, who is infamous for advocating the exportation of polluting industries to countries with lower life expectancies and for publicly doubting the suitability of women for scientific professions. At the time he was invited onto OpenAI’s board, he was weathering another consequence-free scandal over his promotion of a cryptocurrency firm that is now the subject of a lawsuit by the Office of the New York State Attorney General over alleged fraudulent representation of its products. OpenAI overlooked this record when it extended the invitation, presumably to gain his insight into how to properly value an immortal, environmentally catastrophic technology of immeasurable intellectual capacity.

That the trajectory of technology is ever accelerating has long been presumed by the technocracy’s chosen public intellectuals. Tellingly, they often have little to say about social factors, instead embracing a quasi-naturalization of technology as an evolved—and evolving—organism. In his 1999 book The Age of Spiritual Machines: When Computers Exceed Human Intelligence, eccentric inventor Ray Kurzweil proposed “The Law of Accelerating Returns,” explicitly casting technology as an “evolutionary process” that will always grow, exponentially, with each breakthrough spurring proportionally more advances. Any barrier faced by a given innovation would, he argued, inevitably be overcome by another. The result would be a technological “singularity,” a “rupture in the fabric of human history” beyond which seemingly any possibility becomes inevitable, including “immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.” He expanded on these ideas in a later book, The Singularity Is Near: When Humans Transcend Biology (2005), which became a seminal text for the technofetishist community that saw itself inventing the future the book described. Among many questionable line graphs, this tome includes a formal illustration of the Law of Accelerating Returns, relating “technology” to “time,” the former following an exponentially increasing curve along the latter. The graph raises several interpretive questions—e.g., what are the boundaries of “technology”? What does it mean for it to “increase” (and what could it possibly mean for it to “decrease”?)—but the implication is clear: buckle up, the future will come faster, and the signs are everywhere if you only care to look.

For these fantastical futures to exist demands a break with the past. The authenticity many seek in chatbots is also what Drouet’s contemporaries sought in her: some evidence that the object was “made” by something other than an adult “person.” Missing from most discussions, however, is the fact that these chatbots are most decidedly records of human intervention. They are “fast” in that they produce responses to a prompt quickly, but this outcome required billions of human-created examples and untold hours of programming, and will require untold more if their responses are to improve—if we can even agree on what it would mean to do so.

Yann LeCun, the head of Meta’s AI Research lab and a pioneer of the machine-learning techniques that support many of the recent headline-grabbing demonstrations, recently pointed out in a thread of posts on the social media website X that the immense amount of information required for language models to produce text at even their current level is vastly greater than the information encoded in human DNA, or than what a child is exposed to in the first several years of life. He concludes that the analogy to human learning is fundamentally flawed. And he is not the first to make this observation: Professor Iris van Rooij and her colleagues have mathematically proven that, to achieve the quality of human learning, such machines would require vastly more data than has been thrown at them already—indeed, quite possibly more than could ever exist. Where will all this new data come from? A flurry of recent changes in privacy policies suggests that surveillance will continue to expand in order to feed these machines, allowing them to generate ever more personalized reflections. At the same time, businesses looking to cut labor costs have adopted these technologies to rapidly produce public-facing text and images that now litter the web and are already beginning to dramatically reduce the quality of search results. It’s possible that a coming generation of these chatbots may be trained in large part on the output of its predecessors—an ouroboros of asymptotic mediocrity.

When most people talk about language models, they don’t know about these constraints and caveats, and so can be forgiven for hallucinating an animating force behind the seeming flood of de novo content. But for the technorati orchestrating the production, there may be something altogether more simplistic, less sublimated, driving this discourse. Predictably, the production cycle of language machines has become a site of contestation for oligarchs who stand to benefit. One head of this hydra is the array of large corporations that choose to avoid labor regulations by exploiting armies of low-wage, mainly offshore workers to perform the laborious instruction needed to shape language models into passable conversation partners. These laborers are in some cases paid the equivalent of two dollars an hour to screen the (often horrific) raw output of the machines and nudge them towards the sensibilities of “mainstream” society (a feat of externality redistribution that would make Summers proud). The fact that this training even needs to be performed in order not to reveal an ugly mirror of our society is rarely discussed directly—except by right-wing billionaires who appeal to the culture wars and lament that the machines are being “restricted” in their ability to generate “truth.” Though described as opposites—“doomers” versus “accelerationists”—both parties are seeking the same goal: the automation of human labor. Where they appear at odds is, as ever, only at the point of presentation.

Barthes, summarizing the mania over Minou, wrote that the “bourgeois notion of the child prodigy (Mozart, Rimbaud, Roberto Benzi) [is] an admirable object insofar as it fulfills the ideal function of all capitalist activity: to gain time, to reduce human duration to a numerative problem of precious moments.” Never mind that Drouet, as the magazine Elle reported, “does not know the meaning of words used in her poems”; after all, according to Barthes, “[t]he more a poem is stuffed with ‘formulas,’ the more it passes for successful.” Let us be spared the analogous titillations surrounding language models and abandon the fantasy that the future will save us from the present.

LARB Contributors

Emily Wells is the author of a memoir, A Matter of Appearance (Seven Stories Press, 2023). She teaches writing at UC Irvine.
Aaron Bornstein is an assistant professor of cognitive sciences at UC Irvine. He has written about bias and misunderstanding in artificial neural networks for Nautilus.
