The Artifice of AI: Why Machine-Generated Text Seems So Fake

May 5, 2023   •   By Jeffrey M. Binder

IN A 1992 episode of Star Trek: The Next Generation, the opening scene features the intelligent android Data presenting one of his own compositions at a poetry recital. The first few lines give us a taste of how the writers imagined a machine-authored poem:

Felis catus is your taxonomic nomenclature,
An endothermic quadruped, carnivorous by nature;
Your visual, olfactory, and auditory senses
Contribute to your hunting skills and natural defenses.

Data’s human friend, Geordi La Forge, reports that the android’s poems are “clever” but lack the emotion needed for good poetry. Data’s poem fails, as the show has it, because it is too logical and literal; its language reflects the cold precision that makes Data something other than human.

Thirty years later, machines are writing emotive poetry. Late last year, to widespread publicity, the private company OpenAI released ChatGPT, a conversation bot trained to obey user commands: “Write me a rhymed poem about a cat,” for example. The system works by predicting what word will come next based on patterns in a massive collection of text gathered mostly from the internet. Unlike Star Trek’s android, who speaks in scientific terminology and never uses contractions, ChatGPT can readily adopt the diction of a range of dialects and genres, from Shakespearean to slang. It produces startlingly accurate imitations of human writing.
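The next-word idea can be illustrated at toy scale. The sketch below is a bigram counter, a drastic simplification of my own devising: ChatGPT relies on a large neural network rather than frequency tables, but the core move, choosing a continuation based on patterns observed in prior text, is the same. The tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

# A toy stand-in for next-word prediction: count, for each word in a
# corpus, which words follow it. (ChatGPT uses a neural network, not
# frequency tables; this only illustrates the predict-the-next-word idea.)
corpus = "the cat sat on the mat and the cat purred".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Pick the continuation seen most often after this word.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # prints "cat": it follows "the" twice, "mat" only once
```

Scaled up to billions of parameters and terabytes of text, the same predict-and-continue loop yields the fluent imitation described above.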

As this comparison shows, the issues raised by artificial intelligence are not quite the ones science fiction anticipated. Although it is linguistically fluent, ChatGPT is an unreliable source, often failing to distinguish truth from falsehood or recognize contradiction. As a result, researchers have worried that text generators will flood the internet with misinformation. Teachers have also braced themselves for an influx of machine-generated papers, which students can conjure up with little effort or cost. With each advance, it is indeed becoming harder to tell machine-generated text from the real thing.

But how should we judge which types of writing are real? Are the words of a poem not the same whether they come from a human or machine? We can better understand the stakes of this judgment by considering what we mean by calling AI artificial. Intelligent or not, ChatGPT is undeniably “artificial” in the sense that it mechanically mimics the human writing practices on which it is trained. But calling generated text “fake,” as many have come to do, betrays something more—unease not just with machines but with artifice. The directness or authenticity of expression to which fakeness stands opposed is a construction: the appearance of authenticity, after all, can itself be contrived.

Long before ChatGPT gave students an easy way to construct customized term papers, writing manuals provided templates for assembling a thank-you, apology, or student essay. Such methods have been dubbed insincere: they enable people to argue positions and express sentiments not their own, and even to use words they do not understand. They turn their users into charlatans and shills, or so it would seem. Yet they are also democratizing tools, promising to make eloquence accessible to those who do not otherwise have access to the education or refinements of the middle and upper classes. Generative AI is an inheritor of this tradition insofar as it has few scruples about mechanizing—sometimes literally—creative practices like poetry.

Certainly, there are differences—ChatGPT departs from these older practices in its speed, in the opacity of its algorithms, and in its dependence on massive amounts of data. Many aspects of the technology are genuinely new. Still, the following example from over three centuries ago illuminates just how old our unease with artifice really is.

In 1677, an English physician named John Peter published a short pamphlet titled Artificial Versifying; or, The School-Boy’s Recreation: A New Way to Make Latin Verses. In this pamphlet, which went through at least four printings, Peter presents a foolproof method “to make [almost 600,000] Hexameter Verses.” Peter’s method involves counting by nines through a set of “Versifying Tables,” which contain jumbled Latin words. The result, he promises, will be “True Latin […] and good Sense.”

For a programmer like me, it is tempting to view this method anachronistically, as an algorithm. A computer can easily perform Peter’s procedure, as I demonstrated by creating an online simulation of the process. It works as advertised. The hexameter line corresponding to the number 123456 is “impia facta inquam portābunt crīmina sola”: “Godless acts, I say, will bring only accusations.” These verses take shape through a clearly defined procedure that, as Peter points out, can even be performed by “he that cannot Write or Read.”
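The general shape of such a procedure can be sketched in a few lines of code. Peter’s actual tables are not reproduced here, so the word columns below are hypothetical stand-ins, and the selection rule—(digit × 9) mod column length—is my guess at how “counting by nines” might be operationalized, not a reconstruction of Peter’s method.

```python
# Toy sketch of a table-driven versifier. The columns are invented
# Latin words, NOT Peter's actual Versifying Tables, and the
# counting-by-nines rule below is an assumption for illustration.
TABLES = [
    ["aurea", "candida", "horrida", "tristia", "clara", "dura"],
    ["verba", "bella", "dona", "vota", "iura", "fata"],
    ["credo", "fateor", "puto", "spero", "dico", "reor"],
    ["ferent", "dabunt", "parient", "trahent", "movent", "canent"],
    ["praemia", "gaudia", "funera", "munera", "vulnera", "sidera"],
    ["multa", "magna", "parva", "cuncta", "laeta", "saeva"],
]

def versify(number: str) -> str:
    """Each digit selects one word from one table, counting by nines."""
    words = []
    for digit, column in zip(number, TABLES):
        # Step through the column in jumps of nine, wrapping around.
        words.append(column[(int(digit) * 9) % len(column)])
    return " ".join(words)

# With these toy tables the output is a six-word line, though not
# necessarily good Latin or correct hexameter.
print(versify("123456"))
```

The point of the sketch is the one Peter himself makes: every step is a deterministic lookup, so the “composition” requires no understanding of the words at all.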

Calling this method an algorithm may be anachronistic, but comparing it to computation is not. In the preface, Peter compares his method to the invention of numerals, which made mathematics accessible to “illiterate Artificers” by turning operations like addition and multiplication into simple procedures. He offers his work as a challenge to mathematicians, artisans, and musicians who “Triumph over the Latinist and the Poet” because their work is entirely mechanical. His versifying system, he tells us, will put an end to their “Mechanical Ostentation” by showing that the composition of verse, too, can be reduced to an “Instrumental Operation.”

The key to understanding the thought behind this work is the title: Artificial Versifying. Notwithstanding the hype about artificial intelligence, the word “artificial” retains negative connotations: food manufacturers proudly advertise “no artificial flavors,” and cosmetics brands tout “all-natural” products. In the 1600s, the situation was quite the reverse. To make something “artificial” was to elevate it into an “art,” which meant doing it in a controlled, systematic, repeatable, and reliably teachable manner.

Artifice did, to be sure, have its long-standing critics. The sophists of ancient Greece, practitioners of rhetoric, had always faced charges of dishonesty. Yet there was a popular fascination with what was then called “artificial verse,” which included intricate rhyme schemes, poetry made into shapes, and poems that could be read in both directions. Such practices would have been seen as “artificial” because they provided a repertoire of learnable poetry-writing techniques that one could use to create impressive works of craft or art, much as a carpenter could learn techniques for ornamenting a cabinet or chest. Peter’s versifying system takes this approach to its limit: the entire poem takes shape through a rule-based process that virtually anyone can learn to perform.

It is hard to say exactly what Peter was trying to accomplish with this project. The subtitle “The School-Boy’s Recreation” suggests that, much like ChatGPT, the book might have given students a shortcut for completing their composition assignments, although scholar Carin Ruff has pointed out that pupils were not typically asked to compose Latin verse at the time. The remark about “illiterate Artificers” might be read as a parody of 17th-century artifice, mocking uneducated craftsmen for following procedures they did not really understand.

Then again, Peter made similar arguments in other, apparently sincere, works, such as the almanac The Astral Gazette and A Treatise of Lewisham (But Vulgarly Miscalled Dulwich) Wells in Kent, which argues, bizarrely, that the divinely instituted harmony between macrocosm and microcosm means that water circulates through the bowels of the earth just as blood circulates through the human body. Serious or not, Peter’s “artificial versifying” system was consonant with the spirit of the time, which valued the development of systematic methods as an end in itself. The story of how machine-generated text came to seem fake is also the story of how this mindset broke down.

Peter’s work, though always obscure, was never entirely forgotten. A generation later, however, in the early years of the Enlightenment, the tide had turned against the valuing of artificiality. In a 1711 issue of the short-lived but influential periodical The Spectator, poet John Hughes describes Peter’s versifying system as an example of what Joseph Addison had, in an earlier issue, called “false wit.” Taking the idea to its logical conclusion, Hughes, with tongue in cheek, recommends constructing “a Mill to make Verses,” which would churn out verses so cheaply that wit could be enjoyed by everyone.

Hughes’s letter is typical of the intellectual turn that effectively broke down the early-modern valuing of artifice. In his withering series of essays on “false wit,” Addison set his sights on the clever devices of artificial verse: anagrams, acrostics, doggerel rhymes, puns, and poems “cast into the Figures of Eggs, Axes, or Altars.” Once thought the height of poetic art, such practices were now gimmicky tricks. In the emerging Enlightenment worldview, shuffling letters around through a mechanical process was at best frivolous and at worst irrational; the fact that the results made sense was no longer seen as evidence of a genuine intelligence at work behind the scenes.

These same Enlightenment values persist in some reactions to ChatGPT, which has likewise inspired anxiety about the encroachment of mechanical methods into the sacrosanct domain of poetry. But there is a key difference that can tell us about the present situation. Although Peter characterized his method as “mechanical,” the crucial issue was not the distinction between human and machine. The procedure, after all, was originally meant to be performed by people. What made it mechanical was the fact that the uneducated could do it; it was mechanical, in other words, in its resemblance to the working-class labor of the mechanic.

This class dimension is implicit in Hughes’s letter. With the construction of the versifying mill, he writes, “Wit as well as Labour should be made cheap.” His use of the word mill, as opposed to machine or automaton, evokes industrial production, which was in its infancy in 1711. If a mill reduces the cost of grinding grain, it also reduces the skill of the labor required, which is precisely what Peter’s versifying method aimed to do. Writing poetry would thus, once Hughes’s imaginary mill was set in motion, be degraded to the level of the mechanical trades.

In this regard, ChatGPT is not so different from the writing aids that have long helped people to churn out large amounts of text. Both contribute to the cheapening of writing by providing easier access to it; the concern about students cheating on their assignments is only one facet of the broader issue that ChatGPT might reduce writing’s value as a signifier of education, refinement, knowledge, authority, or expertise. The telos of OpenAI’s poetry mill is to make wit worthless by granting it to everyone.

No one can reliably predict the outcome at this point. Perhaps we will return to something more like the 17th-century conception of art in which repeatability and control matter more than claims to originality. Perhaps, too, a progressive potential is at work in generative AI’s assault on cultural prestige. Yet the sheer concentration of capital that undergirds organizations like OpenAI makes it hard to ignore the economic inequality that separates ChatGPT’s users from the owners of the mill. As long as this remains the case, there will be something disturbing in the new artifice that, almost 350 years after Peter published his versifying system, enables everyone with an internet connection to make poems.


Jeffrey M. Binder is the author of Language and the Rise of the Algorithm (University of Chicago Press, 2022).


Featured image: Kazimir Malevich. Tochilschik Printsip Melkaniia (The Knife Grinder or Principle of Glittering), 1912–13. Yale University Art Gallery, Gift of Collection Société Anonyme. Photo: Yale University Art Gallery. Public Domain. Accessed May 2, 2023.