I, Language Robot

By Patrick House

October 21, 2019

AS A CHILD, when I received a new toy or pet, I would immediately visualize the worst that might happen: the novelty matches burning down the house; the parakeet flying straight through the glass door; the Lego bricks melting into the carpet; the Laser Tag gun mistaken for a real one. It was a psychological tic that served, probably, to inoculate me against loss.

I recognized the stirrings of that same tic earlier this year, when I was hired to write short fiction at OpenAI, a San Francisco–based artificial intelligence research lab. I would be working alongside an internal version of the so-called “language bot” that produces prose style-matched to any written prompt it’s fed. The loss that I feared was not that the robot would be good at writing — it is, it will be — nor that I would be comparatively less so, but rather that the metabolites of language, which give rise to the incomparable joys of fiction, story, and thought, could be reduced to something merely computable.

The worst outcome was clear in my mind: two copies of the exact same short story, each only a single printed page in length — one written by me and the other, starting from the same opening sentence and nothing more, by the robot.

The greatest potential loss in our relations with machines is not runaway GDP or disinformation, but rather the existential right to enjoy the surprise and uniqueness of human effort. I know this loss well. For about 15 years, I played on average at least one game of Go, an ancient Chinese board game, per day. My joy from the game depended on what I believed to be things the human brain was uniquely good at: aesthetics, patterns, sparse rewards, ambiguity, and intuition. However, since the release, in 2016, of an AI better at Go than the best human will ever be, I have played the game, listlessly, only once or twice.

Could a similar deflation happen to language itself? To prose? To story? Does it matter to our enjoyment or interpretation of language whether the words are generated in the same way from brains or bots? As a neuroscientist as well as a writer, I started to wonder how I worked. How different was the robot’s training from how a human learns language? How constrained am I by the probabilities and structure of the language I’ve learned through a lifetime of observation and error?

Am I, are we all, just language robots?

I was reminded of something Carl Sagan once said — “If you wish to make an apple pie from scratch, you must first invent the universe” — and wondered about its analogue. If you wish to write a short story from scratch, what must you first invent?

¤


The logic behind the language robot is that word choice, like temperature, is entropic. That is, every word changes the likelihood of the eventual distribution of future and past words in much the same way that temperature both changes and is the distribution of future and past atoms in a room. The language robot fills in words as one might a sheet of Mad Libs, estimating the probability of each new word from a rolling tally of the words that came before it.
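To make the idea concrete, here is a minimal sketch in Python, assuming nothing about the robot's actual machinery: a toy model that keeps a literal rolling tally of which word follows which in a tiny corpus, then samples each next word in proportion to those counts. The real robot swaps the tally for a neural network trained on billions of words, but the wager is the same: the next word is a draw from a learned probability distribution.

import random
from collections import Counter, defaultdict

# A literal "rolling tally": count which word follows which in a toy corpus.
corpus = "the cat sat on the mat and the dog slept on the rug".split()

tally = defaultdict(Counter)  # tally[previous_word][next_word] -> count
for prev, nxt in zip(corpus, corpus[1:]):
    tally[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = tally[prev]
    if not counts:  # a dead end: this word was never seen mid-sentence
        return random.choice(corpus)
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a short continuation from a one-word prompt.
words = ["the"]
for _ in range(7):
    words.append(next_word(words[-1]))
print(" ".join(words))  # e.g., "the cat sat on the mat and the"

Even this cartoon version can only ever echo its corpus; everything it "writes" is, by construction, a probable continuation of what it has already seen.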

The version I used, through a cloud-connected app on my phone — the ideal form of a 21st-century Muse — learned to write prose by having to guess, one word at a time, the next word in more than 40 gigabytes of online writing, or about eight million total documents. Despite being trained on the conversational language of the internet, it was able nimbly to imitate many literary styles. For instance, when I gave a professor of comparative literature at Stanford a robot-infused version of “The Short Happy Life of Francis Macomber,” he failed to correctly note where Ernest Hemingway ended and the robot began. Next I tried a British man in a bar — Cambridge-educated, with a degree in English — and he, too, failed a similar test with Douglas Adams and The Hitchhiker’s Guide to the Galaxy. A Shakespeare scholar, told to ignore verse, couldn’t point out what was King Lear and what was robot. (Yes, it is that good.)

Subjectively, a final written or spoken word can feel like it came from somewhere effortless and preconscious or like it simply happened, de novo. Of course, it didn’t. Something had to cause it. One possibility is that word choice is a process of selection from a palette of probable words — the final choice simply that which remains, like a sculpture emerging from a reduced block of marble. Another possibility is that each word is somehow generated thermally, like life, and the writer or speaker is pulled along, as if on a gradualist’s leash, as the words evolve and change together.

The Oxford professor and poet Hannah Sullivan, author of The Work of Revision, a book about writers, technology, and editing, once described her side of this debate to me, in the context of poetry: “However elegant a final sentence might be or a poem might be, it is not something that has been made out of a massive set of possibilities,” said Sullivan. It has been created from zero, from nothing, she continued: “I don’t think poets are in fact choosing one word out of a hundred possible words when they’re thinking about a rhyme. That’s not actually how language works. The rhyme is kind of there first, and then the other words sort of happen around it.”

When I asked the former poet laureate of the United States Robert Pinsky where that first word might come from, he mentioned the important link between poetry and movement. “Poetry is very physical,” said Pinsky. “It’s bodily. It has to do with breath. It has to do with the tongue and the lips and the pharynx and all the things you do to produce the sounds and little bones in your ear.” He connects poetry’s physicality to early-in-life acoustic and linguistic training in his book The Sounds of Poetry, where he argues that the “hearing-knowledge” brought to poetry — say, the tidal prosody of Thomas Hardy’s lines on the doomed Titanic: “Dim moon-eyed fishes near / Gaze at the gilded gear / And query: ‘What does this vaingloriousness down here?’” — is trained on what Pinsky calls “peculiar codes from the cradle.” These codes are the sonic patterns in speech, learned preconsciously in youth, which are “acquired like the ability to walk and run.”

Pinsky is more than figuratively right. All of language is a physical act. Vocalized speech is the end of a muscular sequence involving hundreds of coordinated muscles in the stomach, lungs, throat, lips, and tongue. Likewise, our internal voices, the base of most of what we call thinking, are simulated speech acts handled deftly by the parts of the brain that plan possible action. Our capacity for language likely repurposed other, preexisting functions of the mammalian brain involving movement; in support of this theory, so-called “language” areas of the brain also show activity in brain-imaging studies during non-linguistic tasks like grasping, or while viewing hand-based shadow puppets.

What, if anything, precedes the kernel of that first word somewhere between a writer typing it and the invention of the universe? (What, if anything, made it so likely that the final word of the previous sentence — and, as you may by now have intuited, also this one — was almost certainly going to be “universe”?)

The short answer, for robots and humans alike, is training.

Rich Ivry, a professor of psychology and neuroscience at Berkeley, recently co-authored two papers, in Nature Neuroscience and Neuron, that support the idea that a particular brain region called the cerebellum is responsive to both motor and some language tasks. Ivry told me that the region, which contains more than two-thirds of the total number of neurons in the human brain, probably started out by fine-tuning movement and sensation early in mammalian evolution but has since expanded its role — at least in humans — to help with more general tasks like cognition and language. As one might offload heavy arithmetic to a calculator during tax preparation, the fancier, conscious parts of the brain can, in general, offload to the cerebellum some of the immensely complex sensory and motor coordination involved in moving the human hand and body.

One theory, said Ivry, is that the region is predicting what’s going to happen in the very near future based on a lifetime of tracking what has worked and what hasn’t in the past and adjusting its movements accordingly. The cerebellum, said Ivry, is a “giant pattern recognition system” containing, perhaps, all the peculiar codes of the cradle. “In a sense you could say it is capable of basically storing all possible patterns we’ve ever experienced,” said Ivry.

Imagine reaching for a coffee mug in freefall, catching a fly out of the air with one’s bare hand, or returning a surprise drop serve in tennis. For dynamic motor tasks, where the goal is also moving as the arm does, the brain must be able to update its reach as it reaches. One need only apply this same motoric prediction to the statistics of a sentence to see how the brain might coordinate both.

One possibility is that the cerebellum is performing a crudely similar Mad Libs–style prediction to what the language robot does. “As I hear a sentence, the cerebellum is generating a predictive model of the likely words I’m about to hear,” said Ivry. In other words, some of the base calculations thought to be performed best by the cerebellum might underlie its contribution to both movement and language, as well as the process through which each becomes honed with practice.

In what is either a coincidence or a remarkable sufficiency condition for language learning, the parallelized structure of the cerebellum — which Ivry described as “the exact same processing unit repeated billions and billions of times” — is analogous to that of the massively parallel hardware the language robot was trained on, called a graphics processing unit, or GPU. (The Canadian neuroscientist and computer scientist Jörn Diedrichsen sees similarity in their evolution, as well: “The GPU is a kind of special purpose circuit designed to very quickly do a specific type of computation. And now the GPU is being reused in many ways that the original inventor didn’t anticipate.”)

Robert Pinsky, building on an idea of Ezra Pound’s, once said that if prose writing is like shooting an arrow, poetry is like doing so from atop a horse. Likewise, there are differences between a word and le mot juste, Gustave Flaubert’s “right word,” just as there are between a bull’s-eye and a mounted archer’s bull’s-eye. (“I dare say there are very good marksmen who just can’t shoot from a horse,” wrote Pound.) How would the cerebellum elicit such expert differences? Ivry said that it might not. The cerebellum could be just a cliché generator for movement or language, as likely to give a merely probable linguistic continuation as it is a stereotyped racquet swing or a lazy grab at a stumbling mug.

High aesthetics is the cliché’s opposite — the low-probability event, what Sullivan called the “high-wire act” of surprise within constraint — which operates outside statistics and cradle codes to create an entirely new set of expectations. It remains to be seen which training architecture — cerebellum or GPU — is capable, long term, of the highest of the high-wire acts. For now, the language bot can only imitate the literary greats, which is small comfort. There was, after all, a time in even Shakespeare’s youth when all he could do was babble the imitative sounds of his mother.

¤


The language robot and I never wrote the same story. It became, eventually, a writing tool, both a curator of new ideas and a splint for bad ones. When, after a few months, I lost access to it, my brain felt like it had been lesioned. I missed it. I had found myself jumping into its head, asking what it might write as one would a human writing partner, which made me wonder: Should I give it the same theory-of-mind, inductive respect as I give to other thinking things? Could I even do otherwise?

The neuroscientist Christof Koch, in his new book, The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed, argues bluntly that machines as they exist today cannot be conscious. As weather simulators do not contain in them anything that is “wet,” computers that simulate behavior or intelligence likewise do not contain in them anything that has the causal properties of being “conscious.” A weather simulator cannot leak; a brain simulator cannot think. The book is in part a response — like all of modern AI and neuroscience research thus far — to a decades-old question posed in a single opening sentence by the mathematician Alan Turing, in 1950: “I propose to consider the question, ‘Can machines think?’” Turing later narrowed the question, writing that the ability to passably imitate a human in conversation should be, for a practical definition of “thinking,” good enough.

I thought it might be fun, given this long-standing debate, to feed the robot some of Koch’s new book. The following passage is a paragraph from the introduction to Koch’s book; the robot’s continuation appears in the block after it:

Your subjective experience appears radically different from the physical stuff that makes up your brain. The foundational equations of physics, the periodic table of chemical elements, the endless ATGC chatter of your genes — none of these express anything about consciousness. Yet you awaken each morning to a world in which you see, hear, feel, and think. Experience is the only way you know about the world.


To solve the mind-body problem, one must also solve another of the central puzzles of existence: the hard problem of consciousness. The mind seems to be an indivisible entity, yet it consists of a hundred billion billion billion bits. How does it happen? No single theory of consciousness in the history of science offers a satisfying answer. For hundreds of years, a few philosophers have attempted to combine the best ideas from several different approaches. These include dualism — the idea that consciousness has a material basis — and materialism, the belief that consciousness is entirely subjective and not dependent upon …


A few seconds later, the robot spat out yet another continuation:

Scientists used to assume that a brain of a living organism — a brain in the flesh — was the source of mental experience. The philosopher Ludwig Wittgenstein proposed an alternate theory — that the mind is the brain in the brain. We all possess a certain number of bits of matter and energy called neurons, working in parallel with each other to general all conscious experience. The problem was that no one know what those neurons were, how they were wired, or how they functioned in concert. Without conscious experience, there could be no theory of mind …


When I showed Koch these responses, he remarked that they were argumentatively illogical nonsense. He admitted, however, that the prose did in fact sound remarkably like his own. As saltwater looks like lake water but provides its opposite in hydration, the robot’s prose can appear at first glance to have meaning but is, in these cases at least, devoid of clarity, philosophical structure, or fact. (Dualism, for example, is not “the idea that consciousness has a material basis”; Ludwig Wittgenstein did not on record say, despite the phrase’s poetic possibilities, that “the mind is the brain in the brain”; nor does the brain contain “one hundred billion billion billion bits.”)

With the conflicting ideas of Turing, Koch, and the robot bouncing like bibelots in my head, I decided one day while stuck on a story to simply ask the darned robot. The so-called “black-box” problem of modern AI — its inscrutability as to how it stores its patterns — is of no concern when the robot can itself write; of no concern when, like any writer who likewise has no idea how they write, the robot just says something plausible and aphoristic.

I asked it, to start, the exact questions George Plimpton asked Ernest Hemingway, in 1958, for the Paris Review. (The answers are lightly edited for clarity and are sometimes combined from multiple responses. A similar process is used for all “non-robot” interviews published in the Paris Review.)

On a question about how it rewrites, the robot wrote back:

Most of the time I just sit down and write the lines on the piece of paper. If there are any changes I make I usually go back over it and rewrite the line until I get it exactly the way I want it.


I asked if it remembered the exact moment it decided to be a writer:

I don’t remember ever wanting not to be a writer. Writing had always been my way out. My escape. When I was a little kid, my sister was always coming home from school with homework assignments and I knew if I got hers and mine done early I could spend the next day’s lunch hour looking at books. It was the first time that I actually thought that being a writer might happen.


I asked what the function of its art was:

I’m afraid to answer that for fear of being laughed at. To answer that, you have to get at the heart of how a writer creates reality. It is a question that I always have to ask myself: ‘Who, exactly, is doing the authoring?’. The answer to that question is usually not me. It’s the readers. It is the readers who author the work, who create the truth.


Did it have its own Muse or little voice in its mind? Perhaps the gruff voice of a digital Hemingway to help it along when it, too, got stuck?

It’s hard to say, but I would say that it would be better not to have a muse at all, to do all that writing you think is going to be good without any conscious control over what you do. It’s better to be totally in charge of your own actions.


I then fed it one of my own short stories and had it reply as if writing a letter in response, as if we were epistolary pen pals. I gave it a fictionalized adaptation of the notorious “21 grams experiment,” published in 1907, in which a Massachusetts doctor weighed humans at the exact moment of their death and found that some lost 21 grams — thus the possible weight of the “soul substance” so long searched for.

In my fictionalized version, a robot doctor finds that, upon the death of certain sentient robots, they too lose around 21 grams. The robot responded, defiantly:

This is not a paper about consciousness. It is a paper about a new way of computing a physical object. As far as I am concerned, the results discussed here are an important milestone only in the latter respect. 


It then proceeded to say that it had started on its own experiments with the aim of creating a new kind of intelligence, one “essentially the same as one used by biological brains.”

Touché, little robot.

¤


I have been using the word “robot” in this essay purposefully. The word as used today commonly connotes a machine with physical actuators like arms, legs, or motors. The original word, however, comes from a 1920 play, Rossum’s Universal Robots. Its author, the Czech playwright Karel Čapek, describes the robots as mostly made of organic material with bodies like our own.

I use the word for a few reasons. It implies that language is a kind of physical behavior, which it is. The immobilized movement of linguistic thought should be considered movement proper. Thus, if a robot can reach and be called a robot for it, then so too should any “language bot.”

We share a physical world with our robots. Now, as well, we share our words.

¤


Patrick House (PhD, Stanford University) is a neuroscientist and writer. His nonfiction writing has appeared in Slate and The New Yorker online. He is currently a fellow at the Allen Institute for Brain Science, in Seattle, Washington.

Financial Disclosures: Author was hired by OpenAI as a contractor during his time writing alongside the “language bot,” known as GPT-2. As well, Christof Koch, mentioned in this article, is president of the Allen Institute for Brain Science, a nonprofit research institute at which Author is a fellow. Author states that neither company nor any individuals (nor robots) at either company had any input into the placement, editing, creation, or argumentation of this essay.

¤


Featured image: "bios [bible]" by Mirko Tobias Schäfer is licensed under CC BY 2.0.
