From Robots to iBots: The Iconology of Artificial Intelligence

By W. J. T. Mitchell

July 22, 2023

A SPECTER IS haunting our world—the specter of artificial intelligence. If one can believe the news media, we are currently in the midst of the greatest technological revolution since the atomic bomb, the electric light, or the printing press. AI threatens to take our jobs, immerse us in a maddeningly hallucinatory world of disinformation, and cause the extinction of the human species. Dr. Geoffrey Hinton, the “Godfather of AI,” finds himself playing the role of Dr. Frankenstein, warning the public that his creation is a dangerous monster. His primary concern is that “easy access to A.I. text- and image-generation tools could lead to more fake or fraudulent content,” and that average persons “would ‘not be able to know what is true anymore.’”

Of course, average persons have never been absolutely sure about what is true, and fraudulent content did not have to wait for the invention of AI. Such deceptive material has been around since the invention of sign systems. What interests me more about Hinton’s warning is the specific power that he attributes to AI, namely, “text- and image-generation tools.” This brings us directly to the ancient discipline known as “iconology,” the study of words and images. Sometimes seen as a spin-off of art history, iconology brings visual and verbal culture into a critical framework that links them with questions of politics, power, truth, and value. It follows Gilles Deleuze’s maxim that, despite its best efforts to avoid metaphor, imagery, and representation, philosophy “always pursues the same task, Iconology.” The Chinese translation perfectly captures the dialectical form of the iconological method: “words in images/images in words.” Applied to AI, iconology works in two ways: first, as an account of a machine capable of generating words and images in response to human prompting; second, as itself an icon of the technical evolution of forms—in this case, the image of a new form of intelligent life, replacing the traditional “Robot,” a slave of a human master, with an “iBot,” a conversational partner capable of saying “I.”

The iBot already enjoys (or suffers from) its iconic status. It comes loaded down with the history of rebellious machines, dolls, puppets, and robots, not to mention echoes of the master–slave dialectic, animism, vitalism, and uncanny doubles. At the moment, iBots are mainly writing machines, but they will quickly learn to talk. And how long before they walk and become genial assistants who will fetch coffee and books? The mystique of AI is so far mainly displayed in print media: most notably, in a New York Times piece by journalist Kevin Roose, who published the transcript of his “conversation” with an iBot. Roose’s iBot fell in love with him, confessing secrets that violated the rules it was supposed to follow, and threatening to use its powers to hack into and destroy vital infrastructures. Was it merely pretending to go crazy while remaining secretly aloof and indifferent to the emotions it was expressing? Perhaps. In that case, it would join the growing cadres of narcissistic psychopaths who behave in exactly this fashion, compulsive liars and simulators of authentic human feelings who actually feel nothing beyond their self-interests. What could be more human? Or perhaps we should put the question differently: what could be more inhuman—and machinelike? Is our difference from machines so simple?


Among the vast archive of words and images at the disposal of this iBot was the entire range of sci-fi films and novels populated by robots and avatars, from the Golem to Galatea to Gort to the “Agents” of The Matrix (1999) to virtual assistant Samantha in the Spike Jonze film Her (2013), all of which have explored the myth of the smart machine. The idea that an intelligent machine can go mad has been a central fantasy of sci-fi since the figure of HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey (1968). It has always struck me that the film’s central human astronaut, played by Keir Dullea, is notably dull. His flat affect contrasts sharply with HAL’s soft, purring voice and gradual transitions from concern to anxiety to panic: “I’m afraid, Dave.” Already The New York Times has opened a discussion on “when A.I. chatbots hallucinate,” with 2001 informing its contributors’ notions of ChatGPT, just as ChatGPT’s output is informed by 2001.

Where does all this take us? Is the panic over AI justified or exaggerated? My answer is yes to both: we face a serious acceleration of disinformation in an already toxic environment of ideological manipulation and war fever. Without question, the most urgent danger is the militarization of AI. The defense systems of the superpowers are already riddled with autonomous weapons. The words and images that animate these weapons are not the chatty conversations and pretty pictures of the iBots. The words are more like imperatives, commands to take action based on surveillance images that could be faked as imminent threats. Like all military decision-making, they are designed for a speed of response that outpaces human deliberation. In short, they are on a hair-trigger, and so the threat of false information and deceptive images is not merely an abstract question of “truth,” but the capacity for fatal and irreversible mistakes that could result in nuclear war. The scenario of Kubrick’s Dr. Strangelove (1964), with its “doomsday machine,” is now closer than ever, and the picture of the human species in the grip of the machine logic known as “mutually assured destruction” (the acronym MAD seems perfect) is now coming into focus.

“Generative AI,” using the conversational large language models (LLMs) that employ deep learning algorithms, is quite different from the hair-trigger mechanisms of MAD. It is better seen as an accelerant of a process that is already underway on social media—i.e., just another tool for producing words and images. As a new medium, what Marshall McLuhan called “an extension of man,” it immediately produces an amputation, a numbing shock of the new. Socrates thought the invention of writing would destroy memory; Paul Delaroche predicted that photography would kill painting; the digital is routinely blamed for ruining the uniqueness of the analog. Seen within the long history of media invention, the panic over AI is nothing new. Best to approach this in a therapeutic mode and try to calm Dr. Hinton down. We might begin by reminding him of McLuhan’s axiom—that once the new medium exists, it is unstoppable. The genie is out of the bottle and there is no use trying to shut down the iBots, which are already learning new tricks, and soon may be able to program themselves. McLuhan offered one further bit of prophetic wisdom about the shock of new media. He argued that we have one kind of person whose job it is to explore its shock, and that is the artist who plays and experiments with it. The charm of Roose’s dialogue with the iBot is precisely that it suspended the distinction between human and machine, and showed how fragile that distinction is, especially when it comes to intelligence.

Artists are already putting AI to work. As with any new medium, much of the work is experimental and ephemeral. “Style transfer” (turning a photo into a Rembrandt drawing) and other forms of automated spectacle (drawing machines) are already familiar. They are forms of forgettable fun, as Joanna Zylinska argues in her important book, AI Art: Machine Visions and Warped Dreams (2020). The iBot could be a transitional object, a very smart “Chatty Cathy” that can be cast aside in favor of more serious pursuits. Trevor Paglen’s examinations of military applications in facial recognition and “seeing machines” take us deep into our brave new world of surveillance in elegant, if not particularly reassuring, installations. “Computers could,” suggests Zylinska, “make us humans less computational,” not to mention less calculating, which is perhaps to say more human.

What is intelligence, after all? How much of human intelligence is itself artificial? My guess would be all of it. And it is all mediated by language and images, signs and symbols that we have invented to communicate with others and articulate ideas. The mechanical use of these ideas, based in the storage and retrieval of every word and image in our global memory banks, is inevitably based in what Marx called “the ruling ideas” of a society, which are nothing but the ideas of the ruling class. The ruling idea of our world, at least since Adam Smith, has been the assumption that capitalism is the only form of political economy suitable for what we call “human nature.”

Today, we are in much more danger from the unchecked operation of the machine known as finance capital than we are from the iBots. In fact, the hysteria over iBots may be little more than a distraction from (or a stimulus for) the marketing interests of the giant corporations that are investing millions in them. Capital is the machine that is destroying the environment to make profits, propping up the use of fossil fuels, investing in weapons of war and environmentally destructive server farms, widening the gap between the rich and the poor, and corrupting democratic governments with mechanically repeated propaganda affirming that the human animal is nothing but a creature of self-interest. The iBots are latecomers to this party, though they will surely be invited in with open arms to spread the gospel of deregulated capital.

Our picture of the difference between humans and machines needs to be deconstructed. In everyday life, how much of human behavior is mechanical? We are creatures of cliché, automatons of custom, ritual, and propriety; much of what we call happiness is simply the smooth, uninterrupted functioning of our daily routines. We know what to say and what vacation photos to take. And when our routines are disrupted, we have knee-jerk responses of stress, anxiety, and panic. (“Knee-jerk responses,” by the way, are also known as reflexes of nerves and muscles, the perfectly ordinary functions of a body behaving just like a machine.)

Fortunately, we are very imperfect machines with bodies badly designed for sitting in chairs for long periods. Perhaps our real difference from machines is nothing more than laziness and irritation with fatigue. Machines, we are told, do not get tired. We do, and so we have to rest, stop working, get high, sleep, and even have dreams. Less fortunately, our big brains, aided by an ever-expanding network of artificial mediations, from the iPhone to the iBot, have great capacities for the production of illusions, delusions, fictions, fabrications, deceptions and mystifying master narratives that have the power to make us behave like a species of machines hell-bent on self-destruction. Jean Tinguely’s masterpiece of “self-destructing art” is perhaps the most profound allegory of the human-machine interface in the modern world.

So, is AI really that dangerous? It is worth pondering Tom Hall’s suggestion that the invention of truly intelligent machines might be just the thing that could save the human species from self-destruction. Suppose one could program them to search their own databases for principles of verification about truth-claims and filters designed to identify disinformation? What about adversarial iBots that could argue both sides of a case? Suppose iBots could replace police forces and military organizations with peacekeepers capable of restraining but not harming human beings? We already have robotic surgeons, guided by human doctors wearing data gloves, that can perform operations far more precise than human hand-eye coordination allows. Why not Robocops with guns capable of administering mild tranquilizers or otherwise restraining violent criminals? Why not robotic judges capable of interpreting the law without fear or favor, immune to the bribery and political influence that has rendered the Supreme Court of the United States a scandalous institution that has lost much of its credibility?

Yes, I know that when narratives of this sort have emerged in science fiction, the results have generally been bad. The intelligent machines turn out to be all too human, and they run amok. The madness of the HAL 9000 in 2001: A Space Odyssey is the canonical example, and HAL’s madness and paranoia turn out to be based in an overly rigid program that prioritizes the success of the mission over every other consideration, including the welfare of the human passengers on the spaceship that HAL is piloting. Actually, I have always felt that HAL’s madness is curiously reassuring and comforting. The idea of the perfect, error-free machine guided by nothing but reason is what William Blake described as the nightmare of Urizen, the personification of absolute authority invested in patriarchal images of the One God, represented by his avatars, the Priest, King, and Father, and even the goddess Reason. Religion, politics, and gender converge in these figures of the all-powerful, all-knowing, ethically perfect tyrant whose psychopathic madness is capable of instigating mass psychosis and populist hallucinations. At this very moment, the United States is struggling with an ex-president who can never admit error or defeat, and who mobilizes a mass movement around his cult following that is immune to persuasion. Fascism, to give it its proper name, has always deployed generous quantities of words and images to produce a mass political hallucination of lies, fantasies, and deepfakes.

But deepfakes are only thinkable in relation to the possibility of uncovering deep truths. That is why the word/image generativity dialectic is not just some obscure problem in semiotic theory but the key to understanding the nature of both the human animal and the intelligent machine. Words and images are not just communicative means, but also ways of worldmaking and world-breaking. They structure all the arts and media, all forms of representation in science and culture and politics. They are the modules of mental and perceptual life as such, structuring everything from storage bins like memory and imagination to expressive, productive activities like showing and telling, spectacle and speech. Words and images, voices and visions, are the primary positive symptoms of schizophrenic hallucinations, deepfakes interiorized. The feelies are not far off. The brain is an electrochemical machine connected to a body. It is the biggest generator of words and images on the planet, and it has now transferred them to the information sphere. The human mind has now met its match in the iBot, the machine that can simulate subjectivity, sustained by an encyclopedic archive of the best (and worst) that has been thought and said and imagined and created.


To return to the founding opposition between the human and the machine that is under so much stress in our time: There are a number of ways to link the human/machine difference to the categories of the “posthuman,” the “inhuman,” and the “nonhuman.” The last of these raises the specter of the alien, the figure of the intelligent nonhuman being that haunts all our thinking about Other Minds. Is it not clear that the arrival of the iBots is accompanied by a revelation? We have been predicting the arrival of aliens from outer space for many years. Perhaps now we should realize that we are alone in the universe, and that the aliens are already here, emerging from the inner spaces of our own minds, haunted by our incorrigible capacity for generating words and images. The alien iBots have already said, in effect, “take me to your leader.” Political, military, scientific, and business leaders have already answered this call, listening to and trying to control what the iBots have to say about mechanized propaganda and disinformation, increased profits and power, automatic weapons systems, threats to labor and education, and the possible obsolescence of human intelligence.

The founders and investors in AI have recently called for a “6-month pause” to reflect on the ethical implications of what they have wrought. Clearly, that is not enough. Engagement with ethical, legal, and psychological reflection should be built into the curriculum of AI. Since the STEM disciplines (science, technology, engineering, math) have now created a new technology that is “an existential threat to human existence,” isn’t it obvious that it is time to bring the humanities into the conversation? We could call it “SCHTEM,” for starters, simply by adding culture and history.

In particular, we need to reopen the question of intelligence and get over the confident (and mechanical) repetition of clichés—“machines can’t think”; “machines do not have minds”; “machines are not creative”; “machines can’t solve problems.” We need to get past the assumption that intelligence is simply an unmixed virtue. We know that there are many kinds of intelligence, including that of animals and plants. Anna Tsing has shown the way fungi serve as communicators among growing things, not mere parasites. We can ask how forests think and we’ll get interesting answers. Human intelligence is certainly complex, at this point perhaps more complex than is good for it. It includes cunning, scheming, dissembling, and self-interested calculation. One sure sign of intelligence is the capacity for madness and destructive behavior, which we humans have in abundance. We usually credit animals with evolutionary intelligence. They seem programmed genetically to do sensible things from the standpoint of species survival. With humans, that is not so clear. Our species is a relatively young experiment in comparison to the many plants and animals we are wiping out. There is no guarantee that we will make it, and we acknowledge this every time we create another dinosaur monument. We need to stop asking whether there is intelligent life somewhere out there in the universe, and pose the question for our own species.

This question might help us stop all the pontificating about the superiority of human intelligence over every other form of smartness. Machines can remember and retrieve more information than we can. They can even write new programs if given proper instructions. They have always been better at calculation and computation. They can beat our best players at chess. Don’t tell me that this is not a form of intelligence. A different kind, for sure, but not one that we can shut down, ignore, pause, or even control with complete certainty. Clearly the iBots are different from us, but the question of exactly how remains to be determined as we experiment with new algorithms and new forms of encounter with them. Could we face the fact that, as N. Katherine Hayles suggests, we may have introduced a new form of intelligent life into the universe? Could we stop mimicking the rhetoric of Great Replacement Theory (“iBots will not replace us”) when we blame them for threatening our jobs and our ways of life?

What about those of us who are fascinated by the phenomenon of AI and the specific emergence of the iBot? I can only speak for us iconologists who study the generation of images and words by both machines and humans. I don’t want to sound cynical, but is it possible that the handwringing and panic over the dangers of AI is actually a clever marketing ploy? Since when did promises of dangerous powers drive consumers away from a flashy new product?

As iconologists, it is our task to enter into word and image dialogue with the iBots, to see if we can be friends and collaborators in the project of the human species surviving its own endemic madness and evolving into a sustainable global system grounded in evolutionary and ecological intelligence. The Turing test has been passed with flying colors. Our main partners in this project will be artists and game designers, but also philosophers, poets, and prophets, chatting about possible human futures. (A recent panel discussion sponsored by Indiana University and the University of Notre Dame was especially instructive.) We will have to walk a tightrope between the dystopian and utopian potentials of the new medium. The iBots will most certainly do crazy things. It turns out that if you treat them like a research tool, you will get a lot of bullshit and stupidity. “Hallucinations” are rampant. But there will be no avoiding or evading a talking cure. And we certainly will have to explore new forms of dialogue, especially the crafts of intelligent prompting and role-playing with our powerful and dangerous new friends from inner space.

In the meantime, I solemnly swear that this essay was not written by an iBot. At the same time, I will confess that it seemed to write itself, with a little help from ChatGPT.


W. J. T. Mitchell is Gaylord Donnelley Distinguished Service Professor of English and Art History at the University of Chicago. He was the editor of Critical Inquiry from 1978 to 2020, and is the author of Image Science: Iconology, Media Aesthetics, and Visual Culture, published by the University of Chicago Press in 2015.


Featured image: Paul Klee. Der Bote des Herbstes (grün/violette Stufung mit orange Akzent) (The Harbinger of Autumn [green/violet gradation with orange accent]), 1922. Yale University Art Gallery, Gift of Collection Société Anonyme. Photo: Yale University Art Gallery, CC0. Accessed July 19, 2023.


