Intelligent or Communicative?: On Elena Esposito’s “Artificial Communication”

By Paul J. D’Ambrosio | May 24, 2023

Artificial Communication: How Algorithms Produce Social Intelligence by Elena Esposito

WE FREQUENTLY HEAR about spectacular innovations in artificial intelligence and how they have come to affect nearly all aspects of life. In some areas, such as policing or courtroom sentencing, we have learned to be cautious about the potential for bias. But in many other areas, including important fields like medicine, and everyday applications from Google Search to driving directions and flight bookings, our world is more convenient, richer, and safer because of AI. As much as we may fear these technologies—and their varied futures in self-driving cars, autonomous weapons, and robot companions—there is one thing we cannot deny: AI is already quite intelligent, and only becoming more so.

In her new book Artificial Communication: How Algorithms Produce Social Intelligence (2022), Elena Esposito begs to differ. She offers an entirely different framework for thinking about the machines we interact with every day: we are better off conceiving of them as communication partners than as “intelligent” beings.

The latest AI sensation is GPT-3, a large language model that uses deep learning to produce “humanlike” text. When my co-author and I fed it our latest philosophy monograph, You and Your Profile: Identity After Authenticity (2021), and asked it for a review, it turned out some remarkable prose. The entire text read as if it were composed by someone well informed and relatively skilled at writing. For example: “The authors argue that social media platforms have fundamentally changed the way we communicate and interact with one another, and that this has had a profound impact on our sense of self and our relationships with others.” It would be difficult to argue that whoever—or whatever—came up with this sentence is not intelligent. Indeed, machine intelligence seems like such a foregone conclusion that we no longer ask whether computers are intelligent at all. Our focus has turned to “just how intelligent are machines?” and, relatedly, “can they become more ethical as they become more intelligent?”
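For readers curious about the mechanics of such an exchange, here is a minimal sketch of prompting GPT-3 through OpenAI’s completions interface as it existed in this period. The model name, prompt, and settings are illustrative assumptions, not the exact setup used for the review quoted above:

```python
# A minimal, illustrative sketch of prompting GPT-3 via OpenAI's
# completions API (the 2022-era Python interface). The model name,
# prompt, and settings are assumptions for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model
    prompt=(
        "Write a short review of the philosophy monograph "
        "'You and Your Profile: Identity After Authenticity'."
    ),
    max_tokens=300,
    temperature=0.7,  # allow some variability in the prose
)

print(response.choices[0].text.strip())
```

Nothing in this exchange requires the model to understand the book; it simply continues a prompt with statistically plausible text.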

In her book, Esposito takes the view—a view in fact shared by many of the computer scientists who build AI systems—that computers are not intelligent, at least not in the way we normally think they are. She uses flying as a guiding metaphor. Humans learned to take to the skies when they stopped trying to fly like birds. With computers, we have a similar phenomenon. AI technologies truly took off when they stopped trying to mimic human intelligence. Google’s PageRank algorithm, which completely revolutionized the internet and AI, is a good example. Rather than trying to understand what someone is looking for, Google Search filters for what people in general are, or might be, interested in. It does not “understand” questions or search queries. Rather, it reflects choices made by previous users, essentially summarizing widespread preferences. According to Esposito, this demonstrates communicative competence, not intelligence.
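A toy version of PageRank makes the point concrete. The algorithm scores a page purely from the linking choices other pages have made; no step involves understanding a query or a page’s content. The four-page link graph below is invented for illustration:

```python
# Toy PageRank via power iteration. A page's score derives entirely
# from the linking choices of other pages, never from any
# "understanding" of content. The four-page link graph is invented.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
damping = 0.85  # standard damping factor from the PageRank paper
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores stabilize
    new_rank = {}
    for p in pages:
        # Rank flows to p from every page q that links to it,
        # split evenly among q's outgoing links.
        incoming = sum(
            rank[q] / len(links[q]) for q in pages if p in links[q]
        )
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print({p: round(r, 3) for p, r in rank.items()})
```

Page C ends up ranked highest simply because more pages point to it: a summary of others’ choices, exactly as Esposito describes.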

Translation is another great example. As a translator of philosophical texts, I often deal with fairly difficult material. A few years ago, online translation technologies were basically useless. Now I can plug in pages of complex philosophical Chinese and get a decent AI-generated translation. The AI output still needs to be edited, and there are often more than a few minor mistakes, but overall, the translation is remarkably useful. How is this possible? As Esposito explains, AI no longer seeks to “understand” either the Chinese input or the English output. And this is precisely how AI applications can be such “intelligent” translators. Once AI stopped translating like humans, it became quite proficient in translation.
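The scale of that shift is visible in how little code it now takes. The sketch below uses a pretrained Chinese-to-English model from the Hugging Face Hub; the Helsinki-NLP/opus-mt-zh-en checkpoint is one publicly available option, named here purely as an example, not as the engine behind any particular translation service:

```python
# Sketch of off-the-shelf Chinese-to-English machine translation
# using a pretrained model from the Hugging Face Hub. The checkpoint
# is one public example, not any particular service's engine.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")

# Opening line of the Daodejing, a notoriously hard test case.
result = translator("道可道，非常道。")
print(result[0]["translation_text"])
```

The model maps character sequences to word sequences statistically; at no point does it grasp what the Daodejing means, which is precisely Esposito’s point.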

Again, the core issue with AI is not “intelligence”; that is not even the right question to ask. What is unique about AI is its ability to communicate. As Esposito notes, we are so used to communicating with humans or other forms of “intelligence” that we judge AI communication as “intelligent.” When we look closer, however, we find that AI has perfected an ability to communicate while sidestepping the need for intelligence. The much-touted Turing test—which proposes that we can call a machine intelligent when it can communicate with a human who fails to realize they are conversing with a nonhuman—is in fact a test of communication, not of intelligence. It tests whether a person thinks their communication partner is another human; it does not ask about intelligence per se.

Another way to think about AI is to reflect on its need for big data. Human behavioral data is an increasingly valuable resource. The algorithms that constitute AI are all but useless without data sets; given enough data, and sufficiently complex algorithms, machines can dominate chess, drive cars, and write reviews of philosophy books. On its own, AI cannot do much. Moreover, the most impressive examples of AI, often taken as the height of its intellectual abilities, are by and large those that rely most heavily on data derived from humans.

This also means that bias is, on some level, necessarily built into AI. There are biases in the way the systems are designed, since they are designed by humans. Confirmation bias is one example: humans tend to accept, and thereby reinforce, the results that AI produces. Perhaps the biggest issue—and one that can probably never be fully remedied—is that the data itself is biased because it is human data. Esposito notes that while we can correct many of the more obvious distortions, such as the skew toward white male preferences, some level of human bias will likely always remain.

If we shift our understanding of AI away from “intelligence” and toward “communication,” as Esposito proposes, we can both correct for bias and cope with it far more effectively. We can recognize the development of AI as oriented toward efficacy in communication and, as Esposito’s subtitle suggests, the production of social intelligence. We can work with AI, but we do not have to rely on it or assume it has higher-than-human objectivity.

Esposito’s insights can also be carried over into ethical reflection. Currently, developers need to focus on making AI itself more ethical: if we are going to make decisions with AI, we want to eliminate as much bias as possible from these technologies. But the problems go far beyond the biases of the programmers themselves or the structure of AI. As mentioned above, AI requires big data to run, and data drawn from humans is inherently biased. Esposito’s focus on communication also exposes another layer of AI ethics: as a particular type of communication partner, AI can only pick up on certain aspects of the world and mark specific inputs or relations as relevant, and these operations do not always align with how we think about ethics.

Figuring out who should get a ventilator, or whether to prioritize the safety of pedestrians or passengers in self-driving cars, might be translatable into ones and zeros—we may reasonably design clear and precise rules around these issues. But the sense of moral acuity we hope to instill in children when we teach them to say “thank you” or “sorry” in certain situations is different. In fact, most of our ethical issues are rather mundane and require acute attentiveness to specific elements of unique situations: how to deal with a problematic boss, a flirtatious spouse, or students who keep staring at their cell phones during class. Ethical problems are often ambiguous, and their ambiguity cannot simply be defined away. If we are really attentive to our situation and to those around us, “big data” and well-defined procedures can be useful only when heavily adapted by the person drawing on them.

Even if we can, or do, end up writing algorithms for saying “thank you” or for dealing with everyday interpersonal problems, relying on AI to solve ethical issues would only mark a loss. Esposito writes: “In pursuing the goal of algorithmic justice, then, the most difficult problems are communicative, not cognitive.” So, just as we can appreciate that we communicate intelligently with AI, we might come to recognize that we can use AI ethically. But demanding that AI itself become perfectly ethical is like demanding that planes flap their wings. Like most tools, AI can be used for ethical or unethical ends—it all depends on who is wielding it, how they do so, and for what purpose.

Elena Esposito’s book doesn’t just serve as a corrective to labeling algorithms as “intelligent.” Artificial Communication unsettles and expands how we think about AI at the most fundamental levels. It helps us move away from the assumption that computers and technology are always correct, or that they even could be. Viewing AI primarily as a type of communication helps us retain human-centric thinking, even when powerful technologies are employed.

¤


Paul J. D’Ambrosio is a professor of Chinese philosophy at East China Normal University in Shanghai, China, where he also serves as dean of the Intercultural Center. He mainly writes on Daoism, medieval Chinese thought, contemporary profile-based identity formation, and the relationship between humans and AI/algorithms.
