The Vices of Virtuality

August 15, 2021   •   By Kieran Setiya

Inwardness: An Outsider’s Guide

Jonardon Ganeri

Touch: Recovering Our Most Vital Sense

Richard Kearney

Intervolution: Smart Bodies Smart Things

Mark C. Taylor

THE CORONAVIRUS PANDEMIC is, among other things, a vast uncontrolled experiment in social psychology. Billions have lived for more than a year with drastically limited in-person interaction, many of them working virtually, if at all. Those forced to labor in proximity to others — the essential workers — have faced persistent bodily risk. We have leaned hard on Zoom and FaceTime to connect with family and friends, distant or otherwise. The digital mediation of human life has intensified abruptly as tactile contact is repressed. What will be the fallout of isolation on a scale unimagined 18 months ago?


We can expect a wave of books predicting the imminent future. What we might not expect is that the wave began before the pandemic hit, in a prescient new series edited by Costica Bradatan for Columbia University Press. According to its somewhat enigmatic blurb, the “No Limits” series “brings together creative thinkers who delight in the pleasure of intellectual hunting, wherever the hunt may take them and whatever critical boundaries they have to trample as they go.” By coincidence or design, three of the first four volumes take up questions of virtuality: the technological interpolation of human bodies; the centrality of touch; and introspection divorced, so far as possible, from the exigencies of the physical world.


The least timely of the three is also the most recent: Jonardon Ganeri’s Inwardness, a work of dazzling compression and eclectic research. Ganeri explores the nature of selfhood through the metaphors we use to capture it, testing these metaphors against arguments from the Upanishads, Buddhism, Daoism, and Simone Weil, as well as fictions by 20th-century authors including Ryūnosuke Akutagawa, Jorge Luis Borges, Kōbō Abe, and Fernando Pessoa. All this in little more than 100 eloquent pages.


For the luckier among us, the pandemic has afforded time for introspection. But what do we find when we look within? Some thinkers use metaphors of physical space, the “fields and […] grand palaces” of St. Augustine’s “inner world,” also pictured as a library. Ganeri is skeptical: “[I]f the library of memories is the self,” he asks, “how can one go inside it — who is the ‘I’ that walks around inside itself?” The spatial metaphor is skewed. Equally misguided, for Ganeri, is the Buddhist metaphor of the self as a lamp that illuminates itself as it illuminates the world. The problem, raised by Nāgārjuna in the second century CE, is how to illuminate what was never dark. In Ganeri’s words: “We can’t describe something as ‘self-cleaning’ if it never gets dirty, and we can’t describe interiority as ‘self-illuminating’ if it is never concealed.”


There are other dubious metaphors on the way to one Ganeri finds more promising. With nods to Kierkegaard, Borges, and the Daoist parable of Zhuang Zhou (who dreams he is a butterfly and on waking can’t be sure he’s not a butterfly dreaming it’s Zhuang Zhou), Ganeri finds a “doubling” in the self, as we confront a constructed persona, or more than one. In this way, our self-relation is like a relation to someone else. The most insistent champion of this model is Fernando Pessoa, the Portuguese poet who wrote not under pseudonyms, but as multiple selves, which he called “heteronyms”: “Assuming one heteronym, Pessoa is a pantheistic naturalist poet Caeiro; assuming another heteronym, he is a neoclassical poet, Reis; assuming a third, he is the hedonistic Campos.”


There are less exalted enactments of self-division in the work of stand-up comics who perform under their own names. Ganeri cites Mae Martin, Amna Saleem, and Liam Williams, but he might have turned to the British comedian Stewart Lee, who berates his comedic self in Content Provider (2018):


I wish I could appear on Sky for the money; I wish I could, right, but I can’t. Because the character of Stewart Lee that I’ve created would have smug, liberal, moral objections to appearing on Sky. And I’m coming to hate the character of Stewart Lee. […] I even hate this, what I’m saying now. Pretentious, meta-textual, self-aware shit. “What’s wrong with proper jokes?” That’s what I say to me.


On the model Ganeri floats, the offstage Lee is no more real than the onstage character: each is a heteronym. “I” is no more than a term for the occupant of the subject position, whoever that might be: “Pessoa’s remarkable discovery was that there is no constancy in the fact of who is there at the center.” The doubled self is polymorphous, reflexive, playful.


These are exhilarating thoughts. They radicalize scenarios in which the first person is unmoored from the human being one seems to be. Thus, I can dream that I am Babe Ruth, swatting a home run, without dreaming, incoherently, that Kieran Setiya is Babe Ruth. Does it follow that I am not a particular human being but “something whose essence is inwardness, something which is just as much there in dreams inside of dreams as it is in waking experience”? The medieval philosopher Avicenna asks us to imagine blazing into being in mid-air, without sensations of touch or taste or smell or sight or sound. If we can think of ourselves as we fall, Avicenna argues, we do not refer to any material thing, since nothing is physically given to us.


These arguments are provocative, but I don’t think they quite work. Yes, the concept “I” leaves open who or what I am. But unless we assume that my true nature is revealed by introspection — and why should we? — we can’t conclude, with Avicenna, that what I am is not a human being. Nor is it safe to reduce what is given in ordinary life to what is given introspectively in dreams, or in freefall ex nihilo. As philosophers have known for at least a century, the phenomenology of ordinary introspection is “transparent”: when we “turn inward” on experience, emotion, or thought, what we find is just the field of objects around us, the threats and promises they make, the facts themselves. We “see through” our minds to the world they represent.


This idea goes back to the beginning of analytic philosophy. Here is G. E. Moore in “The Refutation of Idealism” (1903):


[T]he moment we try to fix our attention upon consciousness and to see what, distinctly, it is, it seems to vanish: it seems as if we had before us a mere emptiness. When we try to introspect the sensation of blue, all we can see is the blue: the other element is as if it were diaphanous.


You can experience this yourself: looking at a blue wall, try to focus on the mental sensation of color, not the wall itself. You’ll find there’s nothing there. The point survives in contemporary work on self-knowledge inspired by Gareth Evans, an Oxford philosopher who died in 1980 at the age of 34. If you want to know whether you believe there will be a third world war, Evans advised, ask yourself whether there will be a third world war — your answer to the second question, a question that is not about you, provides your answer to the first. Again, we “look through” mental states to what they are about. This is the “transparency” of self-knowledge.


To be fair to him, Ganeri is alert to these ideas. He develops them in relation to Akutagawa’s “In a Grove” (1922), a story now best known as the source for the movie Rashomon (1950), directed by Akira Kurosawa. In the story, and to a lesser extent the film, subjectivity is manifested not in thoughts about oneself but in how one represents the world: “Off the world’s particular appearance,” writes Ganeri, “you read your own state of mind.”


In the metaphor of transparency, Pessoa’s multiple subjective heteronyms can only be distortions of reality; to toy with them is to mock objective fact — a vice diagnosed by the French philosopher Simone Weil in the mid-20th century. Ganeri leaves us torn between Pessoa’s playful inwardness and Weil’s “decreation”: “[T]he destruction of self that makes room for the possibility of consenting to what is real.”


I side with Weil and the transparency of introspection: it is through knowing other things, and other people, that we come to know ourselves. What we suppose in sensory deprivation or in dreams is not a useful guide to the nature of anything, not even us. In the same way, the sensory deficit of the COVID lockdown is not an asset to introspective knowledge: we lose touch with ourselves as we lose touch with the rest of the world.


¤


It’s not an accident that touch becomes a guiding metaphor here. It’s through our tactile senses that we find ourselves embodied as the animals we are. Cue Richard Kearney’s Touch, which tackles “the crisis of touch in our time — an age of simulation informed by digital technology.” If the crisis lay in waiting 18 months ago, now it is acute. We have lived through an unparalleled diminution of social and physical contact, a wild escalation of life online: virtual, audiovisual, incorporeal. What has this done to our sense of ourselves?


Kearney traces both the theory and practice of touch, contesting the “optocentrism” of Western philosophy since Socrates and Plato, who denigrate the body and incline to metaphors of sight and sun. Yet, Kearney argues, touch is our first, most basic sense, infusing all the others. He enlists on its behalf the teachings of Aristotle in De Anima (On the Soul), contending that, for Aristotle, “human perfection is the perfection of touch.” But this is hard to square with Aristotle’s celebration of theoria — contemplation, though the word means “sight” — as the highest human good. A more plausible front man for somatic philosophy is Diogenes the Cynic, a contemporary of Plato who called his airy dialogues “a waste of time,” who lived in a storage jar in the streets of Athens, and whose philosophy took the shape of performance art. When a follower of Zeno claimed to refute the existence of motion, addressing a rapt and baffled audience, Diogenes got up and walked away — an act of performative refutation.


Kearney’s book invites us to philosophize with the body, drawing on the phenomenology of Edmund Husserl and Maurice Merleau-Ponty. At the root is what Husserl called “double sensation”: the experience of touch is always an experience of being touched. When the object is another person, “touch serves as the indispensable agency of intercorporality — and, by moral extension, empathy.”


The practical upshot is a model of healing — even psychic healing — in which touch plays a pivotal role. Kearney recounts the myth of the centaur Chiron, who was mortally wounded by Herakles. Unable to cure himself, Chiron used his experience of pain to cure the pain of others, teaching his student Asclepius “the art of healing through touch.” Kearney contrasts this with the intellectualism of Hippocrates, the “father of medicine.” Thus Freud, credited by Kearney as the founder of trauma therapy, “opts for a Hippocratic model of psychoanalytics over a more Asclepian model of psychohaptics,” thereby missing “the key role of tactility in therapy.” Kearney presents compelling evidence for the therapeutic power of touch, from the treatment of torture victims to the physiology of childhood trauma and more quotidian effects on stress, immunity, and sleep.


Kearney tends to focus on the reparative role of contact, not the ecstatic pleasures of touch — though we may be missing both these days. The closest Kearney gets to sex is Saint Teresa’s vision of an angel with a “spear of gold […] thrusting it at times into my heart.” “The pain was so great, that it made me moan,” she writes, “and yet so surpassing was the sweetness of this excessive pain, that I could not wish to be rid of it.” But the book concludes with a seminar at Boston College, where students explore the “flight of erotic-romantic behavior […] from communal rituals to digital fantasies” as pornography proliferates and dating goes online.


The sole disappointment of Kearney’s study, which begins with rousing turns of phrase — “as cyber technologies progress, proximity is replaced by proxy”; “excarnation is flesh becoming image” — is that it ends in capitulation. “Like the hair of the dog,” Kearney wonders, “might the best response to digital abuse be digital reuse? […] [D]igital technology putting itself in question and reopening spaces where we might invent new ways to reinhabit our world.” A coda mentions VR “telehaptics,” technologies that create a simulacrum of touch. There is no attempt to argue for the merits of this techno-future, just a muted air of inevitability.


Kearney’s case for tactility might animate a rallying cry for resistance to excarnation, a call for lives of interaction face-to-face and hand-in-hand. This is not where Kearney finally winds up — though I couldn’t say why.


¤


It’s a different story with Mark C. Taylor, whose book Intervolution is about the “Internet of Bodies” — the digital networking of the human animal, modeled on the Internet of Things. Its talisman is the artificial pancreas on Taylor’s belt, a device that monitors his need for insulin, delivers it without injections, and helps him regulate his type 1 diabetes.


Taylor narrates two overlapping histories. One is the history of diabetes, from its description in the first century CE as “a melting down of the flesh and limbs into urine” to the discovery of insulin in 1921, the first insulin pump prototype in 1963 (“as big as a heavy backpack”), and the cell-phone-sized device that Taylor wears. The other is the history of artificial intelligence from Wolfgang von Kempelen’s mechanical chess-playing Turk in 1769 — in fact, an elaborate fake — through algorithmic AI to machine learning and its applications in the present. “My artificial pancreas brings together all the technologies and functionalities I have been considering,” he concludes, “artificial intelligence, high-speed computers, neural networks, deep machine learning, wireless networks, sensors, transmitters, Big Data, mobile devices, wearable computational machines.”


Taylor is clear-eyed about the dangers of these technological marvels: “[I]n industrial capitalism, those who own the means of production hold the power. In today’s world, those who own the means of gathering information and processing data hold the power.” In the age of surveillance capitalism, the might of corporations like Amazon, Facebook, and Google not only concentrates wealth but disrupts the fragile purchase of democracy. And yet, Taylor observes,


[T]he same image-processing and voice recognition technologies, as well as tracking devices that are being used for political, economic, and even criminal surveillance, and apps that are being used for real and fake targeted political ads and customized marketing are also being used to monitor patients and deliver precision medical care.


We are left to weigh the pros and cons. On one side, the prediction that nearly half of American jobs could be replaced by AI within 15 years, as inequality soars. On the other, 34 million Americans with diabetes now, and a CDC projection that by 2050 the rate will be more than one in three. Again, on one side, accelerating threats to privacy as we are surveilled by new technologies. On the other, the need for access to big data in medical AI. Taylor’s own assessment is clear: “While prudent oversight is undeniably advisable and necessary, it would be a serious mistake to curtail the development of technologies that hold enormous promise for improving our lives and alleviating suffering.”


But he never actually does the math. It is acquired or type 2 diabetes that drives the epidemic forecast by the CDC. It would be perverse to take this as a given, pushing technological “solutions,” instead of trying to prevent or mitigate the problem with interventions in public health. The new biotech will only be available, anyway, to those with adequate health insurance — which mass unemployment would preclude. Nor does Taylor pause to ask why Americans should want to protect their medical privacy. Part of the answer is that they are afraid they’ll be denied insurance: a function of privatized health care.


Taylor puts the critic in a difficult position: no one wants to block life-saving innovations or impede the technologies on which his health depends. But the advance of technology is almost entirely unmoored, at present, from its social value — unless you buy the preposterous alchemy in which economic growth is good for everyone. There is an urgent political need to regulate and restrain the tech giants that increasingly control our lives. That may prove impossible; and if it happens, there will be costs.


It is less difficult to dissent from the speculation that preoccupies the end of Taylor’s book, which veers into “techno-futurism.” Taylor is inspired by the performance artist Stelarc, who


has transformed his body into a constantly morphing work of art by swallowing miniature cameras, then projecting their movement through his intestines, and by adding prostheses like a robotic third arm activated by electromyography (EMG) signals transmitted from his abdominal muscles.


Taylor quotes an interview with the artist: “Technology is what defines being human. It’s not an antagonistic alien sort of object, it’s part of our human nature. […] My attitude is that technology is, and always has been, an appendage of the body.” For Taylor, this does not go far enough:


And, I would add, the body is an appendage of technology. While Stelarc’s performances effectively stage the activities of an extended body, he pays insufficient attention to the way neural networks, deep learning, and artificial intelligence create an extended mind. […] [He] does not acknowledge the likelihood that the interevolution [sic] of smart things and smart bodies will lead to new forms of life that surpass human being.


The title of the book, “intervolution,” is not a neologism: Taylor traces the word back to John Milton in 1667 and Nathaniel Hawthorne in The Scarlet Letter (1850), as if to suggest that we were always already cyborgs, technologies and bodies wound together. But dreams of “surpassing” the human are both disturbing and unfounded.


Taylor repeats the fantasies of Silicon Valley millionaires such as Ray Kurzweil: “[I]f information and, by extension, intelligence could be separated from matter and biological bodies, the ancient dream of immortality might become technologically feasible.” “Remember,” he cautions with unintentional comedy, “Kurzweil is not some crazed crackpot […] he is the director of engineering at Google.” Kurzweil’s dream of immortality for the 0.1 percent is the apex of inequality. It is also impossible: as a simple argument shows, you can’t cheat death by “uploading” your mind to a machine — even if machines someday become conscious. Imagine you are uploaded but by chance continue to live, the “data” in your brain preserved through the copying process. There are two thinkers now: you, as you were, and the machine. It is at best a mental duplicate of you, not you. But then the same is true if you are uploaded as your body dies. Digital immortality is a myth.


Two faults converge here. One concerns the nature of the self: dreams of independence from the body, like those explored by Jonardon Ganeri, are far from harmless. There is a direct line from ideas like these to the projects of Kurzweil and his ilk through the work of the “transhumanist” Max More, who wrote a doctoral dissertation on the philosopher Derek Parfit. The second fault is technological determinism: “learned helplessness” about the development of AI, as at the end of Kearney’s book. In a 2019 interview with The New York Times, the philosopher David Chalmers spoke wistfully about the human brain: “At some point I think we must face the fact that there are going to be many faster substrates for running intelligence than our own. If we want to stick to our biological brains, then we are in danger of being left behind. […] Ultimately, we’d have to upgrade.” It’s as though the coming of superior AI were out of our control.


One of the risks of the pandemic, with its enforced virtualization of human life, is to cement this sense of inevitability: the belief that we can only accommodate ourselves to the march of technology, never stall or redirect it. The result is a kind of Stockholm Syndrome in which we embrace our digital captors. But the changes of the last 18 months — isolation, loneliness, tactile deprivation, the growing mediation of social life — need not be permanent. Think back to Diogenes, living in a storage jar, performing his bodily functions in public space. Diogenes reminds us that technologies — even the basic technologies of shelter and plumbing — are matters of human choice. The point is not that we should reject them, one and all, but that we ought to choose.


While the main incentives to innovation remain financial and the regulation of new technologies is poor, we should expect the applications of AI to exacerbate social and economic inequality, hinder democracy, and distort our conception of ourselves. I am not sure we can stop this, or how to preserve the good — for instance, Taylor’s artificial pancreas — without the bad. But I do think we should try.


¤


Kieran Setiya teaches philosophy at MIT. He is the author of Midlife: A Philosophical Guide and is working on a new book, Life is Hard.