The Tyranny of Neutrality in “AI 2041”
By Virginia L. Conn
October 30, 2021
AI 2041: Ten Visions for Our Future by Kai-Fu Lee and Chen Qiufan
At the heart of AI 2041 is fiction’s presumed ability to imagine, predict, and, at some level, shape the future. Outlets like Wired report that Chinese science fiction authors like Chen Qiufan have been “elevated to the status of oracles,” with Chen’s work described as “science fiction realism,” a term used since the 1950s to characterize science fiction that critically evaluates the present level of technology and society. The idea that SF does something is increasingly being taken up in science and technology studies, too, with a special issue of Osiris dedicated to science fiction and the history of science, including two articles devoted to Chinese science fictional forms and their real-world effects. International policy historian Julian Gewirtz has written a well-regarded article on how Alvin and Heidi Toffler’s Future Shock influenced Chinese science and technology policies in the period of reform and opening-up that began in the 1980s; today Chen is widely seen as the successor to this type of science fictional speculation in the tech world. Chen’s quasi-prophet status is central to AI 2041’s conceit that it might not be long until these stories become all too true.
As such, the selling point of the book is that, while its science fictional visions are themselves only imaginary, they draw on real technology and represent real possible paths for the development of AI. And not in a distant future, either: AI 2041 takes its name seriously. Lee claims that the technologies in the book have at least an 80 percent likelihood of coming to pass by 2041, with the book representing “a responsible and likely set of scenarios.” This prediction may seem far-fetched, but readers are likely to recognize many of the technologies already present in their own lives here in 2021: in the first story, “The Golden Elephant,” for example, Chen introduces an insurance company that uses deep learning technology to make predictions affecting individual premiums, while in “Contactless Love,” players find love through participating in online roleplaying games amid the ravages of COVID-19.
AI 2041, then, with its unique marriage of factual explanation and fictional short stories, joins the ranks of a genre that will be familiar to most readers: the techno-utopian promise. But it is perhaps more legible through the lens of “technological solutionism,” a term coined by tech writer Evgeny Morozov to describe the idea that complex social phenomena — everything from politics to education to health care to agriculture — can be understood as “neatly defined problems with definite, computable solutions or as transparent and self-evident processes” subject to easy optimization: “[I]f only the right algorithms are in place.” The ideological framework of solutionism shifts our view of the world to redefine things such as inefficiency or human relations as problems with a technological solution. The technology itself, Lee and Chen claim, is neutral; it is human behavior that is unethical or inefficient, requiring technological intervention. We see this shift toward “neutral” technological solutionism in “Twin Sparrows,” for example, a story in which a character’s autistic behavior is mollified not through a change in social or kinship structures, but through the introduction of an AI companion that can uniquely understand him. Lee’s exuberant promotion of AI technologies as the solution to most existing social problems not only addresses those problems themselves, but also redefines human behavior as a problem in and of itself that can be solved through the application of neutral or even benevolent AI.
In fact, AI is explicitly and frequently described by Lee throughout the text as “neutral” — an objective technology that only acquires ethical value through its use by humans. This is a popular idea in technological innovation spheres, and it is also a wrong one. Because such technologies are developed by private corporations and their IP holdings are heavily guarded, it can be almost impossible to discover how a particular artificial intelligence or algorithm was designed, what dataset or corpus helped build it, or even how it works at all. The act of developing an AI involves training it on massive quantities of data, and as Lee points out in his explanation of deep learning, the program comes to recognize patterns that guide its predictive abilities. When the predicted outcomes or patterns replicate human biases inherent in the original data, the algorithm recreates systemic, repeatable errors resulting in the perpetuation of discrimination. Lee and Chen recognize this — the very first story in the collection, the aforementioned “The Golden Elephant,” deals with it explicitly, in fact — but they attribute this to human bias only and do not engage with the possibility of bias being an inextricable part of the technology itself.
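The mechanism described above — a model trained on historical data reproducing the biases baked into that data — can be made concrete in a few lines of code. What follows is a minimal illustrative sketch, not anything from Lee or Chen; the data and names are entirely hypothetical, and simple frequency counting stands in for deep learning, since the pattern-recognition step is the same in miniature:

```python
from collections import defaultdict

# Hypothetical historical records of (neighborhood, adverse_outcome).
# Suppose past redlining concentrated poverty, and therefore recorded
# "adverse outcomes," in neighborhood "B".
history = [("A", 0), ("A", 0), ("A", 1),
           ("B", 1), ("B", 1), ("B", 0)]

def train(records):
    """Learn P(adverse | neighborhood) by frequency counting --
    a toy stand-in for the pattern recognition of deep learning."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for hood, outcome in records:
        totals[hood] += 1
        adverse[hood] += outcome
    return {h: adverse[h] / totals[h] for h in totals}

model = train(history)

# The model has no concept of redlining; it simply reports the pattern
# baked into its training data, so anyone from "B" is scored as riskier.
print(model["A"])  # prints 0.3333333333333333
print(model["B"])  # prints 0.6666666666666666
```

The point of the sketch is that no malicious “use” is required: the discriminatory scoring is a faithful summary of the training corpus, which is why bias cannot be cleanly separated from the technology that learned it.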
If there is a problem in society, Lee and Chen posit, some form of AI will solve it. And, well, if another problem arises as a result of this intervention, that’s the fault of the human developers and users, not the technology. “AI,” then, despite all of Lee’s careful explanations of various forms of technological processes, functions as a curiously abstract object — it is an object and a concept that seems completely divorced from its social reality. If AI does something good, it is because AI is good; if AI does something bad, no it didn’t — the user did. The text transports AI from the realm of a socially embedded tool to the realm of conceptual cure to all social ills without recognizing it as a product of those same societies. This is not at all to say that Chen or Lee ignores problems that develop from the use of increasingly sophisticated AI technology, but rather that those problems are blamed on people while successes are attributed to the technology itself.
As a result, AI 2041 can be read as an attempt to validate the expansion of data-exploitation technology of the type represented by Google — which Lee, as the former head of Google China, is of course especially prone to rhapsodizing about. As a speculative industrial technology, the reification of AI demands the optimization and financialization of everyday operations in the interest of big tech market growth while continuing to insist that this benefits everyone. Yet it is individual behavior that is called to account for the problems arising from its use, not the creators of the technology, nor even the technology itself.
In “The Golden Elephant,” for example, Ganesh Insurance — a deep-learning-enabled insurance program — optimizes every action and interaction in India to obtain the best premiums for its citizens. But as has been well documented, relying on deep learning algorithms that have been trained on prior demographic data results in a retrenchment of existing social divisions — in this case, the caste system. In this particular story, for example, the protagonist is warned away from even talking to her love interest, who lives in a historically lower-caste neighborhood. The Ganesh algorithm used by her family’s insurance company has been trained on a corpus of demographic data that conflates data about crime, income, location, etc. to arrive at the “objective” conclusion that interacting with people from this neighborhood is likely to result in adverse outcomes. The algorithm itself cannot act as an activist to correct for (or even identify in the first place) the redlining policies that have led to this inequality and can only make predictions based on historical data. This type of algorithmic bias has been written about extensively by scholars such as Ruha Benjamin, Safiya Umoja Noble, and Rumman Chowdhury, but Chen and Lee do not address this entrenched bias here except to present it as an attribute of human behavior, not a problem with the technology itself.
Despite this fundamental flaw in the book’s approach to the role of technology, the collaborative format juxtaposing short stories with a layperson’s explanation of AI concepts is interesting for literature fans and futurists alike. One can easily see how this project fits into similar experiments, such as the Center for Science and the Imagination’s monthly Future Tense Fiction (which pairs a piece of short fiction about developments in science and technology with a response by experts working in that field) or the work of the futurist forecasting company SciFutures, which employs a rotating stable of SF authors to “prototype futures and accelerate innovation” (including, most notably, Ken Liu, Chen’s first translator, mentor, and friend). As a supposed oracle of future development, though, one has to wonder if Chen is writing himself out of the very narrative he’s constructing: elsewhere, Lee has developed an algorithm that replicates Chen’s voice, and the short story “The State of Trance” (not included in this collection), which incorporates passages generated by the Chen-AI, won first place in a Shanghai literary competition. The competition itself was moderated by an AI that had been trained on the same sort of corpuses and datasets discussed here in AI 2041, and it deemed the artificially generated text superior to writing submitted by the Nobel laureate in literature, Mo Yan. Lee has gone on record — both in this book and elsewhere — to claim that AI is fundamentally incapable of creativity, but if it is capable of reproducing an author’s voice so exactly that it is indistinguishable from the real thing, it may very well be that the next collaboration will be between Lee and an AI co-author.
Is all this to say that artificial intelligence, as a concept and as presented in AI 2041 specifically, is bad? No, but neither is it a net good, as Lee posits, or even neutral, as when he writes, “Technology is inherently neutral — it’s for people who use it for purposes both good and evil.” Artificial intelligence can only do what humans have trained it to do in the way that they have trained it to learn; it cannot be separated from its sociocultural and technological embeddings. As the current CEO of Sinovation Ventures and a former senior executive at Microsoft, SGI, and Apple, Lee has much to gain financially from the promotion of AI as a set of tools, and much to protect in his assertions that its enormous problems are “not inflicted by AI, but by humans who use AI maliciously or carelessly.” It is hardly reactionary to be suspicious of the promotion of products that can be developed and profited from by companies with investments in education (as we see in the story “Twin Sparrows,” which mirrors the rise of online surveillance software for education like Proctorio or Microsoft’s own partner, Pearson VUE), the internet of things (as with several of Sinovation’s investments, such as Megvii and 4Paradigm, and which is endemic throughout each of these stories), autonomous vehicles (as in AI 2041’s “The Holy Driver” and, in real life, Sinovation’s investment, Momenta, alongside Google’s Waymo), and more.
Perhaps the most egregious instance of this preciousness toward AI can be seen in how the authors conceptualize risk and harm, as well as the role AI technology plays in mitigating them. Even as Lee’s own venture capital firm invests in the development of Bitcoin and cryptocurrency supercomputing hardware, he writes in the introduction to “Quantum Genocide” that it is actually “AI-enabled autonomous weapons” that are the “greatest danger from AI.” This, maybe more than anything else, illustrates the limited scope of a project aimed at promoting the “hugely beneficial [role AI and AI development companies will offer] to humanity […] despite its [human] costs,” a project that can conceive of weapons as threats but not the existential threat of automated insurance, transportation, and medical decisions that kill far more people in far less flashy ways. Within this same story, Bitcoin — a technology we increasingly know is causing environmental degradation at a staggering rate, accelerating climate change worldwide — is lauded as desirable even as the very same story describes climate change as an existential threat to humanity. It is not Lee’s explanations of the technology — which are clear, rigorous, and detailed — that are the problem, but his framing of AI technologies as separate from the financial interests behind their development, a framing that posits companies like his own as simply providers of unproblematic technological solutions.
So then is all this to say that AI 2041, as a text, is bad, as in literature, or bad, as in conceptual ethics? No to the first, but perhaps to the second. Fans of Chen’s science fiction will find much to love here, but there’s something distinctly uncomfortable about reading a collaboration between someone who predicts the future in fiction alongside a representative of major corporate interests who claims to be building such a future. One gets the impression that Chen is attempting to remain neutral in his descriptions of how AI can shape (and already is shaping) our world even as Lee describes this same technology as, itself, ethically neutral. But in the world we have right now, the one poised on the brink of an AI revolution that is itself the product of human biases and uneven application, who can afford to be neutral?
Virginia L. Conn is a lecturer at Stevens Institute of Technology.