The New Artificial Intelligentsia

In the fifth essay of the Legacies of Eugenics series, Ruha Benjamin explores how AI evangelists wrap their self-interest in a cloak of humanistic concern.



This is the fifth installment in the Legacies of Eugenics series, which features essays by leading thinkers devoted to exploring the history of eugenics and the ways it shapes our present. The series is organized by Osagie K. Obasogie in collaboration with the Los Angeles Review of Books, and supported by the Center for Genetics and Society, the Othering & Belonging Institute, and Berkeley Public Health.


¤


IN THE FALL OF 2016, I gave a talk at the Institute for Advanced Study in Princeton titled “Are Robots Racist?” Headlines such as “Can Computers Be Racist? The Human-Like Bias of Algorithms,” “Artificial Intelligence’s White Guy Problem,” and “Is an Algorithm Any Less Racist Than a Human?” had captured my attention in the months before. What better venue to discuss the growing concerns about emerging technologies, I thought, than an institution established during the early rise of fascism in Europe, which once housed intellectual giants like J. Robert Oppenheimer and Albert Einstein, and which prides itself on “protecting and promoting independent inquiry”?


My initial remarks focused on how emerging technologies reflect and reproduce social inequities, using specific examples of what some termed “algorithmic discrimination” and “machine bias.” A lively discussion ensued. The most memorable exchange was with a mathematician who politely acknowledged the importance of the issues I raised but then assured me that “as AI advances, it will eventually show us how to address these problems.” Struck by his earnest faith in technology as a force for good, I wanted to sputter, “But what about those already being harmed by the deployment of experimental AI in healthcare, education, criminal justice, and more—are they expected to wait for a mythical future where sentient systems act as sage stewards of humanity?”


Fast-forward almost 10 years, and we are living in the imagination of AI evangelists racing to build artificial general intelligence (AGI), even as they warn of its potential to destroy us. This gospel of love and fear insists on “aligning” AI with human values to rein in these digital deities. OpenAI, the company behind ChatGPT, echoed the sentiment of my IAS colleague: “We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.” They envision a time when, eventually, “our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now. They will work together with humans to ensure that their own successors are more aligned with humans.” For many, this is not reassuring.


In March 2023, the Future of Life Institute published an open letter, now signed by over 33,000 researchers, policymakers, CEOs, and professors, calling for a six-month moratorium on AI system training until “we are confident that their effects will be positive and their risks will be manageable.” AI popularizer Eliezer Yudkowsky argued that such a short pause does not match the existential risk posed by continued AI development, warning that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” In the end, there was no pause, and the race to create AGI charged ahead. Many who raised alarms about a near-future AI superintelligence extinction event got busy raising billions in capital to outpace their competitors. This is not paradoxical so much as profitable. Elon Musk, for example, initially called for a moratorium on developing AGI, only to start a new company pursuing it. OpenAI founder Sam Altman advocated for more government regulation of AGI but then lobbied European lawmakers to minimize AI protections. Janus, the two-faced Roman deity, seems to be their god of choice.


Behind this duplicity lies a familiar calculus: if AI evangelists can convince us that AGI is possible, imminent, and dangerous, we might be compelled to entrust our fate to them. Hype and doom, in other words, are two sides of the same (bit)coin. “AI safety” principles, teams, and initiatives are proliferating in part to hoard power and resources and, ultimately, to engineer the future in their own image. The aim: to lull us into acquiescence so that they can carry on business as usual.


I have analyzed elsewhere the harmful impacts of AI. In the rest of this essay, I focus on AI’s insidious inputs—the eugenic values and logics of those who create these systems, cloaking them in the rhetoric of salvation. Breathless buzzwords like “efficiency,” “novelty,” and “productivity” conceal a self-serving vision: the future imagined by AI evangelists is meant only for a small sliver of humanity. The rest of us will be left clamoring to survive on a boiling planet. Take the autonomous weapons systems raining hell on Palestinians—why else give them saccharine-sounding names like Lavender and the Gospel if not to make these deadly inventions seem benevolent? If we listen carefully to the good word of tech evangelists, we can hear the groans of those buried under the rubble of progress.


Many of the same people behind the technologies wreaking havoc today—workplace algorithms intensifying the pace of work, facial recognition software leading to false arrests, automated triage systems rationing quality healthcare, and “smart” weapons ripping through flesh with a click—also brand themselves as humanity’s saviors. Amazon founder Jeff Bezos pronounced that “it is a golden age. […] We are now solving problems with machine learning and artificial intelligence that were … in the realm of science fiction for the last several decades.” In a 2015 essay for an interfaith magazine, PayPal and surveillance tech firm Palantir Technologies co-founder Peter Thiel argued,


Science and technology are natural allies to this Judeo-Western optimism, especially if we remain open to an eschatological frame in which God works through us in building the kingdom of heaven today, here on Earth—in which the kingdom of heaven is both a future reality and something partially achievable in the present.

Likewise, the home page of Singularity University, a company founded by Ray Kurzweil and Peter Diamandis, affirms the organization’s belief that “technology and entrepreneurship can solve the world’s greatest challenges.” As a group, the artificial intelligentsia promises to guide us into the Future™, positioning themselves as Guardians of the Galaxy, even as they engineer the crises against which we must guard.


¤


In the fall of 2019, I attended a student-organized conference at Princeton called Envision. The guest of honor was the aforementioned Diamandis, who beamed onto the big screen addressing a packed auditorium filled with eager undergrads. His talk was titled “Abundance in the Digital Age,” and he began by running through a list of impending “transformations” across various industries—healthcare, education, finance, real estate, entertainment—that unprecedented technological “convergences” would enable. His excitement was contagious.


But when the Q and A opened, it was clear that not everyone had caught the bug. A student asked about the risks involved in incorporating emerging technologies across all these industries at such a fast pace. Diamandis responded: “So, first of all, I think it’s important to realize that it’s not a choice that we have. […] These technologies are going to be incorporated.” Yet, immediately after this, he celebrated the fact that “we are always looking to digitize, dematerialize, demonetize, and democratize products and services.” Apparently, we have no choice but to opt into this digital democracy. I have learned that a key tenet of the artificial intelligentsia’s gospel is inevitability, their own version of predestination. As philosopher Émile P. Torres observes, one way to understand the worship of AI is as a religion for atheists: “[S]ince God does not exist, why not just create him?” When Diamandis finally addressed the student’s question, he identified the “risks” of a company adopting a technology too early or too late, and “that’s putting aside all the social elements of technological employment and any moral and ethical issues that will pop up.” Here again, the “social elements” are an afterthought, those pesky problems that “pop up” now and again to wreak havoc on people’s lives.


As when, in 2013, venture capitalist turned Michigan governor Rick Snyder adopted a $46 million program called MiDAS (Michigan Integrated Data Automated System) to efficiently identify residents committing unemployment insurance fraud. After his administration laid off the majority of state employees who would normally review insurance claims, the state’s new automated system wrongly accused over 20,000 people of fraud. Not only did people lose access to unemployment payments; after heavy fines as high as $100,000 were levied, individuals’ tax refunds and wages were garnished as well. About 11,000 people filed for bankruptcy, some lost their homes, and others divorced or died by suicide at disproportionate rates. These are just a handful of the “social elements” that pop up when automated technologies are hastily integrated into every facet of life.


Whereas the artificial intelligentsia would like us to downplay these risks, we must instead do the opposite: soberly understand how emerging technologies create new social and existential trade-offs. As Jennifer Lord, a civil rights and employment attorney who successfully fought “for seven years to get that money back for those wrongfully accused” Michigan residents, reflected:


I’ve been surprised how little this worries most people. I don’t know if it’s the fact that in Michigan this story was breaking at the same time as the Flint water crisis, so if you’re looking at two debacles, the one with the brown glass of water is very disturbing. MiDAS just didn’t seem to ignite the imagination like I thought it would. The state of Michigan was basically stealing tens of millions of dollars from its citizens!
 
How to fix it going forward? One idea is having an independent, knowledgeable, and interdisciplinary group vetting technology before any purchase is made. I can only imagine a skilled salesperson selling the [state unemployment] agency on promises of efficiency, cost cutting, and profit increases.

A skilled salesperson like Peter Diamandis, perhaps, or any of the other tech evangelists, who preach inevitability to the next generation: “These technologies are going to be incorporated,” but in a democratic-ish way, so don’t worry.


The first thing I did when I took the Envision podium after Diamandis logged off was remind those in attendance (and myself) that we do have a choice. Since then, artists, writers, and musicians have stood up to, and in some cases sued, AI companies for stealing creative work to train their models. As an open letter from the Center for Artistic Inquiry and Reporting, published May 2, 2023, explained,


AI-art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history. Perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It’s daylight robbery.
 

The protests that rocked Hollywood in 2023, led by SAG-AFTRA, also remind us that we have a choice. Union members launched a successful five-month strike, pushing back against the incursion of AI into their industry, their picket signs expressing growing defiance and anger: “AI is not ART,” “Wrote ChatGPT This,” “AI’s Not Taking Your Dumb Notes.” Whereas studio executives have become followers of the new tech gospel, looking to AI to generate scripts, automate acting, cut costs, and push down wages, writers and actors effectively said, “Hell nah! We refuse to be erased.” As Brian Merchant wrote in the Los Angeles Times, “At a moment when the prospect of executives and managers using software automation to undermine work in professions everywhere loomed large, the strike became something of a proxy battle of humans vs. AI.” For many, this is the real existential question—not an AI-generated apocalypse—that directly impacts people’s livelihoods and the stories we tell about our lives: “There was a palpable fear that tech products, built by rich and mostly white startup guys in Silicon Valley[,] would churn out content that would reflect exactly that,” wrote Merchant. And yet, when tech evangelists rattle off the industry-wide transformations of AI, they can’t seem to reckon seriously with the here-and-now risks that drive people to the picket line.


The same hubris that leads tech titans like Mark Zuckerberg, Sam Altman, and Bill Gates to assume that a high IQ grants them the license to build universal systems for all of humanity explains why those presumed to be geniuses believe they can lead us out of the virtual wilderness. Altman declared in 2019: “The most successful people I know believe in themselves almost to the point of delusion.” Maria Klawe, a board member of Microsoft from 2009 to 2015, says that Gates acted like “the usual rules of behaviour don’t apply to him” and that he was the “smartest person in the room.” The dominating personalities of arrogant “nerds” in the industry suggest that being uncool in youth—being removed from a sense of community, perhaps—fuels the desire to predict, surveil, and control society. As Altman puts it, “A big secret is that you can bend the world to your will a surprising percentage of the time.” Worshipping one’s own intelligence, it seems, leads to a dangerous desire to bring the rest of humanity to its knees.


The point here is that the artificial intelligentsia is dragging us into an archaic future where intelligence is quantified, fixed, and ranked, and smartness is fetishized. We would do well to remember that IQ is, above all, a eugenic concept, concocted to sort winners from losers and to justify the rules of the game. Eugenics … in the 21st century … among those who fancy themselves futurists?


¤


Lest we forget, eugenics was originally a philosophy and movement popularized by social progressives. Many prominent eugenicists saw themselves as caretakers of humanity, working to ensure human flourishing by cultivating good genes (e.g., Better Baby Contests) and rooting out bad ones (e.g., sterilization). They were, among others, inventors like Alexander Graham Bell, scientists like Nobel Prize winner Francis Crick, politicians like Teddy Roosevelt, beloved writers like Helen Keller and H. G. Wells, and far too many university presidents and professors to name. In fact, Harry Chandler, publisher of the Los Angeles Times from 1917 to 1944, helped popularize eugenics and advocate for sterilization to address social problems. Their main think tank, the Human Betterment Foundation, was headquartered in Southern California, not Alabama or Mississippi. In today’s parlance, the foundation was a policy “influencer” helping to spread the eugenics gospel, forging “links between major players in the private sector and state officials who carried out the work.”


Take the great Irish playwright, activist, and winner of the Nobel Prize for Literature, George Bernard Shaw, who described Jews as “the real enemy, the invader from the East, the ruffian, the oriental parasite,” arguing that “the only fundamental and possible socialism is the socialisation of the selective breeding of man,” and who mused that “the overthrow of the aristocrat has created the necessity for the Superman.” Or consider the outspoken advocate of women’s reproductive rights (and H. G. Wells’s mistress) Margaret Sanger, who claimed in 1921 that “the most urgent problem today is how to limit and discourage the over-fertility of the mentally and physically defective.” If eugenicists of the past understood their work as progressive, then self-proclaimed futurists selling us fantasies of artificial intelligence are the harbingers of a new eugenics.


Just as in the past, the Eugenics 2.0 of the tech evangelists is swaddled in the language of betterment, but they seek more than human betterment. In fact, their worldview is perhaps best understood as pro-extinctionist in that, as Torres argues, these far-futurists are enthusiastic about “legacy humans” giving way to a smarter, stronger, more enhanced species of posthumans. The artificial intelligentsia promise to guide us into this future where we shed our mortal meat suits, floating into a blissful digital reality, merging with technology (the “singularity”) and/or transcending it (transhumanism). They advocate relying on science and reason, a rationalism often narrowly construed as quantitative reasoning, if we are to create lasting and meaningful good in the world (effective altruism).


Whereas hard-liners prioritize far-future survival (longtermism) over pesky short-term concerns like poverty, pandemics, and genocides, William MacAskill, a co-founder of effective altruism and philosopher at Oxford University’s Global Priorities Institute, suggests, in his 2022 book What We Owe the Future, that “we can positively steer the future while improving the present, too.” That said, in his 2020 book The Precipice: Existential Risk and the Future of Humanity, Oxford philosopher Toby Ord, MacAskill’s fellow originator of the effective altruism movement, argues that,


because […] almost all of humanity’s life lies in the future, almost everything of value lies in the future as well […] To risk destroying this future, for the sake of some advantage limited only to the present, seems to me profoundly parochial and dangerously short-sighted. […] [I]t privileges a tiny minority of humans over the overwhelming majority yet to be born; it privileges this particular century over the millions, or maybe billions, yet to come.

But, speaking of tiny minorities, those represented by “we” in these foundational texts are building a movement defined by a familiar mix of hubris and saviorism. While encouraging more people to consider how our present-day actions impact the future is a worthy pursuit with roots in many Indigenous traditions, tech evangelists are drawing upon these philosophies to wrap their self-interest in the cloak of humanistic concern.


What exactly makes this bundle of beliefs—transhumanism, extropianism (boundless expansion and evolution), singularitarianism, modern cosmism (transhumanism with a focus on colonizing other planets), rationalism, effective altruism, and longtermism—specifically eugenic? According to Torres and his co-author, computer scientist Timnit Gebru (who coined the acronym TESCREAL to describe this set of ideologies), they “are inspired by utopian ideals similar to the visions of first-wave eugenicists […] and see AGI as integral to the realization of these visions. Meanwhile, the race to build AGI is proliferating products that harm the very same groups that were harmed by first-wave eugenicists.” Gebru and Torres explain that the discriminatory attitudes of past eugenicists (racism, xenophobia, classism, ableism, and sexism) remain widespread within the AGI movement, resulting in systems that harm marginalized groups and centralize power under the guise of “safety” and “benefiting humanity.”


Still from “Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds,” Future of Life Institute.


In a statement many will recognize as racially charged and class-coded, Nick Bostrom, former head of Oxford’s Future of Humanity Institute and author of the New York Times bestseller Superintelligence: Paths, Dangers, Strategies (2014), links high fertility with low IQ, claiming that there is “a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species.” In January 2023, he addressed racist remarks he made on a LISTSERV in the 1990s. Again, the controversy highlights a larger issue within the artificial intelligentsia: a focus on wealthy, predominantly white, male innovators as trustees of the future. Whether or not one believes Bostrom’s defense or cares about his past judgments, the reality remains that those shaping the “future of humanity” come from a narrow segment of society. Their views—be they racist, utopian, or both—are currently overdetermining our shared trajectory. As writer Leighton Woodhouse described effective altruism,


[It] is ultimately rooted in saviorism. It’s a top-down view of social change, appropriate for those who are accustomed to recreating the world in their own image from the perches of their high-status professions. It encourages high-net-worth individuals to use their wealth to impose their moral convictions upon the world, whether the world likes it or not.

Remember, it’s not a choice that we have.


What strikes me most about today’s eugenicist vision is the perverse calculus that values the “high IQ, high-achieving” artificial intelligentsia over those most affected by current multicrises because the former are cast as the deliverers of humanity’s digital redemption. Longtermist Nick Beckstead argued, in his 2013 PhD dissertation, that the survival of those who are “more innovative” should take priority:


Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

At every turn in the rhetoric of the artificial intelligentsia, we hear echoes of a eugenic calculus: the weak must be sacrificed for the strong to survive.


The fetishization of intelligence is another key tenet of this redemptionist narrative. Eliezer Yudkowsky said the quiet part out loud in a 2008 blog post titled “Competent Elites”: “One of the major surprises I received when I moved out of childhood into the real world, was the degree to which the world is stratified by genuine competence.” He goes on to recount an experience at “a gathering of the mid-level power elite” in which his self-imagined elevated status was not so remarkable: “This was the point at which I realized that my child prodigy license had officially completely expired.” He describes his surprise that the venture capitalists and “CEO[s] of something” with whom he was mingling were not just “fools in business suits.” Instead, “these people of the Power Elite were visibly much smarter than average mortals. In conversation they spoke quickly, sensibly, and by and large intelligently. When talk turned to deep and difficult topics, they understood faster, made fewer mistakes, were readier to adopt others’ suggestions.” He comes to a similar conclusion after attending a meeting at the Singularity Summit: “The major names in an academic field, at least the ones that I run into, often do seem a lot smarter than the average scientist.” Throughout, we find Yudkowsky assessing people’s “auras” as more or less formidable—from “average mortals” to “world-class smart” to those who “sparkle with extra life force.” It’s an Ode to Meritocracy: stature reflects inherent capacity. He acknowledges his reader’s likely discomfort with such assessments, returning in the end to AI as the solution: “It’s easier for me to talk about such things, because, rightly or wrongly, I imagine that I can imagine technologies of an order that could bridge even that gap.” Ah, there it is. The purpose of AI is to allow average mortals to truly sparkle!


¤


AI evangelists also perpetuate a eugenic worldview in more insidious ways, particularly by downplaying the costs of economic and technological “progress” to the planet and its people. In this new era, survival is linked to superintelligence, the domain of those who “use advanced technologies to radically enhance themselves, thus creating a superior race of ‘posthumans.’” For those like Bostrom, humanity as we now know it—flesh-and-blood Homo sapiens—is old school. In Superintelligence, Bostrom envisions genetic selection for traits like “intelligence, health, hardiness, and appearance” becoming normalized such that “nations would face the prospect of becoming cognitive backwaters and losing out in economic, scientific, military, and prestige contests with competitors that embrace the new human enhancement technologies.” These evangelists also envision digital descendants in the deep future who will (or should) be granted the same moral standing as you and me. “[I]f the machines have conscious minds,” Bostrom suggests, then “the welfare of the working machine minds could even appear to be the most important aspect of the outcome, since they may be numerically dominant.”


From this perspective, humanlike digital beings could, in principle, live far richer and more varied lives than biological humans. According to Bostrom,


what hangs in the balance is at least 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives (though the true number is probably larger). If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.

Alas, from the point of view of the artificial intelligentsia, the long-term potential of these imaginary future beings takes precedence over alleviating present-day human suffering—“feel-good projects” like eliminating poverty, addressing climate change, and preventing wars.


It’s imperative that we refute the fantasies of posthuman proselytizers like Bostrom, and of Silicon Valley elites who share his views. Many of them concur with Sam Bankman-Fried that the best thing they can do for society is to “get filthy rich, for charity’s sake.” Bankman-Fried started out touting lofty goals, signing Bill Gates’s Giving Pledge, a commitment to donate the majority of one’s wealth to philanthropy. Soon after graduating with a BA in physics from MIT in 2014, he started working at Jane Street Capital, a proprietary trading firm, and then the Centre for Effective Altruism before getting into crypto. Three years later, he founded Alameda Research, a cryptocurrency trading firm, and in 2019 launched a cryptocurrency exchange, FTX, which quickly became one of the largest crypto companies globally. But by 2022, FTX had filed for bankruptcy; the company collapsed after losing investors billions of dollars. In the end, Bankman-Fried was arrested and extradited from the Bahamas, convicted of fraud and conspiracy, and sentenced to 25 years in prison. For all the talk of altruism and doing good, he ended up illegally using customers’ funds to cover his own expenses, including “purchasing luxury properties in the Caribbean, alleged bribes to Chinese officials and private planes.”


Perhaps most revealing is the statement made by the fraudster’s defense attorney, Marc Mukasey: “Sam was not a ruthless financial serial killer who set out every morning to hurt people […] Sam Bankman-Fried doesn’t make decisions with malice in his heart. He makes decisions with math in his head.” Yes, and that’s the problem. The artificial intelligentsia is trying to engineer a world predicated on the inherent morality (or at least neutrality) of math. But as Aubrey Clayton wrote earlier in this series, the eugenics movement deeply influenced the birth and development of statistics and significance testing. Turning to quantitative reasoning is laced with danger. Torres explains that EA, longtermism, and the TESCREAL movement more generally reduce ethics to a branch of economics: it is all about expected value calculations and number-crunching. And once hypothetical future digital people are included in those EV calculations, the far future wins every time.


But what explains the affinity between tech elites and eugenics? For starters, the artificial intelligentsia’s intense focus on optimization and enhancement spills over from engineering digital tools to the engineering of life itself. If, historically, eugenics sought to “improve” the human population through reproductive control, those with limitless resources today are desperate to cheat death by investing in technologies that help them “evolve” past this mortal frame. The faulty conflation of humans with computers is part of the problem. If brains are computers, then it is no wonder that those who are masters of silicon also fancy themselves masters of humanity. According to MacAskill, “Some entrepreneurs hope to abandon meat-based bodies altogether and live on in digital form through computer emulation of their brains,” including Sam Altman, who is a customer of Nectome, a start-up that “preserves brains with the hope that future generations will scan and upload them.” The stated aim of Neuralink, the biotechnology company founded by Musk, is to create a brain-computer interface “to restore autonomy to those with unmet medical needs,” like spinal cord injuries, but also to “unlock human potential tomorrow.” As many have observed, the company’s ultimate focus is not therapy but enhancement, brain-computer connections that are “superhuman.” Hubris disguised as beneficence, a feature of the old eugenics, is resurrected in the language of improvement, enhancement, upgrade. (But, as I have argued elsewhere, this zealous desire to transcend humanity ignores the fact that we have not all had a chance to be fully human.)


Whereas eugenics is often associated with government attempts to control and coerce human reproduction, today’s tech elite exhibit similar anxieties about falling birth rates among “high IQ” populations. Elon Musk’s 12 children and counting, and Sam Altman’s investment in reproductive technology start-ups focused on IVF and genetic screening, aim to populate the planet with genetically superior offspring. At an August 2019 conference about artificial intelligence, Musk said: “Assuming there is a benevolent future with AI, I think the biggest problem the world will face in 20 years is population collapse.” “Population collapse due to low birth rates is a much bigger risk to civilization than global warming,” he tweeted in 2022. Peter Thiel adds, “I think there is something very odd about a world in which people are not reproducing themselves. […] It’s probably somehow entangled with the stagnation, the decline […] the sense of pessimism around the future.” And although he says there is “no magic-bullet solution,” among his investments in fertility and women’s health, Thiel provided $3.2 million seed funding for a startup called 28, “which tracks users’ menstrual cycles and gives them tips on diet and exercise—all while encouraging them not to use birth control.”


As Emily R. Klancher Merchant pointed out in this publication, the symbiosis between the tech industry and the pronatalist movement, worried as it is about falling birth rates among the “high achieving,” reflects a eugenic mindset. She began her essay with venture capitalists turned pronatalist advocates Malcolm and Simone Collins and their aim “to save humanity by having as many babies as possible.” Unlike their Christian counterparts, the Collinses put their faith in data and science and are “focused on producing the maximum number of heirs—not to inherit assets, but genes, outlook and worldview.” When asked how their data-driven pronatalism differs from eugenics, Malcolm Collins insisted it is a new approach because it relies not on state-sponsored selective breeding but on parental choice. This laissez-faire approach, however, ignores the unequal value of choices. After all, the birth rate is not dropping for everyone. The worry is that more affluent populations—and their coveted genes and corresponding worldviews—are somehow in danger.


Thiel is also doing his part for the cause by trying to extend his own life and funding a version of the Olympics called the Enhanced Games, in which participants are allowed to take FDA-approved performance-enhancing drugs and use prosthetic technologies. The Olympics on steroids, literally. Until recently, Thiel’s public support for Donald Trump made him stand out in Silicon Valley, but his biographer Max Chafkin warns that he is not an outlier and that his worldview borders on fascism: “[T]he idea that companies should basically be able to do whatever they want, that democracy isn’t the most important value, these things are reflected in the decisions and actions that many Silicon Valley companies are making.” In recent months, Silicon Valley supporters of Trump’s presidency have grown more visible—in June, the Trump campaign held a fundraiser at venture capitalist David Sacks’s San Francisco mansion that raised $12 million.


¤


Ultimately, it’s not enough to refute eugenic ideologies. Eugenic infrastructures—systems designed to sacrifice the lives and habitats of the global majority to ensure the flourishing of the oligarchic minority—need to be dismantled as well. This means reckoning with the laborers performing the digital ghost work that makes AI appear magical—click workers in Kenya and content moderators in the Philippines—and recognizing the ecological and human costs of producing sleek new devices. Every day, we wake up to headlines announcing the insatiable appetite of the data centers powering large language models and AI applications—“Why AI Is So Thirsty: Data Centers Use Massive Amounts of Water,” “States Rethink Data Centers as ‘Electricity Hogs’ Strain the Grid,” “Data Centers Are Draining Resources in Water-Stressed Communities.” In 2024, Google disclosed that its emissions had jumped “nearly 50 [percent] over five years as AI use surge[d],” though the company’s annual Environmental Report emphasized how “scaling AI” is crucial for climate action. Similarly, in Microsoft’s 2023 Annual Report, CEO Satya Nadella said he believes “AI can be a powerful accelerant in addressing the climate crisis.” Bill Gates has also argued that AI will propel climate solutions, eventually outweighing the massive energy costs of data centers. Speaking to an audience in London this year, he “urged environmentalists and governments to ‘not go overboard’ on concerns about the huge amounts of power required to run new generative AI systems.” But a month earlier, “Microsoft admitted that its greenhouse gas emissions had risen by almost a third since 2020, in large part due to the construction of data centres.”


This faith-based futurism employs a eugenic calculus, asking us to prioritize hypothetical benefits over current costs to people and the planet. As journalist Karen Hao noted after visiting a Microsoft data center in Arizona, “With more than 8,000 data centers whirring all around the world and venting heat, and many more on their way, that optimism may come off as nothing more than faith: Technology has gotten us into this predicament; perhaps technology will get us out of it.” This is the gospel of AI evangelism: trust us, because you have no choice.


But we do have a choice. There is no reason to trust that tech elites have any wisdom to offer when it comes to effective altruism or alleviating human suffering. Billionaires building bunkers to survive an AI apocalypse, attempting to “disrupt death” through cryopreservation, and scouting the planet for the best locations to create “pop-up cities” and “network states” are not reliable stewards of the collective good. Those most impacted by “algorithms of oppression,” “automat[ed] inequality,” “weapons of math destruction,” “artificial unintelligence,” “algorithmic coloniality,” and “technologies of violence” have greater insight into human flourishing because their lives depend on it.


In June 2024, I joined hundreds of organizers, academics, advocates, and workers in Chicago for the Take Back Tech conference, organized by Mijente and Media Justice. We strategized about how to reclaim technology for the collective good, addressing issues ranging from the use of AI in the genocide of Palestinians to the development of autonomous technology infrastructures that support grassroots movements. The goal was to contest the eugenic legacies coded into our digital world and cultivate legacies of solidarity by learning from past generations and preparing for future ones.


During my remarks on the last day, I expressed my exasperation with AI—how even in resisting the techno status quo, artificial intelligence dominates the conversation (and resources). The only AI I want to discuss is ancestral intelligence—the insights, experiences, and wisdom that grow under the rubble of progress. I’m fed up with the narrow conception of intelligence shaping our systems and societies. We need more investment in the knowledge coming from organizations and initiatives like Allied Media Projects, Athena, Data For Black Lives, Our Data Bodies, May First Movement Technology, No Tech For Apartheid, 7amleh, Algorithmic Justice League, and Data Workers’ Inquiry. We must urgently listen and learn from those whose labor and lands are indispensable yet treated as disposable.


Cover of “The Congolese Fight for Their Own Wealth” report.


On my way back from Chicago, I started reading a groundbreaking report, “The Congolese Fight for Their Own Wealth,” released in June 2024. It reminds us that cobalt, lithium, and coltan are essential to the Fourth Industrial Revolution, and that the Democratic Republic of the Congo (DRC) supplies most of these minerals. In 2022, the DRC’s exports were worth $25 billion, with untapped reserves worth $24 trillion. Yet Congolese workers live on less than $2.15 a day due to the “pillaging of the land and its resources for profit at any cost” by multinational companies like Glencore, in collaboration with corrupt oligarchies and militia groups.


This is a eugenic infrastructure that exposes the Congolese people to backbreaking work and to toxic particles from the minerals that go into our iPhones and other technologies. The report notes that in 2014, UNICEF estimated that “forty thousand of these artisanal miners are children as young as eight years old, though figures from the Congolese government and mining companies suggest that this dramatically underrepresents reality.” The tech industry’s wealth is subsidized by the suppressed wages of workers in the DRC. The warped fantasies of some, in other words, continue to rely on the nightmarish labor of others.


As Amos Hochstein, Joe Biden’s senior adviser for energy and investment, put it, “An electric vehicle is essentially a battery, and what’s in the battery is Africa.” Even our efforts to create “greener” technologies are predicated on the pillaging of African resources and the immiseration of African labor. To counter such eugenic infrastructures, we must heed the calls of those on the front lines, like the Congolese youth who insist that “Congo is not for sale” and call for “collective intelligence in order to respond to the challenges that face us with clear ideas”—stewarding and demilitarizing the land, reinvesting wealth in local industries, rebuilding the social contract, and demanding just governance.


At the very least, we should recognize the hollow ethical commitments of the artificial intelligentsia and the deceitful machinations of those who fancy themselves to be humanity’s stewards. The choice is not between effective and ineffective altruism but between solidarity and indifference toward those whom long-term planners would abandon.


To move forward, we need to reckon with what I have called ustopia. Whereas utopias are the stuff of dreams and dystopias the stuff of nightmares, ustopias are what we create together when we are wide awake. It is not enough to refute the legacies of eugenics animating the faux futures of the artificial intelligentsia; we must also take it upon ourselves to inaugurate legacies of solidarity that reflect our intrinsic interdependence as a people and a planet. To riff on the late great Octavia E. Butler, whose prescient 1993 novel Parable of the Sower begins in summer 2024—it may be the end of the world, but there are other worlds.


¤


Featured image: El Lazar Lissitzky. [Announcer], 1923. Gift of Samuel J. Wagstaff Jr., Getty Museum (84.XP.983.29). CC0, getty.edu. Accessed October 16, 2024.

LARB Contributor

Ruha Benjamin is the Alexander Stewart 1886 Professor of African American Studies at Princeton University, founding director of the Ida B. Wells Just Data Lab, and award-winning author of People’s Science: Bodies and Rights on the Stem Cell Frontier (2013), Race After Technology: Abolitionist Tools for the New Jim Code (2019), Viral Justice: How We Grow the World We Want (2022), and Imagination: A Manifesto (2024). 
