Can “Tech Criticism” Tame Silicon Valley?

Evan Selinger lauds Gary Marcus’s new book for its clarity on how to stop the madness and greed around generative AI. He questions, however, whether “tech criticism” has the power to translate into actual reform.

Taming Silicon Valley: How We Can Ensure That AI Works For Us by Gary F. Marcus. The MIT Press, 2024. 248 pages.


CONVERSATIONS ABOUT artificial intelligence are almost invariably polarized, especially those about the large language models (LLMs) that power generative AI products like ChatGPT. Prompt-engineering gurus swear that we’re living in the best of efficiency-hacking times. Others gleefully welcome artificial general intelligence (AGI) as the panacea that will help humanity meet its greatest challenges. On the other side of the aisle sit the skeptics—a group that includes a growing number of teachers lamenting that they can’t keep up with students’ AI-generated papers, as well as civil society groups warning that AI-generated disinformation further threatens our fragile democracy.


It’s clear where AI expert Gary Marcus stands. In recent years, he has repeatedly been called a critic—an “AI critic,” a “longtime AI critic,” “AI’s loudest critic,” and “AI’s leading critic.” An emeritus professor of psychology and neural science at NYU, Marcus co-founded two AI companies and has written extensively about AI, including co-authoring Rebooting AI: Building Artificial Intelligence We Can Trust with Ernest Davis (2019). Marcus’s new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us, is exactly what you’d expect if you’ve been following his mentions in the news, scrolling through his social media posts, reading his Substack, or encountering his opinion pieces. Marcus is the cynical generative AI critic who wishes he could be the good-faith AI optimist.


The book’s fast-paced critique follows a familiar approach. Step one: Identify (often terrifying) problems caused by powerful, greedy, and deceptive big tech companies hyping black-box generative AI tools that don’t reason very well. Step two: Express hope that AI might offer “a net benefit to society” through committed activism and thoughtful policy. Those in the know will recognize this two-part structure from Marcus’s Aaron Sorkin–esque testimony at the US Senate Judiciary Subcommittee hearing on AI oversight, which, along with various TED Talks and publications, made him something of a rock star. The book reads like a greatest-hits album, complete with familiar tracks repackaged for a wider audience.

The book’s cynical focus may turn some people off. Those who believe that the only credible way to make a fair argument is to consider “both sides” of a debate will find two things striking. First, Marcus rarely compliments anyone working at the companies that make the most widely used generative AI applications, and he acknowledges positive uses of the technology only in passing. Second, the critic’s sense of moral clarity can come across as arrogance, his impassioned tone cloying. Calling tech companies “evil”—a word and idea I’ll come back to—might be a hard sell for readers who prefer nuance. (To be fair, he is in part referring to Google’s initial but long-forgotten motto: “Don’t be evil.”)


Marcus has been ringing alarm bells for some time. He warned us in 2022 that tools like ChatGPT “can easily be automated to generate misinformation at unprecedented scale,” and further cautioned that misinformation is merely one of many ways in which the technology could be our “Jurassic Park moment.” And yet, as I write this review less than one month away from the 2024 US presidential election, commercial AI detectors still “don’t work very well,” and federal legislators have yet to regulate campaign ads’ use of AI. The urgency of the situation is palpable, and Marcus’s frustration and impatience read as justified.


I recommend reading Taming Silicon Valley if you’re unfamiliar with the main concerns about generative AI and its leading commercial developers. For those with a vague sense of unease, it will provide a detailed explanation of specific risks. Of course, Marcus’s ideas don’t exist in a vacuum, and considering the value and evolution of tech criticism provides broader, crucial context.


¤

Tech Critics Past and Present


What’s a tech critic? I started thinking about this question in the late 1990s, after reading Don Ihde’s article “Why Not Science Critics?” Until then, it hadn’t occurred to me that critical conversations about science and technology—or, to use Ihde’s preferred term, “technoscience”—need not be a priori anti-scientific or anti-technological. Ihde, a preeminent philosopher of technology, made a compelling case that the best science and technology critics are “passionate” about their subject matter in the same way literature critics are passionate about literature. In other words, they’re not finger-wagging haters. For his part, Marcus establishes that he is a fan, and that he is passionate. He tells us he “loves AI” and “desperately wants it to succeed,” insisting that the potential upsides—which, as he frequently reminds us, shouldn’t be conflated with generative AI—are “just as enormous as Silicon Valley wants you to believe.”


Science and technology criticism is, then, no longer the province of people on the outside, of whistleblowers and the like. It is now “professionalized” in journalism, drawing on several disciplines—including philosophy, sociology, law, science and technology studies, and media studies—and taking a range of positions. It has warned us that technologically mediated practices may render fundamental human experiences extinct. It has explored how employers weaponize technology to create unfair and exploitative working conditions. It has confronted intersectional justice issues in tech research, development, and deployment. It has addressed how big tech companies threaten democracy and human rights. And it has exposed and rejected the ableist assumptions driving far too many technological “solutions” for disabilities. (This is merely a partial list.)


Because tech criticism is so capacious, it’s tempting to look for ways to narrow down the genre. Some might be tempted to separate criticisms of technology and capitalism; my preference is to look at tech criticism as an inclusive field. Doing so has two advantages. The broader view helps us grasp how deeply—inextricably, even—technology is linked to social, political, ethical, and economic issues. It also helps us develop comprehensive strategies for addressing complex and often interconnected challenges. Concerns about financial incentives are central to much of what Marcus has to say about the scale and scope of potential harm.


¤


The Cost of the Corporate Descent into “Evil”


Thirty years after Ihde, tech criticism has become so well established that it has been institutionalized. Consider Marcus’s career. Based on his long track record of AI skepticism, he was invited to testify, along with OpenAI CEO Sam Altman, before a US Senate Judiciary Subcommittee meeting in 2023. This meant that he didn’t have to crash the meeting, megaphone in hand, to demand that the truth (about AI oversight) finally “speak” “to power.”


Ihde might have seen such platforming as a good thing: recognition of the value of tech criticism that was long overdue. When Ihde praised science and tech critics, he was struck by how rarely engineers and scientists acknowledged the social dimensions of their work, let alone saw themselves as having any responsibility for its social implications. After gaining hard-won technical knowledge and skills, they too easily and predictably became arrogant, reflexively convinced that nonscientists and nonengineers should stay in their own lanes.


Unfortunately, institutionalization is a double-edged sword. Having a platform doesn’t guarantee that anyone with power will care about what you have to say. While Marcus has had his moments in the limelight, Taming Silicon Valley was born of disillusionment with politicians and tech companies alike. This motivation reinforces an important political lesson that we forget at our peril: those with power don’t always repressively silence the opposition. On the contrary, letting critics feel heard while not ceding meaningful control or implementing substantial changes is a time-tested strategy for watering down and co-opting resistance. Yes, we may be living at a time of peak tech criticism, with an endless parade of panels, publications, and courses devoted to responsible AI. But this doesn’t mean the “techlash” has been successful.


Marcus blames the lack of progress squarely on Silicon Valley. Predictably, he singles out the companies with the most power, influence, and reach: OpenAI (which, Marcus emphasizes, has long abandoned its original nonprofit mission), Microsoft, Google, and Meta. If you’re wondering why Apple largely gets a pass, it’s not because Marcus sees it as more ethically minded than the rest. Instead, he sees Apple’s core business as “sexy productivity tools,” which leaves the company without a strong-enough financial incentive to gobble up our personal data.


Drawing from correspondence with Roger McNamee—one of Facebook’s early investors and, later, author of the critical Zucked: Waking Up to the Facebook Catastrophe (2019)—Marcus paints a familiar picture of moral decline. In the beginning, Silicon Valley leaders appeared to hold laudable goals like democratizing knowledge and enhancing social connection. Eventually, however, the reality of business pressures grew too hard to ignore. Founders abandoned world-changing idealism in favor of ruthless approaches to securing venture capital and maximizing shareholder profit. Enter surveillance capitalism, under which big tech companies have found ingenious ways to get internet users to pay for services with their data rather than subscription fees. They accepted that their approach would have tremendous downstream externalities—for which, crucially, they believed they would never have to pay. In this way, Marcus dramatically claims that “the eternal quest for growth” has led to “prosocial missions” being replaced by a descent into “evil.”


To Marcus’s dismay, tech elites now use their influence to shape regulation. “The most insidious thing coming from Silicon Valley right now,” he argues, “is a campaign to undermine regulation around AI altogether.” And when regulation isn’t being demonized as a threat to American prosperity and the very survival of the human species, companies are doing their best to co-opt it. Through carefully curated comments to the press and public, tech leadership pays lip service to the fundamental importance of creating responsible and trustworthy AI tools. Behind the scenes, though, the goal is regulatory capture: spending fortunes lobbying politicians to adopt business-friendly policies while simultaneously creating a “revolving door” that rewards government employees with high-paying jobs once they leave the public sector.


Marcus further accuses Silicon Valley of “manipulating” public opinion with hype about generative AI. He says that tech leaders like Altman and Mark Zuckerberg exaggerate, if not lie, about technical capabilities; downplay, if not obscure, technical limitations; and minimize, if not hide, risks and harms. They also promise to deliver goods that may never actually be available and maliciously characterize tech critics as dangerous, out-of-touch reactionaries.


In short, Marcus depicts Silicon Valley as a propaganda machine that runs on “gaslighting” and “bullying.” He argues that one of the most dangerous Silicon Valley tricks is persuading the public and politicians that advances in generative AI will likely, if not inevitably, lead to artificial general intelligence, and that this outcome may pose an existential threat to humanity. AGI tends to be defined as AI systems with human- or superhuman-level intelligence that can learn and apply knowledge across various tasks.


For Marcus, promoting this AGI narrative has three harmful implications. First, this disingenuous narrative leads people to overestimate the intelligence of current forms of generative AI. Marcus contends that this distorted and deceptive outlook benefits tech companies by “driving up stock prices.” Second, stoking speculative doomsday fears that AI might go rogue diverts attention from more pressing but less cinematically dramatic issues. Third, the narrative perversely positions the tech companies that are supposedly building AGI as our best hope for designing it safely. After all—as Silicon Valley companies like to remind us—they have the resources to hire the best engineers and scientists. Marcus thinks readers should find it ironic that the companies expressing such dire concern aren’t interested in slowing things down to mitigate risk.


Where does all this bad faith leave us? Marcus argues that it leaves the United States inadequately prepared to address “the twelve biggest immediate threats of generative AI.” These span a wide range of issues, such as bad actors using AI-powered content to spread political disinformation and manipulate markets, publishers accidentally spreading misinformation by sharing poorly vetted AI-generated news, and creeps continuing to create nonconsensual deepfakes. Marcus also highlights the problems of AI being used to generate defamatory content, violate privacy, accelerate cybercrime and malicious hacking, and create dangerous bioweapons. As if this parade of horribles isn’t enough, Marcus further raises the alarm over AI weakening intellectual property rights, amplifying social biases, creating environmentally destructive energy costs, and encouraging overreliance on AI in critical infrastructure, where errors can have catastrophic consequences. Since Marcus makes a compelling case that each threat is substantial, it’s hard not to conclude that their collective impact may be overwhelming.


¤


Marcus’s AI Wish List and the Critic’s Dilemma


Marcus is crystal clear—albeit fairly light on the concrete implementation details—about how to stop the madness and greed. He offers a legal and political wish list of ideas that aren’t exactly new. Like most current advocates, and even a preponderance of legislators, he argues that we need better laws to protect our data rights and privacy. We need to promote transparency that reveals how AI systems work. We need to hold AI developers more accountable for the harms that their tools cause and can reasonably be expected to cause in the future.

More distinctively, he insists that the government should provide ample support for AI literacy programs and expand funding for scientific research into more trustworthy and effective approaches to safer AI. He also wants lawmakers to embrace comprehensive, layered regulatory efforts and government offices that provide independent AI oversight—not only by reviving a reinvigorated US Office of Technology Assessment staffed by scientists who aren’t beholden to tech company interests but also by creating a new “US AI Agency,” one that would presumably have a much broader mandate than the US Artificial Intelligence Safety Institute. Because AI has global implications, Marcus further maintains that we need a new agency devoted to collaborative international AI governance. And, linking back to the broader techno-capitalist approach to criticism, Marcus adds that “establishing a universal basic income is the right thing to do” to address “ever-greater inequality as AI begins to reduce employment opportunities and wages.”


Marcus realizes that law and policy can’t fix everything. And so, on top of his far-reaching governance recommendations, he calls for significant cultural change—from stopping the widespread veneration of wealthy but reckless tech CEOs to committed citizens putting economic pressure on tech companies by boycotting irresponsibly designed and monetized generative AI products.

What are we to make of all this? One could zero in on any number of these proposals and question their feasibility. A new international AI agency, for instance, sounds nice in theory. But in practice, such concentrations of power are riddled with complex challenges—from their potential capture by special interests to their further reinforcement of long-standing power imbalances under the veneer of egalitarian collaboration. Likewise, let’s say a universal basic income could reduce social instability and that, in principle, it would be fair to draw funds from taxes on the wealthiest corporations and people. Given how fractured politics has become, we should still expect massive pushback from those who will frame it as a dependency-producing handout—and they will have wealthy and powerful backers. Or take Marcus’s proposal that people stop using unethically designed and financially exploitative generative AI; as we’ve seen with earlier calls for conscientious digital consumerism, convenience almost always trumps ethical conviction.


One might feel compelled to praise Marcus on many fronts, such as his refusal to offer a vague and ultimately vapid call for society to critically rethink how it approaches generative AI. At the same time, one might also conclude that Marcus hasn’t given us sufficient evidence to believe his ideas will be practically effective. After all, the very systems he seeks to change are now, it seems, powerful enough to blunt, if not absorb, any challenges.


To his credit, Marcus nonetheless urges us to remain hopeful that gradual, incremental progress is possible and can meaningfully challenge broken systems and fix outdated policies. Take his proposal for “layered” oversight. He’s absolutely right in calling for clear and effective policies to address potential problems with AI models both before and after they are widely disseminated and used. Given this goal, he’s also right that ongoing, high-quality auditing must play a crucial role in governance. But what about the details? Marcus wants to see something like the FDA’s preemptive approach to regulating new medical devices and medications—in other words, he’s in favor of a licensing requirement for AI. He also reminds us that the FDA can conduct audits of drug companies after their products are commercialized. In short, he envisions a government that constantly has our backs.


Unfortunately, the FDA isn’t a good point of reference. In today’s political climate, there’s no way a government agency will secure the funding needed to hire the staff that could meet the needs Marcus identifies. That said, it’s not “go big or go home.” There are signs that quality auditing practices and corresponding legal requirements may emerge over time. These might start with local initiatives, like New York City’s pioneering law that requires automated employment decision tools to be audited for bias. The novel legislation has many problems; crucially, however, it could be the start of something much better. Alternatively, the new EU AI Act offers a helpful model of governance: it prioritizes oversight of the highest-risk systems first and implements targeted, meaningful government oversight both before and after products enter the market.


Of course, whether we get there—or anywhere else—will depend on political will. The importance of political will speaks to the critic’s dilemma. Marcus’s expert voice adds authority to the ever-growing criticisms of Silicon Valley’s power. The tech criticism for which he stands has come a long way, to the point of institutionalization—but a vast divide still separates incisive critique from tangible, power-challenging reform.

LARB Contributor

Evan Selinger is a professor of philosophy at Rochester Institute of Technology.
