Digital Oligarchy

By Robert Gorwa
May 6, 2022


Silicon Values: The Future of Free Speech Under Surveillance Capitalism by Jillian C. York

ON FEBRUARY 28 of this year, Nick Clegg, the former leader of the UK Liberal Democrats and the current head of global affairs for Facebook (now part of Meta Platforms, Inc.), tweeted that, following numerous “requests from a number of Governments and the EU,” Facebook would block access to major pages run by Russian state-backed media outlets such as RT. In early March, when a miles-long convoy of Russian military vehicles was first snaking toward the Ukrainian capital of Kyiv, I tried to view one of the channels operated by the controversial and online-savvy Russian media organization on Instagram. After a quick, ghostly glimpse of the familiar photo-grid layout, I was bumped to a plain white static page overlaid with these words: “Restricted Profile: this Profile isn’t available in your region.”

The policy marked an escalation from a posture announced by Facebook a few days prior: that Russian state media would no longer be able to run advertisements on its services; that content from pages like RT or Sputnik would be downranked algorithmically in user timelines and newsfeeds; and that those posts, if viewed or shared, would feature interstitial information boxes and pop-ups indicating that the content was “from a publisher that Facebook believes may be partially or wholly under the editorial control of the Russian government.” Twitter had publicized similar measures on the same day, and the next morning YouTube began to block access to RT, a major content producer boasting of having served more than “10 billion video views” in its channel description.

While the background context — the invasion of Ukraine that prompted these decisions from major tech firms — may indeed be extraordinary, the actions themselves offer a remarkably standard picture of the policy levers that technology platforms have been pulling in recent years. These companies face the Sisyphean task of policing the digital public sphere while walking a tightrope woven from a complex mix of public expectation, corporate profit motives, and government pressure. The events of late February and early March made publicly visible — perhaps more so than ever before — many of the hallmark themes that observers of platform content moderation have been discussing for years. Five stand out:

1. Platform decision-making is experienced as the likely result of pressure from powerful actors. (Or as Clegg put it, the result of “numerous requests from governments.”)


2. Little to no transparency or justification is provided to users following a decision like this one. Here, the “profile not available” page made no mention of the company’s reasoning for making RT inaccessible, and did not link to any decision-making documents or news release.


3. From the user perspective, ambiguity shrouds the scope of platform actions. In this case, I was unable to discern whether I couldn’t see RT on Instagram because of my location in Europe, because it had been removed entirely, or because of some other kind of technical error.


4. Quasi-coordinated actions across platform companies are typically spurred by a first mover. In this case, Twitter moved first, followed quickly by Facebook, YouTube, and others.


5. Companies are making decisions unilaterally about content monetization. In this case, platforms opted to curb RT’s ability to place its advertisements.


Platform companies are making increasingly important political — and even geopolitical — decisions that affect billions of people around the world. This was made patently obvious during a pandemic that moved even more social, economic, and political interaction online. Astute observers, however, had already been writing about the political role played by technology companies and their processes of content moderation virtually since their inception in the mid-2000s.

Jillian York is one of these observers. She is also an activist and writer who was engaging in these debates years before they were meaningfully on the popular agenda. I myself became interested in researching and writing about how companies like Facebook or Twitter created and enforced rules for online speech after stumbling on some of her writing in 2015 and 2016. Her book, Silicon Values: The Future of Free Speech Under Surveillance Capitalism, provides a chronicle of her years of work with the Electronic Frontier Foundation (EFF), the digital rights organization based in San Francisco. It presents a general audience with an accessible introduction to content moderation in the context of free expression — and a fascinating look at the historical evolution of what we now call commercial content moderation (to use the scholar Sarah Roberts’s term for the policies and practices that constitute social media content administration at scale).

The need to conduct content moderation had companies on the back foot, as it were, from the start. As Silicon Values shows, and as academics such as Tarleton Gillespie and Kate Klonick have detailed comprehensively, firms developed their rules and processes without drawing on much precedent. An early MySpace employee, speaking to a conference of practitioners and experts, described creating the network’s rules as being part of this “revolutionary thing, like nobody’s ever done this before.” York recounts the employee holding up a notebook and stating, “This is my bible from when we had our meetings … when we would randomly come up with [rules] while we were watching porn.”

MySpace never had to deal with governments breathing down its neck to remove hate speech, disinformation, and other politically and socially unsavory content in the way that the larger online intermediaries do now. Did companies ever envision that their lean “move fast and break things” start-ups would eventually necessitate the creation of a sprawling bureaucracy to administer, police, and service their core value proposition? Probably not: the early days were largely about ad hoc decision-making, backroom conversations, and desperate emails aimed at getting someone in a company to pay attention.

In the late 2000s and early 2010s, the public discourse was transfixed by the ostensibly democratizing elements of social media, especially as seen in the context of anti-government protests in countries like Tunisia and Egypt. This was part and parcel of official State Department policy, an “internet freedom” agenda that created a nexus of interests between Silicon Valley and the DC Beltway. As York writes, the growing political role of companies like Twitter also created an environment for high-stakes content moderation decisions that featured informal pressure, strategic networking, and coalition-building by civil society members. York herself sought to mobilize contacts within companies to reverse misguided decisions, for example in the case of Egyptian blogger Wael Abbas, whose widely followed YouTube account documenting police brutality, protests, and other important political topics was removed in November 2007. Sometimes governments would get in the mix, putting pressure on firms to help fulfill their broader strategic policy goals in a particular region (York recounts that the State Department asked Google to bring Abbas’s account back, which may or may not have been a decisive factor in YouTube’s eventual decision to restore his channel).

Facing growing pressure from all corners, firms began deploying technological solutions to their emerging political and jurisdictional dilemmas. Drawing on interviews with a few key figures who drove company content policy in the early days — such as Google’s deputy general counsel, future Obama advisor Nicole Wong — York reconstructs what was likely the first time a company decided to geo-block content, enforcing a policy decision only in a certain region. Whereas at the start of the Russian invasion of Ukraine it was me trying to access RT from Germany, the foundations for this practice were laid in 2007, when Wong, facing pressure from the Turkish government amid domestic political protests and demonstrations, “made the decision to use IP blocking to make the videos [that had been ordered down by a Turkish court] inaccessible to Turkish users alone.”

While the book offers an analysis of content moderation before the 2016 US election, in the tradition of vital books like Rebecca MacKinnon’s Consent of the Networked, it also provides an indication of how much has changed in the past few years. In 2007, major decision-makers within companies like Facebook or Google weren’t publicly announcing or justifying rule changes to journalists and experts via a platform like Twitter, as Clegg and his staff now do. It was only in 2018 that Facebook fully publicized its “Community Standards,” the rules of the road that detailed what content it considered worthy of being banned, such as violent extremism, sexual content, hate speech, or nudity. (As I wrote in the LARB following the change, “we now know exactly what Facebook considers sexual activity,” including the “stimulation of naked human nipples,” the presence of erections, and the “use of sex toys, even if above or under clothing.”)

Today, platform companies engage more actively and openly with policymakers. A couple of academics have been given privileged access to observe, and write about, how firms develop their content policies. Materials from some of the meetings in which Facebook mulls over specific rule changes can now be found online (the last publicized session was in summer 2021 and featured a discussion of Facebook’s “attacks on public figures” policy), and many, if not most, platforms have significantly expanded their previously limited transparency reporting measures.

Getting this information directly, rather than from leaks to journalists and off-the-record statements from platform employees, is probably a positive development. Nevertheless, as York details, there is a strong argument to be made that the current content moderation status quo is inherently problematic, even if we’re trending in the right direction in terms of due process and transparency, and even if the specific company decisions we are looking at do indeed seem to be the “right” ones. (Something like the RT takedown seems proportionate given the scale of symbolic sanctioning activity being undertaken by other countries and companies and, according to some observers, is long overdue anyway.)

That said, the book paints a bleak picture of what is fundamentally going on:

Imagine a society where the laws are created behind closed doors, without public input or approval. This is a society where at any time the laws are subject to change, or be replaced with new ones altogether. There is no democratic participation, no transparency, and no due process — and the laws are enforced by minimally trained workers in faraway locales who often lack awareness of local conditions or, increasingly, by trained machines.


York is right to foreground concerns around due process, democratic participation, and global labor rights, which have largely remained under-addressed in the policy conversation around content moderation. The book is important in adding to the pile of steadily mounting evidence that the practice of setting online rules is deeply political, and quite often where the rubber meets the road in terms of platforms’ actual social and cultural impact.

Does platform content moderation reproduce existing power hierarchies on a global scale? York’s argument is that it does. Drawing on her years of experience, she describes how governments are constantly jockeying to get the companies to enforce rules that will benefit their policy agendas. Facebook’s Israel office, in Tel Aviv, is in charge of the Palestinian territories and, unsurprisingly, has problematic connections to the Israeli authorities. The EU has for more than a decade been pressuring firms to rapidly process requests from government-linked “trusted flaggers,” creating a parallel reporting process for what they consider to be violent extremism or child abuse imagery. And US empire seems to hold the eternal trump card, with lower-income countries having a tough time getting what they want in a system of patronage and proximity to power. “Apparently, a phone call from the White House is worth more than a legal document from Islamabad,” York observes.

Beyond broader issues of strategy and geopolitics, platforms are arguably socially and culturally conservative. When it comes to policing nudity, and content perceived to be graphic (for example, depictions of breastfeeding, or of men kissing), York describes how “pressure from conservative governments” often submerges the types of radical, emancipatory expression that some people had hoped platforms could provide, instead reinforcing heteronormative, gendered notions of what should be considered sexual or unsafe. This status quo pressure is even more obvious when we move down the metaphorical intersectional ladder to examine how the biggest social networks moderate content relating to race and class.

These more foundational arguments underpinning Silicon Values cast doubt not only on industry leaders’ efforts to inch toward marginal change via “Oversight Boards” and self-regulatory theater, but also, more concerningly, on the major policy efforts being spearheaded by democratically elected national governments around the world. The European Union is currently in the process of deploying what has been promoted as the most sweeping regulatory overhaul of digital platform power to date. Its Digital Services Act (DSA) and Digital Markets Act will change the legal liability landscape for companies operating in Europe, differentiating responsibilities by size (with “very large” platform firms being subject to stricter standards) and instituting new mandatory reporting practices. The DSA seeks to improve due process and transparency around decision-making, and includes some potentially promising features, like provisions to facilitate researcher access to salient platform data. Notwithstanding this, it also largely formalizes and legalizes the systems of content moderation as currently deployed by the largest platform companies, and certainly does not radically overturn what York and other advocates and scholars have indicated are more existential underlying issues, including privatized decision-making, power imbalances, and labor conditions.

While current policy trends are surely a positive, if incremental, step in the right direction, even a “better regulated” system of platform governance remains a highly material corporate process, open to capture by powerful, well-funded, and well-connected interest groups, who will undoubtedly continue to exert pressure on intermediaries when tensions are high. It will also continue to be shaped by the inherently messy sociopolitical and historical realities of our deeply divided and unequal world. The fundamental democratic injustice of platform oligarchy is unlikely to change, with power over our increasingly important spaces of work (Zoom), play (TikTok), and interaction (WhatsApp) concentrated in a small number of hands.

York’s book, while clearly important, does less to help us understand what exactly can be done, or rather, what can be done that lies in the realm of the currently politically possible. It is evident that the wave of enthusiasm about the “user-generated content”–driven Web 2.0, which probably reached its zenith during the Arab Spring, has come crashing back to earth. The current hype around new developments like Web3, championed by leading venture capitalists and a growing number of coders for its ostensibly decentralized promise of user empowerment, offers one vision: we abandon the current broken model and build something new from the ground up. The problem is that, in reality, it probably amounts to the existing model in new clothes, with platforms, centralization, and new gatekeepers all the way down. Additionally, there is no reason why existing content moderation issues wouldn’t simply be replicated over time on sufficiently large Web3 platforms.

If there is something to be excited about, however, it might be the new wave of radical, explicitly utopian scholarship seeking to imagine alternative, non-neoliberal digital futures. Examples include James Muldoon’s recent Platform Socialism and the forthcoming books by Ruha Benjamin and Ben Tarnoff, which promise to engage with an actually transformative vision of a platform ecosystem that isn’t predicated upon data, attention, labor, and carbon extraction. In a moment marked by war, a global health emergency, and an ever-looming climate catastrophe, what does it look like to foreground social, economic, and environmental justice in a critique not only of digital capitalism, but of the policy tools being developed to manage it? It will be a long journey, but one thing is clear: we will likely ultimately need to move away from “silicon values” and Silicon Valley. Can we build a new future while keeping a careful eye on the politics and power underpinning the sociotechnical systems we use to get there?

¤


Robert Gorwa is a postdoctoral research fellow at the WZB Berlin Social Science Center.

