AUGUST 2, 2020
THERE IS THE idea of a physical human to human encounter.
As a being together.
The image is one of a shared experience of time — a time constituted by the act of committing to one another, to an encounter.
Of inhabiting, together, a space where bodies meet, where talking and laughing and crying is a haptic experience. Where one breathes the same air, smells the same smells.
An experience the body can remember — sensorially — long after.
“Reaching out and touching. Shared surfaces. Breathing, talking, anything really.”
Can this kind of encounter happen through machines or machine interfaces? Zoom, Facebook, Google, Twitter, LinkedIn, Skype, Microsoft Teams, and many more.
Can it happen with a machine?
Traditionally, the answer to both questions is no.
No, it cannot really happen by way of a machine interface because too much is lost.
And no, one cannot have a true encounter with machines.
In times of COVID-19, we spend more of our life online — in networks — than ever before.
What is the effect of this life lived on screen on what it is to be human?
We have more Zoom meetings, surf longer on Instagram, spend more time on Facebook and Twitter than ever before.
What is the transformation of the human brought about by life lived on screens — and how to bring this transformation into focus?
As a site of philosophical change — and as an opportunity for philosophers, artists, and technologists to come together and give shape?
What are the philosophical and poetic and political stakes and opportunities of this, of our moment in time?
The migration of human activity to technological platforms began long before COVID-19.
The reference, here, is particularly to the emergence in the early 2000s of interactive, often user-generated content, and the emergence of network companies.
The classic examples here are companies like Google (which mastered microtargeting), Facebook, Twitter, Amazon, Microsoft (Skype and Teams), and now also Zoom.
This matters for two reasons.
The first is that the material — infrastructural — conditions of possibility for how we now spend much of our time have been laid long before the present: satellites, high-speed fiber-optic cables between cities and underneath the ocean, file sharing systems hosted on servers in massive computer farms, AI algorithms that work through enormous amounts of data quickly to find patterns and calculate preferences, etc.
The second reason is that the material infrastructure that makes life lived on screen possible is inseparably related to platform capitalism. Platform capitalism consists (mostly but not exclusively) of companies that make money by offering free services — such as search or posting images or messaging — but that collect and harvest user data in order to either sell it to other platform companies or, more often, to sell it to advertising companies (who then devise microtargeting strategies, that is, they deliver ads to specific audiences).
In order to generate data, these companies have been busy finding ways — long before COVID-19 — to migrate human activity online.
Or, perhaps more accurately: They have been busy creating new forms of human activity suited to life online — surfing, search, texting, sexting, browsing, FaceTiming, YouTubing, binge watching, etc.
AR and VR — especially via Facebook and Oculus — may soon be an additional element of life on screen.
Well, for most platform companies, the spread of SARS-CoV-2 and the shelter-at-home orders have been a massive boost: screen time has increased dramatically and so has their capacity to generate and mine data.
That is, COVID-19 has been a consolidation and even an expansion event for platform capitalism.
The contrast to older forms of capitalism, especially to industrial manufacturing, couldn’t be sharper.
The question thus emerges whether or not we are currently seeing a powerful acceleration of a shift from earlier forms of capitalism toward a new, still-nascent form called platform capitalism.
A shift from a mode of production focused on the industrial production of goods by labor to another one that is about users, data, and AI?
What are the philosophical, poetic, and political dimensions of this shift?
In my observation, platform companies have made dominant a form of relationality — networks — that runs diagonal to the usual, place-based socialities of the nation (usually framed in terms of belonging and non-belonging, inclusion and exclusion of a people imagined in territorial and ethnic or racial terms).
In fact, I think it is no exaggeration to argue that networks have given rise to a new structure and experience of reality that is radically different from — and even incommensurable with — the structure and experience of reality that defined societies.
I offer a simple juxtaposition to illustrate my point.
Societies, usually, have three main features.
First, they are organized hierarchically and vertically. That is, they typically have a few powerful individuals at the top, while the vast majority of individuals assemble at the bottom — accommodating, in between, an often vast diversity of opinions and points of view.
Second, they are organized territorially: a society is contained by national borders and understood as a place-based unit.
Third, societies are usually held together by a national sentiment and, most importantly, by a national communication or media system. The form this media system almost always takes is mass communication, where the few communicate to the many. What they communicate is information — information people may vehemently disagree about, but the baseline of this disagreement is that people agree about the things that they disagree about. Mass communication assures that people have a shared sense of reality.
Networks defy all three of those features.
First, if societies are hierarchical and vertical, then networks are flat and horizontal: networks tend to be self-assemblies of people with similar views and inclinations.
Second, while societies are contained by national territories, networks tend to be global and cut across national boundaries: another way of saying this is that while societies are place-specific units, networks are non-place-specific units.
And third, if in society the few communicate with the many and what they communicate is information, then in networks the many communicate directly — unfiltered — with the many, and what they communicate is not information but affective (emotional) intensity.
It strikes me as uncontroversial that today more and more humans live in networks — and that networks, ultimately, defy the logic of society.
Indeed, the rise of networks has created a situation in which, counter to what the moderns thought, society and the social are not timeless ontological categories that define the human.
On the contrary, they are recent — and transitory — concepts that have no universal validity for all of humanity or all of human history.
Of course, societas is an ancient concept. However, up until the late 18th century, a societas was a legal and not a national or territorial concept; it referred to those who held legal rights vis-à-vis the monarch.
Things only changed in the years predating the French Revolution when the argument emerged that the people — and not the aristocrats and the grand bourgeoisie who held legal rights vis-à-vis the king — should be the society constitutive of the political entity called France.
The early nation-states, which emerged in the context of the first Industrial Revolution and at a time when several cholera epidemics ravaged Europe, found themselves confronted with the need to know their “societies,” to know how many people lived on their territory, how many were born, how many died, how many got sick and of what; they had to know how many married and how many divorced.
As political existence and the biological vitality of the national society were understood to be connected, states began to conduct massive surveys to understand how they could reform and advance their societies.
Over time — between the 1830s and the 1890s — this gave rise to what one could call the logic of the social: the idea that the truth about humans is that they are born in societies and that society will shape them and even determine them. The truth about humans is that they are social, in the sense of societal beings: tell me in which segment you were born, and I will tell you whom you are likely to marry, how many kids you will have, what your job will be, what you are likely to die of.
The social was “discovered” as the true ontological ground of the human.
To this day, most normative theories of the human — call them anthropology: from Marx via the Frankfurt School to Pierre Bourdieu — are based on the idea that society is the true ontological ground of the human.
All our modern political institutions are based on society.
If it is true that networks defy the logic of society, then the social sciences, simply because they take the social for granted as the true logic of the human, will fail to bring the human into view.
What we need, then, is a shift from social anthropology (an anthropology grounded in the concept of the social) to a network anthropology: a multifaceted study of how networks give rise to humans.
The difference between networks and societies — which appears to map onto the difference between platform and industrial capitalism — is related to the changing relation between humans and machines brought about by recent advances in AI, specifically in machine learning.
One can say that machine learning technologies are beginning to liberate machines from the narrow industrial concept of what a machine is — and that this liberation may have far-reaching consequences for what it means to have an encounter.
Traditionally, there were unbridgeable differences between humans and machines.
Partly because humans have intelligence — reason — while machines are reducible to mechanism.
Partly because machines have no life, no quality of their own. They are reducible to the engineers who invented them and hence mere tools.
The implication, often, is that there is no will, no interference, no freedom, no opening.
But machine learning and neurotechnology make us reconsider these boundaries between organisms and machines, between humans and mechanisms.
First, the success of artificial neural nets — or the basic continuity between neural and mechanical processes — suggests that the distinction between the natural and the artificial may perhaps matter much less than we thought.
Second, the emergence of deep learning architectures has led to machines with a mind of their own: they have an agency that is not reducible to the intent of — or the program written by — the engineer.
The exemplary reference here is a 2016 game of Go, played by a deep learning system named AlphaGo (built by DeepMind, a London-based, Google-owned AI company) against Lee Sedol, an 18-time world champion. Toward the end of Game Two in a best-of-five series, AlphaGo opted for a move — move 37 — that was highly unusual.
DeepMind later announced that AlphaGo “had calculated the odds that an expert human player would have made the same move at 1 in 10,000.”
It played the move anyway: as if it judged that a nonhuman move would be better in this case.
Fan Hui, the three-time European Go champion, remarked: “It’s not a human move. So beautiful. So beautiful.”
Wired wrote shortly after the game was over: “Move 37 showed that AlphaGo wasn’t just regurgitating years of programming or cranking through a brute-force predictive algorithm. It was the moment AlphaGo proved its […] ability to play a beautiful game not just like a person but in a way no person could.”
Traditionally, a program that doesn’t conform to the intentionality of the engineer was considered faulty. However, contemporary machine learning systems are built to defy — to exceed — the mind of the engineer: it is expected that the machine brings something to a game, a conversation, a question that the engineers did not and could not possibly provide it with (something nonhuman).
These developments — one could call them the liberation of machines from the human or at least from the concept of the machine that up until recently defined the human imagination of what a machine could be — are related to the rise of networks.
They are related insofar as in networks, relationality — once a human to human prerogative — may no longer be limited to human to human encounters.
What effects will the liberation of machines — which is constitutive of networks as much as of machines — have on what it is to be human?
Or on what it is to be in relation?
As I see it, what is needed now are philosophical investigations of the new technology that is being built.
Not studies in terms of society, as this would ultimately imply holding on to the old concept of the human as social being.
Nor studies in terms of the human, if that means the defense of the human against the machine.
But rather, collaborative studies — conducted jointly by philosophers and artists in collaboration with technologists — of how networks and machine learning are challenging old and enabling new, yet to be explored concepts of living together.
All by itself, COVID-19 has little to do with these most far-reaching philosophical transformations brought about by networks and by machine learning.
And yet, COVID-19 brings this transformation into view with sharper clarity than ever before and has created circumstances under which this new and different world may arrive faster than we anticipated.
What will it mean to be together with … a machine?
To address this question, we may need a whole new vocabulary of encounters and relations.
 Cade Metz, “What the AI Behind AlphaGo Can Teach Us About Being Human,” Wired, May 19, 2016, https://www.wired.com/2016/05/google-alpha-go-ai.
Image Credit: Stills from Lauren Lee McCarthy, Later Date, 2020
Tobias Rees is the founding Director of the Berggruen Institute’s Transformations of the Human Program. He also serves as Reid Hoffman Professor of Humanities at the New School for Social Research and is a Fellow of the Canadian Institute for Advanced Research.