WE ARE PHOTOGRAPHING everything; it is an unignorable reality. I am not simply referring to the glut of selfies and food porn, the overt symptoms of our compulsive disorder, but to the much more subtle ways in which the act of taking a picture has become a deeply ingrained element of everyday experience. The camera, usually housed in our phones, has become a third eye, its constant presence in our pockets changing the way we move through the world. We reach for our devices to take a photo as instinctively as we scratch an itch or blow a nose. We pursue pictures, letting our lens be our guide, using photogenicity as the measure of an experience’s value. In On Photography (1977), Susan Sontag wrote that “[t]he omnipresence of cameras persuasively suggests that time consists of interesting events, events worth photographing,” but that was over 40 years ago. If the omnipresence of cameras then suggested that time consists of Kodak Moments, then the omnipresence of cameras now suggests that time is, in fact, more like one big Kodak Moment, that life itself is “worth photographing.” The psychology that pushes us to make pictures, a psychology that Sontag laid bare with an incisiveness that should have made us collectively blush, seems to have reached new levels of pathology in 2018.
There is a growing consensus around this sentiment among photographers and technologists alike: although photography, and with it the ubiquity of images, has been on the rise since the daguerreotype, this — the digital-mobile-social era — is different. There is also a growing consensus that what is happening, whatever it is exactly, is bad. Numerous critics have recently, and ominously, discussed the medium’s explosive popularization. In February 2018, The New York Times published “Welcome to the Post-Text Future,” a series of multimedia essays about how photography, combined with audio and video, is pushing the written word toward extinction. The not-so-subtle subtext of the piece is that we are speeding toward a cliff that we do not quite see coming (probably because we are too busy taking pictures).
These apocalyptic attitudes are, on the one hand, the usual melodramatics that accompany large-scale change, yet our relationship with images has undeniably entered a new phase. I believe that we have, somewhat recently, crossed an invisible threshold, where the medium has gone from being primarily a means of recording reality to a force that informs it. More specifically: We used to go places and photograph what happens; now what happens is a function of what we photograph.
Another way of thinking about it: Photography’s role in culture has gone from curational to creational. In the early 1980s, legendary photographer, curator, and critic John Szarkowski compared photography to “the act of pointing,” but pointing is a reaction to something that is already there. Szarkowski conceived of the camera as a tool of selection, imagining the photographer as one who chooses which moments in a finite world deserve to be saved, and then chooses which of those extractions deserve to be shared. It is a powerfully astute understanding of what photography is, but one that time has rendered antiquated, because it assumes that reality unfolds all on its own, that things will play out the same way whether they are photographed or not. That is, Szarkowski saw a separateness between life and the way we capture it that simply no longer exists.
This development in the medium’s evolution is, of course, due to many factors, and it would be impossible to trace a definitive and coherent journey of cause and effect. That said, the integration of photography into social networks, such as Instagram and Facebook, has undoubtedly blown us into uncharted waters. Social networks are, at their most basic, arenas of communication, and photography is, at its most basic, an extremely efficient form of communication. Anyone with the gift of sight can understand a photograph — there are no educational prerequisites. A photograph never need be translated into another language. Photographs especially seem to proliferate as attention spans plummet and media metabolisms skyrocket: the average literate person can read about 200 words a minute but can process an image in about 13 milliseconds. If a picture is worth 1,000 words, then pictures are saving our species a great deal of time.
Images present a uniquely low barrier to entry for consumers of information. A bit later in On Photography, Sontag notes that populist-leaning newspapers are often photo-oriented, like the New York Daily News, which was long nicknamed “New York’s Picture Paper.” (Another example: Germany’s largest populist paper is called Bild, which literally translates to “Picture.”) But the accessibility, universal readability, and speed of photos are only part of what makes photography and social networks so well suited to each other. Maybe more important is the fact that images, even in the age of Photoshop, are truthy. They still serve as relatively trustworthy evidence that whatever scene they portray actually occurred. When a user posts a photo of a vacation on a platform, its veracity is rarely questioned. It is assumed that if there is an ocean in the scene, then the photographer was on a beach, and if the subject of the image is smiling, then she was having a good time, and so on. Photographs have the power to shorten the leap of faith one must take in order to believe that the curated narratives presented on social media, narratives that are essentially based-on-a-true-story fictions, are legitimate and weighty.
The ability to publish these trustworthy narratives quickly and easily has allowed social networks to function less as peer-to-peer communication services and more as platforms of independent broadcasting. An Instagram feed is a lot like a personal television channel. A user programs content for their audience, adjusting this programming based on ratings (see: likes and comments). Instagram’s new Stories feature is built so that one can watch a sequence for a bit, or move on quickly with a click, mimicking the age-old practice of channel flipping. Advertisements are even squeezed in between the blocks of entertainment. This television-like model, in which a profile exists as a container that must be populated with content, turns every moment of the day into potentially publishable material.
Suddenly, life is not meant to be lived, but authored. Experience itself becomes narrativized, something to be packaged into a simple and readable form; if it is going to be effective, it must be audience-friendly, easily comprehended by someone viewing it on a small screen amid hundreds of other narratives just like it. Of course, making meaning of life, constructing a coherent story to make sense of what occurs, is not a new exercise, but it used to be done retroactively, performed with some distance from the moment. Now it is done on the fly. The future no longer becomes the present, mysteriously unfolding before us, happening to us. The present is manufactured. Our realities are loosely scripted productions.
We do not just use images to create temporal narratives; we build identities with them, too. With every photograph we post on social media, we construct a consumable and sharable idea of ourselves. Each image adds a data point to this abstraction, contributing detail and nuance to our avatar. The avatar will, in some way, always feel true to who we are because it is true to who we want to be, as we are its maker. This feeling allows a user’s sense of self to migrate gradually into the digital space, safely coming to rest in a character of her, his, or their design, a vessel of identity to control. Virtual reality may still be in its infancy, but social networks are serving as a kind of warm-up to Silicon Valley’s holy grail technology, with photographs successfully bridging the gap between the real and virtual, or maybe more accurately, blurring the line between the two.
A fundamental goal of the image is, after all, to copy reality, and in turn, a fundamental goal of humanity has been to constantly reaffirm that reality is uncopyable. Alfred Korzybski said, “The map is not the territory.” René Magritte said, “This is not a pipe.” Iconoclasts of every generation have found their own way of putting it, of defending us from what Magritte called “The Treachery of Images.” But while watching a New York Knicks game a few months ago, I couldn’t help but think that, in this undefined and ill-defined post-postmodern era, those defenses had officially failed. As players came out of the locker room, around 20 kids were lined up to high-five the home team as they entered the court. It was a classic moment of hero worship, the sort that has been going on since the beginning of sports fandom, except that each kid had one hand out, extended for the players to tap as they jogged by, and one hand in front of their face, holding a smartphone. Their eyes were focused on their screens; their experiences mediated through them. It was clear that, for these kids, there was no difference between their lives and the story of their lives. The event and the document of the event, the object and the image, were one and the same.
This breakdown between experience and representation might sound nightmarish, indicative of the fact that we have entered some realm akin to Jean Baudrillard’s hyperreality, so saturated in representation that we have lost sight of what was being represented in the first place. However, I don’t fear the future. Our species has always been easily fooled by fictions, especially when they are first introduced. Legend has it that the early silent film L’Arrivée d’un train en gare de La Ciotat made audience members in a Parisian theater jump in panic when a train on the screen appeared to be heading straight toward them. Facebook launched just 14 years ago, Instagram just eight years ago, and Snapchat just seven years ago. These technologies became deeply woven into the fabric of our lives before we were savvy enough to recognize their limits, where they fall short of truth. The question is whether we need more time to adjust, to learn how to parse out the difference between fact and fiction in these new digital spaces, or whether we have simply reached a point when we actually prefer that they remain murky. In other words, are we lost and on our way to being found? Or are we lost and content to stay that way? Whichever scenario turns out to be right, it seems our current state is the same: we’re lost.