In 2019 and again in early 2020, just before the onset of the pandemic, Bendiksen traveled to the small city of Veles on the Vardar River in North Macedonia. The trips were inspired by a series of press reports, beginning in the late stages of the 2016 US presidential campaign, which revealed that Veles was a major source of the fake news stories flooding Facebook and other social media sites. Scores of young people in the impoverished city had discovered that they could make a decent living by fabricating and circulating stories celebrating Donald Trump and denigrating Hillary Clinton.
In his essay, Bendiksen reports on meeting several of the most successful scammers, some of whom were working out of small offices with teams of colleagues. None of them expressed any interest in, much less regret about, the social ramifications of their work. To them, the fractious political situation in the United States was just a convenient opportunity to attract an online audience and earn ad revenues. “I support Trump for a few reasons,” a young man named Alex explained to Bendiksen. “One, he makes money. Two, he makes Americans happy. Third, he makes money quickly.”
Any attempt to curb fake news is doomed to fail, Alex went on to say. “There are too many haters on the net, and too many fake news sites, and fake news is the new truth. The main thing is that we believe in freedom of speech.”
Bendiksen’s book presents dozens of haunting images of the city and its inhabitants. A man leans out of a window of a large apartment block, a pair of satellite dishes hanging nearby. A woman sits on an unmade bed, gazing into the screen of a laptop. Two lovers embrace under a highway overpass. Soldiers stand smoking behind a barbed-wire fence. A bear wanders past a graveyard at the edge of town. Another bear drinks from a dirty stream.
Grainy and dimly lit, the images are eerie, poignant, and beautiful. They’re also fake.
Bendiksen did make two trips to Veles, but he didn’t photograph any people. He shot pictures of empty buildings and deserted cityscapes, and when he returned home to Norway he used video-game-production software to transform the images into three-dimensional renderings. He then downloaded digitized, 3D images of models from the web and placed them inside the scenes, carefully adjusting their poses, clothing, and lighting to make everything look as realistic as possible. Finally, he transformed the renderings back into two-dimensional images. The “photographs” that appear in the book are, essentially, screen grabs from a video game that doesn’t exist.
Bendiksen’s introductory essay, too, is a fake. He didn’t write any of it. Instead, he fed existing news stories about Veles and social media fraud into GPT-2, a software program that employs machine-learning algorithms to generate fake text, and used a variety of prompts to get the program to spit out made-up stories on the topics he wanted to cover. He pieced bits of the stories together, including the entirely fictional interview with “Alex,” to produce the essay. “My work is about telling the truth”: those words aren’t Bendiksen’s; they’re the words of a computer.
When The Book of Veles appeared in July, Bendiksen assumed that readers would see through his ruse. He went out of his way to circulate the book and its images among professional photographers, confident that they would quickly realize the photos were fabrications. He was wrong. His work earned praise but not suspicion. Anxious to come clean, he created a fake Twitter account under the name “Chloe Miskin” and, in early September, used it to raise doubts about his photographs. That didn’t work, either. Finally, in an interview posted on the website of his agency, Magnum Photos, in late September, he confessed to the deception. He also explained his motivation:
I started to ask myself the question — how long will it take before we start seeing “documentary photojournalism” that has no other basis in reality than the photographer’s fantasy and a powerful computer graphics card? Will we be able to tell the difference? How hard is it to do? How skilled will our own community of photographers and editors be in sniffing out what are deep fakes and what is real? I was so frightened by what the answers would be that I decided to try to do this myself.
When you read The Book of Veles now, knowing it’s a fake, the artifice jumps out at you. The facial features of the people in the photos look slightly plastic, and many of their poses feel unnatural, as if their bones were flexible rather than rigid. Some of the pictures, such as one of a girl floating across an otherwise abandoned public swimming pool on an inflatable unicorn in winter, come off as ludicrous. Bendiksen’s essay reads like a cut-and-paste job, marked by clumsy phrasing and ungainly transitions. And, in the book’s concluding acknowledgments section, the author slyly lists the image-manipulation tools he used. “What fun toys,” he comments.
The fact that no one spotted the forgery underscores just how easy it is to fool people, even experts. We humans want to believe whatever’s presented to our senses through the media, particularly when it takes the form of images. “Photographs furnish evidence,” wrote Susan Sontag in her 1977 classic On Photography. “A photograph passes for incontrovertible proof that a given thing happened. The picture may distort; but there is always a presumption that something exists, or did exist, which is like what’s in the picture.” Take away that presumption, and our ideas about existence get tenuous fast.
“Images are powerful,” writes Noah Giansiracusa, a Bentley University mathematics professor, in How Algorithms Create and Prevent Fake News: “altering images is a method to alter reality and history.”
As Giansiracusa shows, image manipulation has been around as long as images have. A widely circulated photograph of Abraham Lincoln taken during the presidential campaign of 1860 was subtly altered by the photographer, Mathew Brady, to make the candidate appear more attractive. Brady enlarged Lincoln’s shirt collar, for instance, to hide his bony neck. In a portrait made to memorialize Lincoln after his assassination, the artist Thomas Hicks, working from earlier pictures, transposed Lincoln’s head onto a more muscular man’s body to make the fallen president look heroic. The body Hicks used was that of the pro-slavery zealot John C. Calhoun.
In 1950, in the midst of the Red Scare, Senator Joseph McCarthy’s staff circulated a fabricated composite image purporting to show Senator Millard Tydings, a fierce critic of McCarthy’s, in close conversation with the leader of the American Communist Party, Earl Browder. Historians suspect that the photo contributed to Tydings’s defeat in his reelection campaign that year.
Opponents of John Kerry’s 2004 presidential bid pulled a similar stunt. They made a composite photo showing Kerry standing beside the controversial actress Jane Fonda at a rally opposing the Vietnam War. The photo was widely accepted as real, even earning a mention in a New York Times article about Kerry’s youthful activism. Early this year, after Kerry joined the Biden administration, the same picture began circulating again, this time on Facebook.
Photos don’t need to be doctored to mislead. They need only be placed into a false context. During the Black Lives Matter protests in 2020, the Trump reelection campaign posted an ad featuring two side-by-side pictures. One showed the president meeting with police officials, with the caption “Public Safety.” The other showed police being attacked by a mob, with the caption “Chaos & Violence.” The timing of the ad implied that the latter photo was taken at a BLM protest, but it was actually taken at a pro-democracy rally in Ukraine in 2014. The out-of-context ploy works equally well with videos. Giansiracusa points to a horrifying clip of a person being burned alive that’s been used, inaccurately, to back up claims of atrocities in at least four countries.
Such examples of images that are altered or otherwise falsified using traditional techniques are now referred to as shallow fakes. That’s to distinguish them from the deep fakes being generated by computers using machine learning and other artificial-intelligence techniques. The process of creating a deep fake is complicated, to say the least. In simple terms, it begins by teaching a computer to identify an object in a digitized image — a human face, say — by showing it many examples of that object. (The computer doesn’t actually learn what a face looks like, of course; it learns to recognize a pattern of pixels — a statistical pattern — that has a high probability of being a face.) The computer can then alter the appearance of a face, by changing the pattern of pixels, while ensuring that it will still be recognized as a face. It can also replace one person’s facial features and expressions with another person’s. And it can generate an image of an entirely new face — a face that has never existed — by producing a new array of pixels that fits the statistical model.
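That last step — generating a new face by “producing a new array of pixels that fits the statistical model” — can be illustrated with a deliberately crude sketch. Real systems learn their models with deep neural networks trained on millions of photographs; the tiny Gaussian pixel model below (random data, toy 4×4 “images”) is a hypothetical stand-in that shows only the bare idea of sampling a novel image from learned pixel statistics:

```python
import numpy as np

# Toy "training set": each row is a flattened 4x4 grayscale image.
# (Deep-fake systems learn from millions of real face photos; here we
# just fit the mean and per-pixel spread of synthetic pixel values.)
rng = np.random.default_rng(0)
faces = rng.normal(loc=0.5, scale=0.1, size=(100, 16)).clip(0, 1)

mean = faces.mean(axis=0)   # the learned "statistical pattern" of a face
std = faces.std(axis=0)

# Generate an entirely new "face": a fresh array of pixels drawn from
# the learned distribution, one that appears nowhere in the training set.
new_face = rng.normal(mean, std).clip(0, 1).reshape(4, 4)
print(new_face.shape)
```

The sampled image is statistically typical of the training data without copying any example in it — the same property, at vastly greater sophistication, that lets a deep-fake generator invent a face that has never existed.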
Deep fakes began to appear on social media about five years ago. Transposing a celebrity’s face onto a porn star’s body proved to be the most popular early use of the technology, with the resulting photos and videos appearing widely on sites like Reddit. Giansiracusa points to a 2019 survey that found that 96 percent of the deep fakes on the net at the time were “face-swap pornography.” Since then, deep fakes have proliferated — and become much more convincing. On TikTok, a series of videos featuring a strikingly realistic, but completely fake, Tom Cruise has been watched by millions.
The high-tech forgeries are also becoming much easier to create, thanks to technical advances and a welter of free production apps. No longer restricted to titillation and entertainment, deep fakes are starting to be used in politics, crime, and espionage. In March of this year, the FBI warned that “malicious actors” would soon begin using “synthetic content” as a tool for propaganda and cyber attacks.
Machine learning is also being applied to the production of bogus text, as Bendiksen’s essay illustrates. By scanning the internet’s enormous repository of prose, a computer can learn how words fit together to create sentences and paragraphs. It can build an ever more accurate mathematical model of how language works. Again, the computer’s understanding is purely a matter of statistics — it has no sense of meaning — but that’s enough to enable it to manufacture natural-sounding writing on any topic in any style. Give the machine a simple prompt — a headline, say, or a few introductory sentences — and it will instantly output a reasonable facsimile of a human-penned article. Research cited by Giansiracusa suggests that most people can’t distinguish text written by GPT-3, the much more powerful successor to Bendiksen’s GPT-2, from the work of actual writers. And GPT-4, which will be much more powerful still, is already in development.
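The principle — prediction from word statistics, with no grasp of meaning — can be seen in miniature in a word-level Markov chain. This toy (with a one-sentence stand-in corpus) is not how GPT-2 works; the real models use neural networks trained on billions of words. But the mechanism is the same in kind: learn which words tend to follow which, then extend a prompt one probable word at a time.

```python
import random
from collections import defaultdict

# A one-line stand-in for "the internet's enormous repository of prose".
corpus = "fake news is the new truth and the net is full of fake news".split()

# Record, for each word, every word observed to follow it — a crude
# statistical model of language, with no notion of meaning.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(prompt, length=8, seed=42):
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    words = [prompt]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:      # dead end: no observed continuation
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("fake"))
```

Every word the generator emits is chosen only because it followed the previous word somewhere in the training text — fluent-sounding output from pure statistics, which is exactly the property that lets GPT-2 fabricate a plausible interview with “Alex.”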
The spread of ever more realistic deep fakes will make it even more likely that people will be taken in by fake news and other lies. The havoc of the last few years is probably just the first act of a long misinformation crisis. Eventually, though, we’ll all begin to take deep fakes for granted. We’ll come to take it as a given that we can’t believe our eyes. At that point, deep fakes will start to have a very different and perhaps even more pernicious effect. They’ll amplify not our gullibility but our skepticism. As we lose trust in the information we receive, we’ll begin, in Giansiracusa’s words, to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence — the world Sontag described — to one where our bias is to take nothing as evidence.
The question is, what happens to “the truth” — the quotation marks seem mandatory now — when all evidence is suspect?
Bendiksen borrowed the title of his book from an ancient manuscript, written on thin wooden boards bound together with leather straps, that dates back more than a thousand years. The manuscript is said to have been discovered by a soldier in a bombed-out castle in Ukraine in 1919. The soldier later gave it to a Russian scientist named Yuriy Mirolubov, who restored the boards, photographed them, and translated their words into modern Russian. The manuscript provides accounts of the god Veles, a prominent deity in Slavic mythology. Known as a shapeshifter, a magician, and a trickster, Veles would often visit the mortal world in the form of a bear, making mischief wherever he went.
The ancient Book of Veles shares something else with its namesake: it’s a fake. Historians now believe the manuscript was forged, probably by Mirolubov, sometime in the middle of the 20th century. Some members of contemporary neo-pagan cults nevertheless continue to consider the book a holy text.
In his own Book of Veles, Bendiksen intersperses passages from the old manuscript among his photographs. He generated the passages the same way he generated the essay: with GPT-2. Technically speaking, they’re deep fakes of shallow fakes, or deep shallow fakes. On one level, the passages are just another of Bendiksen’s tricks — more mischief. As he explains in his Magnum interview, “Stuff like this always gets me interested: when stories start layering up and weaving themselves together in surprising ways.” But the references to the pagan god also add an unexpected resonance to the photos. By drawing mythology into his work, Bendiksen is suggesting something profound about the nature of truth and the representation of reality.
Myths are works of art. They provide a way of understanding the world that appeals not to reason but to emotion. What’s most pleasing to our sensibilities — what’s most beautiful, in other words — is what’s true. Though it may have been shoved aside by the Enlightenment’s stress on rationality and objectivity, mythology’s subjective and fundamentally aesthetic way of defining what’s true never went away. Now, after a few centuries on the periphery, it seems to be moving back into the mainstream. Once again, truth is becoming an aesthetic construct.
Think about the growing popularity, in QAnon circles and elsewhere, of extraordinarily strange conspiracy theories. They only make sense when understood as myths. Believing that Washington politicians are vampiric pedophiles operating out of a neighborhood pizza parlor, or that world leaders like George W. Bush and Queen Elizabeth II are extraterrestrial “lizard people” bent on enslaving the human race, is no different from believing that a chaos-sowing god stalks the Earth in the form of a bear. When all the evidence presented to our senses is unreal, then strangeness becomes a criterion of truth. The more uncanny the story, the more attractive and convincing it becomes.
In creating beautiful photographs that subvert the documentary tradition of photography and call into question our assumptions about how we perceive reality, Bendiksen has opened a door onto a weird and unsettling future. “Beauty is truth,” wrote Keats, and as we all know, beauty is in the eye of the beholder.
Nicholas Carr is the author of several books on technology and culture, including The Shallows, The Glass Cage, and Utopia Is Creepy. He is a visiting professor of sociology at Williams College.