A Fireball in the Marshall Islands: How a Nuclear Test Changed the World
By Émile P. Torres
September 20, 2023
Newsreels shown around the country reported that eyewitnesses of the Hiroshima bombing described the event as “Doomsday itself,” while the voice of H. V. Kaltenborn declared on NBC that, “[f]or all we know, we have created a Frankenstein.” In The New York Times, journalist William Laurence wrote that Hiroshima and Nagasaki marked a profound turning point in human history, comparable to “the moment in the long ago when man first put fire to work for him.” The New York Herald Tribune similarly remarked that “one senses the foundation of one’s own universe trembling. […] It is as though we had put our hands upon the levers of a power too strange, too terrible, too unpredictable in all of its sudden consequences.”
Some writers, like the German Jewish philosopher Günther Anders, contended that Hiroshima would prove more significant than the birth of Christ, and hence that we should introduce a new calendar in which 1945 is set to year zero. There is human history before this event, stretching back to the dawn of civilization, and then everything that came after it, cast in the great shadow of a poisonous mushroom cloud. Writing in 1958, Anders poignantly declared that “we live in the Year 13 of the Calamity. I was born in the Year 43 before. Father, who I buried in 1938, died in the Year 7 before.”
A cursory glance at the apocalyptic language following the Hiroshima and Nagasaki bombings can easily give the impression that people linked atomic weapons with the possibility of human extinction. But this is not true: almost no one drew a connection between the catastrophic explosions of these bombs and our extinction. Hardly anyone worried that humanity itself could be extinguished by splitting the atom. Rather, atomic bombs were initially seen just as monumentally bigger hammers that individual states, in particular the United States and the Soviet Union, could use to smash each other to pieces.
Consider remarks from philosopher Bertrand Russell, who declared in an article he’d begun writing in the 72 hours between the bombings of Hiroshima and Nagasaki that “the prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.” This looks like a straightforward reference to human extinction. Yet he followed it up with an assessment of the worst-case outcome of an atomic war: “[A]ll large cities,” he wrote, “will be completely wiped out. […] Communications will be disrupted, and the world will be reduced to a number of small independent agricultural communities living on local produce, as they did in the Dark Ages.”
That isn’t extinction. It would be a horrendous disaster—the collapse of modern civilization—but it wouldn’t terminate the human species. This idea was perhaps best articulated by an anonymous army lieutenant in a 1946 issue of The Zanesville Signal, a local newspaper in Ohio. Asked by a journalist what the next major war might look like, the lieutenant replied: “I dunno … but in the war after the next war, sure as hell, they’ll be using spears!” (A similar quote is often attributed to Einstein, though this may be apocryphal.) In other words, a war with atomic bombs wouldn’t end the human story, but it would slingshot us back into the Stone Age, or at least a new Dark Ages.
Many of us assume—as I once did—that people recognized the prospect of total annihilation as soon as the Atomic Age began in 1945. In fact, it wasn’t until nine years later, in 1954, that commentators explicitly worried about nuclear weapons killing literally everyone on Earth.
What happened in 1954? To answer this, we need to go back some 16 months earlier. In late 1952, the United States tested the first hydrogen, or thermonuclear, weapon. Thermonuclear weapons produce their explosion through nuclear fusion rather than fission—that is, by fusing together atoms rather than, as happens with atomic bombs, splitting them into fragments. In both cases, enormous amounts of energy are liberated instantaneously. To put the destructive enormity of thermonuclear weapons in perspective, the biggest device of this sort ever detonated, the Tsar Bomba, was roughly 3,300 times more powerful than the atomic bomb that decimated Hiroshima, which itself was over 2,000 times larger than “the largest bomb ever yet used in the history of warfare.” If the height of Mount Everest represents the Tsar Bomba’s explosive yield, the atomic bomb of Hiroshima would stand just under nine feet tall.
After the 1952 test, the United States quickly made plans to test more thermonuclear weapons, which would be detonated on various atolls, or ring-shaped islands surrounding a lagoon, in the Marshall Islands, several thousand miles north of New Zealand. This was called Operation Castle, and the first detonation, which happened on March 1, 1954, at Bikini Atoll, was code-named Bravo. Physicists estimated that the explosive yield of this bomb would be about six megatons but, to their surprise, it produced a yield of about 15 megatons—2.5 times larger than expected.
Within seconds, a fireball expanded over three miles wide, catapulting some “[t]en million tons of pulverized coral debris […] coated with radioactive fission products” into the atmosphere. This debris rapidly fell back to Earth in the form of odorless, tasteless white flakes of radioactive coral, blanketing roughly 7,000 square miles of the Pacific Ocean. A Japanese fishing vessel downwind of the explosion, ironically named the Lucky Dragon, was covered in flakes, which one crew member described as “just like sleet.” By the evening, the fishermen began showing signs of acute radiation syndrome, such as nausea, vomiting, diarrhea, and headaches. Upon returning home, they were found to be highly radioactive. This triggered an international incident between the US and Japan, and the Lucky Dragon’s radio operator, Aikichi Kuboyama, ultimately died from complications, making him the first—and only, so far—direct casualty of thermonuclear weapons. His dying words were: “I pray that I am the last victim of an atomic or hydrogen bomb.”
But it wasn’t the direct effects of the test that most worried onlookers. Traces of this radioactive fallout were detected all over the world: in Japan, Australia, India, the United States, and parts of Europe. It immediately dawned on scientists that even a small-scale thermonuclear war could pepper the entire planet with dangerous amounts of radioactive particles. In other words, global thermonuclear fallout could transform every inhabited region of Earth into a radioactive hellscape that might threaten the very existence of humanity. No one would be safe. As a 1955 document published by the US Civil Defense Administration, titled Introduction to Radioactive Fallout, reports:
Before the facts of the 1954 H-bomb explosion were announced, fallout was of little concern to us. If you lived a few miles from a possible target, you could assume that you were safe from the effects of enemy bombing. […] That is no longer true. The 1954 tests in the Pacific showed that deadly fallout could be carried nearly 200 miles by the winds.
It was this event—the Castle Bravo test—that changed everything. Digging through the archives of the mid-20th century reveals a marked change in language after this event. Prior to 1954, there were almost no references to human extinction. Immediately afterwards, such references were pretty much everywhere.
Several months after the Castle Bravo incident, Bertrand Russell published a book with a closing chapter titled “Prologue or Epilogue?” In contrast to his earlier worries that atomic bombs could “merely” destroy civilization, he cautioned that we now face the possibility of “the extermination of mankind.” His question, then, was whether humanity was writing the beginning or the end of its autobiography: total annihilation had become technologically feasible, but if we survived, the human species could exist “for another million million years,” he estimated, during which great achievements and unimaginable wonders awaited our descendants. “Is all this hope to count for nothing?” he wondered.
Later in 1954, Russell penned the short essay “Man’s Peril,” which he read over BBC radio to an audience of six or seven million people, two days before Christmas. With urgency and passion, he explained that a thermonuclear war would decimate entire metropolitan centers, including “London, New York, and Moscow.” But it would also threaten “universal death,” a phrase he used regularly after the Castle Bravo test. Referring to this debacle, he then said the following, which is worth quoting in full:
We now know, especially since the Bikini test, that hydrogen bombs can gradually spread destruction over a much wider area than had been supposed. It is stated on very good authority that a bomb can now be manufactured which will be 25,000 times as powerful as that which destroyed Hiroshima. Such a bomb, if exploded near the ground or underwater, sends radioactive particles into the upper air. They sink gradually and reach the surface of the earth in the form of a deadly dust or rain. It was this dust which infected the Japanese fishermen and their catch of fish although they were outside what American experts believed to be the danger zone. No one knows how widely such lethal radioactive particles might be diffused, but the best authorities are unanimous in saying that a war with H-bombs is quite likely to put an end to the human race. It is feared that if many H-bombs are used there will be universal death—sudden only for a fortunate minority, but for the majority a slow torture of disease and disintegration.
Russell’s eloquent address was a huge success, and the text of “Man’s Peril” was widely circulated. As Russell remarked to his cousin, it “brought an avalanche of letters,” including some from the most prominent intellectuals of the day. One of these was Max Born, a physicist who won a Nobel Prize in 1954. Inspired by Born’s encouraging words, Russell wrote to his friend Albert Einstein about releasing a joint statement on “the universal suicidal folly of a thermonuclear war,” and persuading a number of eminent scientists to sign it. The more signatories, the more authoritative it would be—especially with Einstein’s name at the top.
Unfortunately, Russell’s plans appeared to be thwarted when, during a flight from Rome to Paris, the pilot announced that Einstein had just died. Russell was devastated. He greatly feared the escalating arms race between the United States and the Soviet Union, and believed that a warning endorsed by Einstein could help shake the world out of a spiraling death trap—a predicament he later described as a “game of chicken,” in which both sides foolishly race toward disaster believing that, if they flinched, choosing sanity over recklessness, they would lose the game. This was the Cold War quandary, Russell argued.
“Shattered” by the news of his friend’s death, Russell arrived in Paris. To his surprise, however, a letter from Einstein was waiting for him at his hotel: “I am gladly willing to sign your excellent statement,” it read. This was the very last letter with Einstein’s signature, scribbled just days before an aneurysm took his life. The resulting statement, signed by 10 other Nobel laureates, is known as the “Russell–Einstein Manifesto.” Now serving as a kind of last testament for the great scientist and pacifist, the manifesto repeated Russell’s warning that “if many H-bombs are used there will be universal death” to readers through hundreds of newspapers in dozens of languages across the world.*
A flurry of similar warnings followed. In 1956, Günther Anders made the case that human history can be divided into three distinct periods: first, all human beings are mortal; second, all human beings are killable; and third, humankind as a whole is killable. The industrial mass murder of the Jews during the Holocaust had inaugurated the second period, while the advent of thermonuclear weapons initiated the third. Two years later, psychiatrist Karl Jaspers wrote in The Future of Mankind that “an altogether novel situation has been created by the […] bomb. Either all mankind will physically perish or there will be a change in the moral-political condition of man.”
One year later, a theater critic named Kenneth Tynan published an article in The New Yorker which noted that “we are now equipped for a new crime, as yet untitled, though a good name for it would be omnicide—the murder of everyone.” The word “omnicide” was later popularized in the 1980s by a philosopher named John Somerville, who defined it variously as “the annihilation of all human beings by some human beings” or “the final madness of some humans killing all humans including themselves.” He described this as “the logical (and terminal) extension of the series of such nouns as suicide, infanticide, homicide, genocide.”
The shift in thinking about human extinction in the mid-1950s was momentous and abrupt, a psychocultural shock that ripped through society like the colossal explosions that rocked the Marshall Islands. For nearly all of human history, the notion that one group of humans could rapidly exterminate every person on Earth would have seemed absurd. Yet even after Hiroshima and Nagasaki were razed, leaving “only a few skeletons of concrete buildings” standing—to quote a 1945 news article—hardly anyone thought this new addition to our arsenals of lethal weaponry could catapult the entire human species into an eternal grave.
The watershed moment, then, was 1954. This was the first time that human self-annihilation was regarded as a real possibility. Such worries receded somewhat during the 1960s, but they reached a fever pitch during the 1980s due to events like the Soviet–Afghan War, the election of Ronald Reagan, and groundbreaking new research that identified a distinct way that nuclear weapons might destroy us: by flooding the upper atmosphere with sunlight-blocking soot, causing global famines that would persist for years.
The most recent studies suggest that an all-out thermonuclear conflict would not, in fact, cause our extinction. There may be a lesson here for contemporary worries, all over the news recently, that artificial general intelligence (AGI) could cause our extinction. Killing every single person on the planet is hard. There are people living in Antarctica and Siberia, in the middle of the Pacific, and in the Amazon rainforest. While figures like Russell and Einstein were right to worry about global thermonuclear fallout given the best science of the day, fears of total human extinction turned out to be overblown. We should thus wonder whether similar warnings about AGI are also exaggerated. My guess is that they are, and those describing AGI as an “existential risk” are making the same mistake that scientists made in the 20th century. Yet the seduction of apocalyptic thinking is that this time could be different: a thousand failed prophecies do not guarantee that the next prophecy will be false. Perhaps we will soon find out.
* My thanks to Dr. Dan Zimmer for insightful comments on this history.
Émile P. Torres is a philosopher and historian whose work focuses on existential threats to civilization and humanity. Their most recent book is Human Extinction: A History of the Science and Ethics of Annihilation (Routledge, 2023).