In 1911, Italian pilot Giulio Gavotti became the first person to launch an aerial bombardment. Flying his monoplane over Libya en route to a Turkish camp, he had the novel idea of tossing grenades over the side. It was a momentous decision — his action would spawn the era of modern aerial warfare with its attendant strategies and counterstrategies. That there was a clear-cut tactical advantage for the side with the most Gavottis (by way of pilots, planes, and explosives) was uncontroversial. But, significantly, as with most new tactical advantages, aerial bombardment sparked ethical concerns — about technological and economic disparities in warfare, for example. These ethical niceties were anything but clear-cut or uncontentious.

To this day, we can see echoes of such concerns — about what’s fair in war. Do pilots, like Gavotti, have an advantage considered inhumane, even in the scope of war, because of the distance between killer and killed? What exactly is “inhumane” for that matter in the context of war? Do today’s Remotely Piloted Aircraft (RPA) personnel, or drone operators, experience greater emotional removal from war atrocities because of their increased distance from the destruction they cause? Or do their prolonged surveillance activities and extensive use of drone cameras bring them closer to the people on the ground emotionally, even though the pilots themselves are physically removed from danger? Do their emotions even matter? More pragmatically, do such technologies save lives by removing pilots from harm’s way? And, more generally, is it the nature of the tools used, and the ensuing carnage, to jump-start new debates about ethics and technology?

There are certainly no easy or static answers to these questions, but two recent books — Christopher Coker’s nonfiction work, Future War, and P. W. Singer and August Cole’s science fiction story, Ghost Fleet — provide thought-provoking, albeit very different, frameworks for thinking about them. They help us to predict, or at least imagine, the likely future of war. They also help us understand the history of that future.

¤

“War doesn’t change,” declares Coker, a professor of international relations at the London School of Economics and Political Science. Rather, what changes “is the way we see it — perception is everything.” How we interpret its causes, sacrifices, economic or personal gains, participatory roles — any aspect of war — morphs over time. If war was once, and still is in some places, an occasion for the exuberant celebration of manhood or nationhood, it is now more likely viewed as a necessary evil whose harms must be mitigated. But, however it is regarded, the tools we choose and the policies we develop for their use are shaped by a culture wider than the military, or even militias. Again, the current use of aerial force by pilots operating at far greater distances than in the past has not only elicited much public debate but also sparked new policy efforts at the UN by such NGOs as the Campaign to Stop Killer Robots. These are responses rooted in a specific cultural context. As Coker points out in his book, while the emphasis in these discussions at first appears to rest on new technologies and the novel ways we use current ones, the essence of what changes is not, in fact, our technologies or even our access to them. What changes is the way we must assess the world and how we act in and with it. As Georgetown University professor David Cole recently observed, “Technology cannot solve the moral and ethical issues; it only casts them into sharper relief.”

In Future War, Coker does not claim to present a volume of “lessons learned” applicable to future technologies. Instead, he brandishes an analytical tool he terms “future thinking.” “Just as there is a visitable past, there is also […] a visitable future, a world whose consciousness is one we can imagine because it may not be entirely dissimilar from our own.” By means of nuanced and multidisciplinary interpretations of significant points in the history of warfare, he extrapolates forward. For instance, he turns to interactions among technologies that no longer have humans “in the loop” as one “historical change that allows us to visit the future.” In machine-to-machine communication over the internet, the preponderance of web activity is already nonhuman, with things like search engine optimization bots and data scrapers wading through more content than people ever could. Such interconnected technologies will proliferate into ever more domains of life — we will, of course, quite happily learn to live with technologies that, to use a quotidian example, allow our coffee maker to tell the web-based coffee store we need another whole-bean delivery of our preferred brand by same-day drone-to-door transport. That interconnectivity has military applications we need to explore.

How we communicate with machines will also continue to evolve in ways we need to explore and extrapolate from. Coker uses the example of search engines, which we originally consulted for alerts on what was “important.” We now turn to them for what is “true” — that is, as a primary source of knowledge — which again has relevance to how we think about, and even conduct, war. To Google the word “war,” Coker points out, is to let the search engine direct us to the sites it deems most relevant or illustrative of our information goal. That relevance is determined by algorithms that use, for instance, general popularity as a guide. No longer is the machine merely mediating human-human interaction: it adds a very complicated layer of machine interpretation to the answers it provides to the knowledge we seek.

To take this idea a step further, we can look to the example of the robotic exoskeletons that soldiers or first responders may soon be wearing. They are projected not only to enhance human strength and endurance, but also to monitor and dynamically promote the wearer’s health by, for instance, triaging bleeding wounds with a styptic foam applied to bodily injuries that are sensed by the suit itself — without purposeful or direct human input. Part of the communication between human and machine will be at this “unconscious” level, occurring below the threshold of human intention or knowledge. How will that “unconscious” communication evolve in the context of war? Or of peace for that matter?

Once we begin to imagine the spectrum of near-possible technologies and their uses, policy and philosophical questions further proliferate. Sticking with the robotic exoskeleton example: Will the technology be available only to military personnel, or to everyone? If it’s available to the public at large, and surely it will be, then will the suit’s cost be so prohibitive that it further separates the economic haves from the have-nots? Will “the haves” become so enhanced that different rules apply to them? Or, is another set of questions more relevant: Will collecting continuous biodata via the exoskeleton constitute an invasion of the wearer’s privacy? How will that medical data be used and protected? Who is responsible for the safety of the wearer in different situations, the individual or the exoskeleton manufacturer?

It goes without saying that we can expect many more such revolutionary technologies in the near future, which means it is imperative that we begin the important discussions around their policy, ethical, and cultural impacts now in order to more thoughtfully integrate these innovations into our everyday lives, whether at home or in battle spaces. The coffee example is not banal, and we shouldn’t imagine it is. Nor are the looming privacy issues surrounding the Internet of Things. Both will shape and be shaped by militarized spaces. The fact is, though, that at present the policy-making process is enormously backlogged: it is playing catch-up to a host of paradigm-shifting technologies — those already mentioned above and others like, for instance, Remotely Piloted Vehicles (RPV), domestically used quadcopters, and DIY 3D-printed weapons, to name just a few. The timeframes we have historically devoted to generating ethical and philosophical frameworks regarding safety, privacy, and potential off-label uses are no longer available to us. We have to find ways of generating policy more quickly and nimbly.

The goal of future thinking, then, is to fill this gap between what we are inventing now and what we will live with tomorrow. It is to think ahead rather than play catch-up. Coker’s intention is thus pragmatic: to “future think” in order to imagine, and perhaps implement, interventions sensitive to cultural context. In a nutshell, it is to enact practical change ahead of actual invention and proliferation.

¤

An engaging aspect of Coker’s book is his use of science fiction metaphors to frame and, just as importantly, to make accessible complex scenarios around emerging technologies. Of course this aspect only works if you have at least a passing familiarity with such popular culture icons as Star Trek, William Gibson, Ray Bradbury, and Jules Verne. Singer and Cole, the authors of Ghost Fleet, also use science fiction references liberally throughout their story, including, again, healthy references to Star Trek, which is held up as an example of positive future-forward thinking. This device makes sense in both books; science fiction can function as an imaginative tool for understanding how people might interact with emerging technologies. In Ghost Fleet, using science fiction in this way has the added effect of creating a kind of “nerd camaraderie” between the authors and their readers, a coded way of winking: “We know you know this reference because we’re betting that you, like us, are into this stuff.” Yes, Singer and Cole, we are.

In the fictional Ghost Fleet, Singer and Cole deploy an action-packed set of story lines to paint a picture of a new geopolitical structure. (In full disclosure, P. W. Singer and I have a collegial acquaintanceship, supported by our connections via social media and [I believe] a mutual interest and respect for each other’s real-world work on humans interacting with technology in defense spaces.)

Writing about the future of anything is a bold, daunting, and thought-provoking task, and Singer and Cole’s real-life think tank background is clearly to their advantage here as authors — Singer as a strategist and senior fellow at the New America Foundation and Cole as a nonresident senior fellow at the Brent Scowcroft Center on International Security at the Atlantic Council. The pictures they paint are rooted in present-day objects or artifacts. For example, the widespread use of viz-glasses in the book is traceable to Google Glass, and, as readers, we can clearly recognize how the geopolitical landscape described in Ghost Fleet could happen, too.

Singer and Cole play out their next-world power battles in a variety of locations — in cyberspace and outer space; in the forests and beaches of Hawaii and in the cityscapes of Beijing; and in the eponymous floating graveyard of an abandoned Naval fleet. These ships — the “ghost” ships of the book’s title — are abandoned, floating, reserve Navy vessels left by previous generations as placeholders of war. In a word, they are obsolete. Without dipping into spoiler territory, the ships are resurrected to act again as instruments of war. The book’s missions and battles illustrate why the touchstones of history and human innovation should not be so easily dismissed as obsolete, nor should they be disentangled from the future: different forms of fighting have their strengths — from advanced technologies that are semi-autonomous, to older technologies that call for a human-in-the-loop. To be sure, the newer ships might have “better” machinery with more sophisticated weaponry than this ghost fleet, but that very technological “advancement” relies to an enormous extent on the infrastructure in place, and so can be just as vulnerable, albeit in different ways.

Consequently, the technology is a complicated presence in itself, hard to pin down as something external, or Other, because it is so fully integrated into people’s bodies; it is a part of their very selves at a biological level, a part of the human way of developing, living, and being. Throughout the book, people on board are, in philosophical terms and through literal references, considered to be “part of the ship,” meaning their lives are inextricably entwined with shipboard activities, even beyond their assigned duties — although some sailors may point out that this has always been the case (think what is expected of a captain when his or her ship sinks). Being “part of the ship” also hints at what it might mean to frame personnel — people — as mirroring the fate of a ghost fleet. If our very human bodies, with all their frailties, are boosted with viz-glasses and stims (synthetic stimulants), we are in essence upgrading our biological systems, but to what end? And at what cost? Will just being human ever be enough again, or will people who do not integrate technologies into their physical selves be left behind as archaic or ineffective?

Ghost Fleet imagines Chinese forces catching their American counterparts off guard with a historically familiar surprise attack on the US naval base at Pearl Harbor. Emboldened by a gas discovery near the Mariana Trench, the Chinese government, known here as the Directorate, has enough power and economic leverage to act without fear of repercussions (in the form of sanctions) from the United States. The battles that follow feature the likely future military integration of humans with weaponry: autonomous drones, university-supported cyber militias, and individual biological boosts via stims that quicken soldiers’ reactions and heighten their alertness — the latter ingested almost constantly by those who rely on their effectiveness.

Cultural acceptance of, and widespread reliance on, tools like viz-glasses divides the military culture even at a team level, amplifying and complicating divisions in the ranks based on age and experience. To illustrate a classic trope: The older fighters are reluctant to embrace the so-called advances exemplified by these new gadgets, while the younger ones use these technologies as part of their standard military toolkit. Although the majority of fighting actually takes place on traditional land and sea, Singer and Cole use their professional expertise to imagine, or “future think,” warfare in space and cyberspace.

The story of this new “future war” is revealed through the perspectives of multiple people around the world. Nonstop action paints a cinematic picture that does not let up, inviting easy comparisons to Tom Clancy’s works and occasionally to such storytelling cousins as Marathon Man and A Clockwork Orange. One caveat: Sometimes these contexts overshadow the multiple story lines and individual perspectives. In other words, this is a book driven by action more than character or emotion. Stories are painted in broad brushstrokes, and characters sometimes fall into distracting ruts. A “Black Widow” assassin and the Chinese Directorate’s Admiral Wang are film noir-ish characters, both of them largely defined by their obsessions: her personal mission to seduce and kill Chinese soldiers in Hawaii in Bond-like fashion, and Wang’s penchant for deploying his encyclopedic knowledge of the war philosopher-general Sun Tzu in almost every exchange with other characters.

I expect Ghost Fleet will eventually be the basis for a movie. It is easy to imagine the multiple plots’ relentless action lending itself to on-screen representation. Coker’s Future War is also energetic in tone, and intellectually intriguing in how it melds geopolitics, history, defense, cultural change, and technology. The same recurring motifs animate Ghost Fleet: increasing physical and mental reliance on technologies, the ambiguities of war, and the dynamic effects of rapidly changing cultures.

In both books, we should, as citizens of the world, recognize ourselves as characters in the stories of politics and war. Future War shows us paths we have taken and suggests where we are heading. Ghost Fleet imagines a world where humans remain not only relevant but where their very flaws and humanness make for dependable heroes and antiheroes when the call of duty — and of storytelling — arises. All this said, we are left with more questions than answers. For instance: Will integrating systems like immersive, web-connected augmented reality glasses with our bodies make us less human or superhuman? Will humans remain invested in wars for duty, profit, and passion? And, the singularity question plaguing so many of us: Will the mechanical minds of AI continue apace until they are told to stop, or will they spontaneously develop their own set of internal guidelines and motivations?

¤

Julie Carpenter, PhD, is a consultant, researcher, and educator on human interaction with emerging technologies, with a focus on human-robot interaction research. Dr. Carpenter has written a book on the topic, Culture and Human-Robot Interaction in Militarized Spaces: A War Story (Ashgate, 2016), as well as published numerous scholarly articles. She is currently a Research Fellow in the Ethics + Emerging Sciences Group at California Polytechnic State University.