IN 1942, the eccentric mathematician Norbert Wiener was fiddling with spotlights in a darkened MIT classroom. By tracking the trajectory of one light, controlled manually, with another that followed it automatically, he was trying — and failing — to model the evasive maneuvers of a fighter pilot in the heat of battle. Bombs had flattened large portions of London, and Wiener hoped to provide the key to knocking German planes out of the sky before they could deliver their payload. A small grant from the National Defense Research Committee underwrote his attempts to supply the math that would allow for the design of an automatic anti-aircraft weapon. Wiener saw that such a weapon would need to incorporate what engineers call feedback: the ability to read the spontaneous maneuvers of the spotlight/pilot and fire using that information. But he couldn’t get it to work.
The anti-aircraft problem still hasn’t been solved, but Wiener’s experiments provided him with a clue to a whole new scientific vocabulary, one that has had an incomparable influence on today’s networked globe. He emerged from that room at MIT and declared the advent of a new science that tied together biology, information theory, and the rudiments of what would become computer science. “Cybernetics” (based on kybernetes, the Greek word for “steersman,” cognate to English “governor”) was the science, Wiener announced, of “communication and control,” or of feedback mechanisms in general. In his improbable 1948 best seller Cybernetics, he would claim that the new science takes up “the blank spaces on the map” between estranged, overspecialized scientific disciplines. This ambition meant that Wiener needed a language to mediate between those disciplines: information. Physics would have to return to the moment when Gottfried Wilhelm Leibniz had imagined a universal language as the basis of the discipline, uniting information and matter.
Wiener developed many of the basic ideas of cybernetics alongside a variety of collaborators during a series of conferences that took place in New York City between 1946 and 1953. The transcripts of the so-called “Macy Conferences” (named after their patron, Josiah Macy Jr.) were reissued by the University of Chicago Press last year, on the heels of the first two histories of the movement, Ronald Kline’s The Cybernetics Moment, or Why We Call Our Age the Information Age and Thomas Rid’s Rise of the Machines: A Cybernetic History. The Macy transcripts attest to the chaos of Wiener’s “blank spaces” — they are filled with infighting and confusion — but also convey a crackling sense of excitement. Towering figures from the history of computing, like Claude Shannon and John von Neumann, argue with social scientists like Margaret Mead and her then-husband Gregory Bateson about feedback mechanisms, the concept of information, and something called “the digital.” A debate about whether the brain was digital at the second Macy conference got heated, as the discussion quickly turned from the brain to what counted as “digital” in the first place. Everyone agreed that digital measurement was discrete and exact, while analog was proportional and continuous. But the question of which is which in any given machine or animal continues to plague commentators to this day. The psychologist J. C. R. Licklider eventually closed down the debate, giving up the definition game by declaring: “We will use the words as best we can.”
By 1960, cybernetics had appeared on the cover of national weeklies, in science fiction, and even in self-help literature (it forms the backbone of L. Ron Hubbard’s Dianetics, the precursor to Scientology). By the end of that decade it had been incorporated into the psychedelic culture of the Bay Area. Hippie bard Richard Brautigan captured that spirit in the bucolic 1967 poem “All Watched Over by Machines of Loving Grace,” where he imagines
a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
In the meantime, von Neumann had conceived the stored-program architecture — the basis of the modern computer, in which instructions and data share a random-access memory — in explicitly cybernetic terms. Licklider would go on to be instrumental in the implementation of ARPANET, the predecessor to our internet.
And then, after a few short decades of intellectual vitality and intense media fame, the cybernetics moment was over. Kline suggests that cybernetics was, at least in part, a victim of its own success; its proprietary vocabulary — feedback, information, digital, cyborg — entered into everyday speech. At the same time, its associations with science fiction and counterculture tainted it in the eyes of some scholars, frustrating its bid for academic legitimacy.
Both Kline and Rid tell compelling stories about how we got here, demonstrating just how deep the cybernetic well is. Kline thinks that the history of cybernetics can help us overcome technological determinism by showing the social and intellectual histories that gave rise to the “information age.” Rid, drawing on the thought of German philosopher Hans Blumenberg, thinks we need to work on rewriting the “myth” of the machine. Read together, they suggest that the central ideas of cybernetics can still provide a deep-structure challenge and opportunity for the humanities in the digital age. When we talk about “the digital” today, we almost exclusively focus on technological devices and their effects. Cybernetics, as reconstructed in these excellent studies, is supremely relevant in this age of digital humanities: indeed, it challenges us to think of both the digital and the human in a much broader way.
Reconceiving the notion of the human, and the organic, was always essential to the cybernetic project. One result of Wiener’s spotlight experiments at MIT was a short paper he co-authored with Julian Bigelow and Arturo Rosenblueth called “Behavior, Purpose and Teleology.” This paper — one of the founding documents of cybernetics — distinguishes between a “functional” view of organized beings (on the one hand animals, and on the other machines) and a view based on the notion of “behavior.” The functional perspective — for example, the perspective of physiology — sees great differences between animals and machines: “[O]rganisms are mainly colloidal, and include prominently protein molecules, large, complex and anisotropic,” while “machines are chiefly metallic and include mainly simple molecules.” But from the standpoint of behavior, machines and animals are not essentially different at all. Both types of entities absorb external impulses, interpret them, and put them to use through internal and external commands. They are input-output patterns, users and manipulators of energy. They differ only in arrangement, not in kind.
By looking at the world through the prism of behavioral pattern rather than function, Wiener was able to describe both animals and machines in terms borrowed from physics: both were pools of negative entropy, relatively stable patterns of resistance to the slow degeneration of all organization in the universe. This idea would prove crucial in the next years, when Wiener and Claude Shannon simultaneously developed a new theory: the theory of information, or what Shannon called “the mathematical theory of communication.” Kline’s account of this simultaneous discovery is one of the high points of his book. Both men made the analogy to thermodynamics, arguing that messages, too, were pools of negative entropy. Shannon provided a general formula for channel capacity, a way to ensure that messages could be encoded and decoded without degrading into mere noise. For Shannon, as Kline points out, “information” was an entropic background from which specific messages had to be selected (with shades of the way we use “data” today). For Wiener, it was the other way around: information was the message, entropy the noise. Wiener’s metaphor ultimately won. We now mostly imagine information as message, or, in Bateson’s famous phrase, as “a difference that makes a difference.”
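For readers who want the formulas themselves — they appear in neither book under review, and the notation here is present-day convention rather than Shannon’s own — the entropy of a message source and the capacity of a noisy channel are usually written as:

```latex
% Entropy of a source that emits symbol i with probability p_i
% (the quantity Shannon showed mirrors thermodynamic entropy)
H = -\sum_{i} p_i \log_2 p_i \quad \text{(bits per symbol)}

% Capacity of a channel of bandwidth B and signal-to-noise ratio S/N:
% below this rate, messages can be encoded so as to survive the noise
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{(bits per second)}
```

The first expression is the formal version of the analogy both men drew: up to a sign and a constant, it has the same shape as the entropy of statistical mechanics, which is why a message could be described as a pool of negative entropy.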
Either way, information had to be processed, and the processors of information were special parts of animals and machines: brains and computers. Warren McCulloch and Walter Pitts had published an article on “neural nets” in 1943 that suggested a sort of digital programmability at the basis of neural functioning. Virtually every major cybernetician entertained this analogy at some point, including John von Neumann and Wiener himself, and it remains a stock element of speculation about artificial intelligence. Both Kline and Rid show how crucial the brain debate was to the cybernetics fad. It inspired the author Alice Mary Hilton, for example, to envision a “cybernated” world in which intelligent machines would take over mental work, as they had taken over manual labor during the First Industrial Revolution. (It also presented the menacing possibility of a takeover by intelligent machines.) For cybernetics, the digital brain completed the picture. Animals were machines, communication was control, and information processing was the principle, not just an element, of the new science. Wiener would not shy away from the philosophical consequences, claiming that “information is information, not energy or matter. No materialism which does not admit this can survive at the present day.”
Kline’s The Cybernetics Moment tracks the rise and fall of the cybernetics movement in more detail than any historical account to date. We learn about the surprising cultural influence of cybernetics even as its star waned, tied as it had become to science fiction and to fringe groups like Hubbard’s. The CIA, for example, organized the American Society for Cybernetics (still active today) partly in response to the late adoption of cybernetics by the Soviets, who had first labeled it a “bourgeois science.” And Heinz von Foerster went to the University of Illinois at Urbana-Champaign in the early 1960s to found the Biological Computer Laboratory, home of so-called “second-order cybernetics” (of which a full-scale history remains to be written).
Rid supplements this with an attention to the influence of cybernetics on Silicon Valley through figures like Stewart Brand, whose Whole Earth Catalog is a visual icon of the 1960s and who would go on to cofound the WELL, one of the earliest online communities; John Perry Barlow, a lyricist for the Grateful Dead and early web activist (and author of “A Declaration of the Independence of Cyberspace”); and Timothy C. May, a central figure in the Crypto Anarchy movement, which produced a militant libertarianism based on the advent of public-key encryption, the technology that allows private communications the world over today. This strain of hard-line libertarianism produced the project of “seasteading,” or the creation of non-state communities on platforms in international waters, an idea vocally supported by Silicon Valley magnate Peter Thiel. Rid relates how cyborgs — “cybernetic organisms,” originally conceived as astronauts’ bodies tightly coupled to and enhanced by their machines to survive — inspired Donna Haraway’s postmodernist theory of the “general cyborg condition” in her “Cyborg Manifesto” of 1985. His closing chapter, based on original, often anonymous interviews with US intelligence agents, gives us the prehistory of literal cyberwar, an operation called Moonlight Maze that saw US intelligence hacked and helpless at the hands of shadowy Russians. (Sound familiar?)
Many of Rid’s tales unfold in the Defense Department and in the General Electric factory in Schenectady, New York, where Vietnam-driven businessmen, engineers, and government men created (unsuccessful) prototypes of robot weapons, and where Kurt Vonnegut sets his first novel, the cybernetics-inspired Player Piano. It turns out, although Rid does not say this in so many words, that science fiction has been as instrumental in the rise of the digital as any set of switches. Consider, for example, the creation of the Agile Eye helmet for Air Force pilots who need to integrate “cyberspace” (their term) with meatspace. The officer in charge reports, according to Rid, “We actually used the same industrial designers that had designed Darth Vader’s helmet.” This fluid movement between futuristic Hollywood design, science fiction, and the DOD is a recurring feature of Rise of the Machines. Take the NSA’s internal warning that “[l]aymen are beginning to expect science fiction capabilities and not scientific capabilities” in virtual reality. Or Rid’s account of the so-called “cypherpunks” around Timothy May. Their name was cribbed from the “cyberpunk” science fiction genre (“cypher” refers to public-key encryption), and they were inspired by novels like Vernor Vinge’s True Names (1981), one title on the movement’s recommended reading list, which includes not a single nonfiction text.
Taken together, Kline and Rid’s narratives suggest that cybernetics had a significant inkling about its implications for the traditional preserve of the humanities — language, the soul, freedom — precisely because notions like “feedback” put computation or counting under the scientific microscope as such. Including Leibniz in physics was always going to have consequences for the humanities, as was a general theory of communication. If the human subject was part of the scientific enterprise, the automation of subjective capacities that the digital foresaw would have to be a major question for the humanities. Maybe that’s why literature plays an outsized role in Rid’s book. Maybe we’re playing catch-up to a much more expansive digital humanities that antedates our devices. If we live in a digital world precipitated by the cyberneticians whose stories are told in these books, then we don’t have a choice but to look to them, as Kline and Rid begin to do, for updates to our vocabulary and conceptual apparatuses as well. These books make a powerful case for a return to the origin of the cybernetic myth, an object that might come to the aid of a humanities that has unwittingly inherited so much from it.
Leif Weatherby is assistant professor of German at New York University and author of Transplanting the Metaphysical Organ: German Romanticism between Leibniz and Marx. He is working on a book about a strange encounter between cybernetics and German idealism.