From Domination to Derision

W. Patrick McCray surveys Matthew Wisnioski’s description of the United States’ evolution—and devolution—into a nation obsessed with innovation.

Every American an Innovator: How Innovation Became a Way of Life by Matthew Wisnioski. The MIT Press, 2025. 330 pages.



IN 1829, THE ESSAYIST Thomas Carlyle wrote a damning critique of a mindset that he believed was increasingly pervasive and insidiously pernicious in British society. First published in the Edinburgh Review, his “Signs of the Times” skewered and then roasted the era’s accelerating domination by machines, which he was quite sure would have consequences for human acuity and morality. His age was no longer a “Heroical, Devotional, Philosophical, or Moral Age” but rather a “Mechanical Age.” “Calculated contrivance” and “resistless engines” in the form of automation and steam power boosted both the wealth of nations and a few lucky individuals while “increasing the distance between the rich and the poor.” No domain was left untrammeled as “Philosophy, Science, Art, Literature, all depend on machinery. […] Our true Deity is Mechanism.”


Given such a dyspeptic outlook, it is easy to imagine Carlyle viewing the state of today’s technological landscape with despair and withdrawing into the wilds of his native Scotland. From there, his primary target would no doubt be our obsession with innovation. Matthew Wisnioski’s new book Every American an Innovator explains how innovation became not merely a corporate buzzword but the “natural expression of the spirit of our time.” And, as the title suggests, the author’s goal is to explore how innovation came to be seen as something in which all Americans, not just white dudes, could participate. By the close of the 20th century, innovation had become “a way of life.” Nations, mimicking the economic and commercial miracles pouring out of Silicon Valley, looked to compete in the global economy by developing “innovation ecosystems.” Cities and universities did likewise, pouring billions of dollars into “innovation districts” and “technology incubators,” while schools encouraged students to learn to code and to become entrepreneurs. All of this was abetted by a cadre of well-paid professional thought leaders, consultants, and other “innovation experts.”


For those of us on the pointy end of this economic and social disruption, it all feels new, destabilizing, discomforting, and yet eerily familiar. This is because, as Wisnioski explains, an entire industry of “economists, philosophers, sociologists, journalists, bureaucrats, business consultants, and teachers” has been preaching the gospel of innovation since the 1950s. His book, which is more descriptive than analytic, recounts how “the meanings, practices[,] politics, and myths of innovation” evolved in the ensuing decades. A professor of science and technology studies at Virginia Tech, he organizes his material around the “persona of the innovator.” Relatively unknown academics and government administrators were, he shows, transfigured into virtuous icons—think of the hagiography around Steve Jobs—whom parents and teachers wanted children to emulate. Today’s “techlash,” catalyzed by the behavior of amoral billionaire innovators, has been one result.


In Wisnioski’s telling, the long arc of the innovation nation, from initiation and domination to derogation and castigation, began soon after World War II. In that conflict, the United States had used the might of its industrial prowess to mass-produce everything from penicillin and Liberty ships to enriched uranium and napalm. Making this possible was the close synergy of corporate R & D and government sponsorship. Uncle Sam became the “nation’s largest research manager.” When the radioactive dust of Trinity, Hiroshima, and Nagasaki settled, the question of how to plan and sustain transformative new technologies, instead of just making incremental advances, became paramount. One answer was found in a most unlikely place—rural sociology, which had emerged as a field before the war from efforts to address the economic and social gaps between urban and rural life. The development and spread of hybrid seed corn resistant to disease and drought, which happened in the 1930s, became a way for academics to study how innovations “diffused,” a term of art frequently used in the 1940s and 1950s. Relatedly, economists such as MIT’s Robert Solow asked questions about the relationships between new technologies and economic growth. Meanwhile, across the Charles River, Georges Doriot at the Harvard Business School encouraged his students to think about and then act like entrepreneurs, giving rise to a model of modern venture capital.


Books about innovation, such as Everett Rogers’s influential Diffusion of Innovations (1962), duly proliferated. Most of these were written for a small but growing group of academics interested in innovation. Even so, one detects hints of a hustle mindset in sociologist Rogers, who not only developed a “penchant for Brooks Brothers suits and Porsche convertibles” but also reduced the complexity of his subject to quotable terms like “venturesomeness” and “early adopter” while claiming to identify the “salient characteristics” separating innovators from those less burdened by “creativity,” another much-studied buzzword from the era, as detailed in Samuel W. Franklin’s excellent The Cult of Creativity: A Surprisingly Recent History (2023). Business magazines such as Fortune fanned the flames with stories likening corporate innovation to the practices of avant-garde artists. The resulting fascination, Wisnioski notes, could be quantified: the number of job advertisements seeking “innovators” soared to several hundred by the time Jimi Hendrix was setting his guitar on fire. Innovators, the experts insisted, were agents of change. The question remained, however, as to whether only a select few (“rare and brilliant individuals”) could muster the courage and character to be innovators or whether “everyone had the capacity to innovate.”


If everyone had the latent potential to innovate, then perhaps the federal government was the water that would help this seed sprout into economically and socially useful products. In the 1960s and ’70s, a cohort of bureaucratic innovators “translated innovation into a national vision.” Innovation comes with risk, of course, and a history of innovation might best be read as a lengthy book of blots. President Richard Nixon inked his own name in this tome when he announced the creation of the Presidential Prize for Innovation. In 1972, his administration selected six winners, each capturing “a distinct persona of the government’s lessons for incentivizing innovation.” Awardees included an aerospace pioneer (Harold Rosen) and a biomedical engineer (Willem Kolff) as well as the creators of Sesame Street. The rising furor over the Watergate break-in soon overwhelmed the Nixon presidency, however, and none of the prizes was actually awarded. Nevertheless, Nixon’s failed attempt to reward these “infrastructural innovators”—all funded in some way by the federal government—furthered the belief that particular people with special skills were the key to social, political, and economic change.


Jump ahead 10 years and the public image of the innovator had coalesced around the familiar cliché of the “creative capitalist who thrives amid endless change.” The digital disrupters of Silicon Valley became a synecdoche for this class of individuals, and computing software and hardware became the compressed site par excellence of innovation.


We can see this transition simply by looking at the covers of Time magazine. Before Ronald Reagan defeated Jimmy Carter in the 1980 presidential election, computers had appeared on the cover of Time only a few times, the content of these issues offering a relatively narrow interpretation of computing technology focused on the machines themselves: what they did, how they worked, and how they were predicted to “impact” society. In 1982, this changed. In less than a single year, Time’s editors put computing on three covers with accompanying feature sections. While this surge of attention from the nation’s largest newsmagazine was itself notable, this trio of issues saw a marked shift in reportorial focus. Instead of computing machines and their workings, now it was individuals—specifically, computer users and corporate entrepreneurs—who were profiled. For example, in February 1982, Time’s cover article focused on the “new breed” of “bold and brassy risk takers” who were “betting on the high-technology future” and “leading the U.S. into the industries of the 21st century.” The poster child of this cohort of entrepreneurs was, of course, Steve Jobs, featured on the cover with text that blared “STRIKING IT RICH.” A zigzagging arrow emanates from a stylized Apple II computer to pierce a red apple placed by the illustrator on Jobs’s head. (This was the first of eight times Jobs appeared on a Time cover.) Wisnioski does an excellent job of capturing this transformation in his chapter “Be an Innovation Millionaire!” The entrepreneur had morphed, with the aid of pliable journalists, from “civic-minded prototypes of the 1960s” into the familiar typecast we know today.


Being an innovator implies taking risks. Cue the Silicon Valley clichés about “fake it until you make it” and “fail fast, fail often.” Although Wisnioski doesn’t make it a theme of the book, most of the innovation efforts he documents were unsuccessful. This is especially true of four efforts funded by the National Science Foundation through its Innovation Center Experiment. In the 1970s, the agency was under pressure to demonstrate societal returns from its investments in basic science, a goal that aligned with a bipartisan interest in identifying innovators as “agents of national progress” (what “progress” means is, by the way, left unaddressed by the author). “Technology incubators” were established at MIT, Carnegie Mellon University, the University of Oregon, and the University of Utah. Only in Utah, which built on the university’s established efforts in subjects like computer graphics, was there much in the way of positive results. Many other universities nonetheless scrambled to follow suit, launching their own incubator experiments. What hatched from all of these efforts was not so much commercial products as a “robust conception of the entrepreneurial self.”


If failure is one common thread in discussions around innovation, another concerns the question of who exactly an innovator is. Not surprisingly, most of the actors were white men, affiliated with major universities or well placed in the federal technocracy. Wisnioski responds to the all-too-delayed recognition that women and people of color could also be agents of change by emphasizing the career of Rosabeth Moss Kanter. Trained as a sociologist, Kanter originally studied communes and other utopian-oriented communities, which led her to examine “tokenism” and how being a minority in a corporate setting influenced one’s visibility, recognition, and performance, findings she presented in her now-classic 1977 book Men and Women of the Corporation. The book’s success encouraged Kanter to start a consulting firm, which raised her profile enough to turn her into a management guru. In time, Kanter’s academic exactitude was dulled by the production of books such as The Change Masters: Innovation and Entrepreneurship in the American Corporation (1983) and When Giants Learn to Dance (1989), which offered B-school pablum instead of the rigor that had initially distinguished her scholarship. Empowering people besides white men to be innovators proved a remarkably obdurate task, impeded in large part by the business world’s “tech bro” culture.


Nevertheless, the idea that certain cultures and cultural values either support or undermine innovation took hold. This transformation originated, Wisnioski suggests, in the 1980s when business leaders and politicians in the United States looked to new policies to solve the failures of traditional manufacturing industries (especially as compared to Japan and West Germany). Key watchwords of the Reagan years were competitiveness and productivity, which gave rise to the idea of an innovation deficit, both among particular regions of the United States and between the country and other industrialized nations. Why did certain regions, such as the Bay Area, prove receptive to high-tech companies while once-mighty seats of industry, like Cleveland or Rochester, New York, remained mired in economic malaise? Federal policies such as the Bayh–Dole Act, which allowed universities to commercialize federally funded research, helped but seemed insufficient. Scholars like AnnaLee Saxenian—in particular, in her 1994 book Regional Advantage: Culture and Competition in Silicon Valley and Route 128, which combined ethnography with business advice to explain why, in Wisnioski’s words, “manufacturing cities declined while high-tech clusters boomed”—addressed the issue frontally. Ultimately, Saxenian concluded that success boiled down to culture: certain traits and dynamics specific to particular places enabled them to thrive. “The innovation deficit,” Wisnioski notes, “was a culture deficit.” Unfortunately, this also meant that if Silicon Valley’s success was due to unique cultural traits (and unrepeatable historical contingencies), then its success could probably not be replicated elsewhere.


With his proclivity for focusing on texts and ideas, Wisnioski locates the apogee of “innovation as culture” in the appearance of Richard Florida’s best-selling book The Rise of the Creative Class (2002). Trained as an urban planner, Florida started his career studying the gaps between knowledge work and manufacturing in places like Newark, New Jersey, and also in Pittsburgh, where he was perched at Carnegie Mellon University. Ultimately, he concluded that a region’s “innovative capacity” was due to the “3Ts”—technology, talent, and tolerance. The third ingredient, notes Wisnioski, was the “newest and most controversial” because, as Florida put it, high-tech innovation and “acceptance of alternative lifestyles” were closely intertwined. A “Creativity Index” placed “weird” locales like Austin, Texas, with its prominent class of creative people, at the top, and intractable places like Pittsburgh in the pits. Exemplars of creativity and innovation—the two now conflated—wanted less regulation, lower taxes, and the freedom to live their lives as they wanted. These were also the ingredients underpinning the “Californian Ideology” that Richard Barbrook and Andy Cameron described in their influential 1995 essay of the same name. Absent those ingredients, members of the creative class would migrate to more hospitable climes, something Florida, now an academic rock star and in-demand innovation expert, did by abandoning Pittsburgh.


Fostering a tolerant culture of innovation apparently had to begin at an early age. An entire industry of experts trained in education, psychology, and other specialties arose to link childhood creativity with future innovation. “Play,” Wisnioski states, became “a virtue for innovators of all life stages.” Giving kids laptops came to be seen as a replacement for giving them an actual education. One of the chief proponents of this mindset was the Smithsonian Institution’s Lemelson Center for the Study of Invention and Innovation, launched in 1995 with a $10 million donation from Jerome Lemelson, a wealthy businessman who made his fortune by being, depending on whom one asks, either a prolific inventor or a notorious patent troll. (Disclosure: Several years ago, I was a fellow at the Lemelson Center.) A major focus of the Lemelson Center is to encourage children’s innate talents for creativity in order to foster their future contributions to the innovation economy. The implications are striking—and so here one wishes Wisnioski’s book offered more analysis of how the boundaries between “creativity” and “innovation” came to be elided, and of which “categories” of actors are at stake.


With so much investment and attention pinned to innovation—as a policy, a plan for regional development, a path for self-development, and the goal of raising children—there was bound to be a reckoning of sorts. Wisnioski concludes Every American an Innovator by focusing on the last decade, starting with the presidency of Barack Obama. Framed as a social innovator—he used the phrase “Yes, we can” seven times in his 2008 electoral victory speech—Obama projected an attitude that change could be a force for good. But, as we know, jump ahead a few years and passionate critiques of an economy managed by out-of-touch knowledge workers sparked visceral loathing of neoliberalism, Wall Street, and Silicon Valley. The techlash was on.


Once-lauded regions like Silicon Valley were lampooned and eviscerated for their treatment of individual privacy and workers’ rights, and for the precarity of the gig economy. Wisnioski alludes to these features (or call them bugs) of the innovator’s mindset. He could, however, have thrown far more powerful punches. Nonetheless, he rightly notes how innovation, circa 2016, was no longer an agreed-upon path to growth and prosperity but had become, rather, an engine of inequality. Scholars such as Lee Vinsel, Andrew L. Russell, Benoît Godin, and Lilly Irani questioned the motives of proponents of innovation and its alleged benefits. Vinsel was especially caustic, likening “design thinking,” a subset of contemporary innovation practice at some schools, to a contagion that rots the brains of students and policymakers alike. I recall being on a panel in 2017 hosted by the World Economic Forum with three university presidents and making a case for why innovation might not always be good. Crack cocaine was a very successful innovation, I noted. The message was not well received. Nevertheless, the proliferation of vapid phrases like Google’s “Don’t be evil” and the mendacity of “innovators” such as Elizabeth Holmes (Theranos), Martin Shkreli (pharma-bro), and Travis Kalanick (Uber’s misogynistic CEO) have provided grist for my point. What remains less explained in this book is why the techlash took place. How and why did innovators become such antiheroes? Was it due to journalistic opportunism, media overexposure, or the fact that the standard-bearers of innovation were increasingly removed from any sort of contact with workers, the consequences of their actions, and reality itself? Perhaps it was due to the era’s focus on technological innovation instead of on its social or economic dimensions. As Lyndon B. Johnson learned during the 1960s, it was much easier to send people to outer space than to solve the problems of the inner cities.


Over the last 80 years, innovation has been seen as a lever to move the world and address pressing problems. From national security and economic competitiveness to issues of diversity and inclusion, innovation in all of its colorful and flawed guises appears as a panacea. “[N]either a passing buzzword nor an inevitable reality,” innovation is open to multiple interpretations and uses, as Wisnioski successfully shows. The book concludes by asking who innovation is for and to what ends we pursue it. That, I believe, is the right place to start asking questions. Perhaps it is a sign of the times that such a fundamental question is so often overlooked.

LARB Contributor

W. Patrick McCray is a professor of history at the University of California, Santa Barbara and the author of README: A Bookish History of Computing from Electronic Brains to Everything Machines (2025). He currently holds the Kluge Chair of Technology and Society at the Library of Congress.
