Is There a Crisis of Truth?

By Steven Shapin
December 2, 2019

OF COURSE, there’s a Crisis of Truth and, of course, we live in a “Post-Truth” society. Evidence of that Crisis is everywhere, extensively reported in the non-Fake-News media, read by Right-Thinking people. The White House floats the idea of “alternative facts” and the President’s personal attorney explains that “truth isn’t truth.” Trump denies human-caused climate change. Anti-vaxxers proliferate like viruses. These are Big and Important instances of Truth Denial — a lot follows from denying the Truth of expert claims about climate change and vaccine safety. But rather less dangerous Truth-Denying is also epidemic. Astrology and homeopathy flourish in modern Western societies, almost a majority of the American adult public doesn’t believe in evolution, and a third of young Americans think that the Earth may be flat. Meanwhile, Truth-Defenders point an accusatory finger at the perpetrators, with Trump, Heidegger, Latour, Derrida, and Quentin Tarantino improbably sharing a sinful relativist bed. [1]

I’ve mentioned some examples that take a crisis of scientific credibility as an index of the Truth Crisis. Though I’ll stick with science for most of this piece, it’s worth noting that the credibility of many sorts of expert knowledge is also in play — that of the press, medicine, economics and finance, various layers of government, and so on. It was, in fact, Michael Gove, a Conservative British MP, once Minister in charge of the universities, who, campaigning for Brexit just before the 2016 referendum, announced that “the people in this country have had enough of experts,” and, while he later tried to walk back that claim, the response to his outburst in Brexit Britain abundantly establishes that he hit a nerve.

It seems irresponsible or perverse to reject the idea that there is a Crisis of Truth. No time now for judicious reflection; what’s needed is a full-frontal attack on the Truth Deniers. But it’s good to be sure about the identity of the problem before setting out to solve it. Conceiving the problem as a Crisis of Truth, or even as a Crisis of Scientific Authority, is not, I think, the best starting point. There’s no reason for complacency, but there is reason to reassess which bits of our culture are in a critical state and, once they are securely identified, what therapies are in order.

Start with the idea of Truth. What could be more important, especially if the word is used — as it often is in academic writing — as a placeholder for Reality? But there’s a sort of luminous glow around the notion of Truth that prejudges and pre-processes the attitudes proper to entertain about it. The Truth goes marching on. God is Truth. The Truth shall set you free. Who, except the mad and the malevolent, could possibly be against Truth? It was, after all, Pontius Pilate who asked, “What is Truth?” — and then went off to wash his hands.

So here’s an only apparently pedantic hint about how to construe Truth and also about why our current problem might not be described as a Crisis of Truth. In modern common usage, Truth is a notably uncommon term. The natural home of Truth is not in the workaday vernacular but in weekend, even language-gone-on-holiday, scenes. The notion of Truth tends to crop up when statements about “what’s the case” are put under pressure, questioned, or picked out for celebration. Statements about “the case” can then become instances of the Truth, surrounded by an epistemic halo. Truth is invoked when we swear to tell it — “the whole Truth and nothing but” — in legal settings or in the filling-out of official forms when we’re cautioned against departing from it; or in those sorts of school and bureaucratic exams where we’re made to choose between True and False. Truth is brought into play when it’s suspected that something of importance has been willfully obscured — as when Al Gore famously responded to disbelief in climate change by insisting on “an inconvenient truth” or when we demand to be told the Truth about the safety of GMOs. [2]

Truth-talk appears in such special-purpose forums as valedictory statements where scientists say that their calling is a Search for Truth. And it’s worth considering the difference between saying that and saying they’re working to sequence a breast cancer gene or to predict when a specific Indonesian volcano is most likely to erupt. Truth stands to Matters-That-Are-the-Case roughly as incantations, proverbs, and aphorisms stand to ordinary speech. Truth attaches more to some formal intellectual practices than to others — to philosophy, religion, art, and, of course, science, though even within science the fit is uneven. Compare those sciences that seem good fits with the notion of a Search for Truth to those that seem less good fits: theoretical physics versus seismology, academic brain science versus research on the best flavoring for a soft drink. And, of course, Truth echoes around philosophy classrooms and journals, where theories of what it is are advanced, defended, and endlessly disputed. Philosophers collectively know that Truth is very important, but they don’t collectively know what it is.

I’ve said that Truth figures in worries about the problems of knowledge we’re said to be afflicted with, where saying that we have a Crisis of Truth both intensifies the problem and gives it a moral charge. In May 2019, Angela Merkel gave the commencement speech at Harvard. Prettily noting the significance of Harvard’s motto, Veritas, the German Chancellor described the conditions for academic inquiry, which, she said, requires that “we do not describe lies as truth and truth as lies,” and that we do not “accept abuses [Missstände] as normal.” The Harvard audience stood and cheered: they understood the coded political reference to Trump and evidently agreed that the opposite of Truth was a lie — not just a statement that didn’t match reality but an intentional deception. You can, however, think of Truth’s opposite as nonsense, error, or bullshit; to call it a lie was to position Truth in a moral field. Merkel was not giving Harvard a lesson in philosophy but a lesson in global civic virtue.

What we’re now experiencing is not, I suggest, a Truth Crisis or even a Scientific Authority Crisis. The problems we are confronting are real but they are quite specific. Reflect back on the problems introduced at the outset. I’ve asked many people over the past months about the Crisis of Truth. They seemed to know what I meant and they agreed that there was such a Crisis. But, when asked to provide examples, practically all of them mentioned the same three instances — climate change denial, anti-vaccine sentiment, and various forms of anti-evolutionary thought. There’s no denying their importance. Material consequences follow from belief or disbelief in anthropogenic climate change or the safety of vaccines, but, although it’s depressing that anti-evolutionary attitudes are so widely distributed, it’s not evident that much of practical significance — beyond what’s taught in schools — flows from skepticism about Darwinism.

That’s not a very long list from which to establish a Crisis of Truth, at least in domains pertaining to scientific authority — certainly not long enough to justify concluding that anti-scientific sentiment is pervasive and profound. Consider, too, the status of items on the list. There is widespread and deplorable rejection of vaccine safety, climate change, and species change. But note the far longer list of scientific facts and theories about which there’s no dispute and which enjoy a degree of acceptance that is the envy of many other cultural practices. Here, it’s a good idea to make a distinction between the claims of expert science that are matters of some public concern and those that are matters of public indifference.

Large sections of the public have encountered and, without friction or frisson, accepted a mass of scientific claims — facts and inferences from facts. [3] These include practically everything in the school scientific curriculum and, less importantly, some things that have presence in the increasingly fragmented mass media — for instance, the laws of motion and thermodynamics; the speed of light and the nature of the nervous impulse; the fissile nature of a uranium isotope. Of course, public knowledge of matters like these is superficial, insecure, or (if you like) deficient, but the relevant circumstance here is public exposure and the absence of noticeable argument. So, compared to the shortlist of disputed items that constitute the scientific portion of a Crisis of Truth, we have a very large list of things not in dispute, many circulating smoothly in the public culture.

Several points are worth noting about this state of affairs. The first is the issue of “scientific ignorance” or, as it’s common to say in academic Science & Technology Studies, “the public (mis)understanding of science.” There is massive public ignorance of scientific facts and theories; public knowledge of consensually held scientific knowledge is somewhere between poor and appalling. And it’s customary for Right-Thinking People to shake their heads sadly, sigh, and condemn this. In the present context, scientific ignorance is often pointed to either as a cause of the Crisis of Truth or as key evidence that such a Crisis exists. From which the remedy follows: public ignorance must be repaired. The public should be exposed to a lot more science — the scientific facts and theories of various disciplines or, if there isn’t enough space in the school curriculum, then certainly to whichever version might be selected from among the very many, and often incompatible, accounts of The Scientific Method. Once that happens, the public will think rightly about climate change and the safety of vaccines.

In Carl Sagan’s famous formulation, “We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.” But public ignorance of science is understandable. It’s not a bad thing; it’s even, in many respects, a highly desirable state of affairs. Of course, you want future scientists — and I use this word broadly — to know the relevant facts, theories, and procedures of their specialties. And, insofar as scientific knowledge is accounted part of what it might mean to be an “educated person” — despite disagreement among educators these days about what that is or whether “educated persons” are still supposed to be the product of universities — that, too, might justify the scientific bits of school and higher education.

Yet the scientific education of the larger part of the public not involved in technical activities is, one could say, naturally limited. You want the facts, theories, and methods of making scientific knowledge to be opaque in the same way and for the same reasons that you want the inner mechanisms of your car or cell phone to be opaque. You do not want to open up their black-boxes; you want to take their workings for granted; and, unless you take particular pleasure in knowing these things, you might not want to be burdened with the principles on which your car or your phone works. If you want to dispute my account of scientific-ignorance-as-practical-virtue, consider the full range of now black-boxed scientific knowledge you might wish to open up, explore, and require all sorts of people to command. That includes the science embedded in artefacts like cars and phones, but it also includes black-boxed and taken-care-of ideas — the laws of motion, thermodynamics, aerostatics, and on and on. If you want to say that the public needs foundational knowledge, allowing them to get to grips with live scientific issues, then you can perhaps explain — to the educators and students — exactly how knowing, say, the laws of motion or thermodynamics encourages people to make better decisions about climate change. Not so easy.

Those who offer More Science in the curriculum and in the media as solutions to the Truth Crisis tend to equate science with accomplished science, textbook science, secure scientific facts and well-supported theories. A public better educated in these things will, it’s presumed, be better able to sort out reliable science from junk, pseudoscience, errors, and lies. But recall the shortlist of wrongly challenged knowledge that, on reflection, actually constitutes the alleged scientific Crisis of Truth. Evolution by natural selection is disputed in part because it opposes cherished articles of faith in strands of fundamentalist religion; vaccine safety is disputed in part because parents are desperately concerned to avoid risk to the health of their kids; human-caused climate change is disputed in part because, if it’s the case, people may have to ride a bike, eat less meat, and bring reusable bags to do their shopping. To put it in the blandest terms: disputed science is science that seems worth dispute. In the 17th century, Thomas Hobbes noted, and accounted for, a crucial difference between geometry and ethics — the deliverances of the latter are endemically subject to dispute, those of the former almost never:

[T]he doctrine of Right and Wrong, is perpetually disputed, both by Pen and the Sword: Whereas the doctrine of Lines, and Figures, is not so; because men care not, in that subject what be truth, as a thing that crosses no mans ambition, profit, or lust. For I doubt not, but if it had been a thing contrary to any mans right of dominion, or to the interest of men that have dominion, That the three Angles of a Triangle, should be equall to two Angles of a Square; that doctrine should have been, if not disputed, yet by the burning of all books of Geometry, suppressed, as farre as he whom it concerned was able. [4]


Matters of concern, that is, are likely to be matters of contention. Textbook science is closed science, not considered a matter of concern, and one notable index of closure is that — momentarily if not permanently — such knowledge has escaped crossing human “ambition, profit, or lust.” [5]

The problem we confront is better described not as too little science in public culture but as too much. Given the absurdities and errors abroad in the land, it may seem crazy to say this, yet the point can be pressed. Consider, again, the climate change deniers, the anti-vaxxers, and the creationists. They’re wrong-headed of course, but, like the Moon-landing deniers and the Flat-Earthers, their rejection of Right Thinking is not delivered as anti-science. Instead, it comes garnished with the supposed facts, theories, approved methods, and postures of objectivity and disinterestedness associated with genuine science. Wrong-headedness often advertises its embrace of officially cherished scientific values — skepticism, disinterestedness, universalism, the distinction between secure facts and provisional theories — and frequently does so more vigorously than the science rejected. The deniers’ notion of science sometimes seems, so to speak, hyperscientific, more royalist than the king. And, if you want examples of hyperscientific tendencies in so-called pseudoscience, there are now sensitive studies of the biblical astronomy craze instigated in the 1950s by the psychiatrist Immanuel Velikovsky, or you can consider the meticulous methodological attentiveness of parapsychology, or you can reflect on why it might be that students of the human sciences are deluged with lessons on The Scientific Method while chemists and geologists are typically content with mastering just the various methods of their specialties. The Truth-Deniers find scientific facts and theories shamefully ignored by the elites; they embrace conceptions of a coherent, stable, and effective Scientific Method that the elites are said to violate; they insist on the necessity of radical scientific skepticism, universal replication, and openness to alternative views that the elites contravene. On those criteria, who’s really anti-scientific? Who are the real Truth-Deniers? [6]

If you follow the claims of the Truth-Deniers, you can’t but recognize this surfeit of science — so many facts and theories unknown at elite universities, such an abundance of scientific papers and institutions, such a cacophonous chorus of scientific voices. This is a world in which the democratic “essence” of science is taken very seriously and scientific aristocracy and elitism are condemned. Why should such institutions as Oxford, Harvard, and their like monopolize scientific Truth? It’s hard to fault the principle of scientific democracy, but, as a normal practice, it’s faulted all the time. It is both difficult and now imprudent to say, but established science, like all other professional and expert practices, has always controlled entry, proper conduct, and the rights to speak and to judge. In these senses, it’s not democratic. It is those rights to speak and judge in science that are now being so powerfully contested. Or, to put it more starkly, there’s too much science about — and, properly described, that’s part of our problem.

Turn the epistemic screw once more and modern Truth-Defenders force the Deniers to confront the fact — so often misrepresented — of scientific consensus. The Deniers falsely claim to discern disagreement in the relevant science, and so argue that there’s no solid basis for taking the practical steps experts urge — for example, in the case of climate change science, de-carbonizing our factories, homes, farms, cars, and diet. Insisting on the fact of consensus may be persuasive in certain specific settings, but in many others it kicks the can down the road: a public that was expected to distinguish real science from bad science (or from non-science) is now asked to discriminate between those apparent experts who say there is scientific consensus and those who say there is not, or who say that the consensus pointed to is evidence of Deep State conspiracy. [7]

There is a viscerally appealing and currently powerful argument against my claim that the notion of scientific ignorance doesn’t adequately describe our present problems. That argument is Donald Trump — his confidence in the Chinese Hoax theory of climate change; his cavalier disregard of scientific expertise in favor of kooks, cranks, and criminals. Trump knows no science, and the planet pays the price. Trump is a moron and he’s a liar on an industrial scale. Contrast Trump with Obama and his administration’s environmental policies: good science; better, if not good enough, policies. But here’s another, also probably irritating, claim: it’s not obvious that Trump does know significantly less science than Obama — a highly intelligent man but, after all, just a Harvard lawyer. And there’s no good reason to think that politicians’ personal knowledge of science makes much difference to concrete political outcomes.

Rather, a difference between the two — and a consideration pertinent to links between expert Truth and political consequences — isn’t knowing science but knowing where science lives: who to recognize as knowledgeable and reliable; who to trust; which institutions to consider as the homes of genuine knowledge. Knowing this sort of thing — call it a kind of social knowledge — is a different matter than knowing the laws of motion, the nucleotide makeup of DNA, or the statistical means of determining global temperature and establishing its rate of change. This type of knowledge involves rightly knowing the scientific reputation of institutions; rightly knowing the integrity of those who testify to those reputations; rightly knowing the ascribed virtues and vices of the institutions and their procedures; and even rightly knowing the personal characteristics and material interests of the spokespersons for these institutions and those who testify to their qualities. It involves knowing whose opinion to take, and to take seriously, about matters of which you happen to be ignorant. That sort of knowledge isn’t technical, and people might say that it isn’t scientific, or even that it isn’t really knowledge — but almost all of the technical knowledge that we have is held on that basis. In the distant past, I did advanced scientific work (in genetics, as it happens), but — and I speak here just for myself — everything that I know about climate change, including my knowing that Trump is wrong, is held courtesy of this social knowledge. Being a knowledgeable person may mean knowing a lot of stuff, but it certainly means knowing who knows and who does not know. [8]

Given the demography of the learned world, it’s possible that practically everybody reading this essay possesses that knowledge and shares in the judgments following from it. So, it might be thought, this social knowledge is easy to obtain, with no special expertise required. Admittedly, there is no course of dedicated study to acquire such knowledge, but it’s not easy to obtain and almost impossible effectively to communicate to those who happen not already to have it. People who have this knowledge appear as people like us, possessed of common sense and adequately sound judgment, sharing in our stocks of self-evidence, their minds competently supplied with fit-for-purpose cultural furniture. To know the likely sources of Truth is to be a certain sort of person. And that’s why it’s so hard to understand what it’s like to be someone who knows otherwise — for example, someone who finds the MMR vaccine unacceptably risky. But how do you write down and effectively communicate knowledge held in that way? How do you justify it in the public culture? Believe Harvard climate scientists and don’t believe their peers at Eastern Kentucky Baptist University or at Exxon Mobil. Prefer The New York Times to Breitbart News. These recommendations don’t seem very admirable. They are also frankly undemocratic and they commend prejudice. It’s unlikely that you would want these counsels distributed as global norms — but that’s a problem with how such counsels seem, and with the requirement for rationally and morally justified global norms.

Perhaps it can now be seen just how difficult it is for the laity to know Truth when it’s right in front of them. And why exposing the public to more science isn’t likely to cure the Crisis. In order to know the Truth, in order to have right belief, people must, essentially, be very much like us — not to know facts and theories as their personal possession (since most of us don’t either) but to trust the people and institutions that we trust. The Crisis of Truth is better described as a Crisis of Social Knowledge and, specifically, as a Crisis of Institutions — of institutional authority and legitimacy.

I’ve focused here on the Crisis of Truth recognized in relation to science, but there are many contemporary institutions said to be experiencing a crisis of authority in delivering their special sorts of knowledge. It would not be right to equate a decline with a collapse of the authority of expert institutions: there’s evidence that this authority remains considerable. Surveys of public attitudes to science and scientific institutions yield ambiguous results depending on how questions are phrased, what counts as evidence, and where and when inquiries are conducted. Some opinion surveys indicate that trust in science has markedly declined among conservative groups in the United States while remaining notably stable in other social groups, but there are studies that offer no evidence that there has been any overall change in public confidence in science. [9] Still other surveys give little support for declinism, finding (for the United Kingdom and the European Union) “broadly positive public attitudes towards experts — contradicting the bleak commentary associated with the so-called ‘post-truth era’.” [10] Sociologist Gil Eyal begins his fine recent book on modern predicaments of expert authority by apparently agreeing that “we are in the midst of an all-out assault on expertise” and later qualifying the claim (“[N]ot all of ‘science’ is under assault”):

What needs to be explained is not a one-sided “death of expertise,” “mistrust of experts” or “assault on science,” but the two-headed pushmi-pullyu of unprecedented reliance on science and expertise coupled with increased suspicion, skepticism, and dismissal of scientific findings, expert opinion, or even of whole branches of investigation. [11]


Nevertheless, credibility and legitimacy problems attending a number of cultural institutions aren’t new. We’ve never needed critical analysis to support the belief that governments lie. Machiavelli recommended deceit as sound policy, and, in a famous 17th-century formulation, the English diplomat Henry Wotton defined an ambassador as “an honest gentleman sent to lie abroad for the good of his country.” English High Court judges, asked to consider a summons against Boris Johnson for demonstrable falsehoods in the referendum on EU membership, ruled against the petitioners, saying that everyone knew that lying was part of politics. Trump lies every hour on the hour, and Trumpism is often accounted an extreme version of the emerging global normal. We know that politicians lie: what we’re now debating is whether they lie more, or more brazenly, than they used to. And we don’t need to be told that companies lie to protect corporate profits and executive bonuses. So one thing to be said about science is that its patchy difficulties as a recognized source of Truth have begun to look something like the more far-reaching problems of intellectual authority routinely attached to the pronouncements of government and business.

If the Crisis is one of institutions and the credibility of their knowledge, how did the institution of science get itself into this state? The current difficulties of institutional legitimacy arise from institutional success — in the case of science, historical success in enfolding scientific inquiry and scientific findings into modern civic life, especially the practices of government and commerce. You can call that a realization of the Baconian dream — inserting scientific knowledge into the constitution and exercise of power and securing wide appreciation that science does play that role. Of course, we should understand the extent to which natural science and practical mathematics have been folded into governmentality and commerce since Archimedean and Vitruvian antiquity. In Bismarckian and Wilhelmine Germany, innovative chemical, pharmaceutical, and electricity industries sucked in huge numbers of academically trained researchers, as did the great industrial research labs emerging from around 1900 in the United States. By the early 20th century, the asymmetric alliance between science, the state, and industry was being enthusiastically celebrated, and more trained scientists were being employed by industry and government laboratories than by institutions of higher education. In 1917, Max Weber was describing the scientific Beruf solely in disinterested academic terms, but the reality was that the scientific role was being delivered more, and more perceptibly, to the institutions of power and production. [12]

It was, however, the success of the Manhattan Project that ushered in an enduring Cold War confederation of science, the military, civilian expertise, and statecraft. The universities, taken as “natural homes” of science, were fundamentally altered by this alliance between power and knowledge. In 1961, President Eisenhower warned of the military-industrial complex, and, in 1968, a senator rightly modified that to the military-industrial-academic complex. This was a state of affairs that leaders of the scientific community had, for many years, advocated and with which the great majority of scientists were satisfied, even if, by the middle of the 20th century, a few — especially among the Bomb-Makers — were beginning to have nostalgic second thoughts. The scientific life became normalized in institutions long considered external to science, but it also became internally normalized, and was widely recognized as such in sectors of the public culture. On the one hand, the professional normalization of the scientific job meant a degree of autonomy (as mid-20th-century sociologists wrote), but it also meant a degree of conformity and conservatism (as Thomas Kuhn pointed out in the 1960s).

As these sensibilities about the nature of the scientific life filtered into public culture, they provided a way to dispute scientific claims — not because such claims were a technical idiom for furthering supposedly external political or commercial goals, but because they might be self-serving ways of pursuing professional goals. This was evident, for example, in the 2009–2010 “Climategate” controversy: a “conspiracy” of climate scientists was accused of manipulating data to protect professional investments in the reality of global warming. The material advantage protected by alleged climate science malfeasance was not that of, say, Danish wind-turbine companies but of the scientists’ career interests. The more secure the belief in man-made climate change, the more secure the careers of climate scientists. And if Truth-Defenders insist that there is still scientific consensus about climate change, then, the denialists say, “Now we know what this ‘consensus’ really means. What it means is: the fix is in.” [13] For consensus, read conspiracy.

So, by the middle of the 20th century, the scientific community — in the United States and many other Western countries — had achieved a goal long wished for by many of its most vocal members: it had been woven into the fabric of ordinary social, economic, and political life. For many academic students of science — historians, sociologists, and, above all, philosophers — that part of science which was not an academic affair remained scarcely visible, but the reality was that most of science was now conducted within government and business, and much of the public approval of science was based on a sense of its external utilities — if indeed power and profit should be seen as goals external to scientific work. [14] Moreover, insofar as academia can still be viewed as the natural home of science, universities, too, began to rebrand themselves as normal sorts of civic institutions. For at least half a century, universities have made it clear that they should not be thought of as Ivory Towers; they were not disengaged from civic concerns but actively engaged in furthering those concerns. [15] They have come to speak less and less about Truth and more and more about Growing the Economy and increasing their graduates’ earning power. The audit culture imposed neoliberal market standards on the evaluation of academic inquiry, offering an additional sign that science properly belonged in the market, driven by market concerns and evaluated by market criteria. The entanglement of science with business and statecraft historically tracked the disentanglement of science from the institutions of religion. That, too, was celebrated by scientific spokespersons as a great victory, but the difference here was that science and religion in past centuries were both in the Truth Business. [16]

When science becomes so extensively bonded with power and profit, its conditions of credibility look more and more like those of the institutions in which it has been enfolded. Its problems are their problems. Business is not in the business of Truth; it is in the business of business. So why should we expect the science embedded within business to have a straightforward entitlement to the notion of Truth? [17] The same question applies to the science embedded in the State’s exercise of power. Knowledge speaks through institutions; it is embedded in the everyday practices of social life; and if the institutions and the everyday practices are in trouble, so too is their knowledge. Given the relationship between the order of knowledge and the order of society, it’s no surprise that the other Big Thing now widely said to be in Crisis is liberal democracy. [18] The Hobbesian Cui bono? question (Who benefits?) is generally thought pertinent to statecraft and commerce, so why shouldn’t there be dispute over scientific deliverances emerging, and thought to emerge, from government, business, and institutions advertising their relationship to them?

If a crisis of scientific authority is supposed to testify to cultural failure, then it’s a failure that follows significantly from institutional success: the normalization of science. No longer a handmaid to the Sacred, science might have won a battle for cultural supremacy, but at a price: the abandonment of the Sacred’s traditional claim to Truth. The preferred philosophy of science among scientists, and those who support and commend their work, is now some version of pragmatism. Transcendental Truth has been eclipsed. Among the theoretically fashionable, Truth is identified with Power — substantively and not just as a Baconian test of validity — thereby making disinterestedness a nonsense. This is where fashionable cultural theorists name-check Michel Foucault. Foucault described how, by the end of World War II, the “universal intellectual” had been replaced by the “specific intellectual” — taking Oppenheimer as his paradigm — whose emergence was associated with the provision of technical expertise to “State or Capital.” It was in this context that Foucault announced the cohabitation of Truth and Power:

The important thing here, I believe, is that truth isn’t outside power, or lacking in power: contrary to a myth whose history and functions would repay further study, truth isn’t the reward of free spirits, the child of protracted solitude, nor the privilege of those who have succeeded in liberating themselves. Truth is a thing of this world: it is produced only by virtue of multiple forms of constraint. And it induces regular effects of power. [19]


If there is a decline of trust in scientific claims — and the reality and extent of that decline should remain problematic — then distrust tracks a decline in ascribed disinterestedness. And that decline in disinterestedness may be the price paid for secular success. Much about the current damage to scientific authority — such as it is — flows from considerations beyond scientists’ control, but much has been a communally self-inflicted wound.

It would be romantically nostalgic and practically impossible to disentangle science from commerce and government and return it to an Ivory Utopia. A nostalgic return to Truth and disinterestedness would mean a much smaller and poorer science, and it would mean forgoing many of the benefits we enjoy through the enfolding of science in the fabric of everyday civic life. So how to solve such a Crisis as we’re actually experiencing? Frankly, I don’t think that a solution is actually on offer, and I’ve given reasons why some now-popular proposed solutions are likely to prove ineffective. But many problems that have no solutions can and should nevertheless be managed, and managing them as best we can may give time and space for solutions eventually to emerge. If my account is substantially correct, then scientists and those who care for them should have a better appreciation of the price paid for civic success and be more open to suggestions about how that enfolding might be managed. An English proverb has it that he who sups with the devil needs a long spoon: civic institutions are not the devil, but their needs, after all, do not match dedication to Truth. And that appreciation might also encourage some scientists to speak less in public of how science increases profit and enhances power and more in the language of dedication and calling, if the memory of doing so still survives.

¤


[1] Michiko Kakutani, The Death of Truth: Notes on Falsehood in the Age of Trump (New York: Tim Duggan Books, 2018), esp. p. 48; Lee McIntyre, Post-Truth (Cambridge, MA: MIT Press, 2018).

[2] An evident exception is such utterances as “that’s true,” “very true,” “true enough” in ordinary conversation, though here “true” acts as a civil gesture, a filler, not as a considered judgement of validity or correspondence. True, I wrote a book called A Social History of Truth (Chicago: University of Chicago Press, 1994). The title was partly intended to provoke, and the substance of the book dealt with more modest epistemic items — experimental and observational facts and inferences from those facts to theorized states.

[3] That point was briefly made in Steven Shapin, “Science and the Modern World,” in idem, Never Pure: Historical Studies of Science as if It Was Produced by People with Bodies, Situated in Time, Space, Culture, and Society, and Struggling for Credibility and Authority (Baltimore: The Johns Hopkins University Press, 2010), pp. 377–391, on pp. 383–385, and it is powerfully elaborated in Gil Eyal, The Crisis of Expertise (Cambridge: Polity, 2019), esp. p. 7.

[4] Thomas Hobbes, Leviathan, ed. C. B. Macpherson (London: Penguin, 1968; orig. publ. 1651), p. 166.

[5] Cf. Bruno Latour, “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern,” Critical Inquiry 30 (Winter 2004), 225–248. My usage is more rustic, and more straightforwardly Hobbesian, than Latour’s. I invoke matters of concern to pick out claims that are thought to have consequences for people’s “ambition, profit, or lust.” That sense seems, however, to share sensibilities with Latour’s (p. 237) when he refers to “the merging of matters of fact into highly complex, historically situated, richly diverse matters of concern. You can do one sort of thing with mugs, jugs, rocks, swans, cats, mats, but not with Einstein’s Patent Bureau electric coordination of clocks in Bern. Things that gather cannot be thrown at you like objects.” Sociologists Harry Collins and Trevor Pinch have repeatedly urged the importance of a curriculum space for science-in-the-making, e.g., The Golem: What You Should Know about Science, 2nd ed. (Cambridge: Cambridge University Press, 1998).

[6] For the idea of hyperscience, see Michael Gordin, The Pseudoscience Wars: Immanuel Velikovsky and the Birth of the Modern Fringe (Chicago: University of Chicago Press, 2012), and my appreciation: Steven Shapin, “Catastrophism,” London Review of Books 34, no. 21 (November 8, 2012), 35–38. For parapsychology, see, for example, H. M. Collins and T. J. Pinch, “The Construction of the Paranormal: Nothing Unscientific is Happening,” The Sociological Review Monograph, no. 27 (1979), 237–269. For reflection on the fetishizing of methodology in the social sciences: Stanley Aronowitz and Robert Ausch, “A Critique of Methodological Reason,” The Sociological Quarterly 41 (2000), 699–719.

[7] Naomi Oreskes, “The Scientific Consensus on Climate Change,” Science 306, no. 5702 (3 December 2004), p. 1686; idem and Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (New York: Bloomsbury, 2010), ch. 6. Oreskes’s picture of the “critically achieved consensus” of a free-acting, socially diverse scientific community as a warrant for “informed trust” is fleshed out in her recent Tanner Lectures: Why Trust Science? (Princeton: Princeton University Press, 2019). The recognition of diversity, open criticism, pertinent disinterestedness, command of appropriate methods, and superior knowledgeability might well be locally powerful arguments for trust, yet the problem remains of describing the circumstances in which the public recognizes that specific expert communities do possess those characteristics. Oreskes seems to acknowledge such a problem, resolving it by saying that “the social markers of expertise are evident to non-experts” (pp. 221–222). There is, however, much to suggest that neither the existence of such “markers” nor their diagnostic value is self-evident to the public.

[8] This argument was sketched in Shapin, “Science and the Modern World,” pp. 386–389. It is, of course, possible that climate change deniers like Trump or the oil company executives possess the same sorts of social knowledge as their opponents, that they do believe the deliverances of the expert consensus, but prefer publicly to peddle misrepresentations. The difference here would be that other agendas inform their claims, e.g., securing short-term profit or appealing to their political base — or it might be that they just don’t care about the fate of the planet.

[9] Gordon Gauchat, “Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010,” American Sociological Review 77 (2012), 167–187; Cary Funk, “Mixed Messages about Public Trust in Science,” Issues in Science and Technology 34, no. 1 (Fall 2017), 86–88; https://www.pewresearch.org/fact-tank/2019/08/05/5-key-findings-about-public-trust-in-scientists-in-the-u-s/.

[10] Katharine Dommett and Warren Pearce, “What Do We Know about Public Attitudes towards Experts? Reviewing Survey Data in the United Kingdom and European Union,” Public Understanding of Science 28 (2019), 669–678.

[11] Gil Eyal, Crisis of Expertise, pp. 3–4; see also https://www.pewresearch.org/science/2019/08/02/trust-and-mistrust-in-americans-views-of-scientific-experts/.

[12] Steven Shapin, “Weber’s Science as a Vocation: A Moment in the History of ‘Is’ and ‘Ought’,” Journal of Classical Sociology 19 (2019), 290–307.

[13] Robert Tracinski, “Climategate: The Fix Is In” (November 24, 2009): https://www.realclearpolitics.com/articles/2009/11/24/the_fix_is_in_99280.html.

For reportage, see: https://www.theguardian.com/environment/series/climate-wars-hacked-emails.

[14] Steven Shapin, “Invisible Science,” The Hedgehog Review 18, no. 3 (Fall 2016), 34–46.

[15] Steven Shapin, “The Ivory Tower: The History of a Figure of Speech and Its Cultural Uses,” British Journal for the History of Science 45 (2012), 1–27.

[16] A historical qualification: around the time of Galileo and Newton, a crucial distinction was made between natural philosophy — taken as an inquiry into the ultimate nature of things and causal processes — which could be understood as a search for Truth, and practical mathematics (e.g., ballistics, statics, fortification, observational astronomy) — which was just the search for regularities, predictive power, and the grounds of practical action: see Robert S. Westman, “The Astronomer’s Role in the Sixteenth Century: A Preliminary Study,” History of Science 18 (1980), 105–147; Mario Biagioli, “The Social Status of Italian Mathematicians, 1450–1600,” History of Science 27 (1989), 41–95.

[17] See: https://www.pewresearch.org/fact-tank/2019/10/04/most-americans-are-wary-of-industry-funded-research/.

[18] The two crises are treated as much the same in Sophia Rosenfeld, Democracy and Truth: A Short History (Philadelphia: University of Pennsylvania Press, 2019); idem, “Truth and Consequences,” The Hedgehog Review 21, no. 2 (Summer 2019), 18–24; also Daniel A. Bell, “An Equal Say: Where Does Truth Fit into Democracy?” The Nation 308, no. 4 (February 11/18, 2019), pp. 27–31.

[19] Michel Foucault, “Truth and Power,” in idem, Power/Knowledge: Selected Interviews & Other Writings 1972-1977, ed. Colin Gordon, trans. Gordon et al. (New York: Pantheon, 1980), pp. 109–133, on pp. 126–131 (extended quotation on p. 131).

