IT IS POSSIBLE that John Pringle’s neighbors viewed him with some distaste. Formerly physician-general to the British forces in the Low Countries, in 1749 Pringle had settled down in a rather swanky part of London, where he began investigating the processes behind putrefaction. He would place pieces of beef in a lamp furnace, sometimes combining them with another substance — and then wait for putrefaction to commence, or not. Falsifying an earlier hypothesis, Pringle showed that not only acids but alkalis, like baker’s ammonia, could retard the progress of decay. Even more impressively, some substances, like Peruvian bark (the source of the antimalarial quinine), could outright reverse putrefaction, rendering tainted meat seemingly edible again.

In 1752, Pringle used these results to make medical recommendations in a book entitled Observations on the Diseases of the Army. Rot within the body, he asserted, accounted for a great many diseases afflicting soldiers in particular. It followed that substances that could slow down or reverse putrefaction could also — when swallowed — combat these diseases. Hence the proven success of the bark against “putrid fevers.” Hence also an increasingly common explanation for the oft-remarked utility of citrus fruits as a cure for scurvy. Scurvy was perhaps the putrid disease par excellence, afflicting sufferers with wounds that would reopen or never close; gums that would rot and teeth that would fall out; and breath that was foul, even by the standards of the 18th century. Citrus juices were great preservatives. Surely it was no surprise, then, that they were so valuable in combating scorbutic complaints. [1]

In the 1750s, British science and medicine took a resolutely antiseptic turn, with new results following in quick succession. In 1764, for example, a surgeon named David Macbride published experiments that suggested “fixed air” — what we now call carbon dioxide — was a fine antiseptic. In 1771, a doctor named William Alexander described his tests of meat hung above swamps and “necessary houses.” Counterintuitively, the meat seemed to resist decay for longer than in common air. In 1773, Pringle, now president of the Royal Society and a recipient of the Copley Medal, bestowed the same medal on Joseph Priestley for his work on the discovery of “different kinds of air”: what we would today call gases. These discoveries are, indeed, what Priestley is still most famous for, but Pringle emphasized not merely the fact of them, but their utility. Priestley, for example, had managed a scurvy cure of his own: water imbued with antiseptic “fixed air.” The next time you sip a soda, note that Priestley imagined it would preserve you from the most dreaded of sailors’ diseases.
It can be very tempting, from our contemporary vantage point, to try to tease apart the commendable from the seemingly absurd results above; to applaud the realization that oranges could work against scurvy while condemning the thought that sparkling water might; to sort, in short, the scientific silver from the seemingly unscientific dross. And yet, as the contemporary plaudits and awards indicate, “experts” at the time would not have made the same decisions. Nor is it likely that methodological distinctions will help us much. Precisely the same methods, and precisely the same leaps of brilliance and faith that led in some cases to science that has withstood the test of centuries, led also to results that were rapidly cast into oblivion. Wish as we might, little more than the passage of time and thus hindsight tells us what was “good science” as opposed to a poor guess, based on faulty inferences and deep misunderstandings.

Jump forward almost two centuries from Pringle’s experiments, and a similar question was animating discussions in Austria: How to demarcate science from pseudoscience? Was Sigmund Freud a scientist? What of Karl Marx? It is to the Viennese philosopher Karl Popper that we are indebted for the term “the demarcation problem.” His response — the falsifiability criterion — is most often given as its solution. For a field to be scientific, Popper argued, it must make predictions that could be proven wrong. Were they indeed proven wrong, he further insisted, then we should celebrate rather than lament, for science proceeds not by becoming more true but instead by becoming less false.

However, as Michael Gordin, professor of the history of science at Princeton University, notes early on in his lively and thought-provoking survey of multiple dodgy and perhaps-not-as-dodgy-as-you-thought scientific areas, falsification invariably fails almost before it starts. How do you know that you’ve actually falsified a theory? Is getting a weird, unexpected result enough? (If so, I’d like to announce that I falsified many theories of physics as an undergraduate major.) Rather than falsifying Newton’s theory of gravitation, seeming anomalies in the motion of Uranus led observers to predict the existence of the planet Neptune, its discovery hailed as yet further confirmation of Newton’s genius. It is very, very rare for scientists to abandon a long-held and powerful theory on the basis of a single apparent refutation.

More broadly, Popper’s criterion works like an inattentive bouncer: it lets in too many of those we might like to exclude and excludes too many of those we might like to welcome. Clearly, it is not enough for a claim to be falsifiable for it to be scientific. I have a theory that the reason I’m 10 pounds heavier than I was before the COVID-19 lockdowns is because fairies hold me down on our bathroom scales. That theory is (unfortunately) easily falsifiable, but certainly not scientific. Not all falsifiable claims, then, are scientific, and not all scientific claims are falsifiable. How would one directly test the Big Bang theory, which seeks to explain the origins of the universe? For some time, Popper himself worried that Darwinian evolution, which largely explains the past rather than predicting the future, was not a “testable scientific theory, but a metaphysical research programme.”

In the United States, the zombie-like persistence of the falsifiability criterion can be tied, in part, to a court case that sought to determine whether creationism could be taught in school science classes. In the 1981–’82 federal court case McLean v. Arkansas Board of Education, one of the expert witnesses arguing against creationism as science was the British philosopher of science Michael Ruse. Ruse surely knew the profound problems with Popper’s criterion, but he invoked it anyway. Judge William Overton then cited Ruse and falsifiability in his final judgment against the creationist cause. Wobbly philosophy would become part of legal doctrine.

Philosophers, in general, have given up on the idea that there could be a “bright line,” a clear and straightforward set of criteria, that separates science from its nefarious mirror image, although arguments are still made about somewhat softer modes of demarcation. Part of the problem, of course, is that no one calls themselves a pseudoscientist, any more than they call themselves a quack. Both terms are only ever insults, and the unfortunate fact is that some non-zero percentage of those derided as frauds and humbugs turn out to be right. On the Fringe therefore works not by defining pseudoscience in advance but by exploring a multitude of areas that could or have been called pseudoscientific. The result is often entertaining, but the purpose is ever serious. By studying those areas at or beyond the fringe — and the ways the boundary moves according to changes of time and place — we can gain a good deal of knowledge about the science on the other side of the line.

Gordin groups the almost impossibly wide array of possibly shifty sciences into four areas. The first includes what he terms “vestigial sciences,” those fields or ideas that were once deemed perfectly reasonable but have been rejected since. Think of contemporary believers in astrology, or imagine someone today prescribing Perrier for yellow fever. Second are “hyperpoliticized sciences” intimately associated with particular political regimes — eugenics, for instance, or the “German physics” movement of the Nazi era. Third are those fields that cast themselves as warriors against the establishment, mimicking the forms and logics of mainstream science by funding not only research, but journals, doctoral programs, and institutes — the intelligent design movement, for instance, or climate change denialism. The final category includes a cast of characters ranging from hucksters and frauds to professors at Ivy League universities exploring the possibilities of mesmeric healing, extra-sensory perception, and telekinesis. Among the gems here is a history of the cards used by Dr. Peter Venkman, the character played by Bill Murray in Ghostbusters, to test for psychic ability. Venkman, it will be noted, is the very epitome of the shameless charlatan, albeit eventually revealed, in spite of himself, to have been speaking the truth all along.

The taxonomy is useful. It is easier to think through what is bothersome about a new, questionable area if one can note its family resemblance to previously categorized cases. As Gordin phrases it in his conclusion: “Understanding more of the processes at work in the creation of the fringe, and its heterogeneity, helps us grapple with those few movements that can cause significant public harm.” He is right, too, that much of what we might call pseudoscientific feels fairly harmless or remediable in straightforward ways. Yeti enthusiasts, fans of the Loch Ness Monster, and ufologists seem largely innocuous, and many of those hawking fraudulent products based on nonsense can be prosecuted in the courts. What remains, though, of perhaps the most pressing questions of our age? We are constantly told to “trust the science,” but what does that mean if we cannot quite tell where or what the science is?

It is not quite sufficient, in answering this question, to point to a near-consensus on the part of the scientific community, thus in effect arguing that science is what the vast majority of scientists say it is. If this is largely how the argument does indeed function in the cases of debates about the safety of vaccines or the causes of global warming, we should nonetheless be aware of its flaws. Scientific revolutions, after all, involve rejections of scientific orthodoxy by a committed minority. If many anti-vaxxers or climate-change denialists appear more like Venkman than Galileo, it is nonetheless Galileo’s mantle that they seek. We’re back to Popper’s problem: is there an in-principle way of distinguishing a revolutionary from a dedicated flim-flam artist that doesn’t assume the correctness of either position in advance?

Probably not. But there are broad guidelines that could help. We might insist, for example, that those wishing to challenge the scientific consensus do their homework, demonstrating a competent (but not necessarily expert) understanding of the theories and empirical data they wish to displace. We might require that they have reasonable arguments concerning the theory or data’s flaws that do not presuppose, as some anti-vaccine arguments do, the existence of a large-scale conspiracy to distort the truth. And we would be well within our rights to demand that they argue in good faith and not — as historians of science Naomi Oreskes and Erik Conway have shown for many climate change denialists — as professional water-muddiers, following a playbook first written by cigarette companies attempting to obfuscate a known connection between smoking and cancer.

These criteria would leave untouched the informed and competent true believer, raging against the closed-mindedness of the majority. This is as it should be. The historian’s usual form of induction would suggest that, two centuries from now, a future public will study our science with the critical eye we apply to Pringle and Priestley. Both men were in the mainstream in their own time, but today’s mainstream finds risible their idea that internal putrefaction was the cause of illness and imbibing antiseptics its cure. Certainly, the response from the medical community when President Trump mused about ingesting disinfectant as a cure for COVID-19 suggests this was a vestigial theory at best. Yet the “putrefactive paradigm” of the 18th century made sense of orange juice as a scurvy cure almost two centuries before the disease was identified as resulting from a vitamin C deficiency. It did so while also licensing the use of carbonated water as a similar treatment. Tolerating well-prepared gadflies is the price for any human endeavor that celebrates the possibility that consensus positions could eventually be largely — even laughably — false.


Professor of History of Science at Cornell University, Suman Seth is the author of Difference and Disease: Medicine, Race, and the Eighteenth-Century British Empire (Cambridge University Press, 2018).


[1] For a discussion of the ways that James Lind interpreted the results of what has been called, with considerable hyperbole, the “first controlled clinical trial” (in which he gave six different proposed cures to pairs of sailors suffering from scurvy and determined that those given citrus fruits easily did best), see my Difference and Disease pp. 137–139.