The year was 1897. The setting was the basement of Columbia University’s Schermerhorn Hall. And the architect of these Beatrix Potter–themed nightmares was Edward Thorndike, a graduate student in psychology. His questions were simple but had deep roots: How do thinking beings arrive at true knowledge — which is to say, knowledge about the world that is simultaneously real and able to be acted upon? Do beings come to truth by reason, or simply by repeatedly trying different stuff until something works? And if “reason” turns out to be fundamentally the same as “simply trying different stuff until it works,” is there such a thing as an ideal way to arrive at truth?
This, as historian Henry Cowles writes in The Scientific Method, was one of a century’s worth of experiments devised by various researchers to figure out what made science a worthy way of accumulating knowledge. From the early 1800s, when “science” meant simply “a body of systematic knowledge,” the term came by the middle of the 20th century to suggest a particular habit of mind — or a set of behaviors — that enabled the production of verifiable truth. Along the way, science became special, rather than general — a way of understanding the world predicated on particular rules or assumptions about the ways in which minds and matter interacted: rules about how to distinguish certainty from doubt.
These rules and assumptions can be summed up today under the rubric of “the scientific method” — understood in the present to be something like: (1) encounter a question; (2) define the problem; (3) make a hypothesis; (4) test the hypothesis; (5) repeat steps three and four until arriving at a solution. The difficulty with this, as Cowles points out in the first sentence of his book, is that “the scientific method” does not exist. In spite of its prominent definite article, there is no one method that all scientists would readily admit as their singular way of conducting business. What does exist in place of the scientific method is a palimpsest of ideas about what it means to know things, and methods for theorizing how knowledge of “nature” feeds back into knowledge about knowing things. Ultimately, Cowles argues, the notion of a method for arriving at truth, and the notion of thought itself, have — incongruously — come to seem like they ought not to be separable.
Thorndike’s experiments with cats were of a piece with the question of how humans come to know things. By way of an answer, Thorndike designed feline escape rooms such that an accidental action made by a panicked animal — like catching a claw on a dangling string — would spring the door of the cage, releasing the cat to a waiting meal of sardines. The animal would then be put back into the room several times, where repeating the action would again win freedom. After this, the animal would be placed in a different cage with more complicated, less accidental escape mechanisms (a lever, or a string obscured by a panel), and the process would begin again. Whether cats learned — and, more importantly, whether they reasoned from particular to general modes of escape — could be measured by the speed at which they solved increasingly abstract puzzles.
Thorndike found that the cats did learn. Those cats who managed to escape from a cage once would escape more quickly when placed in the same cage again. What’s more, when placed in a different cage, with a different puzzle, they would solve the new puzzle more rapidly than the initial ones.
Yet for all of their splendid ingenuity, the cats, Thorndike concluded, exhibited neither reason nor intelligence. (This in spite of his having entitled the 1898 report of his research Animal Intelligence.) It was true, he conceded, that it was difficult to distinguish between the simple, habitual paths of mental association followed by the escaping cats and the more complicated mental associations understood by humans as “intelligence” or “reason.” Nevertheless, the methodical application of truth-yielding techniques — the sine qua non of intelligence, to Thorndike — appeared to him to be a singularly human quality.
If they cared one way or the other, the feelings of the cats vis-à-vis their own rationality didn’t make it into Thorndike’s report. But for modern humans, the question looms large — made larger still by the efforts of Thorndike and his peers. For there is a way in which this version of intelligence — and, indeed, the larger question about the methods and institutions through which we validate knowledge as true and good — has become its own sort of cage from which 21st-century moderns find it difficult to escape. This cage flies under the flag of a “Crisis of Truth” — a catchall term, as Steven Shapin has pointed out in these pages, for an array of ills including a growing popular mistrust of technocratic institutions; widespread disbelief in evolution, climate change, and biomedicine; and the concomitant popularization of mysticism, superstition, and conspiracy theories. It’s too much to say that these social phenomena are caused by failures of scientific method, but understanding the history of scientific reasoning makes it clearer that the problem is neither new nor confined to a simple misunderstanding of methods for knowing what’s true.
The reason it was important for Thorndike to investigate the cognitive qualities of cats in the first place had to do with the notional uniqueness of science itself. It is easy to imagine that, from the dawn of the Enlightenment onward, science was widely recognized as a singular, superior way of understanding the world. With the “scientific method” as its foundation, science would provide the one, surefire method of understanding nature, humanity, and society alike. In this telling, it is only in recent years that science has found itself under serious assault by forces of unreason.
As Cowles explains, however, prior to the middle of the 19th century, the notion of a “method” unique to science would have seemed dubious to its practitioners. “Natural philosophy” was the name given to investigations of nature. It was true that Anglophones in particular frowned on “speculation,” or hypothesizing based on intuition. For “the universal lover and patron of every kind of truth” — as the charter of the Royal Society of London for the Improvement of Natural Knowledge put it — observation and experiment held a special pride of place. Nevertheless, the “science” that “natural philosophers” produced was of a piece with other bodies of systematic knowledge or practice. It was possible, for instance, to speak of the “sweet science” of boxing, or the “gay science” of poetry without abusing a specialized term. “Science” in these instances did not point to any particular method — it just meant systematic knowledge.
This relative harmony between what we now know as “science” and other fields of inquiry (e.g., poetry and pugilism, among other humanistic endeavors) began to fracture, according to Cowles, during the political and institutional disharmony of the first Industrial Revolution. Amid debates in Britain over tradition and progress, centralization and specialization, mechanization and organicism, and class conflict and reform (to name but a few), natural philosophical inquiry seemed to provide a means of securing not only knowledge but political harmony. Into this milieu William Whewell introduced the term “scientist” — an initially unpopular neologism to describe the seductive figure of an individual who cultivated habits of mind that gave rise to objective truths.
This new “scientist,” stipulated Whewell, was not merely an observer of nature but an active participant. He — for the scientist was most definitely male — was disciplined and rigorous, but also creative and intuitive. He was free to “hypothesize” — to make associations between sets of observations based on his intuitions — but these hypotheses were not mere speculation. They were upwellings of an inventive and methodical mind. This emphasis on mind likewise meant that the scientist was not merely a calculating machine, like Charles Babbage’s “analytical engine” — a proposed early mechanical computer that promised to supplement (or even supplant) the process of human thought. Science was a product of dynamic minds at work. It was not simply knowledge dropped at humankind’s feet by passive observation or algorithm.
But if mind was key to scientific practice, what ensured it would give valid results? Whewell himself worried that basing science on mental processes was an inherently unsound idea. Unless one knew how minds worked, it was difficult to make an argument for science as an especially trustworthy endeavor. Following from this concern, research into the workings of science became, ipso facto, research into the workings of mind. It was important to understand, for instance, whether thoughts arose from ever more complex “associations” drawn from primary sense data, or whether certain ideas were innate, preceding even basic sensory information. It was similarly important to know how the brain worked: did it come prepackaged with certain “faculties,” as phrenologists believed, or did the matter of the brain itself form associations? Most important of all, it was vital to develop methods for studying the mind. Could the mind of the scientist best be understood through “introspection” and anecdote — through the thinker querying their own thoughts and asking about the thoughts of others? Or was experimentation of the type employed in fields like physics the key to knowing mind, and therefore knowing knowledge?
The surprising glue that came to bind these questions together, Cowles argues, was Charles Darwin’s theory of evolution. While Darwin himself had no evident designs on creating a system specifically for thinking about scientific thought, his interpreters saw in the notion of natural selection an explanation for how the scientific mind worked. Just as creatures evolved through a slow process of being tested by their environments, so, too, could thoughts be seen as validated by a continual process of testing — a system of “trial and error,” as psychologist Alexander Bain put it. Unsound ideas would be subject to the harsh conditions of inquiry, and would fail to perpetuate. Valid ideas would stand up to the rigors of experiment, and would thrive. What’s more, natural selection offered an explanation for how minds and brains worked in coordination, since the very processes of natural selection that gave rise to the organic stuff of brains also could be seen as validating or invalidating thoughts.
Different researchers, of course, had very different ways of approaching the evolutionary properties of thought. Physiologist William Benjamin Carpenter, for instance, saw thought as based in unconscious “mental reflexes,” themselves grounded in the physical stuff of neurons. Psychologist G. Stanley Hall approached thought as a kind of developmental unfolding, in which different evolutionary stages of human minds manifested at different times in individuals. For philosophers Charles Sanders Peirce and William James, thought was fundamentally a tension between belief and doubt, resolved through the practical impacts of a given idea. The various ways of interpreting thought were almost endless. But, as Cowles shows, regardless of the particulars, evolution provided both an explanation for scientific truth, and an outline of its method.
This is where Thorndike’s cats came in. By the end of the 19th century, an efflorescence of newly minted psychology departments in universities throughout Europe and North America cemented the importance of mental research. This move toward disciplinary definition, in turn, entailed a clarification of method. Increasingly, introspection and anecdote seemed like suspect ways of understanding minds, especially when compared with laboratory experiments. Since it was difficult to experiment upon human beings, however, researchers turned to nonhuman proxies — and with these proxies illuminated new questions about the nature of scientific methodology.
Those who felt that animals displayed reason suggested that, at root, all intelligence was simply a way of connecting means and ends. This went for scientific thought, too. With its constant hypothesizing and testing, science was, then, just a more sophisticated means of pursuing a naturally given mode of arriving at true knowledge. Those who disagreed with the premise that animals could reason pointed to the impossibility of knowing other minds that were so distinct from one’s own. While humans might identify trial and error as “reason,” it simply wasn’t possible to say with any certainty that animals who learned from trial and error did so by establishing a connection (an association, or a hypothesis) between means and ends. The problem, for people such as Thorndike, was not, therefore, that animals couldn’t reason — it was just that they didn’t experiment, and therefore the workings of their minds were opaque.
Debates over the precise contours of experiment and reason had far-reaching implications, but they were accidentally sharpened into the thing that we now call “the scientific method” by the work of philosopher and educational reformer John Dewey. Dewey believed that the minds of others could be known, though he found his “other minds” in schoolchildren rather than animals. His “Laboratory School” — a grammar school run through the University of Chicago — was designed as just the sort of reflexive experiment that he believed exemplified thought itself, and science in particular. Children at the Laboratory School learned through trial and error, and observation and experiment, rather than through rote memorization. At the same time, teachers at the Laboratory School experimented with pedagogy — continually coming up with new ways to steer their charges toward the ends of their school tasks. In both cases, students and teachers alike were learning to think scientifically — not only about the immediate tasks at hand but, thought Dewey, also about interpersonal interactions, politics, and democratic society.
For his part, Dewey saw the school as his own laboratory for studying cognition, and he formulated his conclusions, drawn from observing the teachers and children, into a popular book entitled How We Think (1910). He broke the components of thought into “five distinct steps”:
(i) a felt difficulty; (ii) its location and definition; (iii) suggestion of a possible solution; (iv) development by reason of the bearing of the suggestion; (v) further observation and experiment leading to its acceptance or rejection; that is, the conclusion of belief or disbelief.
These steps were not intended by Dewey to be taken out of the context of the total social being. They were meant to represent how humans, interacting with humans, came to general agreement on knowledge.
Nevertheless, in synthesizing nearly a century’s worth of debate about the cognitive nature of scientific thinking, Dewey had produced something that looked less like a description of general cognition and more like a specific series of steps by which truth could be attained. His scientific method rapidly attained popular status as a sort of “how-to” manual for scientific thinking, proliferating through manifold aspects of everyday life — from the development of scientific management techniques, school curricula, and factory workflows, to ideas about mass politics, governance, and ideology.
Thus, as Cowles writes, “what seemed like a success was also an undoing. This vision of science was rule-bound rather than wild, algorithmic rather than creative.” The idea of science as both a product and a mirror of living minds “became something both constrained and constraining, a set of steps rather than a description of lived reality. The rise of ‘the scientific method’ was, in this sense, less a success than a tragedy.”
According to some of Thorndike’s critics, the major problem with his experiments was that he never really gave his cats a chance to show intelligence or reasoning. By confining the beasts in artificial circumstances, he had necessarily limited their behavioral — and, indeed, cognitive — possibilities. To truly get a sense of the minds of others, as one of Thorndike’s opponents put it, he should have provided conditions “opposite to those created in his experiments, viz freedom, security from harm, satiety, in a word — well-being. Nothing so shrinks and inhibits completely the fullness and variety of an organism’s activities than prison life and fear.” Experimental subjects that were scared, hungry, confused, and uncertain could not be expected to reason well.
At least two of Thorndike’s feline participants appear to have agreed with this assessment. Unique among his research subjects, cats 11 and 13 proved so dispassionate that each time they were placed in a cage, they made only a few desultory attempts to escape, then simply gave up until Thorndike was forced to remove them himself. Reading between the lines of Animal Intelligence, one has a sense of Thorndike’s frustration with these creatures, but also a distinct inkling that the cats did, in fact, learn — they just learned to wait their captor out rather than exert themselves. While this did not impress Thorndike, the contemporary reader is left to conclude that, if cats 11 and 13 didn’t precisely display “reason,” they nevertheless proved the truism that, in certain cases, refusal is a viable form of escape.
This brings us back to the Crisis of Truth — which, following Shapin, is like the scientific method insofar as it doesn’t, precisely, exist. Rather, Shapin argues, Americans (his test subjects) generally subscribe to a wide range of scientifically vetted facts about the world — the speed of light, the existence of atoms, the beneficial and harmful effects of microorganisms, and so forth — without much protest. Most science finds its way without controversy into school curricula, mass media, and, of course, a vast array of technologies and other cultural products. This is not to say that the majority of people understand science in the way experts do — nor is it clear that an expert understanding of, say, thermodynamics would increase vaccine acceptance.
Rather than a Crisis of Truth relating to science, Shapin proposes that the problem is, in fact, an overabundance of belief in scientific certainty, and an acute dependence on the scientific method as a means of navigating points at which science runs up against fear and discomfort. This is certainly borne out, for instance, in discussions among the vaccine hesitant, which tend to rely strongly on the idea that if the science behind vaccines were valid, then vaccines themselves would be absolutely error free. As one author put it, explaining his decision not to vaccinate his children, “a theory is only valid in science until an exception to that theory is found. If only one exception is found, the proposed theory is no longer valid and there are many exceptions to [the theory that vaccines produce antibodies].”
Shapin’s proposal is likewise borne out in the conspiracy thinking that follows from such conclusions: if scientists follow a method that produces incontrovertible facts, and yet the facts that scientists peddle are not incontrovertible … then why are experts lying to us? Surely this is evidence that institutional science is not disinterested or truthful — even if the “scientific method” is still valid. Furthermore, conspiracy thinking — with its insistence on skepticism; on the rectitude of ratiocination among a community of the like-minded; on the availability of facts to those who choose to look — seems a lot like science according to the scientific method. It’s just that conspiratorial science thrives in periods of upheaval, when institutional science appears inadequate to address social disruption, displacement, and distress. 
The scientific method, then, has come to provide something it was never meant to provide — an assurance of certainty rather than a model of productive thought. And by these lights, our manifold crises of “truth” begin to seem less like crises and more like forms of rational (even learned) recalcitrance: an insistence on holding science to a promise it never really made but capitalizes on nonetheless.
Given this state of affairs, Thorndike’s cats may well have left behind some salient lessons for those of us trapped in similar, if more conceptual, cages. In the first place, they tell us that expectations are resilient but can be changed. As Cowles remarks, it would take some work, but science could shrug off its cloak as a singular, exalted method, and “could once again be nothing more — and nothing less — than how we think.” In the second place, rather like the connection between results and reason, the association between science and truth is, as Shapin notes, mostly a matter of how one articulates the problem. Should we wish to consider science not simply as rigid method or an exclusive tool of powerful institutions, but also as a guide for thinking, we might well “speak less in public of how science increases profit and enhances power and more in the language of dedication and calling.” Finally, and perhaps most saliently, it may prove to be the case that the best science for a pluralistic, interconnected 21st century is a science responsive to, contributive to, and, ultimately, interwoven not just with bare institutional or productive power, but also with politics, art, media, culture, comfort, and the quotidian exigencies contingent upon living, thinking beings. This would be a vision of science — and truth — based more on practice and less on destination; more on engagement and less on distance; and more on philosophy and lived experience, and less on cats.
Michael Rossi is a historian of medicine and science on the faculty at the University of Chicago. His work focuses on the historical and cultural metaphysics of the body: how different people at different times understood questions of beauty, truth, falsehood, pain, pleasure, goodness, and reality vis-à-vis their corporeal selves and those of others.
 See Edward L. Thorndike, Animal Intelligence: An Experimental Study of the Associative Processes in Animals (New York: Macmillan, 1898). It’s worth noting that Thorndike ran similar experiments with dogs and chicks, and came up with similar results.
 Steven Shapin, “Is There a Crisis of Truth?,” Los Angeles Review of Books, December 2, 2019.
 Charter of the Royal Society of London for the Improvement of Natural Knowledge.
 On boxing, see, e.g., Pierce Egan, Boxiana; or, Sketches of Ancient and Modern Pugilism (London: G. Smeeton, 1812). On poetry, see W. Taylor, “Review of Histoire de la Littérature Espagnole,” Monthly Review 70 (1813), p. 455.
 Jason Christoff, “Why My Wife and I Decided not to Vaccinate Our Daughter,” http://www.jchristoff.com/why-my-wife-and-i-decided-not-to-vaccinate-our-daughter/. Thanks to Eleanor Cambron for bringing this reference to my attention. The piece is characteristic of the genre, with a heavy appeal to “logical” inconsistencies in scientific positions, and an insistence that complexities in evidence and conflicting ideas indicate corruption or incompetence among scientific practitioners.
 See, e.g., Demetra Kasimis, “The Play of Conspiracy and Democratic Erosion in Plato’s Republic,” American Journal of Political Science 65, no. 4 (October 2021), 926–937, for an assessment of conspiracy thinking in ancient Athens. Susan Lepselter provides a modern, ethnographic reading of conspiracy in “Situating Conspiracy Theory,” Anthropology Now 13 (2021), 25–29.
 Shapin, “Is There a Crisis of Truth?”