Everything Is Embedded in What Came Before: A Conversation with Robert M. Sapolsky
By Julien Crockett | October 22, 2023
Determined: A Science of Life Without Free Will by Robert M. Sapolsky
WHAT DOES it mean to make a choice? Answering such a question requires input from many fields and a creative mind to synthesize their conclusions. Robert M. Sapolsky, one of our most celebrated neurobiologists, took on that question in his previous book Behave: The Biology of Humans at Our Best and Worst (2017). In his latest work, Determined: A Science of Life Without Free Will (2023), he updates his analysis and asks what it would mean to take seriously the conclusion that we do not have free will. Though distressing at a personal level, that conclusion, Sapolsky argues, is the path to a more humane society.
JULIEN CROCKETT: Most of the discoveries you reference in Determined are from the last 50 years, and half are from the last five, pointing towards a recent shift in biology and related fields. How has the answer to the fundamental question in biology—what is life?—changed during your career?
ROBERT M. SAPOLSKY: The main trend I’ve noticed is people arguing about whether viruses are alive or not. Which gets to the mechanistic point: they’re made of the same stuff as the organisms they infect, the same building blocks, all working by the same principles. So there is some sort of continuum with life. At the other end is deciding when something is dead—when does life end? And what does it mean? We have been able to get EEG waves out of a pig’s brain hours after it has been removed. There is also research suggesting that being in a coma is a heterogeneous experience. So brain death is not quite as straightforward as we used to think.
That ties into the question of our experimental methods and what we can extrapolate from results. How have our methods and explanations changed?
The overall trend over the last 500 years is that science is getting much more reductive, with the psychologists envious of neurobiologists who are envious all the way down to the physicists. Science operates through this belief that if you want to understand a system, you need to break it down into its component parts and get up close with your big fancy magnifying glass. And that has completely dominated biology and the molecular revolution.
Given that techniques, respect, and thinking have all moved hand in hand in this reductive direction, most folks are not thinking about whole cells, let alone whole organisms or the question of whether an organism is “alive” or not. They spend their whole careers looking at how one enzyme changes shape and throwing X-rays at it to try to understand it better. It’s a little asinine to say science has moved in such a cold, detached direction—that it has lost the whole picture—but people mostly think about very tiny parts of systems now. And that’s in part because there’s plenty enough to think about at the tiny level.
Your book advocates bridging fields and taking a big picture approach. In fact, one of the conclusions from Determined is that you can’t answer a question like why someone did something without considering findings from multiple fields. Do you consider yourself an outlier by engaging in this multidisciplinary approach?
It probably happens more as people get older and decide they don’t want to write grant requests anymore and can just go off on any subject they choose. But to the extent I have that perspective to an atypical degree, it’s probably because I’ve had two different sides to my career, one as a lab neurobiologist and the other as a field primatologist. As a field primatologist you talk with evolutionary biologists, ecologists, and sociologists and learn how kids in horrible inner-city settings wind up being antisocial and what childhood adversity tells us about both baby humans and baby baboons. Whereas with the lab work, you learn from the reductionists.
Are scientists increasingly engaging in this multidisciplinary approach or is the trend still to specialize?
To specialize. It takes a certain amount of social skill to make it seem that, rather than straddling two different disciplines, you are a specialist at the intersection of two fields. In reality, what this mostly translates into is that when faculty positions or grant funds are tight, everyone says that while this is really interesting interdisciplinary stuff, the other half of the two disciplines should give this person the slot or the funding. So you risk falling through the cracks in that way. Interdisciplinary is a wonderful trendy word, and every university has a dozen different interdisciplinary institutes, but science is increasingly learning more and more about less and less.
Why did you write a book on free will?
I published a book about five years ago called Behave: The Biology of Humans at Our Best and Worst, which tried to make sense of why a behavior just occurred. For example, somebody just pulled a trigger. Contextually, it could be a great thing, or a terrible thing, or in between. But what went on a second before in that person’s brain, a minute before in the environment that triggered the brain, hours before in the person’s hormone levels, years before when this person was a fetus, a millennium ago when the ancestors were inventing the culture that influenced how the mother constructed the person’s brain through her actions within minutes of birth? So you need to look at this whole long perspective, and that was my chance to do this interdisciplinary song and dance.
In the aftermath, I did a lot of public talks about the book and I would go through like an hour of what happened before the behavior. Invariably in the Q and A, there would be someone who would say, “Given all these biological things, does it not challenge the concept of free will?” To which my response was, Are you kidding? How could that not be exactly where it takes you? Looks like I have to be a lot less subtle about it.
How do you define free will?
My definition is totally out in left field, but I mean it very seriously. Look at a behavior that’s just occurred, and let’s make it an atomistic thing like the example I used a minute ago: you’ve pulled a trigger. Here are the four neurons in your motor cortex that told your muscles to flex. You ask, Why did those four neurons just do that? To show free will, show me that those neurons would have done exactly the same thing regardless of what all the other neurons around them were doing. But that’s not enough. Show me that those four neurons would have done the exact same thing if you weren’t exhausted, or stressed, or euphoric, or blissful, or if your hormone levels had been different, or if the trauma that happened a year ago had never happened, or if you hadn’t found God 30 years ago, or if you had been raised in another culture, or if you had completely different genes. If you could change all of those variables and those four neurons would still have done that exact same thing at that moment, you’ve just proven those four neurons have free will. But you can’t. Everything is embedded in what came before.
You quote the philosopher John Searle, who basically argues that the problem of free will is not “does it exist?” but rather why we have such strong illusions of free will, and whether that is a good thing. Why do you think we evolved to feel that we have free will?
Well, it’s exactly why the 90 percent of philosophers who are compatibilists, and who admit there are atoms out there, nonetheless manage to pull free will out of a hat. The alternative is too distressing. Evolutionary biologists have looked at the human evolution of self-deception and concluded that you can’t get a species this smart, one that knows the sun is going to die someday and so will I, without some very basic mechanisms in the head for being able to deny reality. One of the best definitions I’ve heard of depression is that it’s the pathological loss of the capacity to rationalize away reality. We have a huge incentive for failing to accurately face reality.
You were asked a while ago, “What do you think about machines that think?” and you answered: “Well, of course it depends on who that person is.” Is that still how you would answer that question?
Yeah. For example, we use the same kinase enzymes that phosphorylate receptors when we learn something as do sea slugs when they learn something. It’s the same molecule. You can transplant stem cells and coax them into being a certain specialized type of human cortical neuron and put it into rat brains, and they get smarter. We’re all built from the same building blocks. Of course, the critical thing underlying all of our malaise is that we’re the only machines who can comprehend our machineness. Things just get a little bit distressing and unanswerable at that point.
When they think of the human-machine distinction, most people intuit certain essential differences: humans create meaning while machines don’t; machines do what they’re programmed to do while we make choices. Do these distinctions not exist?
I think instead what happens is something that’s going to start being aligned with AI fairly soon, which is not that we make choices, but that our systems are sufficiently nonlinear that unexpected stuff emerges. For example, put as many neurons in a chimp’s head as we have and the chimp will come up with aesthetics and religion. They would be unrecognizable to us, but put enough neurons together, and that’s what they’re going to do. The nonlinear, nonadditive emergent features are going to bite us in the ass big-time with AI sooner rather than later. But meanwhile, I think that’s the distinctive thing.
Is there no difference then between “living” intelligence and the intelligence of a calculating machine?
On some fundamental level, there are mechanistic rules. We have ones that are more complex, more multifaceted, and we have so many more of them that it’s hard to see the similarities between us and a clock or fruit fly. But it’s the same essential systems. We happen to use the exact same biological brain stuff as does a squid—and a clock is completely different in its building blocks—but it’s the same underlying logic.
When writing about three revolutions in biology from the last century—chaos theory, emergent complexity, and quantum indeterminacy—you conclude that these topics “reveal completely unexpected structure and patterns; this enhances rather than quenches the sense that life is more interesting than can be imagined.” How are these three revolutions related, and why do they seem to attract free will believers?
It’s because they’re new and cool and nebulous and counterintuitive and nonadditive and all sorts of unexpected stuff. And dead white males are going to their graves saying this can’t be true. So of course this must be really great. But absolutely none of them are where you get free will from, because in every case it’s a fundamental misunderstanding about how the system works. For example, with chaoticism, people argue that because something is unpredictable, it’s not determined. That’s the fatal flaw. With emergent complexity, the fatal flaw is thinking that as soon as the component parts all come together and do something emergently amazing, the component parts are now a lot smarter, or creative, or agentive. But no, the cool thing is, they’re exactly as stupid as each one is on its own terms, in terms of its limited repertoire. And quantum indeterminacy, well, it attracts crackpot theories like statues attract pigeon shit. It’s tens of orders of magnitude of scale away from affecting anything relevant. And if any of that stuff actually mattered, it would explain why everything you do is random, but that’s not what we’re interested in. We’re interested in our consistent characters and souls.
You write that “no matter how unpredicted an emergent property in the brain might be, neurons are not freed of their histories once they join the complexity.” That seems to sum it up.
I got the chills the first time I figured out how ant intelligence can solve the traveling salesman problem. It’s so damn elegant. And it takes so few rules. And that’s also how our brains are wired. That’s why all of us have brains that are wired in roughly the same way but never identically. But that’s not where you’re going to find free will.
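The ant intelligence Sapolsky refers to is usually formalized as ant-colony optimization: each simulated ant follows only two local rules, prefer nearby cities and prefer edges with stronger pheromone trails, yet good tours emerge from the colony as a whole. Here is a minimal, illustrative Python sketch for the traveling salesman problem; all parameter values and function names are assumptions chosen for illustration, not drawn from the interview.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour, given a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ant_colony_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
                   evaporation=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour = [start]
            unvisited = set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                cand = list(unvisited)
                # The ant's entire decision procedure: weight each
                # candidate city by pheromone strength (alpha) and
                # by closeness (beta).
                weights = [pheromone[cur][j] ** alpha
                           * (1.0 / dist[cur][j]) ** beta for j in cand]
                nxt = rng.choices(cand, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        # Old trails evaporate; each ant reinforces the edges of its
        # own tour in proportion to how short that tour was.
        pheromone = [[p * (1.0 - evaporation) for p in row]
                     for row in pheromone]
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += 1.0 / length
                pheromone[b][a] += 1.0 / length
    return best_tour, best_len
```

Run on six cities arranged in a regular hexagon, the colony typically converges on the perimeter tour, even though no individual ant computes anything beyond its two weighted rules.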
What is the best argument you’ve heard for the existence of free will?
I’m really getting insufferable if I say I don’t know of any good ones. The one that nonetheless has to stop you in your tracks is this: why is it that we can predict so little?
As I note in the book, I’ve started working with public defenders’ offices on murder trials where there are neurological complications and mitigating factors. And any good DA—only about half actually ask me this—when they’re cross-examining me after I’ve testified, will ask, “Would every single person with the same head trauma have committed this murder? Could it have been predicted beforehand?” And the answer has to be no, we can’t predict that. But we have hints. If you want to figure out why, for the same amount of damage, this one became a murderer while this other one will merely meet someone and say, “Oh, I see you’re grossly overweight,” we get pretty good predictability from knowing which one was brought up in a stable middle-class family. If you had to bet which one of them had a mother who was a substance abuser during pregnancy, you get a little more predictability there. You can add the pieces together. But when the DA is standing there asking, “Can you brain people say it was inevitable that this person would do this?” No. Without feeling too embarrassed when it’s over with, that’s the question that has to stop the whole show.
Writing about behavior and judgment and punishment is an emotional topic. You acknowledge in the book that at times, it can put you in a “professorial, eggheady sort of rage” that some try to “assess someone’s behavior outside the context of what brought them to that moment of intent, that their history doesn’t matter.” Did you find writing this book more difficult than your previous books? And what was the impact of that on your writing process?
Ultimately, this subject is about social justice. It’s also about CEOs having grotesquely inflated salaries and feeling like they earned it. And it’s also about making sense of why we like vanilla instead of chocolate, or whom we’ve fallen in love with. But most consequentially, it’s a social justice thing. We run the world on the falsehood that this is a just world in which luck evens out over time, and in which it is moral to hold people as culpable for things they had zero control over. I get a little soapboxy at this point, but I really thought the conclusion to the book was going to have to be, well, yeah, this sucks and we’re good at deceiving ourselves. But no. That message is for people who have their nice Ivy League diplomas and are all bummed out because maybe they didn’t earn it. For most people, all this is is liberating. And this is a humane direction for science to go in and it’s desperately needed. We like punishing, and we like feeling agency, and we like attributing agency, and we judge like crazy. But none of it is morally compatible with anything we know about the universe.
What do you think is a better system for punishment than the one we currently have?
My metaphor is you have a car whose brakes don’t work and you don’t know how to fix it. Of course you don’t let it out on the street. But you also don’t tell the car that it deserves not to be able to go for a ride in the park. Rather, under the quarantine model, you do the absolute minimum to make this person no longer a danger, but you don’t do one smidgen more than that. And you frame it in the exact same way that you would frame it when your kid has to stay home from kindergarten because they have a cold. This is a public health model. And there’s one additional step: you need people who are devoting their time to figuring out where this disease came from in the first place.
This conclusion, however, also applies on the flip side: we have to completely get rid of meritocracy, the notion that somebody earned their higher salary or earned their safer neighborhood. But again, just as you need to keep dangerous people off the street, you should make sure your neurosurgeons are the ones who are competent. But don’t tell them that they earned their better salary, or maybe don’t even give them a better salary. In my experience, getting rid of criminal justice is going to be trivial compared to getting rid of meritocracy or meritocratic thinking.
Are you not worried that abandoning meritocratic thinking will undermine the incentive for people to become, for example, neurosurgeons?
Well, there are few issues related to free will where I’m more utopian and clueless. I could go with all sorts of silly answers, along the lines of, “All we have to do is restructure society from top to bottom so that prestige and money are no longer motivators.” Or something else inane, like how there really is no such thing as true altruism (the “scratch an altruist and a hypocrite bleeds” notion), so just hope that the internal rewards will carry people through all the training and sleepless nights. Maybe fall back on the mirror of the notion that sometimes punishment would be okay, but only in a limited instrumental sense—here, it would be okay to use reward in an instrumental sense as well (although that just reinvents incentivizing things). Maybe the best we can do is to go with the inverse of parents reprimanding their miscreant child about how they’re not a bad person but they did a bad thing—don’t facilitate a neurosurgeon deciding that they’re entitled to more than other people; instead, merely proclaim just how wonderful their actions (over which they had no control, etc., etc.) are—“Wow, you rock, there’s no one west of the Mississippi who can remove a glioblastoma like you.” Maybe that could carry things along …
How do you cope with the knowledge that there is no “me” making decisions?
I hypocritically ignore it 99 percent of the time. But you have to try to acknowledge it in particular cases that really matter, like if you’re in a jury box or if you’re a parent trying to make sense of why your child isn’t learning to read. You have to work on it in those domains. But 99 percent of the time, every instinct of mine is like captain of the ship all the way. I haven’t believed in free will for half a century, and I’m still crappy at it.
Robert M. Sapolsky is the author of several works of nonfiction, including A Primate’s Memoir: A Neuroscientist’s Unconventional Life Among the Baboons (2001), The Trouble With Testosterone: And Other Essays on the Biology of the Human Predicament (1997), and Why Zebras Don’t Get Ulcers (1994). His recent book Behave: The Biology of Humans at Our Best and Worst (2017) was a New York Times bestseller and named a best book of the year by The Washington Post and The Wall Street Journal. He is a professor of biology and neurology at Stanford University and the recipient of a MacArthur Foundation “Genius Grant.” He and his wife live in San Francisco.
Julien Crockett is an intellectual property attorney and the science and law editor at the Los Angeles Review of Books. He runs the Los Angeles Review of Books column The Rules We Live By, exploring what it means to be a human living by an ever-evolving set of rules.