Everything You Think Is Physical
Julien Crockett discusses cognition and metaphors with George Lakoff and Srini Narayanan, authors of “The Neural Mind: How Brains Think,” in a new installment of the series The Rules We Live By.
By Julien Crockett | October 6, 2025
The Neural Mind: How Brains Think by George Lakoff and Srini Narayanan. University of Chicago Press, 2025. 384 pages.
This interview is part of The Rules We Live By, a series devoted to asking what it means to be a human living by an ever-evolving set of rules. The series is made up of conversations with those who dictate, think deeply about, and seek to bend or break the rules we live by.
¤
THE PROJECT OF UNDERSTANDING how the brain creates thoughts and feelings has progressed in fits and starts, leading some to despair that the so-called “mind-body” problem is fundamentally unanswerable. How can nonphysical ideas reside in physical brains? Yet George Lakoff and Srini Narayanan claim, in their new book The Neural Mind: How Brains Think, that we now have a working theory.
Lakoff, a UC Berkeley cognitive linguist best known for his work on metaphors and how they structure our understanding of the world, and Narayanan, a computational neuroscientist at Google DeepMind, are well positioned to make this claim. Together, they have been at the forefront of the remarkable transformation in neural mind research over the past 50 years as the fields of neuroscience, cognitive science, linguistics, and computer science have reframed our understanding of thought and everyday experience.
In our conversation, we discuss the appropriateness of the computation metaphor for understanding the brain, how neural circuitry creates our experiences, and the importance of the narratives we live within.
¤
JULIEN CROCKETT: Your book, The Neural Mind: How Brains Think, addresses one of the greatest mysteries of our existence: how the physical body creates the nonphysical mind. You start with a quote from neuroscientist Antonio Damasio: “The immune system, the hypothalamus, the ventro-medial frontal cortex, and the Bill of Rights have the same root cause.” How does this quote capture the main thrust of your book?
GEORGE LAKOFF: Damasio is pointing out that ideas are not floating around abstractly in your brain. Rather, everything you think is physical and carried out by neural circuitry, without your awareness. Once you discover this—that everything you think and feel and understand about the world and your body is physical—the question becomes: How does it work? Our book provides a reasonable start to answering that question.
SRINI NARAYANAN: To add to that, a lot of thinking comes from the brain’s interaction with the body. The brain is trying to monitor and control various systems, trying to plan and anticipate potential world states and actions. In order to do this, it needs to have not only a good model of the external world but also a model of its own internal states—states of emotions, hunger. Organs like the hypothalamus are crucial in this process and provide an underpinning for extensions into feelings and thoughts.
Before we get into the extensions that lead to the mind-body problem, I want to discuss how we conceptualize and talk about the brain. We often speak as though the brain were a computer performing computations. You use some metaphors along these lines in the book (e.g., the circuit metaphor, the neural computation metaphor). Do you think the computer metaphor remains accurate and productive? Will we replace it at some point?
GL: I doubt it will be replaced. It could be, and I’m not against it. But the important thing about metaphors is that they provide us with a way of structuring our thoughts, and “computation” is the best metaphor we currently have for providing structure to our understanding of the brain and how it functions.
Beyond metaphor, there is also a definition of “computer” that actually includes the brain, right?
SN: Yes, but there are different definitions of computer and computation. One is computation as defined, say, in the 1930s, by Alan Turing and Alonzo Church as “a way of calculating.” It’s a very abstract definition, but in this sense, a computation is related to any information-processing system, including the brain. In this case, computation has a very precise definition, which is that it can do this Turing machine–type calculation. But the brain has a different, very specific job compared to an abstract computing machine. The brain is working to control the body interacting with the physical and social environment. It’s trying to maintain an optimal internal state through homeostasis while simultaneously engaging in mental time travel to plan, make predictions about the future, and simulate world states. This is mappable to computation but has a different flavor.
I think computation is a reasonable way to think about the brain, but it can also lead to thinking too abstractly and losing the important grounding in the body that is central to how a brain functions.
This ties into artificial intelligence and the promise of (and confusion around) the possibility of creating artificial minds.
GL: Before we go on, let me mention that I knew one of the computer scientists who coined the term “artificial intelligence” [Marvin Minsky], and he emphasized the “artificial” part, not the “intelligence” part. That is, the idea that this isn’t real intelligence—it is the best that we can do algorithmically at this time. People now think that we are building real intelligence, but it was never intended to be understood in that way.
Related to that distinction is the relationship between a model and what it is modeling. In your book, you add a lot of caveats about the importance of not confusing the two. You use the example of a model of a weather system. The model can tell us a lot about the weather but is not the same thing as the weather—it can’t get us wet. Modeling the brain using artificial intelligence, however, seems different. If we understand the brain as an information-processing system, then, the argument goes, the underlying substrate doesn’t seem to matter like it does for a weather forecasting system (i.e., the principle of substrate independence). So, there wouldn’t be anything in principle preventing us from creating a functional equivalent of the mind in software, unlike, say, a simulation of the weather.
SN: There are several ways to make the comparison between an AI system and the brain. One is the input-output relationship and whether a behavior (i.e., the output) is exhibited by a system given certain input. This can be tested with, for example, a math problem or the Turing test.
Another way is by considering the data required to make an output happen. Consider an AI system today—such as Google’s Gemini or OpenAI’s ChatGPT—the system is basically fed every potential conversation that has ever been recorded and is accessible. It’s trained to generalize from that data, find patterns, and respond to questions. The AI models today have more information than any person in history. But unlike our brain and mind, these systems have not evolved and are not in the business of grounding any of that in anything analogous to brain circuitry. From an input-output point of view, you can say it’s solving X, or it’s able to do Y. But rather than a “brain,” it’s a cultural artifact—it contains all the recorded information of all the people who produced it.
But is that alone sufficient to define what we mean by intelligence? It is certainly a component of intelligence. But when we think about “intelligence,” we also think about social and emotional intelligence. We think about affiliations to other human beings and the feelings that we have toward them. These models are not trained to exhibit or to respond to anything at that level. So, it depends on how you define intelligence. If the definition includes human motivations, feelings, social skills, and adapting and learning on the fly, today’s AI systems do not exhibit these abilities and would not yet be considered “intelligent.” In addition, there is the assumption of “substrate independence,” which gets rid of biological specifics that may undergird our motivations, goals, emotions, and feelings—and therefore moves away from real human intelligence.
If you make these leaps, then you can start equating AI to human intelligence. But instead of faking it by assuming that our current AI systems have humanlike intelligence, perhaps we should think of AI as a beautiful engineering artifact that we can put to use as part of our social ecosystem augmenting what we do to tackle important societal and scientific problems rather than replacing us as artificial humans.
But is there anything in principle preventing us from building an artificial mind that resembles our own? It sounds like the premise of substrate independence misses the underlying importance of the brain being embodied.
SN: There is nothing in principle that prevents us from building artificial minds that are closer to human minds. But the specifics matter, and today’s LLM [large language model] training and inference are very unlike our brains, bodies, and minds. Our brains and bodies are products of biological evolution that have uniquely adapted to the uncertainty and dynamics of our physical and social environment. Critically, we are vulnerable beings who have motivations, feelings, and desires including the existential need to live under an enormous range of conditions. We have no idea and very few actual results pointing to how to endow artificial systems with these kinds of attributes.
You also discuss in your book the various fields involved in the study of thought. You write, “Our hypothesis is that the science of thought and language starts with neuroscience but, in addition, must include cognitive science, linguistics, and computational modeling.” Why are all these fields needed in addition to neuroscience?
GL: It’s because language has meaning. This goes back some years, but I was one of Noam Chomsky’s earliest graduate students. He wrote a book called Syntactic Structures (1957), which assumed that there is a form of mechanical, meaningless syntax that accounts for language. But I thought Chomsky’s theory was incomplete. It missed meaning. It missed understanding. It missed real thought. To address these, in addition to formal syntax, you need to have an understanding of cognition. I got into a lot of trouble with Chomsky because of this. Our disagreement is recorded in a book [by Randy Allen Harris] called The Linguistics Wars: Chomsky, Lakoff, and the Battle over Deep Structure (1993).
SN: Another way to think about it is that each field addresses the topic at a different level. Neuroscience studies neurons and circuits. But when we start thinking about language or thought generally, it spans four or five different levels, where in addition to individual cell types, connections, and circuits, you have systems. If you just start from the bottom up and look at the space of possibilities, it’s huge. You need some way to constrain what you study, and the phenomena you study have to be informed by these other fields because they bring structure at these different levels. And these different levels need descriptions because they are emergent phenomena, if you will. The phenomena they represent are not immediately available from just looking at cells or connections between cells. It doesn’t mean that we won’t be able to explain someday, at the neural level, a lot of these phenomena. But we need these other constraints to even begin to think about what our possible space of solutions looks like.
GL: To reiterate what Srini is saying, our understanding of the science of thought is not yet complete. We are trying to use various theories to help us understand as much as possible about how we function, but we’re going to keep missing things because there is no grand theory that covers it all. We need multiple theories, drawing from various fields.
This reminds me of a fortune cookie a friend of mine got when he went to a Chinese restaurant. It said, “If the brain were so simple that we could understand it, then we couldn’t understand it.” [A variation of this quote is also attributed to Carnegie Tech professor Emerson M. Pugh.]
That’s as insightful a fortune as I’ve heard from a fortune cookie. Computational modeling plays a key role in your book. What was the shift like when you started using computer science in your research?
SN: Computer science forces us to make everything very specific and defined. We need to know which parameters we know well, which we don’t, and be explicit about the assumptions we’re making. Otherwise, we risk talking about the same thing in different ways or completely different things in the same way.
Computational modeling is also very useful for studying processing. It’s not just how our brains are wired that matters but also what happens in the brain: what gets active when, what gets active next, how does that flow happen, how does the pathway that is then triggered contribute to a behavior. That kind of processing is best studied in a computational model where we build a simulation and understand the dynamics of the simulation, which provides a window—a modeling window, nevertheless, but still a better window—into the actual activation patterns.
Computational modeling also allows us to experiment at different levels of abstraction. For example, you can build computational models that try to get at some reduced function in a very detailed way without worrying about all the other biology that a neuroscientist has to deal with.
But you can’t divorce the brain’s internal computations from its internal and external embodiments. This is something we wanted to bring into sharper focus in the book than contemporary research typically does. The brain is not an abstract computing machine. It controls, monitors, and engages in “interoception” in order to project, plan, feel, and think.
GL: It’s your body’s way of telling you what’s going on inside you. Here is an example of the importance of embodiment. One of the first things we learn when we’re functioning in the world as infants is that there are physical limits. If I put my hand down on this desk, my hand isn’t going through the desk. We’re constantly dealing with these types of limits in the external world, and as a result, they are learned so well that we use them unconsciously and automatically.
SN: And to extend this to the topic we started with—metaphors—a lot of our conceptual system is actually based metaphorically on these interactions with the physical world. They are indispensable, and they establish common ground between us, as humans, when we speak and when we coordinate. In fact, we have this deep shared background given our bodies’ interactions with the world, with our social environment. We’re all thrown into this world with that understanding, which helps us develop these different forms of coordination and communication.
George, can you tell us about the false theory of metaphor that arose with the ancient Greeks and that many of us still have, and how your research has corrected this misunderstanding?
GL: The false theory is that metaphor is simply metaphorical language, that metaphor is in the words that we use, and that if we can replace metaphor accurately with literal thought, we can achieve literal thinking. But thinking itself can actually be metaphorical—we often understand the world around us metaphorically. Take these classic examples: More is up, less is down—when you pour more milk into a glass or pile more books on a desk, the level goes up. When you drink the milk or remove books from the pile, the level goes down. Purposes are destinations: achieving a purpose is reaching a destination. Action is motion toward achieving that purpose.
This metaphor system applies every time you go to the refrigerator to get something to eat or drink. You can’t get away from it. Our very thoughts are often metaphorical in our everyday functioning with our bodies in the world. Why? Because metaphors allow us to function much more efficiently. That is part of why metaphors are so important.
In fact, even mathematics uses metaphorical thought, right?
GL: My colleague Rafael Núñez and I have written a book on the subject, Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (2000). It shows how metaphorical thought is used throughout mathematics, especially where various different concepts of infinity are used. For example, one concept of infinity comes with our understanding of the integers: 1, 2, 3 … an unending sequence produced by adding 1 sequentially. A different concept of infinity arises in the idea that “all parallel lines meet at infinity.” Think of railroad tracks extending to infinity: you form an image of them as getting closer and closer until they get so close that they meet—at infinity. Here infinity is a thing—the point where they meet.
Another key point in your book is that we don’t passively see what comes to our senses, but rather our neural circuitry actively creates what we see, what we feel, what we hear, mostly unconsciously.
SN: Here is an example that might make this clearer. When you ask a multimodal AI system “What do you see?,” it might say, “I see a cup” or “I see a lake.” When you ask a human being “What do you see?,” part of the response would be to identify these objects, but also to consider “Can I grab the cup?” or “Can I swim in the lake?” We are constantly in a brain state that is engaged in the world, thinking about the future, and performing actions. We view what is presented in a contextual way that is also conditioned by our internal state. Am I hungry? Am I distracted? Am I tired? Your perception of the same object may well change in different contexts. It isn’t like you’re presented with the world, and like a camera, you record an image of the world. At any given moment, you are in a certain state, engaged in doing something, and that state and your ongoing actions influence your goals and how you see something. This is fundamentally physical, but it also extends to people’s political views and biases. You come into a situation with some brain state, with neural circuits that are already active, and those active neural circuits color and influence how you perceive the world. We are never free of context and goals.
How do these unconscious states integrate into consciousness?
SN: We don’t know. But inspired by work by neuroscientists such as Antonio Damasio, we think that something like consciousness may arise from some basic, embodied, homeostatic circuits that operate to maintain certain states such as temperature, blood pressure, and satiation. These are evolutionarily very old. And they could be precursors to a system that interrogates the internal state, both in terms of valence (is it good? is it bad?) and more information (am I hungry?). So, there is a kind of self-awareness and a self-monitoring condition that then, through processes we don’t really understand, gives rise to feelings and then sentience and consciousness. That’s the story that seems to me to be on the right track. But then, what is a mystery is how the integration you’re asking about forms consciousness. We just don’t know.
Consciousness also has this fascinating autobiographical element. In the book, you discuss different factors that play a role in forming our persistent identities, such as culture and the stories we tell.
GL: That’s right. We’re constantly telling stories, and functioning within stories. At any moment, we are “computing” narratives about our lives. We start to learn as children the stories that we’re functioning in. And these stories are mostly unconscious. But we need them to function in life. And we develop a store of stories. Often when we don’t know what to do, it’s because we don’t know which story we should be functioning in. We have to figure it out. There are metaphors that we live by, but there are also narratives that we live within.
SN: In the book, we also discuss autobiographical memory projection, which may implicate what is called the default network (its anatomy and function are still debated and an active area of research). The brain enters a mode of constantly simulating our own autobiographical story, if you will, in the kind of framing that George used, but also projecting ourselves into the future and imagining what it would be like. So there seems to be this mental time travel that is happening all the time, which we can understand as this default network providing structure to that story. A really interesting question is the connection between the kinds of narratives that George was talking about—the kinds of framings we do autobiographically of ourselves—and this simulation that is happening all the time.
It’s fascinating that our brain uses essentially the same amount of energy in its resting state, when it is engaged in these simulations, as when we are deeply focused on something.
GL: That is exactly right. And it’s interesting that not everyone has the same default network, since we are all living out different stories.
What do you mean?
GL: It means that people are prompted to do certain things in certain contexts. This likely is the result of the interaction between the default network and the attention and salience networks for an individual. This is true in a variety of social situations. If I work in an office, my co-workers will notice things I won’t notice, and correspondingly I’ll notice things that they won’t notice because we have different default network interactions with our attention and salience networks. This is also true in friendships and in family life. A spouse may have concerns about how a house is run and may notice things that his or her partner may not notice and vice versa, because they each have different default network interactions with varying degrees and kinds of similarities and differences. And this is perfectly normal.
I want to end by asking about the importance of understanding how we think. You write,
Understanding how neural systems think is a deep scientific enterprise. But it is also urgent, because the neural unconscious, working socially and politically, is constantly creating a reality that can be very scary and needs to be understood. This book is about science, but its ramifications are about power.
How has studying the mind influenced how you operate in and perceive society?
SN: How we think influences how we act and how we live. Societies operate on prevailing metaphors and cultural narratives. Studying how ideas are created by the social brain and body helps us better understand the genesis and trajectory of different values, experiences, and worldviews. It gives me a new appreciation of how deeply embodied our beliefs, decisions, and actions are. It also makes me realize how inseparable emotions and feelings are from human decision-making and rational behavior.
GL: I agree with Srini. In my book Moral Politics: How Liberals and Conservatives Think (1996), I point out that most Americans unconsciously understand American politics via the Nation As Family metaphor, with liberals and conservatives differing as to what kind of family is optimal: conservatives use the Strict Father family model, while liberals use the Nurturant Parent family model. Incidentally, after I published a version of this, I received a note from Donald Trump Jr. saying, “The trouble with you liberals is that you didn’t have strict enough fathers.” When Barack Obama was first running for president, he said the most important thing his mother taught him was empathy.
When I was seven years old, I heard the word “democracy” used on the radio and asked my father what the word meant. He replied, “It means that there’s nobody better than you and you’re not better than anybody else.” Even a seven-year-old could understand that. Studying the mind has led to my noticing all these things and taking them seriously. I’ve been at this for many decades. When I was 12 years old, my brother Sandy, who was 10 years older than me, took me for a drive. He pulled off the road after a while and asked me what I wanted to do with my life. I replied that I wanted to understand the mind.
¤
George Lakoff is professor emeritus of cognitive science and linguistics at the University of California, Berkeley. He is best known for his discovery of the detailed workings of metaphorical thought and how it structures our understanding of the world. He is the author or co-author of numerous books, including Metaphors We Live By (with Mark Johnson, 1980), published by the University of Chicago Press.
Srini Narayanan is distinguished scientist and senior research director at Google DeepMind, Zurich, where he leads a research group on machine learning and natural language processing. Until 2014, he was director of the International Computer Science Institute, a core faculty member in the Cognitive Science program, and a faculty member at the Institute of Cognitive and Brain Sciences, all at the University of California, Berkeley, where he was also a co-founder of the Berkeley Neural Theory of Language group.
LARB Contributor
Julien Crockett is an intellectual property attorney and the science and law editor at the Los Angeles Review of Books. He runs the LARB column The Rules We Live By, exploring what it means to be a human living by an ever-evolving set of rules.