CONTRARY TO THE biblical Gospel of John, in the beginning was not the Word. More important, in the beginning was not the sentence (or even the poet’s line).

One obvious preliminary question here is, What do you mean by “beginning”? Well, let’s say, as global historian Yuval Noah Harari does in Sapiens: A Brief History of Humankind (2014), that one plausible candidate for “beginning” is 200,000 years ago, when Homo sapiens evolved, eventually spreading from Africa throughout the world. It was in the course of that dispersion that all other species of humans — such as Neanderthals — were wiped out.

There are many other pre-linguistic beginnings that Harari (along with others, such as paleoanthropologist Ian Tattersall) cites, in a timeline that ranges from the Big Bang (approximately 13.8 billion years ago) to the formation of “life” on planet Earth (some 3.5 billion years ago) to relatively “recent” landmarks, such as the evolution of the genus Homo roughly 2.5 million years ago. But the beginning of Sapiens a mere couple of hundred thousand years ago will do for our purposes.

The point is that for more than a hundred thousand years after that beginning — until what science writers now call the Cognitive Revolution, roughly 70,000 years ago — tool-making, fire-using, small-animal-hunting, rather insignificant Sapiens had, as far as communication is concerned, only a signal system: pointing and some gestures, as was the case for other animals, especially the great apes. By a signal system, we mean a limited number of non-combinable signs that could be used externally for communication about potential food, enemies, or mates. What went on internally, in terms of “thought,” in the large-brained heads of Sapiens or the smaller-brained heads of chimps (or pick your favorite creature … dolphins, elephants, etc.) remains pretty much obscure, apart from the very sophisticated guessing game known as evolutionary psychology.

Harari remarks, early on in his best-selling, reader-friendly Sapiens, that “the appearance of new ways of thinking and communicating, between 70,000 and 30,000 years ago, constitutes the Cognitive Revolution. What caused it? We’re not sure.” Harari’s modest “we’re not sure,” which he repeats with respect to various other knotty issues, ought to be underscored. When we talk about the origins of language, communication, and human nature, as well as what distinguishes humans from other animals, a considerable degree of modesty about our limited understanding is warranted.

That modesty is one of the things I usually find charming about the world-renowned linguist Noam Chomsky, 87, who, in his new book, What Kind of Creatures Are We? (based on his 2013 Dewey Lectures at Columbia University), asks, “What is language?”; “What can we understand?”; and “What is the common good?”

He begins demurely:

The general question I would like to address in this book is an ancient one […] I am not deluded enough to think I can provide a satisfactory answer, but it seems reasonable to believe that in some domains at least, particularly with regard to our cognitive nature, there are insights of some interest and significance.

I’m not at all sure that Chomsky satisfactorily answers his book’s folksy title question, but as almost always with the questions Chomsky asks, the project seems promising.

As for what happened 70,000 years ago, as Harari says,

The most commonly believed theory argues that accidental genetic mutations changed the inner wiring of the brains of Sapiens, enabling them to think in unprecedented ways and to communicate using an altogether new type of language. We might call it the Tree of Knowledge mutation. Why did it occur in Sapiens DNA rather than in that of Neanderthals? It was a matter of pure chance, as far as we can tell. But it’s more important to understand the consequences of the Tree of Knowledge mutation than its causes. What was so special about the new Sapiens language that it enabled us to conquer the world?

Before we get to what’s so specially empowering about language, it should be noted that Chomsky deserves a good deal of credit for that “most commonly believed theory” about genetic mutations in human brains. More than a half-century ago, Chomsky reoriented linguistics toward biolinguistics, taking the view that the ability to acquire language as well as the underlying structure of language are biologically determined in the human mind and thus genetically transmitted. So, all humans share a capacity to acquire language, irrespective of sociocultural differences, even though those cultural differences will have a major impact on actual language acquisition and use. At the time, in the mid-20th century, Chomsky was concerned with refuting behaviorist psychology, which regarded human minds as a sort of blank slate and thus treated language as purely learned behavior.

Clive Wynne, in his Do Animals Think? (2004), provides a useful and clear account of what’s so special about language. By the way, the short answer to Wynne’s title question is: Well, yes, sort of, but not very much, given that thinking is so dependent on language, and other animals don’t have language. Wynne emphasizes the set of rules or algorithms that we call grammar and apply to our vocabularies, both lexical (“ball,” “the,” “fire”) and referential (“The French Revolution,” “Noam Chomsky,” “Christina Aguilera”), in order to combine those items into new thoughts and sentences. “Grammar is the crucial lubricant,” says Wynne, “that opens language up from being limited by our vocabulary to being completely infinite in its expressive possibilities.” He adds, “Without grammar there is no language.” And maybe, without language, there isn’t much thinking.

Wynne, a psychologist who is otherwise filled with wonder and delight at the capacities of nonhuman animals, provides a careful account of scientific efforts to teach other hominids language, including Herbert Terrace’s well-known experiments with a chimp he mischievously named Nim Chimpsky. Wynne notes that while humans are stringing together little sentences at age two or three, “this never happened to Nim. The average length of his utterances remained stuck at only a little over one word.” Neither Nim nor any of the subsequent word- and sign-acquiring chimps of the 1980s and 1990s ever demonstrated anything close to a minimal grasp of grammar. “And grammar,” argues Wynne, “is what makes the difference between being able to express a number of ideas equal to the number of words you know and being able to express any idea whatsoever.” Grammar is what turns vocabularies into open-ended systems; without it, you don’t develop new sentences, new ideas, new thoughts. That’s why, in a sense, humans have rocket ships and chimps don’t.
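For readers who like to see the combinatorics made concrete, Wynne’s claim can be sketched in a few lines of code. The toy grammar below is my own invention, not Wynne’s or Chomsky’s: a finite vocabulary plus one recursive rule already produces an open-ended set of sentences, and each extra level of recursion strictly enlarges that set.

```python
import itertools

# A toy phrase-structure grammar, purely illustrative: finite rules, finite
# vocabulary. The recursive rule S -> S "and" S is what makes the output unbounded.
RULES = {
    "S":  [["NP", "VP"], ["S", "and", "S"]],   # recursion lives here
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["chimp"], ["human"], ["ball"]],
    "V":  [["sees"], ["throws"]],
}

def expand(symbol, depth):
    """Yield all word lists derivable from `symbol` within `depth` rule applications."""
    if symbol not in RULES:                    # terminal word
        yield [symbol]
        return
    if depth == 0:                             # out of budget for nonterminals
        return
    for production in RULES[symbol]:
        # Expand every symbol of the production, then take the cross product.
        parts = [list(expand(sym, depth - 1)) for sym in production]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

shallow = {" ".join(s) for s in expand("S", 4)}
deeper  = {" ".join(s) for s in expand("S", 5)}
print(len(shallow), len(deeper))  # prints: 18 342
```

With the same five words and five rules, one more level of recursion takes the grammar from 18 sentences to 342, and nothing stops the climb: that is Wynne’s “completely infinite in its expressive possibilities” in miniature.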

Chomsky says something similar when he observes that

Darwin was not the first to conclude that “the lower animals differ from man solely in his almost infinitely larger power of associating together the most diversified sounds and ideas.” […] But Darwin was the first to have expressed this traditional concept within the framework of an incipient account of human evolution.

The rigid and limited signal systems of most animals contrast with the vast vocabularies and flexible grammar of humans, a major marker of the unique kind of creatures we are. There are primatologists, such as Frans de Waal, who remind us of the importance of seeing the similarities and continuities between Sapiens and other primates, but it’s the linguistic and cognitive gap that distinguishes us from our chimpanzee relatives.

In his Columbia lectures, Chomsky emphasizes the abruptness of the Cognitive Revolution, citing Tattersall’s Masters of the Planet: The Search for Our Human Origins (2012) as a good recent guide to the issues. Tattersall observes that it was once believed that the evolutionary record would yield “early harbingers of our later selves.” He continues:

The reality, however, is otherwise, for it is becoming increasingly clear that the acquisition of the uniquely modern [human] sensibility was instead an abrupt and recent event. […] And the expression of this new sensibility was almost certainly crucially abetted by the invention of what is perhaps the single most remarkable thing about our modern selves: language.

Chomsky notes that Tattersall dates this “abrupt and sudden event” as likely occurring “within the very narrow window of 50,000 to 100,000 years ago.” Chomsky adds,

The exact dates are unclear, and not relevant to our concerns here, but the abruptness of the emergence is. […] If Tattersall’s account is basically accurate, as the very limited empirical evidence indicates, then what emerged […] was an infinite power of “associating the most diversified sounds and ideas,” in Darwin’s words.

At this point, something odd happens in Chomsky’s discourse. Although the folksy register of his talk is maintained, the informal tone is no longer directed to a general audience of laypeople but becomes an inside-baseball lecture (or, more properly, an inside-syntax lecture) aimed at an audience of linguists and other specialists. Even semi-professional readers, who have some familiarity with the topic, will find Chomsky’s exposition a slog:

The concept of finite systems with infinite power […] made it possible to provide a clear formulation of what I think we should recognize to be the most basic property of language, which I will refer to just as the Basic Property: each language provides an unbounded array of hierarchically structured expressions that receive interpretations at two interfaces, sensorimotor for externalization and conceptual-intentional for mental processes.
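Chomsky’s “unbounded array of hierarchically structured expressions” is usually glossed through his Merge operation, and a rough sketch can make the idea less forbidding. The code below is my own illustration, not the book’s formalism; the function names and examples are invented. Merge combines two syntactic objects into a new set-like object, and because it can reapply to its own output, hierarchy deepens without bound.

```python
# Illustrative sketch of Merge (my gloss, not Chomsky's notation):
# Merge(X, Y) = {X, Y}, an unordered set-like object. Repeated application
# builds nested hierarchy, not a flat string of words.
def merge(x, y):
    """Combine two syntactic objects into a new, hierarchically structured one."""
    return frozenset([x, y])

def depth(obj):
    """Hierarchical depth: bare words are 0, each application of Merge adds a level."""
    if isinstance(obj, str):
        return 0
    return 1 + max(depth(part) for part in obj)

# Building "Nim [saw [the ball]]" as nested objects rather than a word list:
np = merge("the", "ball")       # depth 1
vp = merge("saw", np)           # depth 2: contains np as a whole unit
clause = merge("Nim", vp)       # depth 3

print(depth(np), depth(vp), depth(clause))  # prints: 1 2 3
```

The point of the sketch is that the verb phrase contains the noun phrase as a single unit, not as two adjacent words, and since `merge` happily takes its own output as input, there is no longest expression: that is the “unbounded” half of the Basic Property.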

Chomsky doesn’t really elaborate on the notion of the “Basic Property,” nor does he explain what exactly “hierarchically structured expressions” refers to, and it turns out that he has no interest, at least for the purposes of this talk, in discussing the “externalization” of language for communicative and other purposes. Instead, he says:

At the very least, then, each language incorporates a computational procedure satisfying the Basic Property. Therefore a theory of the language is by definition a generative grammar, and each language is what is called in technical terms an I-language — “I” standing for internal, individual, and intensional: we are interested in discovering the actual computational procedure, not some set of objects it enumerates, what it “strongly generates” in technical terms, loosely analogous to the proofs generated by an axiom system.

From this point on, Chomsky’s text, at least with respect to the “What is language?” question, is pretty much unreadable for non-specialist and semi-professional auditors and readers. From time to time, Chomsky returns to the comprehensible, always raising interesting questions. For instance, he reminds us of the early research program of linguist Otto Jespersen. “A century ago, Otto Jespersen raised the question of how the structures of language ‘come into existence in the mind of a speaker’ on the basis of finite experience, yielding a ‘notion of structure’ that is ‘definite enough to guide him in framing sentences of his own,’ crucially ‘free expressions’ that are typically new to speaker and hearer,” recalls Chomsky. He adds,

The task of the linguist, then, is to discover these mechanisms and how they arise in the mind, and to go beyond to unearth “the great principles underlying the grammars of all languages,” and by unearthing them to gain “a deeper insight into the innermost nature of human language and of human thought” — ideas that sound much less strange today than they did during the structuralist/behavioral science era that came to dominate much of the field, marginalizing Jespersen’s concerns and the tradition from which they derived.

Indeed, these ideas sound not only “less strange,” but perfectly understandable, whether or not we have the answers to the questions about “the great principles underlying the grammars of all languages.”

Chomsky is here almost exclusively interested in the “internal” nature of language. He says that “investigation of the design of language gives good reason to take seriously a traditional conception of language as essentially an instrument of thought.” He continues:

Externalization then would be an ancillary process, its properties a reflex of the largely or completely independent sensorimotor system. Further investigation supports this conclusion. It follows that processing is a peripheral aspect of language, and that particular uses of language that depend on externalization, among them communication, are even more peripheral, contrary to virtual dogma that has no serious support. It would also follow that the extensive speculation about language evolution in recent years is on the wrong track, with its focus on communication. It is, indeed, virtual dogma that the function of language is communication.

The reader sort of understands what Chomsky is driving at, but it never becomes quite clear why the “language of thought” argument is his almost exclusive focus. As for the possibility of communication as a purpose of language, Chomsky dismisses that notion out of hand. “It is, in the first place, odd to think that language has a purpose,” Chomsky says. “Languages are not tools that humans design but biological objects, like the visual or immune or digestive system. […] Even if the term ‘communication’ is […] used as a cover term for social interaction of various kinds, it remains a minor part of actual language use.”

The problem with the presentation, apart from Chomsky’s rather clunky prose, is that he never explains why he pursues this particular line of inquiry so exclusively. Yet he obviously can’t be utterly indifferent to the rest of the story, since he recommends to us scientists like Tattersall, whose own book takes up a variety of empirical issues about early hominids, providing a sort of history of fossil recovery, discussions of the transition to upright posture and bipedal locomotion, and explorations of diets, tool development, and the like. Chomsky leaves readers to puzzle it out for themselves, and, as a Kirkus review puts it, “general readers may find the text opaque and the narrative flow disconnected.”

For those perhaps frustrated by this particular book but interested in further exploring Chomsky’s ideas, the famed linguist is also, fortunately, famously prolific, beginning with such early works as Syntactic Structures (1957) and Cartesian Linguistics (1966). What Kind of Creatures Are We? is just one of a dozen or more similar books, often brought to publication by Chomsky’s colleagues and admirers. This one has a longish if not especially helpful introduction by Akeel Bilgrami, a prominent Columbia University philosophy professor, who hosted the lectures and edits the series, “Themes in Philosophy,” in which this volume appears. Similar Chomsky books focusing on language range from On Nature and Language (2002), talks Chomsky gave at the University of Siena around the turn of the century, to The Science of Language (2012), Chomsky’s interviews with McGill University philosopher James McGilvray, to, most recently, Robert Berwick and Noam Chomsky’s Why Only Us: Language and Evolution (2016).

Most people, I suspect, will find the social issues more interesting than the technical linguistic questions addressed here, finding themselves, as I did, reading around Chomsky to get more of a picture of those small bands of early, fairly isolated Sapiens sitting around campfires at night in what was otherwise a very dark, often unknown, world — a world without conversation for hundreds of generations.

I think the distinction between language-using humans and signal-system-using other animals is the key divider. What kind of creatures are we? Well, “language animals,” to use the term in the title of philosopher Charles Taylor’s recent book, The Language Animal: The Full Shape of the Human Linguistic Capacity (2016). Of course, language implies more than merely a lexicon: it raises questions about the difference between words and signals, about the relation of thought to utterances, about how language presupposes an entire structure of words governed by rules that allow for an infinite number of new sentences, ideas, and meanings. And as the record indicates, about 30,000 or so years ago, the language-using human mind begins to generate meaning through what we now call “art.” Finally, since language is also a matter of sounds, inevitably we have to think about those organized sounds we call “music.”

In Taylor’s account, much of the focus is on the constitutive character of language and its relation to our sense of reality. Unlike philosophers who take a “realist” position about a mind-independent, objective reality, Taylor, as have others (e.g., Ludwig Wittgenstein, Richard Rorty) before him, argues that language is inseparable from our understanding of reality. The point is nicely made in a passage from the late Hilary Putnam’s Realism with a Human Face (1990), when he says that “elements of what we call ‘language’ or ‘mind’ penetrate so deeply into what we call ‘reality’ that the very project of representing ourselves as being ‘mappers’ of something ‘language-independent’ is fatally compromised from the very start. […] Realism is an impossible attempt to view the world from Nowhere.”

The notion of the gap between infinitely generative language and closed signal systems has been slow to register in the popular imagination. The myth of other language-using animals persists. I recently read a thorough and well-judged examination of dolphin intelligence, Justin Gregg’s Are Dolphins Really Smart? (2013), which debunks the notion that dolphins are “speaking dolphinese.” Although dolphins are certainly relatively intelligent creatures that live complex social lives, it turns out that the myth of their human-like linguistic attributes was, somewhat innocently and amusingly, cooked up by biologists in the 1960s, principally John Lilly. A lot of his epigoni (some of whom I knew) were very much of the age, and could fairly be described as stoned, hippie scientists. Most of Lilly’s speculations about dolphins turned out to be simply untrue.

Although Chomsky dismisses the notion that “language has a purpose,” a range of other scientists have spent considerable time figuring out what, exactly, language is for in terms of its functions in human communities. Michael Tomasello, in his Origins of Human Communication (2008), focuses on the cooperative and informational functions of language and devotes considerable space to examining behavior involving pointing, pantomiming, and gesture as precursors to full-fledged language use. “The underlying reason for the cooperative spirit by which humans work to get the message across,” says Tomasello, “is their species-unique cooperative motivations for communicating in the first place. […] It is not clear that other animal species collaborate in this way in communication,” he notes, adding half-facetiously, “[F]or example, there is no evidence that other animals ever ask one another for clarification.”

Similarly, evolutionary psychologist Robin Dunbar, in Grooming, Gossip, and the Evolution of Language (1996), sees the grooming behavior of monkeys and the equally extensive gossiping of humans as parallel mechanisms for building the social solidarity that could support significantly enlarged human communities. Harari concedes that “most likely, both the gossip theory and the there-is-a-lion-near-the-river theory are valid.” But in his view,

The truly unique feature of our language is not its ability to transmit information about men and lions. Rather, it’s the ability to transmit information about things that do not exist at all. As far as we know, only Sapiens can talk about entire kinds of entities that they have never seen, touched or smelled. Legends, myths, gods and religions appeared for the first time with the Cognitive Revolution. […] This ability to speak about fictions is the most unique feature of Sapiens language. It’s relatively easy to agree that only Homo sapiens can speak about things that don’t really exist, and believe six impossible things before breakfast. You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.

It might have been helpful if Chomsky shared some of Harari’s sense of humor — although Chomsky does get off the acrid quip that, at the rate we’re going, apes are more likely to acquire human rights than undocumented workers.

His other lectures in this slim volume deal with the limits of human knowledge and the politics of the common good, and are comparatively much easier going. The talk on the limits of knowledge (supplemented by a similar but extended essay at the end) usefully examines philosophical and scientific considerations, particularly from the 16th through 18th centuries, of whether some topics are not just “problems” but “mysteries” beyond the human capacity to shape into “admissible hypotheses.” One perspective Chomsky offers that receives too little attention in contemporary philosophy is the degree to which much of what goes on in human beings — from digestion to immune systems to language — occurs outside of consciousness. This willingness to take the unconscious seriously is one of the features of his discussion that sharpens his emphasis on the limits of our knowledge.

Since we are not only individuals who think and use language, but social animals who are by definition faced with the question, How should we live together?, it makes sense for Chomsky to address the issue of the common good. His political critiques, especially of American imperialism, are probably as well known to the broad public as his linguistic ideas, if not better. Here, he delivers a sort of standard stump speech defending his own beliefs in anarcho-syndicalism, libertarian socialism, and philosophic anarchism, relying on the obscure 1930s anarchist theorist Rudolf Rocker, as well as the ideas of the liberal-democratic philosopher John Dewey. Chomsky provides a far more comprehensive exposition of his political analysis in his recent Who Rules the World? (2016).

Since this is the season of much boilerplate political rhetoric in the almost never-ending run-up to the 2016 US presidential election, perhaps we can forgo a debate here about whether anarchism is the best approach to large-scale industrial and information societies and the political states they’ve developed. It may be enough to note that if you want Chomsky’s take on the matter to put alongside those of Donald Trump and Bernie Sanders, it’s available here. Overall, though, the “opaque text” and “disconnected narrative” render this volume of Chomsky less than the ideal introduction to his thought that its informal title suggests it might be. Still, Chomsky meets the requirements of the plaintive lament heard in pop singer Christina Aguilera’s “Say something, I’m giving up on you.”


Stan Persky is the author of Reading the 21st Century (2011) and Post-Communist Stories (2014).