LET ME BEGIN by explaining the aim of my title. I assume that two of the most important and closely tied purposes of a research university are the production and reproduction of knowledge, which goes hand in hand with the training of a new generation of scholars, researchers, and artists. We might say that these purposes rest on other more fundamental values, or we might say that they rest upon the claim that knowledge in and of itself is the highest value. Universities might serve truth aspirations, or social and political institutions, or aim to provide the practical wisdom needed to organize a sustainable civilization — or all of these.
Often, universities (and when I say universities, I mean research universities) seem to have only two objects of study: the natural world, and the human — with the latter often placed within the natural world. Chemistry studies carbon, for example, and enables invention, which remakes nature — think plastics or graphene. Psychiatry studies what its modern founder called the soul (psychē) — what contemporary practitioners call “mental behaviors.” Art historians study the technical details of visual objects and their creative processes, sometimes hoping to explain the nature of beauty, the place of art in human life processes, and/or the orders of culture. Of course, I could mention many other fields of study and research, some of which would intensify the divide between human and nature (e.g., quantum gravity) while others would close the gap (e.g., nanotechnology in medicine). Things seem to have changed little since 1956, when C. P. Snow first made his general point about the existence of two cultures, one scientific and one humanistic. According to Snow, members of the science tribe could not speak to members of the literary tribe, and they did not share the same worldview. Snow’s claim was always overstated, and newer sciences that merge the human and the hard sciences — such as genetics and “digital humanities” — make it impossible to take what Snow said too literally.
Snow did, however, give us an important reminder, namely that all the decisions taken in the production and organization of knowledge affect not just conversation and society but the very way in which the human being exists. (“Culture” comes in here as topic and problem, but for the moment, I will set it aside.) I used the phrase, “the production and organization of knowledge.” It might be an unfamiliar phrase, and it can mean or refer to many things. For the moment, let us say it refers to the way universities organize knowledge and the production of new knowledge within disciplines, departments, divisions, fields, and programs. This structure comes from many different sources, but we do not need to analyze it genetically to know its consequences. We see them before us and we live them, especially when we feel the need to mitigate them. For example, the University of Pittsburgh’s Brackenridge program of student and faculty presentations — in which this text was first delivered as a lecture — asks us to speak in clear ways to colleagues who are, as we say, “outside our field.” This is an admirable ambition because it recognizes and addresses a problem, and because it inherently tells each of us that we must do our best to create a community of truth seekers and intellectuals across the differences of field limits and jargons. We have good social, intellectual, and political reasons to insist on such clear communication. We hope to educate and to be good, productive, wise participants in the largest political questions of our time, and we want to make wise choices regarding the various alternatives we imagine for our futures and the future of our species. We believe that learning these communicative skills is essential so that we do not find ourselves in a situation where, for instance, masses of people disbelieve climate science and cannot learn from apocalyptic literature and film.
Nevertheless, there is more. It is with some of this “more” that I am concerned in what follows. First, I want to open up for discussion the idea that the nature of the human is not fixed but changing, that it is historical, and that it changes as a result of human activity. Second, I want to propose that the production of knowledge, even when that knowledge is not about or of the human, both reflects and represents the changing nature of the human and the role the human plays in bringing about that change. I still find it hard to articulate this second point as clearly as I would like. At times, I think of this process as a feedback mechanism. When neuroscience, for example, applies new techniques to discover new truths about human neurophysiology, this changes the very nature of what we think when we say “Human,” and in turn, this changes the nature of the experiments we do and the ways in which we think of what we are — indeed, it changes what the Human is. I want to support this claim today, and suggest a few of its consequences. I will start by reminding you of certain steps along the path modern knowledge has traveled. I want to remind you how, at times, this path has involved a felt need among researchers, teachers, and thinkers to get an understanding of what we are doing, why we are doing it, and the consequences of our actions in the world of knowledge and creativity. Why do I want to remind you of this? In part, because I want to show that there is a tradition of thinking about these questions and their consequences. In another part, because I want to sketch the foundations that make thinking about these topics not only possible, but necessary.
Let me begin with Darwin. It’s well known that Darwin finally published The Origin of Species in 1859 because Wallace was about to publish his own findings on evolution. As we also know from Darwin’s own later remarks, he had excluded discussion of the species homo from his 1859 book because to have included it, he writes, “should thus only add to the prejudices against my views.” The Origin had definitively overturned all possible claims that a god had created a fixed nature with its species at some time in the past. As a matter of science, we might say, Darwin had fossil and other evidence enough to establish the evolution of species and to defend the claim that selection determined the steps and results of that evolution. As a matter of society, his work made intellectually illegitimate all moral and political claims that rested upon assertions of fixed natural law, of divine will — and his work did this across different levels of life and thought, from scholarship, to priests preaching of god’s hidden plan, to all supernatural accounts of human life. Effectively, his work changed the very ground of Western thinking since at least Plato, who had asserted an existing if unclear relation between a world of human finitude and an infinite non-human realm, which Plato called “Ideas” or “forms.” As a matter of art and culture, The Origin of Species displaced a grand narrative of human and natural beginnings, apparent in the West in such texts as Genesis or The Gospel According to John, and intensified Enlightenment processes of secularization. In other words, The Origin of Species established the agreed-upon narrative about nature, and it took the place previously occupied in intellectual and cultural life by monotheistic narratives of “creation” and “redemption.”
In The Descent of Man (1871), Darwin ended his self-imposed silence about the human. In that book, he showed definitively that the human is also an evolutionary species, that it is inherently finite — which means that it is marked by the material limits of time, history, and natural forces (especially sexuality) — and that it is in no extra-natural way a special species, the result of a special creation, a creature of god’s love. In Darwin’s story, there is no pre-planned fate for the species in any religious sense — no messianic return, no heavenly utopia, and no promise of a redeemed nature, a second Eden.
As Darwin said, his main conclusion was simply “that man is descended from some less highly organized form.” This was the main takeaway for both his supporters and his enemies. In the same paragraph, though, he added the more antagonistic claim that anyone who did not see this truth — that the human is natural and part of an all-connected nature — looks out upon the world “like a savage.” This is a very brutal insult, but I want to take it seriously and literally, as a declarative statement and not as the speech act, or “insult.” The method, the scientific world-view, the argument from evidence — on one side of this line is the modern secular intellectual and researcher; on the other side, the savage.
The word “savage” designates those whose common sense, whose habitual worldview, holds that the human is unique and stable, fixed in nature and in natural priority, and that the human is not part of nature itself. One need not be monotheistic to hold this particular set of beliefs. Pre-Socratic Greeks often held elements of this set. By contrast, the non-savage not only reasons from evidence presented by science according to strict method, but the non-savage holds that the human is finite, part of nature, unfixed by law — that is, that there is no fixed human nature, but more fundamentally, that whatever the human is, it is historically variable. Finally, and as important, the non-savage holds that the human is knowable by the human through historical and scientific methods that enable action upon nature and the nature of the human itself. This final point is, for me today, the more important point.
Darwin not only shows us how knowledge changes our idea of the human; he also suggests that newly produced knowledge, and the processes of its production, can in fact change the historical substance of the human itself. The savage has no historical perspective, just as she has no respect for science. Above all, the savage has no idea of how the human working upon the connected natural world, in the process, works upon itself in each action that modifies nature — even if that action is seemingly far from the human.
In The Descent of Man, once Darwin had established that the human is not the result of “a separate act of creation,” he immediately made a very interesting claim about the human. “The intellect must have been all-important to man, even at a very remote period, as enabling him to invent and use language, to make weapons, tools, traps, etc., whereby with the aid of his social habits, he long ago became the most dominant of all living creatures.” What does this imply? Once Darwin has established that the human is part of nature, and so evolving and historical, he opens the door to the idea that the species, homo, can and does alter its own “nature” by its actions in the world. Human dominance depends upon the intellect extending human will and desire across the range of communication, technology, and society. In other words, evolving human intellect alters nature and, consequently, alters the human that hosts or embodies it. The human intellect, at least, would seem to have emergent properties. Of course, I am reading Darwin against himself here because he often spoke about the mind in ways that denied it emergent properties, insisting the human mind is only different in degree from other animals’ minds. Early debate about evolution often concerned the state of the human mind. I quote this passage on the evolving dominance of human intellect with its equally evolving capacities to suggest that within Darwin sat an idea he did not like — namely, a radical differentiation of human intellect.
We do not need to pretend to settle that debate here; a simpler point will do. Darwin’s work reminds us of how recently Western civilization came to accept the fact that the human is of a changing nature, and stopped wrongly believing it to be specially fixed as the god-given master of the natural world. Also, the evolving mechanisms of human dominance, centered in intellect and expressed in knowledge, communication, and transformative work — that is, in “culture” — “feed back” upon the human, altering it in nature, concept, and technology as it alters the world in which it lives.
As I said, there was considerable scholarly dissatisfaction with Darwin’s writings on the human mind. His remarks not only invited debate and criticism, but also opened the opportunity for solutions to the problem his work posed. Let me give you a relevant example of such a solution. Bringing together evolutionary theory and elements of empirical neurobiology, Humberto Maturana invented the word “autopoiesis” both to theorize the reality which Darwin failed to describe accurately and to explain implicitly why he had failed. I mention Maturana here as one instance of a thinker and researcher who tries to take seriously at a fundamental level the issues that might arise when we think of systems, especially human-dominated systems, as self-transformative and persistent. This is, in simple terms, evidence that the human does change in concept, form, and substance in time and that knowledge of this fact has generated serious intellectual work.
Why have I offered you this sort of start to my talk? I want to give you a reason to entertain the idea that neither nature nor concept fixes the human. I want to add that human activity, always understood in relation to human intellect, necessarily changes the human in the very processes by which it thinks and works upon itself and the world. Moreover, I want to add that the outcomes of these processes are not predictable. I mentioned Maturana not only because he gave us the term autopoiesis for thinking about living systems and their internal adaptation, but also because he explains the importance of the individual within these autopoietic processes, a fact he believes Darwin understated in his species-based thinking.
I can put all these fragments together in a tentative way to return us to the topic of the human and the university.
The research university is a place where, perhaps uniquely, humans can both produce knowledge that affects the world, and simultaneously reflect upon and judge that knowledge, the processes of its production, and its effects. Assume for the moment that Darwin and his heirs show that the human can, to a considerable degree, dominate the natural world and so engage, if not control, the conditions of its own evolution. Ask, then, if we should not also expect the university to become the place where we learn to reflect upon and discriminate among the many, often unpredictable, emergent outcomes of our thinking and work. In other words, humans have evolved to become self-making. Having become aware of that fact, humans come to know themselves as historical creatures. I do not mean that humans merely change the way they think about themselves; that is not the point here. Such an idea is just weak perspectivism: the simple idea that there is the human, but we just change how we think about it. The point of invoking Darwin here is to get rid of just that sort of notion. The human has objectively evolved to the point where its acts change itself as part of the autopoietic (perhaps homeostatic) living, natural, and cultural systems it dominates. At some point, the human — as we see from its science, philosophy, art, and politics — became aware of its own evolved dominant position, and of its own existence as a historical creature able to alter its niche and to imagine alternative futures for itself and its worlds. The human not only can but does pose a question to itself: What do we want to be and what do we want the world to be? It creates for itself the possibility of the future. A great tradition of Western intellectual life emerged to establish and confront this question, to explain its inescapability, and to find ways to answer it. Many famous names in all fields of human work belong to this tradition.
Recently, Michel Foucault put the question in ethical terms that have weight inside the university: “The object of study, then, is not [...] the description of what man is but what he can make of himself.”
I mention Foucault here for a few reasons. He had several degrees, including doctorates, in a variety of fields and had studied at the École Normale Supérieure with some of the great philosophers of science in the mid-20th century. He also wrote important works of philosophy, history, and literary criticism, including a groundbreaking book on the emergence of clinical medicine in the 19th century. His work tries to differentiate among and evaluate the consequences of various disciplines and fields of knowledge that develop in different periods to clarify the relations among them and to understand their common assumptions and points of difference. It is also important for our purposes that he was an academic. In fact, by some measures, he was the most important and prestigious academic in Europe for about 20 years or so. He held the most prestigious chair at the most distinguished European research center, the Collège de France. He held the senior professorship in nothing less than the history of the systems of thought. He believed that the most important ethical question intersected with the most important academic and research question, one that could only exist after people like Darwin had done their work. “What can the human make of itself?” becomes a species question, and as such a question of individual judgment and learning. Therefore, Foucault says to university scholars and students — the most important object of study is not mankind, which is a classic humanistic formulation, but what man can make of himself as species and person. The English poet, Alexander Pope, early in the age of Enlightenment, in a long poem called An Essay on Man, wrote this: “Know then thyself, presume not God to scan; / The proper study of mankind is man.” For Pope, God was dark and mysterious, an infinite being unknowable by his creatures. 
One way to know God somewhat was to know his creation, and so he offered the poetic and scientific advice to study the human, but as if the human were a fixed creature, a gift of God, a gateway to knowing the divine. We might say there was no future in that.
Darwin’s great work typifies the expansion of knowledge-producing processes and institutions in the 19th century in the North Atlantic world. We can take the 1810 founding of the Humboldt University in Berlin as the forceful launching point of this expansion, and with it the forceful modernization of the university into something like the research university with which we are familiar. I think it is important that we keep in mind the close relationship between modern knowledge production and the university if we hope to see how the question of the human presses seriously upon those of us who study in the university. The highly influential philosopher, Wilhelm Dilthey, who held the same chair in philosophy that Hegel had held at the Humboldt, made a substantial effort to systematize and create foundations for the new order of knowledge that came into existence in and by means of the research university.
Dilthey noted that several intellectuals had presented well-grounded syntheses of modern natural science, but that no one had done the same for what he called the Geisteswissenschaften. This word is hard to translate into English. The choice of translation changes the active meaning of the term. In part, the difficulty comes from the history of the word Geist within German philosophy, and in part from tradition, which sometimes included the study of math and physics within the sciences of the spirit along with law, but excluded the study of the arts. Setting aside this complex history for now, we can focus on what Dilthey apparently intended to do; namely, provide a conceptual ground for the Geisteswissenschaften similar in authority to the foundations that others had already created for the natural sciences. Princeton University Press has created a complete scholarly edition of Dilthey in English, and they have provided a footnote on the meaning of Geisteswissenschaften that works for us today:
The human sciences (Geisteswissenschaften) encompass both the humanities and the social sciences. All previous translations of Dilthey and most of the writings on Dilthey in English have used the term ‘human studies.’ But current conceptions about the role of interpretation have made it possible to refer to the Geisteswissenschaften as either human sciences or human studies.
Dilthey wanted to introduce to the university a study of study. He knew that new forms of university research tied to education in a scholarly and research context created new problems that required a higher-level reflective course of study as part of the very research agenda of the new universities. On this point, Dilthey was right. Notice how this great philosopher stressed the pedagogical, civil, and social consequences of both new forms of study and training and the problems they created. In very different terms, he shares concerns with Foucault a century later. Dilthey’s metaphors come from his time, so for him society is not a network or autopoietic — it is factory-like and industrial. Yet his fundamental concern does not depend on the historicality of his metaphors. His deep concern is with the mode of education as technical training within the research university. In the research university, education can be and should be more than training in expertise. At a high level, such education can create experts and expertise innovating in various fields of knowledge, indeed capable of producing new fields and new services to society. Dilthey’s point is that the university must take up the study not only of the forces that set in motion these research and teaching processes, but also of their results, which produce certain kinds of human beings within a society, who are capable of changing that society. Without this second level study of study, the entire process will remain opaque. The student-turned-worker will be blind to the forces that shape her work and life occupation. The society will be blind to the consequences of its largest social commitments of human and capital resources — or it will devolve those choices upon those powerful enough to direct those resources to their own ends. Let me be clear about a few things. Dilthey was not advocating for the human sciences as such because they were already in being.
He was advocating for a systematization of human sciences so that individuals could become agents in social formation, or as I said earlier in the context of Darwin, so the human and its individuals could judge the paths desired toward imagined alternative futures.
Dilthey made a particular error, though, in his great book, The Introduction to the Human Sciences. He assumed the separation of natural and human sciences, and that the nature and consequences of the natural sciences were already clear and established. As a result, he did not say, as he should have, that reflection upon the disciplines’ knowledge production should include reflection upon the knowledge and processes of knowledge production within the natural sciences as well. Some later critics identify this as an important error in understanding technology, with consequences visible in war and ecological disaster. Dilthey worried that universities were not preparing students and scholars to reflect upon the transformative forces released by modern knowledge systems. He knew that these forces and systems had created a rupture in human continuity equivalent to the great shifts that took place in 4th-century BCE Athens, when Aristotle produced a sort of science, and Plato made philosophy dominant over religion- and myth-dependent poetry as the highest forms of legitimate human life. If Darwin’s Descent implied the problem of dealing with the human’s ability to affect the future even by altering the human itself, Dilthey not long after brought Darwin’s topic to the center of thought when he said that universities had a central role to play in analyzing and judging how human resources are directed to create futures with healthy societies. We might set aside all the problems attached to notions of progress for the moment to focus on Dilthey’s stress on universities’ obligation to study the effects of knowledge production, a task as important as the creation of knowledge. In effect, Dilthey asked the university to expose the bases for judging the value of various knowledge systems and to train scholars and students in the comparative work, in the criteria needed, to make such judgments.
Dilthey’s work made it possible to place within the research university a new, higher-level course of study; one that is essential to all research, scholarship, and teaching. It is not good enough to teach or train deep into a well of expertise, even though that expertise might be useful and its acquisition, reproduction, and extension exciting. The work has political and ethical implications, sometimes unseen, and it takes place within the Darwinian and post-Darwinian context. We can judge work comparatively and pragmatically, with an eye to knowable outcomes, and with an acknowledgment of the unknowable emergent, and so with proper caution and humility.
Take a simple instance. Imagine the immense difference between Aristotelian and neuro-scientific conceptions of the brain, the massively different concepts of the human implied in each, and the way in which individuals of the species live and work differently because of the distance between these two schools of thought. For Aristotle, the brain was a radiator, a cooling device that helped regulate the passions and reason located in the heart. From the 19th century to today, by contrast, brain science, culminating in cognitive neuroscience, has studied the brain in terms of knowledge and its representation. Is it any surprise that about the time William James helped open this door, Dilthey had the qualms and thoughts he had?
If we translate Geisteswissenschaften as human studies rather than as the humanities, we come out at a different place. Dilthey did not take the divisional boundaries between social sciences and humanities seriously. For him, they were part of the same process of producing knowledge of the human in the world, minus biology. Before him, Adam Smith and John Stuart Mill had set the ground for a unified sense of what we now call the humanities and the social sciences. Mill, in particular, called all of these the moral sciences, in that they have to do with human behavior, with human values, with actions in society and nature, and so the production of culture, which results from those actions. Some scholars believe Geisteswissenschaften was Dilthey’s attempt to translate Mill’s term, “moral sciences” — which included economics as much as it did philosophy.
The American university enforces strict disciplinary and divisional distinctions for study purposes, a fact evidenced by the need for interdisciplinary or multidisciplinary study. Rendering Geisteswissenschaften as human studies, however, makes it clear that Dilthey’s project offers us good reason to study all the work done by humans, especially in the research university. In other words, the comparative judgment of knowledge systems cannot accept the divisional boundaries, if it is to do its job. In Dean Stricker’s terms for the University of Pittsburgh’s Honors College, those boundaries make it hard to be wide as well as deep. If Geisteswissenschaften implies the study of all works of the human spirit or done in the world by the human spirit, then neuroscience as much as economics and poetry must take their place in the field of judgment and comparative study.
Let me invoke the greatest of all authorities to make this point: Wikipedia, or Wikipédia. In American English, we speak of the Humanities, often these days assumed to be the equivalent of the Geisteswissenschaften; that is, the place in the university where value is created and judgment is trained. In other parts of the world — and France will serve for the moment — Geisteswissenschaften gets a much sharper and more inclusive rendering. Les sciences humaines does not accept the division so common to the American research university, a division which has had certain unfortunate results.
Let me start by saying the American humanities have persisted in thinking of their role and of the human in terms derived from very old ideas about the human, and no amount of data-mining or talk of digital humanities will alter the fundamental paradigm of those fields. Let me add that the divisional structure also relegates the critical speculative historical techniques of the humanities away from the so-called empirical sciences, no doubt to the detriment of the latter’s effects upon the human. Consider, as proof of this claim, the recently discovered need to incorporate ethics within medicine, or community review for experiments and studies.
In Europe, around the 17th century, a transition took place from early forms of modernity — what we used to call “the Renaissance” — to later, more advanced modern forms. We identify the transition by the rise of science and modern philosophy, along with the expansion of empires and the commercialization of slavery. The great Italian philosopher, Giambattista Vico, attempted to think through the issues that emerged in this transition. He studied the problems and possibilities created by the new economies, new politics, new science, and new philosophy that emerged in a still much contested world order. Vico was writing in the late 17th and early 18th century; early days in the Age of Enlightenment, but the old world held on tight. He worked under the constant threat of being burnt alive, like several of his friends, by the Inquisition that had not yet given up its battles against modernity.
I want to stress just three points from Vico’s remarkable body of work. Vico was also a university professor. As a younger academic, he held a relatively low-ranking position as professor of rhetoric, and in that position each year he addressed incoming students to advise them on the nature of university life, intellectual responsibility, and the methods of study. It took him seven years to get it right, at which time his university promoted him to Professor of Law, a much higher position. I emphasize again how the issues I am presenting belong very much to the university world and to the intellectual life of serious educators within major universities.
I am going to use Vico’s seventh lecture to incoming students to take away three basic points that should define our study of the human as it decides on what to make of itself and its future. Each of these three points names a very concrete step in university study. First, all study must have a comparative element. If we are to train judgment, we need to compare the outcomes of choices we make when we invest capital and human resources in certain forms of work. Vico offered a rather pragmatic, prudent course of judgment: What advantages come from which forms of study? In this lecture, Vico — at a time when modernity struggled with tradition — compared historical periods of intellectual work as well as disciplinary modes. So, he compared traditional modes of philosophy to Cartesian modes. He compared analytic modes of physics to geometrical modes. He judged the consequences of outcomes by considering not only the content of knowledge but also the cultural effects of that knowledge, as they followed upon choices among these modes.
In his welcoming lecture, he stressed four purposes of university study, which, of course, include original scholarship, research, and creativity as well as simply student study for examinations. We can see how Vico prefigures concerns seen in Dilthey and even Foucault. No doubt, Vico’s advice that study should engage the totality of arts and sciences can no longer be followed. Therefore, we need different approaches to reach the other goals contained within the purposes of study. We need to find ways to be experts in matters involving the life of the body politic, not merely expert in our own deep well of technical expertise. We need to be aware that human will, which is always obscure, is determinative, so we need to train it and evaluate its workings and realize it is in fact always at work. Knowledge production results from human will. How do we think about and judge that? — a question made difficult by the intersection of will and judgment. Here, once more, the university demands high-level self-reflection of all those who study there. All of this effort works to produce scholar citizens wise and prudent in communal and, we will add after Darwin, species life.
Vico stressed to the students, faculty, and government leaders who made up his audience that judgment, choice, and purpose were the lifeblood of academic study. The society and university must provide the instruments and other “complementary resources” needed for research and study. He meant printing, libraries, freedom from censorship, lecture halls, etc. We might now think of similar things as complementary resources — instruments still, labs, libraries, printers, digital resources, trained faculty, etc. Above all these, Vico insists that each “learner” — and this includes faculty and advanced scholars, as well as students — must consistently ask about the purpose of her work, of the institution’s decision to invest in some work and not others, and so on. At times, scholars struggle to know the purpose of their work. Often, the purpose does not become clear until late in a process of discovery. Sometimes, the consequences, which might be different from the purposes, become clear far in the future or elsewhere, in another field of knowledge. These difficulties nonetheless require the persistent ethical commitment to know the aim of any piece of research or creativity. To what end am I working? — that is a persistent question. Without it, the harder question — what is the value of my work, of my purpose, of my willed action? — becomes invisible and impossible to ask. Work then spills out of control, and one risks becoming the mere technician Dilthey saw universities producing in industrial society. The aim or goal is not merely the imagined end point of an experiment or a planned piece of research. The result of the work is not its purpose. The great Roman Cicero made immortal the question, cui bono — that is, in whose interest is whatever you are doing? This becomes the ethical and perhaps political question at the heart of all the human work done in a non-savage age.
The university needs to offer access to and maintain constant awareness of this set of realities, demands, and challenges. It is this, I believe, that we seek to achieve with the demand that we speak to each other clearly.
This essay was originally a Brackenridge Lecture to the Honors College of the University of Pittsburgh, where the majority of the audience consisted of gifted and multidisciplinary undergraduates committed to the educational ideals I express. I want to thank the Honors College and its Dean, Edward Stricker, for creating those conditions and extending me an exciting invitation.