IF YOU TOOK an English survey course when you were an undergraduate, you probably learned about the progression of literary eras. Romanticism was followed by Victorian realism, which was followed by naturalism, modernism, postmodernism, and so on. It’s a good story, this vision of literary history, one that supplies structure to an otherwise baffling amalgam of styles, themes, and idiosyncrasies developed over centuries and distributed around the globe. It might even be true. But if you were an attentive student, you probably had qualms about it. Some of your reservations may have concerned the works you’d actually read. Were Proust and Stein really doing that same, modernist thing? Doesn’t Dickens, in the right light, look an awful lot like Wordsworth? Other doubts, though, would have turned on what was absent from the syllabus.
The usual way in which to pose questions of the second sort — those about the books literary critics don’t read, rather than those they do — involves the biases of the canon. Why so many dead white men? Why not more books from the Global South? Why not more women and working-class writers? Why not more genre fiction and radical literature and everything that isn’t (or wasn’t) always shelved with capital-L Literature? Indeed, these were the pressing questions of a previous generation of literary scholarship. When it turned out that the answers on offer weren’t very good (Greatness! Importance! Tradition!), the canon changed in meaningful, if not radical, ways. Less Scott, Trollope, and Bellow, more Behn, Morrison, and Cisneros.
But changing the canon — or even a proliferation of canons, as literary studies has fractured into a collection of increasingly well-defined subfields — takes us only so far. Readers are finite creatures, capable of making their way through only a tiny fraction of the millions of books published over the centuries. The problem, at this sort of scale, has less to do with canonical selection bias than it does with our inevitable ignorance of nearly everything that has ever been written. It’s one thing to claim that a particular book was influential in its day (though influence is a tricky matter, more sociological and economic than literary) or that a text has been treated as important in subsequent scholarship. It’s something else entirely to argue that the same book is “representative” of a genre’s or an era’s output, especially when even the best-informed critics have read almost none of the material in question.
So how can we know the outlines of literary history without reading an impossible number of books? One answer is that we can’t. At best we tell stories, some of which are more convincing than others for their intended audiences, but all of which are based on vanishingly small slivers of evidence. Alternatively, we abandon the project entirely, preferring instead to make small but more easily defensible claims about individual texts. A third option, though, would be to change the way we work, to preserve large-scale claims by ending the singular identification of literary study with close reading.
If the idea of studying literature without reading it strikes you as somewhere between bizarre and dangerous, you’re not alone. There’s a whole cottage industry devoted to dismissing such projects as hopeless (or trivial, or both) or denouncing them as the death of the humanities. But it’s worth asking what they entail and what they allow before we resign ourselves to living with the tremendous limitations of reading alone.
There’s no single term that captures the range of new, large-scale work currently underway in the literary academy, and that’s probably as it should be. More than a decade ago, the Stanford scholar of world literature Franco Moretti dubbed his quantitative approach to capturing the features and trends of global literary production “distant reading,” a practice that paid particular attention to counting books themselves and owed much to bibliographic and book historical methods. In earlier decades, so-called “humanities computing” joined practitioners of stylometry and authorship attribution, who attempted to quantify the low-level differences between individual texts and writers. More recently, the catchall term “digital humanities” has been used to describe everything from online publishing and new media theory to statistical genre discrimination. In each of these cases, however, the shared recognition — like the impulse behind the earlier turn to cultural theory, albeit with a distinctly quantitative emphasis — has been that there are big gains to be had from looking at literature first as an interlinked, expressive system rather than as something that individual books do well, badly, or typically. At the same time, the gains themselves have as yet been thin on the ground, as much suggestions of future progress as transformative results in their own right. Skeptics could be forgiven for wondering how long the data-driven revolution can remain just around the corner.
Into this uncertain scene comes an important new volume by Matthew Jockers, offering yet another headword (“macroanalysis,” by analogy to macroeconomics) and a range of quantitative studies of 19th-century fiction. Jockers is one of the senior figures in the field, a scholar who has been developing novel ways of digesting large bodies of text for nearly two decades. Despite Jockers’s stature, Macroanalysis is his first book, one that aims to summarize and unify much of his previous research. As such, it covers a lot of ground with varying degrees of technical sophistication. There are chapters devoted to methods as simple as counting the annual number of books published by Irish-American authors and as complex as computational network analysis of literary influence. Aware of this range, Jockers is at pains to draw his material together under the dual headings of literary history and critical method, which is to say that the book aims both to advance a specific argument about the contours of 19th-century literature and to provide a brief in favor of the computational methods that it uses to support such an argument. For some readers, the second half of that pairing — a detailed look into what can be done today with new techniques — will be enough. For others, the book’s success will likely depend on how far they’re persuaded that the literary argument is an important one that can’t be had in the absence of computation.
So, what’s the literary argument? Roughly speaking, it’s that there are real, broadly identifiable differences between whole classes of books from different time periods and genres and written by authors of different genders and nationalities. If that fact seems obvious, we should ask ourselves what we really know about, say, British fiction as a whole and how we could possibly establish that knowledge in any fully persuasive way using our traditional methods. For every purportedly distinguishing feature of British fiction, there are dozens or hundreds of American (or Indian or Russian) books that exhibit it. Eras and periods are the same, with endlessly multiplying forerunners and stragglers outside their temporal bounds; there are thousands of men who write “like” women and vice versa (enough, in fact, that we might wonder if there’s really a difference between them, especially in the modern period). Such difficulties are of course in many ways constitutive. It’s absurd to imagine a definitive answer (computational or otherwise) concerning the nature of British or modernist or women’s writing, or to the singular generic membership of any given book. But if we’re curious and honest, don’t we want to know whether our provisional answers hold up even a little outside the narrow samples on which they’re based? Wouldn’t we like to know if it’s possible to tell the difference between books — all books — published in the United Kingdom and the United States? Or written by women and by men? Not in any individual case susceptible to a dully correct result, but in aggregate?
This is what Jockers has done, at least for a subset of questions about collections of books ranging from a few dozen volumes to a few thousand. In one example, Jockers finds that it’s possible to guess the author, text identity, national origin, author gender, and genre of about 100 19th-century novels by nearly 50 different authors at rates much higher than chance just by examining the frequency of certain common words in those books. Using Jockers’s method — a classification algorithm known as Nearest Shrunken Centroid working on the frequencies of common terms in the corpus — and given an unknown piece of a novel from the same collection, there’s a roughly 90 percent chance of attributing it to the correct author and text (compared to a 1 or 2 percent chance of getting the answer right were we picking at random), about an 80 percent chance of guessing the author’s gender, and better than 50-50 odds of grouping it with other books published in the same decade and genre (from among a dozen or so possibilities). This fact doesn’t tell us much about the books themselves, nor is the list of terms involved — “the,” “she,” “about,” and so forth — likely to spark our interpretive interest. But it does suggest that there’s a low-level linguistic basis for the category distinctions we’ve long been inclined to draw, which is to say that those distinctions appear not to be epiphenomena of our critical practice. There really are measurable differences between authors, nations, eras, and genres. This doesn’t necessarily mean that the categories were built on an invisible or subconscious sense of such linguistic differences, though neither does it rule out that possibility, but it’s an important confirmative result. It’s also one that could be immensely useful for cataloging library books and large digital archives, which typically lack the categorical information the algorithm can supply.
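For readers curious what such a classifier actually does, here is a toy sketch of the nearest-shrunken-centroid idea. Everything in it — the four-word vocabulary, the frequencies, the shrinkage threshold — is invented for illustration; this is not Jockers’s code or corpus, just the bare mechanism: shrink each class’s word-frequency centroid toward the overall mean, then attribute an unseen text to the nearest surviving centroid.

```python
# Toy nearest-shrunken-centroid sketch on invented common-word
# frequencies. Jockers's corpus and feature set are far larger.
import numpy as np

# rows = text samples, columns = relative frequencies of common
# words, e.g. ["the", "she", "about", "of"] (all numbers invented)
X = np.array([
    [0.060, 0.020, 0.004, 0.030],   # author A, sample 1
    [0.058, 0.022, 0.005, 0.031],   # author A, sample 2
    [0.045, 0.001, 0.009, 0.040],   # author B, sample 1
    [0.047, 0.002, 0.008, 0.041],   # author B, sample 2
])
y = np.array([0, 0, 1, 1])          # class labels (here, authors)

def fit_shrunken_centroids(X, y, delta=0.001):
    """Per-class centroids, soft-thresholded toward the overall mean."""
    overall = X.mean(axis=0)
    centroids = []
    for c in np.unique(y):
        diff = X[y == c].mean(axis=0) - overall
        # soft-thresholding: small per-word differences shrink to zero,
        # so only genuinely distinctive words drive the classification
        shrunk = np.sign(diff) * np.maximum(np.abs(diff) - delta, 0.0)
        centroids.append(overall + shrunk)
    return np.array(centroids)

def classify(sample, centroids):
    """Return the index of the nearest shrunken centroid."""
    return int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))

centroids = fit_shrunken_centroids(X, y)
unknown = np.array([0.059, 0.021, 0.004, 0.030])  # an unseen chunk
print(classify(unknown, centroids))  # prints 0: attributed to author A
```

The same mechanism scales to guessing gender, decade, or genre: only the labels in `y` change.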
More practically interesting and ambitious are Jockers’s studies of themes and influence in a larger set of novels from the same period (3,346 of them, to be exact, or about five to 10 percent of those published during the 19th century). These are the only chapters of the book that focus on what we usually understand by the intellectual content of the texts in question, seeking to identify and trace the literary use of meaningful clusters of subject-oriented terms across the corpus. The computational method involved is one known as topic modeling, a statistical approach to identifying such clusters (the topics) in the absence of outside input or training data. What’s exciting about topic modeling is that it can be run quickly over huge swaths of text about which we initially know very little. So instead of developing a hunch about the thematic importance of urban poverty or domestic space or Native Americans in 19th-century fiction and then looking for words that might be associated with those themes — that is, instead of searching Google Books more or less at random on the basis of limited and biased close reading — topic models tell us what groups of words tend to co-occur in statistically improbable ways. These computationally derived word lists are for the most part surprisingly coherent and highly interpretable. Specifically in Jockers’s case, they’re both predictable enough to inspire confidence in the method (there are topics “about” poverty, domesticity, Native Americans, Ireland, seafaring, servants, farming, etc.) and unexpected enough to be worth examining in detail.
It’s possible to use topic data to study individual books, though that’s not a very interesting thing to do. It turns out that Moby-Dick is about whaling; if you really need to know details about a specific book, you’re always best off reading it the old-fashioned way. A better approach is to examine how topics vary over time or between different subsets of the corpus, and it’s here that Jockers provides some of his most intriguing results. Literary interest in tenants and landlords, for instance, varies widely over the course of the 19th century, but is almost always higher in Irish texts than in their British and American counterparts. At the same time, this interest shows a handful of coordinated international spikes that Jockers links to both specific historical events (the Irish Famine chief among them) and to changes in literary fashion, suggesting the key role of the popular marketplace in determining fictional subject matter. Jockers also finds a quantitative basis for some longstanding stereotypes of the period. Women write much more often about topics related to happiness and affection, female fashion, and infants than do men, who in turn make disproportionate use of villains, enemies, and dueling. American authors, unsurprisingly, devote more attention to slavery and to Native Americans than do their British and Irish counterparts. It would make for a nicely counterintuitive story if these facts were otherwise, but sometimes it’s useful to find that our existing beliefs are supported with new kinds of evidence, especially when part of the task is to establish that the evidence itself is worthy of our trust.
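Once per-book topic proportions are in hand, the subset comparisons described above reduce to simple aggregation. Here is a toy sketch with entirely invented numbers for a landlord-and-tenant-style topic; it shows only the mechanics of comparing national traditions, not Jockers’s actual figures.

```python
# Invented per-book shares of a "tenants and landlords" topic,
# averaged by national tradition. All numbers are made up.
from collections import defaultdict

# (nationality, publication year, share of the book's words
#  assigned to the topic)
books = [
    ("Irish", 1845, 0.081), ("Irish", 1850, 0.095), ("Irish", 1880, 0.060),
    ("British", 1845, 0.031), ("British", 1850, 0.044), ("British", 1880, 0.025),
    ("American", 1845, 0.012), ("American", 1850, 0.020), ("American", 1880, 0.015),
]

shares = defaultdict(list)
for nation, year, share in books:
    shares[nation].append(share)

means = {nation: sum(v) / len(v) for nation, v in shares.items()}
# with these invented numbers, the Irish mean exceeds the others,
# mirroring the kind of pattern Jockers reports
print(max(means, key=means.get))  # prints "Irish"
```

Grouping by decade instead of nationality yields the time-series view in which the coordinated international spikes show up.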
The notoriously difficult problem of literary influence finally unites many of the methods in Macroanalysis. The book’s last substantive chapter presents an approach to finding the most central texts among the 3,346 included in the study. To assess the relative influence of any book, Jockers first combines the frequency measures of the roughly 100 most common words used previously for stylistic analysis with the more than 450 topic frequencies used to assess thematic interest. This process generates a broad measure of each book’s position in a very high-dimensional space, allowing him to calculate the “distance” between every pair of books in the corpus. Pairs that are separated by smaller distances are more similar to each other, assuming we’re okay with a definition of similarity that says two books are alike when they use high-frequency words at the same rates and when they consist of equivalent proportions of topic-modeled terms. The most influential books are then the ones — roughly speaking and skipping some mathematical details — that show the shortest average distance to the other texts in the collection. It’s a nifty approach that produces a fascinatingly opaque result: Tristram Shandy, Laurence Sterne’s famously odd 18th-century bildungsroman, is judged to be the most influential member of the collection, followed by George Gissing’s unremarkable The Whirlpool (1897) and Benjamin Disraeli’s decidedly minor romance Venetia (1837). If you can make sense of this result, you’re ahead of Jockers himself, who more or less throws up his hands and ends both the chapter and the analytical portion of the book a paragraph later. It might help if we knew what else of Gissing’s or Disraeli’s was included in the corpus, but that information is provided in neither Macroanalysis nor its online addenda.
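The centrality computation itself is easy to sketch. In the toy version below, each book is an invented four-dimensional stand-in for Jockers’s roughly 550-dimensional vectors (about 100 word frequencies plus more than 450 topic proportions); the “most influential” book is simply the one with the smallest mean distance to all the others.

```python
# Sketch of distance-based centrality: each book is a vector of
# common-word frequencies plus topic proportions; the most
# "influential" book minimizes mean distance to the rest.
# Book names and all feature values are invented.
import numpy as np

books = ["Book A", "Book B", "Book C", "Book D"]
# columns: [freq("the"), freq("she"), topic_sea, topic_landlord]
features = np.array([
    [0.055, 0.010, 0.30, 0.30],   # Book A: near the middle
    [0.060, 0.004, 0.70, 0.10],   # Book B: a maritime extreme
    [0.050, 0.016, 0.10, 0.70],   # Book C: a landlord extreme
    [0.056, 0.009, 0.10, 0.10],   # Book D: thin on both topics
])

# pairwise Euclidean distances between every pair of books
diffs = features[:, None, :] - features[None, :, :]
dists = np.sqrt((diffs ** 2).sum(axis=-1))

# mean distance to the *other* books (the zero self-distance on the
# diagonal adds nothing to the sum, so dividing by n - 1 suffices)
n = len(books)
mean_dist = dists.sum(axis=1) / (n - 1)
most_central = books[int(np.argmin(mean_dist))]
print(most_central)  # prints "Book A"
```

Whether minimal average distance in such a space is a good proxy for influence is, of course, exactly the question the Tristram Shandy result leaves open.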
It’s a shame that the analysis of influence isn’t carried further, since it’s the methodological crown jewel and, by Jockers’s own lights, the critical core of the book. In fact, there’s a strange inverse correlation throughout the book between the sophistication of the computations deployed and the richness of the critical analyses that accompany them. When, in an early chapter, Jockers does nothing more mathematically demanding than count titles published by year, he uses that simple data to build a nuanced and compelling case for the central importance of the western United States to Irish-American fiction and manages in the process to overturn a long-established narrative of Irish-American writing in decline during the first decades of the 20th century. As the methods become more complex, however, the reader is left more often with suggestive leads for further investigation in place of firm conclusions.
But perhaps this movement toward what we might call descriptive data is a consequence less of the specific decisions made in Macroanalysis than of the methods themselves. When it’s suddenly possible to examine hundreds of thematic concerns in thousands of texts across dozens of decades — doing so in a way that generates tens of thousands of individual data points — it seems small-minded to complain that one scholar hasn’t exhausted all that could be done with the data. We might be wise to give thanks that such a repository of new critical evidence is available and to look forward to its continued development in years to come.
This point about the abundance of data raises two other issues. First, it’s clear that data visualization and statistical analysis will play important roles in making sense of the large troves of data that work like Jockers’s is generating. It’s perhaps inevitable that humanists — very rarely trained in dealing with large-scale data — will fumble a bit on the way to discovering the techniques best suited to their particular ends, and Jockers isn’t free of foibles on this score. It would be valuable to find average values reported with confidence intervals, for instance, and to see graphical information presented with error bars, consistently scaled axes, descriptive captions, and the like. But these aren’t damning problems, and it’s probably best to take them as signs of a new field finding its footing on genuinely difficult terrain. Still, there’s an opportunity to learn and to import best practices from other disciplines that we literary critics would be foolish to pass up.
A second issue is the publication form of the scholarly monograph. Macroanalysis bears more than a handful of traces of the compromises required to fit novel, data-driven, potentially interactive work into the static confines of a printed book. It contains far more illustrative figures than your average piece of literary criticism, but only a fraction of those that might have been presented in support of its claims or that readers might like to have seen for their own purposes. And the underlying literary corpus and derived data would be tremendously valuable to other scholars, but neither can be made usefully available in print. Realizing this, the book is accompanied by a handful of online appendices, the most interesting of which is a tool for browsing the topic model used to analyze literary thematic content. But given the range of investigations involved, the difficulty of weaving those studies together in fewer than 200 printed pages, the desirability of data sharing and reuse, and the profusion of questions raised by the new information Jockers presents, you can’t help but wonder whether the future output of similar work wouldn’t be best served by some combination of shorter articles and well-constructed data-exploratory websites.
These issues shouldn’t distract us from the fact that Macroanalysis is a massively important book. While its specific critical conclusions may not satisfy every reader — and indeed will productively frustrate no small number of them — it provides two tremendous services to the literary profession and to interested amateurs. It contains an overwhelming amount of data about the configuration of the 19th-century novelistic field as a whole in ways that allow us to situate the books we read much more thoroughly with respect to what was going on in the literary world around them. And it supplies such a wealth of new techniques, questions, and approaches that you can hardly imagine a reader leaving it without having picked up at least a couple of things to explore on his or her own. If you want to know what a major part of literary criticism will look like in 20 years, you need to read Macroanalysis. It’s destined for syllabuses, academic reading groups, and lists of works cited for a long time to come.