FOR AT LEAST THE PAST DECADE, the term “digital humanities” (DH) has captured the imagination and the ire of scholars across American universities. Supporters of the field, which melds computer science with hermeneutics, champion it as the much-needed means to shake up and expand methods of traditional literary interpretation; for its most outspoken critics, it is a new fad that symbolizes the neoliberal bean counting destroying American higher education. Somewhere in the middle of these two extremes lies a vast and varied body of work that utilizes and critically examines digital tools in the pursuit of humanistic study. This field is large and increasingly indefinable even by those in its midst. In fact, “digital humanities” seems astoundingly inappropriate for an area of study that includes, on one hand, computational research, digital reading and writing platforms, digital pedagogy, open-access publishing, augmented texts, and literary databases, and on the other, media archaeology and theories of networks, gaming, and wares both hard and soft. As Franco Moretti said to me in the first of these interviews: “‘digital humanities’ means nothing.”
For Pamela Fletcher, professor of art history at Bowdoin College, “digital humanities” has the potential to mean much more when expanded beyond literature and history. In this interview she dismisses DH hierarchies that favor computational approaches and advocates for further disciplinary investigation that privileges the humanities — “ethics, power, values, meaning, ambiguity, and history” — at all times. For Fletcher, a “digital art history” should not look the same as humanities computing or computational literary analysis. While those in literature are perpetually trying to blur disciplinary boundaries, Fletcher calls for a reevaluation of the digital’s impact on specific fields that have been largely ignored in DH debates. Fletcher’s perspective on art history broadens the scope of “The Digital in the Humanities” series, which aims to explore the surprising lines of overlap, as well as outright disagreement, in DH and beyond, by speaking with both leading practitioners in the field and vocal critics of the field’s impact on humanistic inquiry. But at its heart, this series is a means to explore the intersection of the digital and the humanities, and this intersection’s impact on research and teaching, American higher education, and the increasingly tenuous connection between the ivory tower of elite institutions and the general public.
Although Fletcher claims her research interest in Victorian art is her “real field” of study, the product of which has been traditional monographs such as Narrating Modernity (2003) and her current project, The Victorian Painting of Modern Life, Fletcher’s work in digital art history is also impressive. Her primary digital intervention, The London Gallery Project, has opened up new ways to understand and visualize the London art market, and led to the volume co-edited with Anne Helmreich, The Rise of the Art Market in London, 1850–1939 (2011), and to “Local/Global: Mapping 19th Century London’s Art Market” (2012), the prize-winning first article published in 19th Century Art Worldwide’s Digital Humanities and Art History series. At the same time Fletcher is adamant about finding a voice for digital art history, a compelling reason for including her in this series. Fletcher serves as the founding editor of “digital humanities and art history” for caa.reviews and wrote a reflection on the digital discipline in 2015. Despite advocating on behalf of digital art history, there is a tension and interdisciplinary motivation in Fletcher’s work, symbolized by her position as co-director, alongside the computer scientist Eric Chown, of Bowdoin’s new Digital and Computational Studies initiative. This tension permeated our discussion as Fletcher deftly moved from advocating for “digital art history” at one moment to an opening up of the field beyond the humanities at another.
MELISSA DINSMAN: How did you first come to enter what I am broadly going to call the “digital” field?
PAMELA FLETCHER: My involvement in the digital field started with a research question and a relationship. This was about a decade ago: I had recently moved from Ohio State to Bowdoin (and from women’s studies to art history). I’d finished my first book and was starting a new research project on the emergence of the commercial art gallery in London in the 19th century. I was interested in the history of the gallery because I study Victorian painting and I wanted to understand more about the spaces in which people viewed art. This turned out to be a question that required organizing — and ultimately visualizing — a lot of information drawn from different sources: newspaper reviews, exhibition catalogs, art dealers’ stock books, and so on. So at a certain point, drowning in data, I asked my husband — this is the relationship part — to help me use the computer to draw points on a map. He, very kindly, pointed out that we could do much more than that, and so we began working together on what eventually became the London Gallery Project. I was lucky in that he was pretty much the perfect person to collaborate with: he has a background in both art dealing and in interactive media, and he is very patient. But I think it is actually pretty common for a collaborative digital project to begin through some kind of preexisting intellectual — or just human — connection, and I really like that.
So what would you say is the current role of the digital in your humanities work? Do you think this qualifies as “digital humanities”? Do you care?
In my own scholarship — so far at any rate — I would say the digital is a tool, really, a way of visualizing data in order to see patterns and identify trends and anomalies. This is on the low-impact end of “digital humanities,” in that I’m not coding or tool building or working with massive amounts of data. But I’m not particularly concerned with hierarchies in that way. I don’t have a litmus test for who is a “real” digital humanities scholar, and I think there is room for everything from people just using simple off-the-shelf tools to investigate their data, to people who are working on new modes of computational analysis. This may be connected, of course, to the fact that my own institutional positioning — at a liberal arts college — doesn’t operate in ways that make labels like “digital humanities” very meaningful.
That’s a striking connection. What is it about a liberal arts college (or Bowdoin specifically) that makes “digital humanities” an unproductive or not very meaningful label?
Your question gives me pause, and makes me wonder if this is really a good description of liberal arts colleges, or even of Bowdoin, or if it is just my own perspective. I should say that many liberal arts colleges (Hamilton comes to mind) have used the Digital Humanities category very productively, and my own colleagues at Bowdoin have put together a course cluster on that topic. I guess what I meant is that we are such a small — and collegial — place that confining our conversations about digital and computational work to scholars in the humanities ultimately seemed limiting. And students simply do not divide themselves into those categories: there are plenty of art history majors who double major in physics or biology. And probably even more of the students in my class — or any humanities class — are also fully immersed in work across the curriculum no matter what their major might be. So that particular way of dividing human knowledge into three broad categories — humanities, natural sciences, social sciences — just doesn’t map onto my experience of liberal arts very well.
Are there any digital or media subfields in particular that you think yield the most benefit to the humanities and why?
Trying to answer this question makes me wonder again about the utility — or non-utility — of the digital humanities umbrella. The term has been very strategically useful in getting funding and constituting interdisciplinary research centers. But I’m not sure it’s going to be useful for much longer. Right now, it really mostly means digital literary scholarship. Opening up this category goes in two directions. On the one hand, I think it’s probably time to investigate the specific disciplinary histories that we bring to digital work. Tom Scheinfeldt and Stephen Robertson have recently called for attending to the “disciplinary differences in digital humanities” and have begun outlining a genealogy of digital history that has its origins in oral history, folklore studies, and public history rather than the more familiar DH origin narrative of humanities computing and text analysis. I think “digital art history” might usefully trace its past through a long trajectory of technologies of reproduction and artistic experimentation. On the other hand, there doesn’t seem to me any particularly compelling reason to limit our interdisciplinary connection to the humanities. Some of the best conversations I’ve had about how to reimagine digital mapping, for example, have happened with computer scientists.
It’s interesting that you say the term “digital humanities” won’t be useful for much longer because advocates often speak of digital work (and more frequently the digital humanities) as a means of making the humanities relevant in the 21st-century university. Do you think this statement is a fair assessment of digital work and its purpose? Do you think it is fair to the humanities?
Crystal Hall, my colleague in Digital and Computational Studies at Bowdoin, has the best answer to this question I’ve ever heard: the question may not be whether or not the digital can save the humanities, but whether or not the humanities can save the digital. By which I think she means that the digital realm — both within the university but maybe more pressingly outside of it — desperately needs the kinds of thinking about ethics, power, values, meaning, ambiguity, and history that the humanities have traditionally undertaken.
In a C21 post titled “The Dark Side of Digital Humanities,” media scholar Richard Grusin draws connections between the emergence of DH and the increased “neoliberalism and corporatization of higher education.” Do you think such a comparison has merit? Is there something about the digital humanities’ desire to produce that creates an alignment with neoliberal thinking?
This is a hard question. That post — and others on the same topic — raises some very good and challenging critiques of the optimism surrounding the discussion of digital humanities in some contexts. And it is absolutely the case that universities are under severe pressure; money is very tight, and a very instrumental view of the purpose of education currently dominates our political and cultural landscape. The relationship of digital humanities to this fact is harder to define. Sure, there is a kind of provocative analogy between the language of productivity in each area. But I’m not sure it holds up under pressure. It relies on a pretty narrow view of what DH is, and I think it overstates the actual presence of digital work at most educational institutions. And I have to ask — I can just hear the voices of some of my computer scientist colleagues pressing me on this! — are productivity and efficiency automatically negative qualities? Humanists working in the most traditional ways do produce things, as Grusin points out, though we might prefer to call it creating. And efficiency can be great, as long as it is operating in the service of the right questions. That said, there are some real practical truths to the critique. Digital humanities projects can sometimes rely on the labor of people who are not fairly paid or credited, and a professed ethics of collaboration doesn’t necessarily translate into material equity. Those of us who are tenured faculty have an ethical obligation to work to change that. And it is true that funding is limited and the choice — by faculty, administrators, funding agencies — to allocate money to digital humanities is going to mean something else doesn’t get funded. But I have to say that I think this idea that digital humanists are attracting huge amounts of money is in most cases just not true. I see far more people who can’t even get the most basic tech support for their teaching or research.
But maybe this is because of the field I’m in? Art history isn’t nearly as far along in supporting digital research as literature or history, I think.
You are anticipating my next question, which concerns the funding required to put together a solid digital humanities research group. From your experience, how is this funding typically achieved? Are universities willing to pay for DH projects despite massive cutbacks elsewhere, or is funding most likely to be found from external sources?
I think this is a question that my institutional affiliation — teaching at a small, but well-funded, liberal arts college — probably gives me a limited perspective on. We don’t have “solid research groups” on anything; it isn’t how we work — no graduate students, no big research centers. Which isn’t to say we don’t have serious scholars doing significant research that is well funded both internally and externally. We do. But new initiatives tend to grow through the undergraduate curriculum, and if you have interested students and supportive deans — as we do — there is internal support for the infrastructure and training you need to help faculty change their teaching and, by extension, their research.
In the past there has been a line drawn in the digital humanities between those who code and those who don’t. Do you think full engagement with the digital humanities requires programming skills, and if so, should programming become a requirement for humanities students?
Yes and no. No, I don’t think you necessarily need to know how to code to do meaningful digital humanities work, not least because collaboration is a central part of DH work and the idea of people bringing different skill sets — and research problems — together is one of its core strengths. Yes, because as a humanist I am deeply committed to the idea that in order to communicate with other people you need to speak their language, and coding is the language of computation. In our new Digital and Computational Studies curriculum at Bowdoin we are starting from the premise that every student who goes through our program needs to understand at least the underlying logic of how computers work and the many layers of abstraction and representation that lie between binary code and what you see on the screen. This is partially about communication: you need to understand what computers are (and aren’t) good at in order to come up with intelligent computational problems and solutions. But it is also because each stage of computational abstraction involves decisions that are essentially acts of interpretation, and you can’t do meaningful work if you don’t understand that. This is equally true, of course, of anyone who uses technology, which is most of us. So I’d say ideally we should be educating all our students to be computationally literate, which is not the same as being expert programmers.
We also hear quite a bit about the significant underrepresentation of women and minorities across digital fields, including the digital humanities. Is there a remedy to this? How has your own work tried to challenge this lack?
This is such an important and complicated question. At one level, the answer is working toward greater representation of women and minorities in all fields of academic life, including a particular emphasis on STEM fields. And we need to be sure that an increased emphasis on digital work in the humanities doesn’t diminish that focus. But at another level, we need to think about how the very norms and processes of computation might be gendered and raced. Tara McPherson has an amazing article called “Why Are the Digital Humanities So White?” in which she tries to think through the complicated connections between cultural assumptions about race and difference, and the cultural assumptions underlying the logic of coding and problem solving. It is a tough article to teach as it relies upon an understanding of racism as a set of deep “common sense” beliefs (and structural practices) about difference, similarity, and sociality, which can be challenging for students more accustomed to thinking about racism as a personal characteristic. But that kind of questioning is precisely what the humanities have to offer the digital world.
Shifting the focus from internal structures to physical location, do you have an opinion on whether the future of digital work lies in individual departments or libraries? What do you think is the best physical place for digital scholarship and what does this say about its future role in the university?
I’m agnostic on this one too. On the one hand, libraries are doing amazing work in supporting digital research; on the other, plenty of terrific digital projects more or less live on individual scholars’ hard drives. I think different kinds of institutions will create different infrastructures for doing digital scholarship — and those might be focused on the humanities or might bring in people from the social and natural sciences.
The next couple of questions are going to ask you to think about public engagement. First of all, how do you think the general public understands the term “digital humanities” or, more broadly, the digital work being done in the humanities (if at all)?
I don’t know. We might start by asking if the category of “the humanities” is a useful or meaningful one outside the university. I know that for my undergraduate students it isn’t always the most relevant category. For them the “digital” pervades everything, so why are the humanities separate from the social sciences? If, for example, you are teaching tools of computational text analysis, does it matter if the text you are analyzing is Shakespeare or the content of censored blog posts in China? On the one hand, it might make a really big difference to the kind of questions you ask of the material. But to the extent that you are working on developing good tools to try to analyze, say, the emotional temperature of a piece of writing, it would probably be better to test (and train) your technology against different kinds of texts.
Second, in an age that has seen a decline in the public intellectual (as Nicholas Kristof opined in The New York Times last year), what role, if any, do you think digital work plays? Could the digital humanities (or the digital in the humanities) be a much-needed bridge between the academy and the public, or is this perhaps expecting too much of a discipline?
I think the decline of the public intellectual has to do with much larger social forces, including changes in the structure of the media and changes in public perceptions about expertise. I’m not sure there is a singular “public” anymore, and the status of the “intellectual” is pretty suspect for a lot of people these days. That reality doesn’t strike me as something the digital humanities can overcome.
My last question is going to ask that instead of looking forward, you look back. In my very first interview with Franco Moretti, he pointed out to me that all my questions focus on the future of the digital in the humanities. Perhaps it’s a sign of my own optimism or perhaps it reflects a certain anticipatory tone that is used in media and DH circles, but I’m wondering if you can speak to what you think the digital in the humanities has accomplished so far.
I love this question, and think there are two different ways to go about answering it. There is the literal option: I could list projects in digital art history that I think have made a significant impact on the field. For example, Paul Jaskot and Anne Kelly Knowles’s work on the building history of Auschwitz, and the amazing work that people at UCLA and Duke and the National Gallery in London are doing in digitally reconstructing ancient and medieval monuments and cities. But there is also the bigger methodological answer: I think that the advent of digital and computational tools and methods is an extraordinary intellectual provocation for those of us in the humanities, whether we take up those methods in our scholarship or not. For example, while we were planning a team-taught course, my computer scientist colleague began our conversation with the premise that “everything is data.” On the one hand, I know what he means, of course, but it also made me wonder. Sure, maybe everything can be treated as (or converted into) data, but does the concept of “data” leave out some important kinds of information or meaning, like experience, interaction, ambiguity, morality, aesthetics, taste, emotion? How does calling it “data” differ from other ways we might describe information: texts, images, primary sources, secondary sources? Another example is the question of method and its role in art history. Doing digital projects means including a section on method, generally in the form of a project narrative of some sort. Other disciplines — mostly in the natural and social sciences — already do this as a matter of course (think sociology), but art historians don’t tend to include it in quite the same formulaic way. Why is this? Again, I’m not asserting that we all need to have a method section in our books and articles: I’m asking why we generally don’t — are there other values or beliefs that preclude that mode of writing and interpretation?
So whether or not we adopt digital tools and methods in our humanities work, their very existence — and the modes of knowledge production that they make possible — has already changed how we understand and explain and value what we do in the humanities.
Melissa Dinsman is the CLIR Postdoctoral Fellow in Data Curation for Visual Studies at the University of Notre Dame, and the author of Modernism at the Microphone: Radio, Propaganda, and Literary Aesthetics During World War II (2015).