Is Digital Humanities Too Text-Heavy?

Last week was the marvelous international digital humanities conference, held this year at the beautiful University of Nebraska-Lincoln. Over the course of four days, I tried desperately to meet people I only knew from tiny Twitter pictures, GitHub, or even citations, and in between attempted to catch as many presentations as I could. The work on display, in both the presentations and the posters, ranged from information visualization used to examine poetry to network analysis used to study rhetoric, with projects deploying facial recognition, fuzzy GIS, topic modeling, and the various other techniques and methods seen in digital humanities scholarship.

But what I took away from DH13 was something else entirely, a feeling that crystallized as I listened to Willard McCarty give his acceptance speech for the Roberto Busa Award, which is given to “recognise outstanding lifetime achievements in the application of information and communications technologies to humanistic research”. The award is named for Father Busa, whose work with IBM on the Index Thomisticus is held up as one of the pioneering works of humanities computing and, later, digital humanities.

But that transition in name wasn’t simply corporate rebranding. As Willard noted in his speech, the shift from calling the endeavor “humanities computing” to calling it “digital humanities” came with a dramatic increase in popularity. It wasn’t the name itself that brought in all the new faces; rather, the change in name signaled a shift from a practice involving a few scholars focused on analyzing literature to a messy “big tent” that roughly holds digital libraries, historical GIS, information visualization, network analysis, new media, and post-colonial digital theory.

Even setting aside the sudden inclusion of so many different scholarly agendas and methods, the increase in popularity is not so simple. The drastically increased output of humanities scholars using computational methods brings with it new modes of practice, and the significantly greater accessibility of the tools used to enact those methods brings the practical and cultural effects familiar from open-source software and commons-based peer production. Ten years ago, someone “doing humanities computing” would have needed far more technical resources, and fallen into a much smaller convex hull of possible activities, than someone “doing digital humanities” in 2013.

But a quick look at the abstracts shows how much the analysis of English literature dominates a conference attended by archaeologists, area studies professors and librarians, network scientists, historians, and more. It seemed, at one point, as though there were a four-day authorship attribution/stylometrics track, while all the geospatial work had to be presented in a single, standing-room-only session. To be clear, that’s an exaggeration, and I haven’t done a serious analysis of the abstracts to support it, but I know from conversation, and from attending some extremely low-attendance but exciting sessions on the far side of the conference, that I’m not the only one who felt it.
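
If I ever wanted to replace that feeling with numbers, even a crude keyword tally over the conference abstracts would be a start. Here’s a minimal sketch, assuming the abstracts have already been scraped into a local folder of plain-text files; the folder name, the method families, and the keyword patterns are all my own invention, not anything the conference provides:

```python
# Crude first pass: count how many abstracts mention each method family.
# Everything here is hypothetical: abstracts/ is assumed to hold one
# plain-text file per abstract, and the keyword patterns are invented,
# deliberately loose proxies for each family of methods.
import re
from collections import Counter
from pathlib import Path

METHOD_PATTERNS = {
    "text analysis": r"stylometr|authorship attribution|topic model|text mining|corpus|corpora",
    "geospatial": r"\bgis\b|geospatial|spatial analysis|historical map",
    "network analysis": r"network analysis|social network|graph theory",
    "visualization": r"visuali[sz]ation",
}

def tally_methods(abstract_dir="abstracts"):
    """Count abstracts matching each method family's keyword pattern."""
    counts = Counter()
    for path in Path(abstract_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        for family, pattern in METHOD_PATTERNS.items():
            if re.search(pattern, text):
                counts[family] += 1
    return counts

if __name__ == "__main__":
    for family, n in tally_methods().most_common():
        print(f"{family}: {n} abstracts")
```

A real analysis would need better keyword lists and some hand-checking, but even a tally this rough would beat arguing from a hunch.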

What makes this a difficult thing to measure and consider is that there’s text analysis and then there’s text analysis. Everyone does text analysis now, whether they’re looking at Korean kinship networks or Jane Austen, but there’s a difference between the well-established humanities computing approach, which relies almost exclusively upon it, and the more synthetic one, which treats text analysis as one component among several.
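
To make that distinction concrete, here’s a toy sketch of the synthetic mode, where a crude bit of text analysis exists only to feed a network analysis. The sentences and character list are invented, and I’m using the networkx library purely for illustration:

```python
# Toy example of text analysis as one component among several:
# step 1 extracts co-occurrences from (invented) sentences,
# step 2 hands the result off to network analysis.
import itertools
import networkx as nx

sentences = [
    "Elizabeth wrote to Darcy about Jane.",
    "Jane visited Bingley and Darcy.",
    "Darcy spoke with Bingley.",
]
people = {"Elizabeth", "Darcy", "Jane", "Bingley"}

# Step 1 (text analysis): who appears together in each sentence?
G = nx.Graph()
for sentence in sentences:
    present = [p for p in people if p in sentence]
    G.add_edges_from(itertools.combinations(present, 2))

# Step 2 (network analysis): the text itself is no longer the object of study.
print(nx.degree_centrality(G))
```

In the exclusive mode, the analysis would stop at the text; here the text is just raw material for a structure studied by other means.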

I put a question mark in the title because I’m not sure; this is based more on feeling than on empirical evidence. And even if English literary analysis is overrepresented because of its long history with humanities computing, I don’t think that means it should be rooted out by “good digital humanists” reporting on “known text wranglers”. But an international conference for a vibrant and diverse community of practice should be as reflective of that community as possible, and if that means we lose a couple of authorship attribution sessions in favor of a few information visualization for post-colonial geographic network analysis sessions, then I’m okay with that.[1]

[1] I’d even be okay with it if those new sessions didn’t involve information visualization.

This entry was posted in Algorithmic Literacy, Big Data, Natural Law, New Literature, Spatial Humanities, Text Analysis.

3 Responses to Is Digital Humanities Too Text-Heavy?

  1. Megan Miller says:

    So, I’ve been encountering this from an advising angle. I do a lot of student advising at Stanford, and mostly get assigned students whose hearts lie in the arts or humanities but who feel torn by what they perceive to be a conflicting interest in the sciences/engineering/etc. I think the rise in popularity and spread of “digital humanities” as a buzzword is one of the first real ways that the concept of what it means to study the humanities is changing in the popular mindset. I am especially excited to see this shift (and expansion of what the term means) in the minds of younger students who feel conflicted in their choice of studies, and I hope that Stanford and other colleges can enable more talented students to dig deeper and use more readily available tools to do “digital humanities” research (whatever that means now and will mean in the future) in a meaningful way in a subject they are passionate about. I think it’s an important step higher ed needs to take in validating humanities scholarship in a world where we see more and more emphasis on tech, programming, and engineering in undergraduate study. Digital humanities is a place for younger students to thrive, discover, and grow.

  2. Elijah, great post. I had an “aha” moment about the dominance of English lit at DHSI in June. So much source material is available and open-access. No copyright issues! This has had a snowball effect in terms of tools for DH in English lit. I hope that as mapping technology and digital map data become more available, the same will happen with GIS. Some other fields (like Greek and Latin classical lit) also have editions and translations now in the public domain. But many of us, for whatever reason (different-language lit, or different methodologies), don’t have public-domain digitized data. I hope that in five years, with more digitization of diverse data, this will change. Copyright does still limit us, though.

  3. As with any field that is figuring itself out, especially one that has suddenly become a “big tent,” as you aptly describe, there will be a process of sorting things out. You may be right about digital humanities being text-heavy at the moment. My concern is that it is too dependent upon proprietary software, large grants, and tools that are not widely available or open source. You make some good points, though. Perhaps being text-oriented is necessary to maintain continuity with, and get buy-in from, traditionalists.