Month: March 2017

DH and Narrative, DH as Narrative, DH-Narrative, by Elisabeth Buzay

While I, like many others, would argue that those who work in DH agree only that they do not agree on what DH means, as I have encountered more and more digital tools and projects, I have begun to think of DH work in a provocative way: DH work should be considered a form of narrative-making or storytelling. For fields such as digital storytelling or tools such as story mapping, this argument may not be that surprising. But what about other types of DH projects and tools? If we think of archival or curated sites, such as those created with Omeka, or book or network conglomerations, such as those made in Scalar, I propose that these forays are equally forms of narrative or story: we pick and choose what to include and exclude, we form paths, groupings, and connections to guide or suggest methods of understanding; in other words, we give shape to a narrative. Here I will advance an initial iteration of this argument, which, I believe, ultimately provides another perspective on how DH is truly a part of the humanities.


DH and Narrative

Consider Hayden White’s description of narrative, in conversation with Barthes, in The Content of the Form: Narrative Discourse and Historical Representation: “[a]rising, as Barthes says, between our experience of the world and our efforts to describe that experience in language, narrative ‘ceaselessly substitutes meaning for the straightforward copy of the events recounted’” (1–2). If we take this as one of the basic definitions of the concept, we can see how the term could easily be used in reference to various methodologies and tools used in DH. More particularly, however, we must expand the definition to include not just language, but also image and sound. It is worth looking, for instance, at DH projects that create digital archives, such as The Digital Public Library of America or the Bibliothèque nationale de France’s Gallica, in which digital tools are used to create digitized versions of actual archives. Or at other such projects, like The Internet Archive, The Sonic Dictionary, or The Story of the Beautiful, in which a digital archive is created. Or we might think of digital editions of texts, such as the Folger Digital Texts, or digitized resources such as The ARTFL Project. Or, in a slightly different direction, there are tools one can use to compare versions of texts, like Juxta or JuxtaCommons, or to annotate a text (collaboratively or not), like Annotation Studio. In these varying cases, the digital approach and tools used are the methods through which meaning is provided, whether that meaning be the coherency of an archive, the evolution or development of a text, or the preservation of narratives that might otherwise be lost.


DH as Narrative

A DH approach is not, of course, limited to archival or editorial projects. In many cases, DH projects are clearly narrative in form. The case of digital storytelling is, perhaps, the most obvious such example. StoryCenter, previously known as the Center for Digital Storytelling, is a well-known entity whose basic elements of digital storytelling are often cited. And digital storytelling is also being used in a slightly different manner by teachers and students in the field of education in order to teach and learn about topics beyond those of telling personal stories, as can be seen on the University of Houston’s Educational Uses of Digital Storytelling site. Digital storytelling approaches have been expanded in other directions as well, for instance in

  • tying stories to location, with the use of tools like StoryMapJS, Esri Story Maps, or Odyssey, in which specific events and places are linked,
  • tying stories to timing, with the use of tools like TimeLineJS, TimeGlider, or Timetoast, in which specific events and times are linked,
  • or tying stories to both time and location, with the use of tools like Neatline or TimeMapper, in which specific events, places, and times are linked so that a user can follow a story geographically, chronologically, or both.

In all of these cases, the digital approach is one that is explicitly used to shape a narrative or story. In other words, here DH is again a form of narrative or narrative-making.



Big data projects, such as those of the Stanford Literary Lab, or approaches such as that of Matthew L. Jockers in his Macroanalysis: Digital Methods and Literary History, may seem to present an exception to my argument in comparison to the other DH projects and approaches mentioned thus far; nonetheless, I suggest that even projects or approaches such as these create narratives or stories, in that they provide meaning to observations, calculations, or data that would otherwise be incomprehensible, given their size. How could they not?

This brief overview brings us to a final point to ponder: in their Digital_Humanities, Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp argue that the design of DH tools and projects is itself an essential aspect of the arguments they create:

The parsing of the cultural record in terms of questions of authenticity, origin, transmission, or production is one of the foundation stones of humanistic scholarship upon which all other interpretive work depends. But editing is also productive and generative, and it is the suite of rhetorical devices that make a work. Editing is the creative, imaginative activity of making, and as such, design can be also seen as a kind of editing: It is the means by which an argument takes shape and is given form. (18)

In other words, a narrative-making approach is literally embedded in form, in design. Like these authors, I wonder whether this perspective can be extended further. They write:




If we apply these points to the entire field of DH, we find significant food for thought: if design is the foundation of DH, then isn’t the result of this design necessarily a narrative or a story? And might this not be one further aspect that confirms that DH is indeed a part of the traditional humanities?

These questions invite others: are DH narratives and their design different or new or innovative in comparison to traditional narratives, and if so how? What can DH narratives tell us about ourselves and our world? To circle back to White and Barthes’ view of narrative, if we accept that DH is narrative, what new meanings can be distilled from the events DH recounts?


Elisabeth Herbst Buzay is a doctoral student in French and Francophone Studies and in the Medieval Studies Program at the University of Connecticut. Her research interests include medieval romances, contemporary fantasy, digital humanities, video games, the intersection of text and images, and translation. You can contact her at

Visualizing English Print at the Folger, by Gregory Kneidel (cross-post with Ruff Draughts)

In December I spent two days at the Folger’s Visualizing English Print seminar. It brought together people from the Folger, the University of Wisconsin, and the University of Strathclyde in Glasgow; about half of us were literature people, half computer science; a third of us were tenure-track faculty, a third grad students, and a third in other types of research positions (e.g., librarians, DH directors).

Over those two days, we worked our way through a set of custom data visualization tools that can be found here. Before we could visualize, we needed and were given data: a huge corpus of nearly 33,000 EEBO-TCP-derived simple text files that had been cleaned up and run through a regularizing procedure so that it would be machine-readable (with loss, obviously, of lots of cool, irregular features—the grad students who wanted to do big data studies of prosody were bummed to learn that all contractions and elisions had been scrubbed out). They also gave us a few smaller, curated corpora of texts: two of dramatic texts, two of scientific texts. Anyone who wants a copy of this data, I’d be happy to hook you up.

From there, we did (or were shown) a lot of data visualization. Some of this was based on word-frequency counts, but the really novel thing was using a dictionary of sorts called DocuScope—basically a program that sorts 40 million different linguistic patterns into one of about 100 specific rhetorical/verbal categories (DocuScope was developed at CMU as a rhet/comp tool—it turned out not to be good at teaching rhet/comp, but it is good at things like picking stocks). DocuScope might make a hash of some words or phrases (and you can revise or modify it; Michael Witmore tailored a DocuScope dictionary to early modern English), but it does so consistently, and you’re counting on the law of averages to wash everything out.

After drinking the DocuScope Kool-Aid, we learned how to visualize the results of DocuScoped data analysis. Again, there were a few other cool features and possibilities, and I only comprehended the tip of the data-analysis iceberg, but basically this involved one of two things.

  • Using something called the MetaData Builder, we derived DocuScope data for individual texts or groups of texts within a large corpus of texts. So, for example, we could find out which of the approximately 500 plays in our subcorpus of dramatic texts is the angriest (i.e., has the greatest proportion of words/phrases DocuScope tags as relating to anger). Or, in an example we discussed at length, within the texts in our science subcorpus, who used more first-person references, Boyle or Hobbes (i.e., whose texts had the greater proportion of words/phrases DocuScope tags as first-person references)? The CS people were quite skilled at slicing, dicing, and graphing all this data in cool combinations. Here are some examples. A more polished essay using this kind of data analysis is here. In short, this approach shows the distribution of DocuScope traits across texts in large and small corpora.
  • We visualized the distribution of DocuScope tags within a single text using something called VEP Slim TV. Using Slim TV, you can track the rise and fall of each trait within a given text AND (and this is the key part) link directly to the text itself. So, for example, this is an image of Margaret Cavendish’s Blazing-World (1667).



Here, the blue line in the right frame charts lexical patterns that DocuScope tags as “Sense Objects.” The red line charts lexical patterns that DocuScope tags as “Positive Standards.” You’ll see there is lots of blue (compared to red) at the beginning of Cavendish’s novel (when the Lady is interviewing various Bird-Men and Bear-Men about their scientific experiments), but one stretch in the novel where there is more red than blue (when the Lady is conversing with Immaterial Spirits about the traits of nobility). A really cool thing about Slim TV that could make it useful in the classroom: you can move through and link directly to the text itself (the horizontal yellow bar on the right shows which section of the text is currently being displayed).

So, to recap: 1) regularized EEBO-TCP texts are turned into spreadsheets using 2) the DocuScope dictionary; that data is then used to visualize either 3) individual texts as data points within a larger corpus of texts or 4) the distribution of DocuScope tags within a single text.
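As a rough illustration of steps 1) through 3), here is a toy sketch in Python of the kind of category-proportion counting DocuScope performs. The categories and phrase lists below are invented for illustration (the real dictionary has roughly 100 categories and millions of patterns), so this is a conceptual sketch, not DocuScope itself:

```python
from collections import Counter

# Toy stand-in for a DocuScope-style dictionary: each rhetorical/verbal
# category maps to a list of surface patterns. These lists are invented
# for illustration only.
CATEGORIES = {
    "FirstPerson": ["i ", "my ", "we ", "our "],
    "Anger": ["rage", "wrath", "fury"],
    "SenseObject": ["light", "colour", "sound", "stone"],
}

def tag_proportions(text: str) -> dict:
    """Count pattern hits per category and return each category's share
    of all hits -- the kind of per-text proportion one could then graph
    across a corpus to ask, say, which play is 'angriest'."""
    lowered = text.lower()
    counts = Counter()
    for category, patterns in CATEGORIES.items():
        for pattern in patterns:
            counts[category] += lowered.count(pattern)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {category: n / total for category, n in counts.items()}
```

Run over every text in a corpus, this yields one row of proportions per text, which is essentially the spreadsheet the MetaData Builder produced for us.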

Again, the seminar leaders showed some nice examples of where this kind of research can lead and lots of cool looking graphs. Ultimately, some of the findings were, if not underwhelming, at least just whelming: we had fun discussing the finding that, relatively speaking, Shakespeare’s comedies tend to use “a” and his tragedies tend to use “the.” Do we want to live in a world where that is interesting? As we experimented with the tools they gave us, at times it felt a little like playing with a Magic 8 Ball: no matter what texts you fed it, DocuScope would give you lots of possible answers, but you just couldn’t tell if the original question was important or figure out if the answers had anything to do with the question. So formulating good research questions remains, to no one’s surprise, the real trick.

A few other key takeaways for me:

1) Learn to love csv files or, better, learn to love someone from the CS world who digs graphing software;

2) Curated data corpora might be the new graduate/honors thesis. Create a corpus (e.g., sermons, epics, travel narratives, court reports, romances), add some good metadata, and you’ve got yourself a lasting contribution to knowledge (again, the examples here are the drama corpora or the science corpora). A few weeks ago, Alan Liu told me that he requires his dissertation advisees to have at least one chapter that gets off the printed page and has some kind of digital component. A curated data collection, which could be spun through DocuScope or any other kind of textual analysis program, could be just that kind of thing.

3) For classroom use, the coolest thing was VEP Slim TV, which tracks the prominence of certain verbal/rhetorical features within a specific text and links directly to the text under consideration. It’s colorful and customizable, something students might find enjoyable.

All this stuff is publicly available as well. I’d be happy to demo what we did (or what I can do of what we did) to anyone who is interested.

Gregory Kneidel is Associate Professor of English at the Hartford Campus. He specializes in Renaissance poetry and prose, law and literature, and textual editing. He can be reached at

Oral Histories and the Tech Needed to Produce Them, Part 1: Cameras, Audio Recorders, and Media Storage, by Nick Hurley

Last summer I had the pleasure of spending several weeks in southwestern Germany, visiting family and conducting interviews with five local residents who lived through the Second World War. In doing so, I fulfilled a goal I’d had in mind ever since the death of my great-grandmother in 2013. She had been one of a host of relatives and family friends that regaled me with stories from “back then” every time I’d come to visit, and her passing made me realize that I had to do more than just listen if I wanted to preserve these memories for future generations. This time around, I would sit down with each of the participants—the youngest of whom was in their late 70s—record our conversations, and eventually send each of them a copy of their edited interview on DVD. While I had a clear idea of why I was undertaking the project, and had done a lot of reading on oral history practices (including this fantastic online resource), I was less confident about just how I would go about carrying out the actual interviews. I had no experience with audiovisual equipment or video editing, and the seemingly endless number of tech-related questions I faced concerning things like cameras, microphones, and recording formats left my head spinning.

It took a significant amount of research and self-instruction before I was comfortable enough to purchase the gear I needed. These two posts are my attempt to share what I learned and, hopefully, to save other oral history novices some of the headaches I endured putting together an interview “kit,” which, at a minimum, will consist of a camcorder (possibly), an audio recorder, and a way to store your footage.

The Camera

You’ll need to decide early on whether to record video as well as audio for your oral histories. Choosing audio alone will greatly reduce the amount of equipment you’ll need to buy, but the right choice really depends on the nature of your project. If you do decide to film, steer clear of mini-DV and DVD camcorders, as these record on formats that are quickly becoming obsolete. Your best bet is a flash memory camcorder, which utilizes removable memory cards that can be inserted into your laptop for easy file transfer.

High definition (HD) camcorders are fast becoming the norm over their standard definition (SD) counterparts, and they’ve become affordable enough to make them a viable option for amateur filmmakers. In terms of capture quality, AVCHD usually means a higher quality image but a bigger file, while MP4 files are compressed to reduce size and are a bit more versatile in terms of how they can be manipulated and uploaded. Either way, you can’t go wrong, and you will get a great-looking picture. I’ve shot exclusively in AVCHD so far with my Canon camcorder and have had no issues.

The Audio Recorder

If you’re going to splurge on anything, it should be this. You may or may not elect to include video in your project, but you will always have audio, and the quality should be as clear as possible—especially if you plan on doing any kind of editing or transcribing. There are a few things to consider when choosing a recorder:

  1. Whichever model you go with should have at least one 3.5mm (1/8”) stereo line input, to give you the option of connecting an external microphone, and one 3.5mm (1/8”) output, so you can plug in a pair of headphones to monitor your audio.
  2. If you know you’re going to use an external microphone, having one or more XLR inputs is a plus. XLR refers to the type of connector used on some microphones; they are more robust than a 3.5mm jack and harder to accidentally unplug, making them an industry standard.
  3. Some recorders are meant for high-end professional use and have a plethora of features and buttons you’ll simply never use. Look for one with an easy to use interface.
  4. WAV and MP3 will be the most common options you’ll see format-wise, and many devices can record in either. WAV files are uncompressed, meaning they contain the entire recorded signal and are therefore much larger than MP3 recordings, which are easier to move and download but sometimes experience a slight loss in audio quality.

Media Storage

The three main types of memory cards that you’ll encounter are SD (Secure Digital, up to 2GB), SDHC (Secure Digital High Capacity, 4-32GB), and SDXC (Secure Digital eXtended Capacity, 64GB-2TB). Almost all cameras, computers, and other tech manufactured after 2010 should be compatible with all three types, and the cards themselves are fairly inexpensive. Useful as they are, memory cards shouldn’t be considered a means of long-term storage for your files. For one thing, you’ll run out of room fast; while things like compression and format will determine the exact amounts, for planning purposes you can expect to fit only about 5 hours of HD video on a 64GB SDXC card and 12-49 hours of WAV audio on a 16GB SDHC card. Even if you’ll only be doing one or two short interviews, you should still plan on migrating your files to a more secure storage medium as soon as possible after you’re done recording. Cards can be broken or lost, and digital files, like their analog counterparts, will “decay” over time if simply left sitting.
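To see where estimates like these come from, a quick back-of-the-envelope calculation helps: card capacity divided by the stream’s bitrate gives recording time. The bitrates below are assumptions on my part (AVCHD tops out around 24 Mbps; an uncompressed WAV’s bitrate follows directly from its sample rate, bit depth, and channel count), so treat the results as ballpark figures:

```python
def hours_of_recording(card_gb: float, bitrate_mbps: float) -> float:
    """Rough recording capacity: card size (decimal GB, as marketed)
    divided by the stream's bitrate in megabits per second."""
    card_bits = card_gb * 1e9 * 8
    seconds = card_bits / (bitrate_mbps * 1e6)
    return seconds / 3600

def wav_mbps(sample_rate: int, bit_depth: int, channels: int) -> float:
    """Bitrate of uncompressed WAV audio from its recording settings."""
    return sample_rate * bit_depth * channels / 1e6

# 64GB card at an assumed 24 Mbps AVCHD stream: roughly 6 hours,
# consistent with the "about 5 hours" planning figure above.
video_hours = hours_of_recording(64, 24)

# 16GB card of CD-quality stereo WAV (44.1kHz/16-bit/2ch): about 25 hours,
# in the middle of the 12-49 hour range, which varies with settings.
audio_hours = hours_of_recording(16, wav_mbps(44100, 16, 2))
```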

My raw footage is stored on two external hard drives. Any editing work is done using one of them, while the other is stored in a separate location as a backup. Edited interviews are likewise copied to both hard drives once they’re completed. (This practice of keeping multiple copies of the same material in separate locations is known as replication, and is an important aspect of any digital preservation plan; for more info, check out this great page from the Library of Congress.)
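A minimal sketch of this replication idea, assuming you want to script it in Python rather than copy by hand (the function names and paths are my own inventions, not from any particular tool): copy each recording to every backup location, then verify each copy against the original with a checksum so you know the bytes arrived intact.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to confirm a copy is byte-identical to its source."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate(source: Path, destinations: list[Path]) -> None:
    """Copy one recording to every backup location and verify each copy."""
    original = sha256(source)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy = dest_dir / source.name
        shutil.copy2(source, copy)  # copy2 also preserves timestamps
        if sha256(copy) != original:
            raise IOError(f"Corrupt copy at {copy}")
```

Something like `replicate(Path("interview01.wav"), [Path("/Volumes/DriveA"), Path("/Volumes/DriveB")])` would then put a verified copy on each drive; the drive paths here are hypothetical examples.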

Again, these three pieces are the minimum you’ll need to properly record and store audio and (if you desire) video footage. Depending on the circumstances and scope of your project, however, you may want to utilize some optional gear and accessories, which I’ll bring up in Part 2. Until then, feel free to contact me with any questions, and thanks for reading!

Nick Hurley is a Research Services Assistant at UConn Archives & Special Collections, part-time Curator of the New England Air Museum, and an artillery officer in the Army National Guard. He received his B.A. and M.A. in History from the University of Connecticut, where his work focused on issues of state and society in 20th-century Europe. You can contact Nick at and follow him on Twitter @hurley_nick.