Brain Bytes: DHMS Blog

DH and Narrative, DH as Narrative, DH-Narrative, by Elisabeth Buzay

While both I—and many others—would argue that those who work in DH agree that they do not agree on what DH means, as I have encountered more and more digital tools and projects, I have begun to think of DH work in a provocative way: DH work should be considered a form of narrative-making or storytelling. For fields such as digital storytelling or tools such as story mapping, this argument may not be that surprising. But what about other types of DH projects and tools? If we think of archival or curated sites, such as those created with Omeka, or book or network conglomerations, such as those made in Scalar, I propose that these forays are equally forms of narrative or story: we pick and choose what to include and exclude, we form paths and groupings and connections to guide or suggest methods of understanding; in other words, we give shape to a narrative. Here I will advance an initial iteration of this argument, which, I believe, ultimately provides another perspective on how DH is truly a part of the humanities.


DH and Narrative

If we take Hayden White’s description of narrative, in conversation with Barthes, in The Content of the Form: Narrative Discourse and Historical Representation, which argues that “[a]rising, as Barthes says, between our experience of the world and our efforts to describe that experience in language, narrative ‘ceaselessly substitutes meaning for the straightforward copy of the events recounted’” (1–2), as one of the basic definitions of this concept, we can see how this term could easily be used in reference to various methodologies and tools used in DH. More particularly, however, we must expand the definition by including not just language, but also image and sound. It is worth a look, for instance, at DH projects that create digital archives, such as The Digital Public Library of America or the Bibliothèque nationale de France’s Gallica, in which digital tools are used to create digitized versions of (an) actual archive(s). Or other such projects, like The Internet Archive, The Sonic Dictionary, or The Story of the Beautiful, in which a digital archive is created. Or we might think of digital editions of texts, such as the Folger Digital Texts or digitized resources such as The ARTFL Project. Or, in a slightly different direction, there are tools one can use to compare versions of texts, like Juxta or JuxtaCommons, or to annotate a text (collaboratively or not), like Annotation Studio. In these varying cases, the digital approach and tools used are the methods through which meaning is provided, whether that meaning be the coherency of an archive, the evolution or development of a text, or the preservation of narratives that themselves might otherwise be lost.


DH as Narrative

A DH approach is not, of course, limited to archival or editorial projects. In many cases, DH projects are clearly narrative in form. The case of digital storytelling is, perhaps, the most obvious such example. StoryCenter, previously known as the Center for Digital Storytelling, is a well-known entity whose basic elements of digital storytelling are often cited. And digital storytelling is also being used in a slightly different manner by teachers and students in the field of education in order to teach and learn about topics beyond those of telling personal stories, as can be seen on the University of Houston’s Educational Uses of Digital Storytelling site. Digital storytelling approaches have been expanded in other directions as well, for instance in

  • tying stories to location, with the use of tools like StoryMapJS, Esri Story Maps, or Odyssey, in which specific events and places are linked,
  • tying stories to timing, with the use of tools like TimeLineJS, TimeGlider, or Timetoast, in which specific events and times are linked,
  • or tying stories to time and location, with the use of tools like Neatline or TimeMapper, in which specific events, places, and times are linked so that a user can follow a story both geographically and chronologically.

In all of these cases, the digital approach is one that is explicitly used to shape a narrative or story. In other words, here DH is again a form of narrative or narrative-making.



Big data projects, such as those of the Stanford Literary Lab, or approaches such as that of Matthew L. Jockers in his Macroanalysis: Digital Methods and Literary History, may seem to present an exception to my argument in comparison to the other DH projects and approaches mentioned thus far; nonetheless, I suggest that even projects or approaches such as these create narratives or stories, in that they provide meaning to observations, calculations, or data that would otherwise be incomprehensible, given their size. How could they not?

This brief overview brings us to a final point to ponder: in their Digital_Humanities, Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp argue that the design of DH tools and projects is itself an essential aspect of the arguments they create:

The parsing of the cultural record in terms of questions of authenticity, origin, transmission, or production is one of the foundation stones of humanistic scholarship upon which all other interpretive work depends. But editing is also productive and generative, and it is the suite of rhetorical devices that make a work. Editing is the creative, imaginative activity of making, and as such, design can be also seen as a kind of editing: It is the means by which an argument takes shape and is given form. (18)

In other words, a narrative-making approach is literally embedded in form, in design. Like these authors, I wonder whether this perspective cannot be extended further.




If we apply these points to the entire field of DH, they provide significant food for thought: if design is the foundation of DH, then isn’t the result of this design necessarily a narrative or a story? And might not this be one further aspect that confirms that DH is indeed a part of the traditional humanities?

These questions invite others: are DH narratives and their design different or new or innovative in comparison to traditional narratives, and if so how? What can DH narratives tell us about ourselves and our world? To circle back to White and Barthes’ view of narrative, if we accept that DH is narrative, what new meanings can be distilled from the events DH recounts?


Elisabeth Herbst Buzay is a doctoral student in French and Francophone Studies and in the Medieval Studies Program at the University of Connecticut. Her research interests include medieval romances, contemporary fantasy, digital humanities, video games, the intersection of text and images, and translation. You can contact her at

Visualizing English Print at the Folger, by Gregory Kneidel (cross-post with Ruff Draughts)

In December I spent two days at the Folger’s Visualizing English Print seminar. It brought together people from the Folger, the University of Wisconsin, and the University of Strathclyde in Glasgow; about half of us were literature people, half computer science; a third of us were tenure-track faculty, a third grad students, and a third in other types of research positions (e.g., librarians, DH directors, etc.).

Over those two days, we worked our way through a set of custom data visualization tools that can be found here. Before we could visualize, we needed and were given data: a huge corpus of nearly 33,000 EEBO-TCP-derived simple text files that had been cleaned up and spit through a regularizing procedure so that it would be machine-readable (with loss, obviously, of lots of cool, irregular features—the grad students who wanted to do big data studies of prosody were bummed to learn that all contractions and elisions had been scrubbed out). They also gave us a few smaller, curated corpora of texts, two specifically of dramatic texts, two others of scientific texts. Anyone who wants a copy of this data, I’d be happy to hook you up.

From there, we did (or were shown) a lot of data visualization. Some of this was based on word-frequency counts, but the real novel thing was using a dictionary of sorts called DocuScope—basically a program that sorts 40 million different linguistic patterns into one of about 100 specific rhetorical/verbal categories (DocuScope was developed at CMU as a rhet/comp tool—turned out not to be good at teaching rhet/comp, but it is good at things like picking stocks). DocuScope might make a hash of some words or phrases (and you can revise or modify it; Michael Witmore tailored a DocuScope dictionary to early modern English), but it does so consistently and you’re counting on the law of averages to wash everything out.

After drinking the DocuScope Kool-Aid, we learned how to visualize the results of DocuScoped data analysis. Again, there were a few other cool features and possibilities, and I only comprehended the tip of the data-analysis iceberg, but basically this involved one of two things.

  • Using something called the MetaData Builder, we derived DocuScope data for individual texts or groups of texts within a large corpus of texts. So, for example, we could find out which of the approximately 500 plays in our subcorpus of dramatic texts was the angriest (i.e., had the greatest proportion of words/phrases DocuScope tags as relating to anger). Or, in an example we discussed at length, within the texts in our science subcorpus, we could ask whether Boyle or Hobbes used more first-person references (i.e., which had the greater proportion of words/phrases DocuScope tags as first-person references). The CS people were quite skilled at slicing, dicing, and graphing all this data in cool combinations. Here are some examples. A more polished essay using this kind of data analysis is here. So this is the distribution of DocuScope traits in texts in large and small corpora.
  • We visualized the distribution of DocuScope tags within a single text using something called VEP Slim TV. Using Slim TV, you can track the rise and fall of each trait within a given text AND (and this is the key part) link directly to the text itself. So, for example, this is an image of Margaret Cavendish’s Blazing-World (1667).



Here, the blue line in the right frame charts lexical patterns that DocuScope tags as “Sense Objects.”
The red line charts lexical patterns that DocuScope tags as “Positive Standards.” You’ll see there is lots of blue (compared to red) at the beginning of Cavendish’s novel (when the Lady is interviewing various Bird-Men and Bear-Men about their scientific experiments), but one stretch in the novel where there is more red than blue (when the Lady is conversing with Immaterial Spirits about the traits of nobility). A really cool thing about Slim TV that could make it useful in the classroom: you can move through and link directly to the text itself (that horizontal yellow bar on the right shows which section of the text is currently being displayed).

So, in sum: 1) regularized EEBO-TCP texts are turned into spreadsheets using 2) the DocuScope dictionary; that data is then used to visualize either 3) individual texts as data points within a larger corpus of texts or 4) the distribution of DocuScope tags within a single text.
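As a rough illustration of step 3, here is a minimal Python sketch of the proportion-based comparison described above ("which play is the angriest?"). The play titles, tag names, and counts below are invented for illustration; DocuScope's actual output format differs.

```python
# Hypothetical per-text tag counts, as a DocuScope-style spreadsheet
# might provide them: {text: {tag: token_count}}. All numbers invented.
tag_counts = {
    "Play A": {"Anger": 120, "SenseObject": 300, "FirstPerson": 80},
    "Play B": {"Anger": 40,  "SenseObject": 500, "FirstPerson": 200},
    "Play C": {"Anger": 90,  "SenseObject": 100, "FirstPerson": 60},
}

def tag_proportion(counts, tag):
    """Share of all tagged tokens in one text that carry `tag`."""
    total = sum(counts.values())
    return counts.get(tag, 0) / total if total else 0.0

def angriest(corpus, tag="Anger"):
    """Return the text whose proportion of `tag` tokens is highest."""
    return max(corpus, key=lambda t: tag_proportion(corpus[t], tag))

print(angriest(tag_counts))  # → Play C (90 of its 250 tagged tokens are "Anger")
```

The point is only that once the dictionary has turned texts into tag counts, "angriest" reduces to a simple proportion comparison; the interesting work is in the dictionary and in the research question.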

Again, the seminar leaders showed some nice examples of where this kind of research can lead and lots of cool looking graphs. Ultimately, some of the findings were, if not underwhelming, at least just whelming: we had fun discussing the finding that, relatively speaking, Shakespeare’s comedies tend to use “a” and his tragedies tend to use “the.” Do we want to live in a world where that is interesting? As we experimented with the tools they gave us, at times it felt a little like playing with a Magic 8 Ball: no matter what texts you fed it, DocuScope would give you lots of possible answers, but you just couldn’t tell if the original question was important or figure out if the answers had anything to do with the question. So formulating good research questions remains, to no one’s surprise, the real trick.

A few other key takeaways for me:

1) Learn to love csv files or, better, learn to love someone from the CS world who digs graphing software;

2) Curated data corpora might be the new graduate/honors thesis. Create a corpus (e.g., sermons, epics, travel narratives, court reports, romances), add some good metadata, and you’ve got yourself a lasting contribution to knowledge (again, the examples here are the drama corpora or the science corpora). A few weeks ago, Alan Liu told me that he requires his dissertation advisees to have at least one chapter that gets off the printed page and has some kind of digital component. A curated data collection, which could be spun through DocuScope or any other kind of textual analysis program, could be just that kind of thing.

3) For classroom use, the coolest thing was VEP Slim TV, which tracks the prominence of certain verbal/rhetorical features within a specific text and links directly to the text under consideration. It’s colorful and customizable, something students might find enjoyable.

All this stuff is publicly available as well. I’d be happy to demo what we did (or what I can do of what we did) to anyone who is interested.

Gregory Kneidel is Associate Professor of English at the Hartford Campus. He specializes in Renaissance poetry and prose, law and literature, and textual editing. He can be reached at

Oral Histories and the Tech Needed to Produce Them, Part 1: Cameras, Audio Recorders, and Media Storage, by Nick Hurley

Last summer I had the pleasure of spending several weeks in southwestern Germany, visiting family and conducting interviews with five local residents who lived through the Second World War. In doing so, I fulfilled a goal I’d had in mind ever since the death of my great-grandmother in 2013. She had been one of a host of relatives and family friends that regaled me with stories from “back then” every time I’d come to visit, and her passing made me realize that I had to do more than just listen if I wanted to preserve these memories for future generations. This time around, I would sit down with each of the participants—the youngest of whom was in their late 70s—record our conversations, and eventually send each of them a copy of their edited interview on DVD. While I had a clear idea of why I was undertaking the project, and had done a lot of reading on oral history practices (including this fantastic online resource), I was less confident in just how I would go about carrying out the actual interviews. I had no experience with audiovisual equipment or video editing, and the seemingly endless number of tech-related questions I faced concerning things like cameras, microphones, and recording formats left my head spinning.

It took a significant amount of research and self-instruction before I was comfortable enough to purchase the gear I needed. These two posts are my attempt to share what I learned and hopefully save other oral history novices some of the headaches I endured putting together an interview “kit,” which, at a minimum, will consist of an audio recorder, a way to store your footage, and possibly a camcorder.

The Camera

You’ll need to decide early on whether or not to record video as well as audio for your oral histories. Recording audio only will greatly reduce the amount of equipment you’ll need to buy, but the right choice depends on the nature of your project. If you do decide to film, steer clear of mini-DV and DVD camcorders, as these record on formats that are quickly becoming obsolete. Your best bet is a flash memory camcorder, which utilizes removable memory cards that can be inserted into your laptop for easy file transfer.

High definition (HD) camcorders are fast becoming the norm over their standard definition (SD) counterparts, and they’ve become affordable enough to make them a viable option for amateur filmmakers. In terms of capture quality, AVCHD usually means a higher quality image but a bigger file, while MP4 files are compressed to reduce size and are a bit more versatile in terms of how they can be manipulated and uploaded. Either way, you can’t go wrong, and will get a great looking picture. I’ve shot exclusively in AVCHD so far with my Canon camcorder and have had no issues.

The Audio Recorder

If you’re going to splurge on anything, it should be this. You may or may not elect to include video in your project, but you will always have audio, and the quality should be as clear as possible—especially if you plan on doing any kind of editing or transcribing. There are a few things to consider when choosing a recorder:

  1. Whichever model you go with should have at least one 3.5mm (1/8”) stereo line input, to give you the option of connecting an external microphone, and one 3.5mm (1/8”) output, so you can plug in a pair of headphones to monitor your audio.
  2. If you know you’re going to use an external microphone, having one or more XLR inputs is a plus. XLR refers to the type of connector used on some microphones; they are more robust than a 3.5mm jack and harder to accidentally unplug, making them an industry standard.
  3. Some recorders are meant for high-end professional use and have a plethora of features and buttons you’ll simply never use. Look for one with an easy to use interface.
  4. WAV and MP3 will be the most common options you’ll see format-wise, and many devices can record in either. WAV files are uncompressed, meaning they contain the entire recorded signal and are therefore much larger than MP3 recordings, which are easier to move and download but sacrifice some audio quality to compression.

Media Storage

The three main types of memory cards that you’ll encounter are SD (Secure Digital, up to 2GB), SDHC (Secure Digital High Capacity, 4-32GB), and SDXC (Secure Digital eXtended Capacity, 64GB-2TB). Almost all cameras, computers, and other tech manufactured after 2010 should be compatible with all three types, and the cards themselves are fairly inexpensive. Useful as they are, memory cards shouldn’t be considered a means of long-term storage for your files. For one thing, you’ll run out of room fast; while things like compression and format will determine the exact amounts, for planning purposes you can expect to fit only about 5 hours of HD video on a 64GB SDXC card and 12-49 hours of WAV audio on a 16GB SDHC card. Even if you’ll only be doing one or two short interviews, you should still plan on migrating your files to a more secure storage media as soon as possible after you’re done recording. Cards can be broken or lost, and digital files, like their analog counterparts, will “decay” over time if simply left sitting.
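These capacity estimates are easy to sanity-check yourself, because uncompressed WAV throughput is just sample rate × bytes per sample × channels. A quick Python sketch (the recording settings shown are illustrative, not recommendations):

```python
# Back-of-the-envelope planning for uncompressed WAV audio:
# bytes per second = sample_rate * (bit_depth / 8) * channels
def wav_hours(card_gb, sample_rate=44_100, bit_depth=16, channels=2):
    """Approximate hours of WAV audio that fit on a card of `card_gb` GB."""
    bytes_per_sec = sample_rate * (bit_depth // 8) * channels
    return card_gb * 1e9 / bytes_per_sec / 3600

print(round(wav_hours(16), 1))              # ≈25.2 h, CD-quality stereo
print(round(wav_hours(16, channels=1), 1))  # ≈50.4 h, CD-quality mono
print(round(wav_hours(16, 96_000, 24), 1))  # ≈7.7 h, 96 kHz / 24-bit stereo
```

The spread of these results is why quoted figures like “12-49 hours on a 16GB card” vary so widely: mono versus stereo and the chosen sample rate and bit depth swing the total by a factor of five or more.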

My raw footage is stored on two external hard drives. Any editing work is done using one of them, while the other is stored in a separate location as a backup. Edited interviews are likewise copied to both hard drives once they’re completed. (This practice of keeping multiple copies of the same material in separate locations is known as replication, and is an important aspect of any digital preservation plan; for more info, check out this great page from the Library of Congress.)
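For anyone comfortable scripting, a checksum comparison can confirm that a backup copy still matches the working copy (digital preservation folks call this a fixity check). This is a minimal sketch using only Python's standard library; the assumption that both folders hold the same filenames is mine, for illustration.

```python
import hashlib
from pathlib import Path

def sha256(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so large video files don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_copies(primary_dir, backup_dir):
    """Return names of files whose backup is missing or differs."""
    mismatches = []
    for p in sorted(Path(primary_dir).iterdir()):
        b = Path(backup_dir) / p.name
        if not b.exists() or sha256(p) != sha256(b):
            mismatches.append(p.name)
    return mismatches
```

Run against the two drives periodically; an empty result means every file still matches its backup bit for bit, while any listed name flags a copy to re-replicate.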

Again, these three pieces are the minimum you’ll need to properly record and store audio and (if you desire) video footage. Depending on the circumstances and scope of your project, however, you may want to utilize some optional gear and accessories, which I’ll bring up in Part 2. Until then, feel free to contact me with any questions, and thanks for reading!

Nick Hurley is a Research Services Assistant at UConn Archives & Special Collections, part-time Curator of the New England Air Museum, and an artillery officer in the Army National Guard. He received his B.A. and M.A. in History from the University of Connecticut, where his work focused on issues of state and society in 20th-century Europe. You can contact Nick at and follow him on Twitter @hurley_nick.

Digital Spaces and Designing for Access, by Gabriel Morrison

There has been a lot of talk about how digital humanities scholarship has the potential to be democratizing, and the internet allows for connectivity that extends across cultural, geographical, and institutional boundaries. DH scholarship can directly reach the public outside of academia, and digital spaces allow for collaborative enterprises that have seldom been attempted by humanities scholars. But are all things digital inherently more accessible, or do we simply imagine them to be so? Are we designing for access or just assuming that access is no longer an issue?

Tara McPherson points out that exclusionary practices and ideologies (based on class, gender, race, sexuality, language, or ability) are often built into software in ways that are not always immediately visible to privileged users. This limits not only who has access to and ownership of DH work but also how diverse users can develop their work. One of these exclusionary ideologies is what disability theorist Tobin Siebers has termed the ideology of ability. This ideology assumes able-bodiedness as a “default” state. It either elides difference or else assumes that the disabled body must find a way to be “accommodated” rather than acknowledging any responsibility for designers to create spaces and environments that are inclusive to the diverse range of human ability.

Just as physical spaces are often inaccessible by design (e.g., stairs and doorways that do not permit wheelchair access or loud, brightly lit public spaces that can result in sensory overload for persons with autism), there are many ways in which digital space is constructed to include only the able-bodied, including text fields with small or difficult-to-read fonts, videos without captioning, podcasts without transcripts, images without descriptions that can be read by screen readers, web spaces that cannot be manipulated by users, and so-called “accessible” software that is built for the able-bodied and only retrofitted to “accommodate” diverse users when they complain.

Those engaging in digital humanities scholarship cannot hope to dismantle oppressive ideologies (something which is part of the core work of the humanities) while uncritically using technology that reifies these same oppressive structures. We must realize that part of digital humanities scholarship involves critical and intentional design. In order to truly encourage access, digital scholarship should include principles of universal design.

How can we do this? While it’s true that no design can be said to be truly universal, the Web Accessibility Initiative offers important guidelines for more inclusive digital publishing, and Yergeau et al. lay out a theoretical groundwork for accessibility in digital and multimedia work. The National Center on Universal Design for Learning, CAST, and Jay Dolmage address concerns specific to integrating digital media and technology for access in the classroom, and Composing Access advises on how to prepare for conferences. Here are a few tips for more accessible design:

  • Think critically about the implicit ideologies coded into the platforms you use, and consider the affordances of your technology before using it. As Johanna Drucker and Patrik Svensson point out, middleware incorporates various rhetorical limitations—do these constraints limit access?
  • Aim for commensurability across modes. While multimodality can be a great way for users to interact with your text in different ways and with different senses, if information is not presented redundantly through different modes, it increases the chance that users may not be able to access your text. For instance, if a video delivers information both visually and aurally but doesn’t include captioning and description, then it becomes inaccessible for both blind and deaf users. And of course, delivering information through more than one mode helps all users. Captions, for example, allow hearing users to access the text in a noisy place, on an airplane with someone sleeping in the next seat, or on a device without audio capability.
  • Digital projects are more accessible when they are easily manipulable by users. For example, text that cannot be copied/pasted, as is the case in an image or some publishing platforms, might not be easily read with assistive technologies such as screen readers or braille pads.

Though digital media can present accessibility issues, when used critically and conscientiously, multimodal affordances open up the possibility of creating content that is more accessible to all users, regardless of level of ability.

Gabe Morrison is a first-year doctoral student in Rhetoric and Composition at the University of Connecticut. His research interests include multimodal writing and graduate student writing instruction. You can contact him at

Jennifer Snow, Digital Scholarship Librarian

1. What initially intrigued you about research/teaching in digital humanities or media studies?

I began my work at UConn as the History Librarian six years ago, and I have slowly grown my skills and interests from there. I have a Master’s in History, but I was trained in traditional research methodologies; digital humanities didn’t really feature in my education. However, as I worked with scholars and colleagues on various projects, I saw key ways that the Library could be more involved in digital humanities. As research and scholarship change, the Library must adapt as well to remain relevant. My skills and knowledge in this area are mostly self-taught, and I enjoy teaching others and seeing students become excited over the research possibilities opened up by a digital approach.

2. Has entering the DHMS realm changed your approach to research and teaching in general? If so, how?

Absolutely!  I find my research and teaching to be much more collaborative now.  I’ve learned as much from students and scholars as they have from me.  We each bring our own expertise to the table, whether it’s a technological skill or subject knowledge.  I also actively seek out from others what they would like to learn, so I can tailor workshops and research consultations to their specific needs.  Whenever I work on a new project, I immediately think about who else might be interested and have something to contribute.  It’s a very different experience from individual work on an article for publication.  The projects I work on are multidisciplinary, and I have grown as a researcher from these collaborative opportunities.

3. You have three (commitment-free) wishes to receive support for your research/teaching in DH or media studies: what are they?

First, I would love to have more staff in the library dedicated to DH. Web developers, graphic designers, coders! We are always trying to do more with less. It would be nice to never worry about finding time to work on a project because there are plenty of people to work on it. Second, the opportunity to offer student internships or assistantships would be great. I think this will be forthcoming in the future, though, so I am very much looking forward to that. It would be a wonderful opportunity for students to learn more about DHMS and to work on interesting projects. And third, more time is always welcome! There are so many fantastic projects out there that I want to be a part of, but unfortunately, there are only so many hours in a day, and I have other responsibilities.

4. First struggles and successes: do you have any best-practice advice?

My advice is really to just dive in!  If there’s something you’re interested in learning about, whether it’s a new tool, platform, or something else, don’t hesitate to start working with it.  Try and find other people who have a similar interest, and you can help each other.  Look for workshops, seminars, and meet-and-greets related to digital scholarship.  DH is collaborative by nature, so networking is hugely important.  There will definitely be struggles.  You may not master a particular tool as quickly or easily as you had hoped.  You will have other things competing for your time.  My advice is to not get discouraged and keep plugging away.  Don’t be afraid to ask for help when you need it, whether from the library or from your own departments. 

5. How would you like to challenge yourself in DH or media studies? Or what is a project you most seek to realize?

As the Digital Scholarship Librarian, I am tasked with working beyond the humanities and branching out into the social sciences and sciences.  This is certainly a challenge for me as my background is squarely in the humanities.  However, I am working on developing skills in areas such as data visualization that can be of benefit to people in the sciences.  I would absolutely love to work with a researcher outside of the humanities who is new to digital scholarship.  We can educate each other and become more well-rounded researchers because of our collaboration.  I somewhat actively avoided the sciences in my academic career (to this day, I have never set foot inside the science buildings at my alma mater!) so this is definitely a new area for me.  The silos between the disciplines have begun to break down as research becomes more multidisciplinary, and I’m very excited to be part of that.

Jennifer Snow has a BA in History from Vassar College and an MA in History and Master in Library Science from Florida State University.  She currently serves as the Digital Scholarship/Humanities and Social Sciences Librarian for UConn.  Her academic background is in early modern French history, and she has worked on a number of digital scholarship projects on a variety of subjects.  She has published articles and a book chapter on topics related to digital scholarship and critical

Watch Your .edu, Know Your Repositories

In a January 2017 Forbes article on scholarly publishing, historian Sarah Bond takes aim at platforms ready to host academic articles or chapters. For pay. Her case in point is

As privatized platforms like look to monetize scholarly writing even further, researchers, scientists and academics across the globe must now consider alternatives to proprietary companies that aim to profit from our writing and offer little transparency as to how our work will be used in the future. In other words: It is time to delete your account.

In order to broadcast our academic work beyond the conference panel or occasional tweet or personal webpage – and depending on the copyright and marketing arrangements we have with our print publishers – hosts like LinkedIn, ResearchGate and others have become common “marketplaces.” Here is another opportunity to connect with international scholarship, browse, and offer our own to share and discuss. But as we saunter and sample, how many of us look at the fine print to know how these repositories actually work? Do we understand what happens with our work once it gets uploaded? How is it distributed? Who can access it? Does it get altered when it’s downloaded? Who owns the copyright?

“Monetizing scholarship” is the big, mysterious, compound noun Bond seeks to warn us about, and she has a point. Copyright issues, including where and how we share our finished work, are usually only part of our research conversations when keeping ideas close to our chest. We don’t always trumpet the thesis of our next book or article out into the world, partly because it has not been tested, partly because we might be wary of someone else snatching it up. Yet, how many of us are well trained, or at least reasonably conversant in, the minutiae of legalese it takes to comprehend a publisher’s contract? Do you know or remember what media rights you signed off on in your last contract? I can only speak for myself, but getting to my first contract had me so thrilled and excited that all I needed to comprehend was that there was a line for my signature. Exclamation mark.

That has changed. In a landscape of oscillating international copyright law, the Digital Millennium Copyright Act (DMCA, which has also become a verb), and the increasing hybridization or digitization of scholarship, your old contract arrangements are no more. Your scholarship now has the potential to move or be translated into many different media, and for-profit platforms are just one way your work can be monetized.

Publishers and librarians have long been aware of these trends as they impact purchasing, disseminating, curating, and archiving. Scholars? Not so much – unless you had the good fortune of receiving detailed advice from a mentor or peer group, or learned the hard way over time. And the dismissive will argue that most of our books or articles are not on the fast track to be signed as a major motion picture deal or radio show anyhow. Still, we often sign away rights to repurpose our work or host it elsewhere, or we don’t take advantage of how our ideas and scholarship can work in a world of media convergences.

To address some of these issues, Jennifer Snow, a Digital Librarian at UCONN, is organizing a mini-conference on copyright issues in (digital) publishing on April 14th, 8:30am-2pm. Understanding your rights in scholarly publishing is key to navigating the treacherous territory of multimedia and multimodal communication, including open access outlets and platforms. And often, we don’t even know of the repositories that are directly available to us from our home institutions: for those of you interested in learning more about UCONN’s own Digital Commons, please take advantage of Marisol Ramos’ workshop this coming Monday at 3pm!

New Graduate Certificate in Digital Humanities and Media Studies

The first brainbytes blog of the spring semester serves as an announcement: UCONN has a brand new Graduate Certificate! Welcome back. Pending final approval by the Board of Trustees, the Humanities Institute is pleased to announce a Graduate Certificate in DHMS. This certificate will supply interested graduate students with crucial training and with marketable skills and approaches for careers within and outside of academia. As the initiating director of this certificate, I am providing a summary of its contents below.

Need for the DHMS grad certificate

The UCONN grad certificate in DHMS is unique insofar as it is fundamentally interdisciplinary: it will not be oriented solely, as certificate programs at other schools are, towards digital humanities methods, research, and practice, but also towards integrating media studies as an interdisciplinary and international field of critical inquiry and theory. It seeks to enhance the talents, interests, and success rates of our humanities graduate students entering the academic job market, as digital humanities and media studies research and scholarship have proliferated across North American campuses at the undergraduate and graduate levels, as well as internationally. In addition, employment opportunities for graduate students with training in digital humanities and media studies have increased in non-governmental organizations, libraries, museums, and other public and corporate entities, as such training is often closely linked to public humanities.

Educational Objectives of the Graduate Certificate

The certificate prepares students to conduct humanities research with digital tools by providing participants with knowledge about these tools, about methods, and, importantly, about theoretical issues central to the interfaces between digital humanities and media studies. These may include: text analysis, data mining, visualization, geo-spatial inquiries and mapping, multimedia and digital storytelling, hybrid and digital publishing, information or knowledge design, and network analysis, in combination with the history of media, media archeology, media aesthetics, media theory, media philosophy, digital cultures, and game studies.

Outcomes include:

  • a DHMS Portfolio (see requirements below)
  • a deepened and theoretically sound understanding of the interfaces between Digital Humanities and Media Studies
  • an in-depth practical and theoretical understanding of the humanities in the digital age as they apply to sectors within and beyond the academy
  • an understanding of and experience with collaborative practice in the humanities, social sciences, and the arts as such practice applies to research and teaching with digital tools

Course Sequence and Educational Objectives

The Graduate Certificate in DHMS for graduate students enrolled in CLAS or Fine Arts PhD or MA/MFA programs will require a total of twelve credits: 3 credits in one of the core courses, two 3-credit electives, and one 3-credit independent study devoted to the DHMS Portfolio.

Electives (students take two electives and one independent study, with 3 credits each)

Electives will be chosen based on the student’s major field of inquiry, her/his departmental home, and her/his dissertation or thesis research, in consultation with the student’s PhD or MA/MFA advisor and the director of the DHMS grad certificate. One of the courses as well as the independent study can overlap with the requirements in the home department. Other courses might qualify as electives if they meet the following criteria: electives should deepen the student’s understanding and theoretical and practical application of DH and Media Studies and facilitate her/his direct translation of these skills and knowledge to her/his scholarship.

DHMS Portfolio

The DHMS Portfolio serves as an independent research project, realized alongside and as a product of the independent study and culled from work accomplished over the course of the DHMS grad certificate. Students should be able to communicate the intellectual rigor and theoretical foundations of their project. They should also address some of the evaluation guidelines put forth by the Modern Language Association, the American Historical Association, or the College Art Association, as listed below:

  • describe the process underlying creation of work in digital media (e.g., the creation of infrastructure as well as content) and their particular contributions
  • describe how work in digital media requires new collaborative relationships with clients, publics, other departments, colleagues, and students
  • explain and document its development and progress and its contributions to scholarship
  • include colleagues and take advantage of opportunities to explain how your work contributes to the scholarly conversation in on-campus forums, professional meetings, and print or online publications
  • consider process as a form of scholarship and as a valid, even essential, part of knowledge creation

The final product must be publicly accessible on the web and include examples of the student’s work as well as how the project contributed to the student’s growth as a scholar (process writing). The portfolio must include a short statement of purpose.

More information on the application process and certificate details will be available on the DHMS website. The first core course, “Digital Humanities, Media Studies and the Multimodal Scholar” (LCL5020), is on offer this semester. Feel free to ask questions, share with colleagues, and join in on the conversations and events at DHMS in 2017!



Jennifer Terni, Associate Professor of French (Literatures, Cultures, and Languages)

1. What initially intrigued you about research/teaching in digital humanities or media studies?

My interest in media studies is a longstanding one and is no doubt rooted in my interdisciplinarity. My MA in history was about the ideological valences of one of France’s most successful early pulp fiction writers, Eugène Sue. Working on Sue forced me to consider the problem of distribution and audience. I realized from correspondence about him that different groups had very different investments in Sue’s work. Because of the difficulties of describing reception, the explanations of these differences were often uncomfortably reductive (class interest, commercial distraction for the masses, etc.). As I developed a potential subject for my Ph.D., about the ways in which theater was rooted within Parisian cultural networks, I realized that what I was really after was to imagine new ways to account for and describe these differences. This is how my Ph.D. project evolved into a much broader study of early mass culture. The research has led me, over the years, to explore the countless ways that media in the 1830s, ’40s, and ’50s transformed how people were positioned—and positioned themselves—in terms of a reality that was increasingly defined by large numbers of other people. And though it is true that media could be a weapon, what we find when we look more closely is that it was more often—and very self-consciously—a resource that helped people make sense of how they were being redefined as individuals and as collectives. The active investments of individuals in media, as well as in countless other practices that media helped support, played a major role in propelling the rise of a culture increasingly defined by scale.

2. Has entering the DHMS realm changed your approach to research and teaching in general? If so, how?

This past semester I taught a new graduate course on 19th-century media. It would have been impossible to teach this course even a decade ago, since it was built on the shoulders of major digitized archives including Gallica at the Bibliothèque Nationale de France, HathiTrust, and ARTFL, to name but a few. To make use of them effectively, however, I had to build an extensive website as a platform from which to organize the many primary sources that we explored as a group, as well as to give a picture of what 19th-century media would have looked like. What is more, I tried, as much as possible, to get the students to experience what it would have been like to consume media in the 19th century, for instance, by reading a pulp fiction novel in installments in a newspaper. This experiment was more successful than I could have hoped. Occasionally I also sent the students to the Dodd archive to encounter 19th-century artifacts more directly (illustrated newspapers, daguerreotypes, stereoscopes, photographic technology). The impact of those encounters was intense, in large part because the students had been engaged with primary sources throughout the semester: they had seen the exploding variety of media forms in the 1800s, but also knew firsthand how even very disparate forms were interconnected. They had also read theoretical and historical articles that helped them think about what kinds of cultural work these different genres and platforms were performing. Touching the actual artifact was meaningful because to them it was already embedded in a web of references and ways of thinking about media, but also because it contrasted with all of the digital content they had been using throughout the semester. It was thus doubly a material encounter with material culture.

3. You have three (commitment-free) wishes to receive support for your research/teaching in DH or media studies: what are they?

  1. I would love to get support for teaching students how to conduct big-data research on historical sources and to also become critical about the strengths and limits of such research.
  2. I used a digital lab for this graduate class. It was essential to the conduct of the course. Each student was seated at a desktop computer. We often looked at things together in class (illustrations, plays, newsprint, exhibition sites, audio and early film recordings), or students did individual research in real time on a question raised in class (this is where the website I built was indispensable, as all the resources we needed were centralized and accessible). Making sure such classrooms are available is essential (and it was, it turns out, a struggle to find such a space, especially for a graduate class).
  3. I put in a good hundred hours building my website.  Some support for this would have been great too.

4. First struggles and successes: do you have any best-practice advice?

  1. Imagine your course as an overall process of active engagement and experimentation as opposed to being about a set of materials or themes.  Organize it in terms of activities versus readings.
  2. Try to step out of your normal reflexes in designing your class.  As I got more deeply into the selection of my materials, I worried about the quality of individual samples — their canonicity in other words.  It finally dawned on me that this was exactly the wrong approach.  Letting the students be exposed to media more randomly and giving them tools for engaging with it meaningfully was far more useful pedagogically in terms of the development of their critical and interpretive skills.  They learned how to make fine distinctions between medium, platform, genre, device, formula, topos, so they actually learned to become better readers in a literary sense.  Even as they became increasingly fluent in these distinctions, they also became aware of how much, especially after the 1830s, the development of one genre and platform affected the development of others, even if they did not seem to share obvious affinities.
  3. Do not worry about the interest of any one artifact.  It turned out that the students were interested in everything, precisely because it fit into a whole web, both at the level of 19th-century media production, but also in the conversations we were having about it in class.

5. How would you like to challenge yourself in DH or media studies? Or what is a project you most seek to realize?

To learn about a new media form as a tool for teaching, but even more as a subject for research. This applies as much to 19th-century media as to the latest platforms. I am most invested in considering the problems media raises and solves in terms of communication, cognition, socialization, and cultural impact. I am always interested in mastering new digital tools to be able to create new learning environments (while noting that in many of my classes I have a “no computer/phone” policy). Ideally, I’d love to happen upon a question in my research that might lead, organically, to being able to develop a meaningful crowd-based research project.

Where is the Body in Digital Activism? By Bhakti Shringarpure

During the recent protests against the Dakota Access Pipeline, staged by the Standing Rock Sioux, Facebook users were asked to “check in” to the site of the protest taking place at the reservation. Ostensibly, this was meant to throw police surveillance off track, since Facebook locations were being used to track and arrest protesters. As the post to check in went viral, it generated a counterpoint response almost simultaneously. Blogs decrying social media solidarity that appears lazy and involves no actual risk or effort sprang up within hours of the check-in going viral. “No, Checking In at the Standing Rock Pipeline Protests on Facebook Will Not Confuse the Police. It’s a waste of time,” wrote Mother Jones. A friend cringed on social media: “Solidarity is great, and so is media attention, but if you really want to help protesters: donate to the Sacred Stone Camp Legal Defense Fund, call elected officials, or fly your ass over there if you’re able – your body is a lot more useful than your ‘check-in’ at Standing Rock.” Elsewhere, an article mocking the phenomenon appeared: “ISIS Flees After Millions of Americans ‘Check In’ to Mosul on Facebook.” It had not even been 12 hours.

Debates about digital activism have always questioned the significance of the protester’s body. The Arab Spring was heralded as crucial in allowing a “digital revolution” to take root. But in hindsight, the moment of cyber euphoria seems to have yielded mixed outcomes. Though digital media was seen as a catalyzing force and resulted in an extraordinary collective convergence in Egypt and Tunisia, it was also seen as having negative effects, most notably governments’ counter-use of digital tools to intercept dissent and amp up surveillance. More recently, the #BlackLivesMatter movement started as a hashtag on social media and has since been used 12 million times, according to studies conducted by the Pew Research Center. That the hashtag became a movement, and that the movement is now ushering in the most resistant and radical thinking around race in the USA, is not debatable. Yet the actual role of social media remains an open question unless the physical impact of these hashtags eventually becomes apparent, which can generally be gauged through the number of actual protesters or through shifts the movement may bring about in local or governmental spheres.

While digital activism has certainly become entrenched in individual and institutional realms, it seems to have also given rise to the narcissistic clicktivist. Detached from any real action and armed with no more than a do-gooder mentality, the clicktivist tends to like, share, tweet, tumblr, and post generously but is seen as not necessarily showing up when it counts and when the going gets tough. It is argued that their contribution cannot be ignored and that if solidarity goes viral, so does the cause. The Dakota Access Pipeline check-in protests on Facebook were less interesting for the virality of the phenomenon itself; they were unusual because they illustrated how much we have come to dislike the online activist, and how quickly it has become “obvious” knowledge that there is a real dichotomy between the body and the digital. In my opinion, the significance of this particular clicktivist is as yet uncertain, and I hope we can arrive at more incisive understandings of this phenomenon.

This conversation will be taken up in some depth by the Digital Humanities reading group sponsored by the UConn Humanities Institute. We will meet on Thursday, November 17th, from 12-2pm at the Homer Babbidge Library, 4th floor, room 4-153. We will be working through a variety of readings, including the Black Lives Matter syllabus; surveys that ask how social media users see, share, and discuss race, and how hashtags like #BlackLivesMatter rose; skeptical critics such as Robert McChesney, Micah White, and Jessy Hempel, who make strong arguments against online activism; and academics who have engaged in longer and more sustained ways with the impact of these new media.

This meeting is open to all faculty and graduate students. Please email for the full list of readings.

Bhakti Shringarpure is Assistant Professor of English and editor-in-chief of Warscapes magazine.

A Crash Course on Digital Mapping for the Moderately Technologically Savvy, by Nathan Braccio

For scholars in the humanities interested in making maps, there is a wide range of available tools. At least half a dozen programs exist that allow a scholar to upload data, visualize it, analyze it, and then share it with colleagues and the public. These tools give the enterprising scholar the ability to augment their arguments with exciting visual components or to reveal new questions and patterns that can provide strong evidence or push research in new directions.

In this blog post I will discuss some of the options available, focusing on how each tool matches with different kinds of projects and skill levels. While I am not an expert in GIS or mapping, I have been working on a mapping project on 17th-century New England that has plunged me into an overwhelming array of websites and software. I made the time-consuming mistake of experimenting with every new piece of software I came across; hopefully, after reading this post, others can avoid this quagmire and get to making exciting and fun maps.

Before continuing, a little should be said about the different uses for maps (from my perspective as a history PhD candidate). Maps make a striking visual argument that can stand on its own when crafted well or complement a text or webpage. For example, while I can point out that dozens of towns in New England were destroyed during King Philip’s War, actually mapping this destruction with intensity bubbles across the region makes a powerful statement. As an analytical tool, maps allow scholars to repurpose heavily used sources in order to find new patterns, or to compile relatively insignificant data from ignored sources into more useful aggregated forms. Continuing with examples I know: by plotting something as mundane as the dates of town settlement throughout New England, the chronology of English settlers breaking away from their coastal and riverine settlements becomes clear. Simply reading dates and locations would not have yielded this conclusion. Richard White has presented a particularly strong argument for spatial history.


Getting Started

Before you actually start to use any mapping programs, you will need a few things, including something to map! You will also need to know your goal. There are three types of things you can do with mapping software: you can make cool visualizations or tell stories, you can plot and analyze vector data, or you can overlay historical maps on contemporary maps (and even plot your vector data on them or extract data from them). Vector data is usually stored in a spreadsheet, although most programs also allow you to add data points internally in a time-consuming process. With your goal in mind, gather your vector data, an image of a historical map, or the information/story you want to visualize.


If you are using vector data, you probably already have a location associated with what you want to plot. If your location is expressed in lat/lon numbers, you are ready. If it is a town name or street address, you will want to convert it to lat/lon coordinates unless you are using Google Maps or Carto (they can do it automatically). A simple Google search will yield some websites that will convert your locational data, but they are somewhat clunky. A better method is to upload your spreadsheet into Google Sheets (Google’s version of Excel) and create a macro. While that may sound intimidating, it really only involves copying and pasting a line of code that can be found here.
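Once your locations are in lat/lon form, it is worth sanity-checking the spreadsheet before uploading it. The sketch below (the town names and column headers are made up for illustration, and the parser deliberately ignores quoted fields) shows the kind of structure most of the tools discussed here expect once the sheet is exported as CSV:

```javascript
// Illustrative only: a CSV export with name, lat, and lon columns,
// the shape most mapping tools expect for point data.
const csv = `name,lat,lon
Boston,42.3601,-71.0589
Hartford,41.7658,-72.6734
Providence,41.8240,-71.4128`;

// Minimal CSV parsing (no quoted fields) -- enough for a quick sanity
// check before uploading to Google Maps, Carto, or QGIS.
function parseCsv(text) {
  const [headerLine, ...rows] = text.trim().split("\n");
  const headers = headerLine.split(",");
  return rows.map(row => {
    const cells = row.split(",");
    const record = {};
    headers.forEach((h, i) => { record[h] = cells[i]; });
    // Coordinates should be numbers, not strings, for most tools.
    record.lat = Number(record.lat);
    record.lon = Number(record.lon);
    return record;
  });
}

const towns = parseCsv(csv);
console.log(towns[0]); // { name: 'Boston', lat: 42.3601, lon: -71.0589 }
```

Each program has its own import dialog and column-naming conventions, but a sheet shaped like this will upload cleanly into all of them.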

Google Maps

For the digital neophyte looking to either visualize or begin an analytical project, the best place to start is Google Maps. While you may be familiar with using Google Maps to get directions or look at a street view of your house, it can also plot vector data or become a complex map embedded in a website. For the purposes of the only moderately technologically savvy, this is best done through “My Maps.” My Maps has built-in georeferencing and is linked to Google Sheets, so it is easy to transfer vector data over with basic locational information and have it quickly plotted through a series of intuitive commands. Your data can be classified by Google Maps in a few basic ways, including simple color coding. You can also upload customized images to act as icons for your data, in addition to Google’s icon library. You can easily manipulate your data in the program, change how it is classified or displayed, and isolate specific ranges of data with a few simple clicks.

While a more skilled user may be able to use the Google Maps API, coupled with coding skill and a web page, to do some fancy things, for the normal user Google Maps has several limitations. Without using the API along with your own website, Google Maps has limited customizability. There is limited styling, no ability to apply JavaScript (as you can with the API), and no ability to make a customizable interface. You are also limited to using one of Google’s nine base-maps. While the base-maps can look nice, they often contain contemporary information or labels that may be anachronistic.



Carto

Using Carto avoids several of these problems while requiring some additional technical skill. In general, Carto is similar to Google Maps’ My Maps. It is good for plotting and visualizing vector data, and you can modify uploaded spreadsheets within the webpage. While at times a little less user friendly, it also enables you to use custom base-maps and to apply limited coding to change the style and interface of your maps. Like Google Maps, its maps are extremely easy to share on social media or embed in your own webpage. Overlaying images, while possible, is still a difficult task in this program. Additionally, it has only a limited ability to create a fully customized interface.



ArcGIS and QGIS

Perhaps the strongest tools for analysis, although not visualization, are ArcGIS and QGIS. The two are very similar, but QGIS is open source and free, while ArcGIS is proprietary and very expensive for those without access through their institution. These programs provide powerful sandboxes for mapping and uploading data, but they are relatively difficult to use. When you open these programs, you are presented with a blank canvas.

It is up to you to upload base maps and data, or to plot data in the program. Because you start with a blank canvas, anything you upload needs to be georeferenced. If you are uploading a spreadsheet, this means lat/lon geographical coordinates, although some formats can be georeferenced within the program. The spreadsheet to be uploaded needs to first be converted into .csv format, which can be done from Excel or Google Sheets using “save as.”


Additionally, all layers of data need to have a consistent CRS (Coordinate Reference System) value applied to them.
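To make the CRS point concrete, the sketch below (plain JavaScript, no GIS library) projects a WGS84 lat/lon point into Web Mercator (EPSG:3857), the meter-based CRS most web base-maps use. The formula is the standard spherical Web Mercator projection, but treat the snippet as illustrative; in practice you would let QGIS or ArcGIS reproject layers for you.

```javascript
// The same point has entirely different coordinates in different
// reference systems: degrees in WGS84 (EPSG:4326), meters in
// Web Mercator (EPSG:3857).
const R = 6378137; // Earth radius used by Web Mercator, in meters

function toWebMercator(lat, lon) {
  const x = R * (lon * Math.PI / 180);
  const y = R * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI / 180) / 2));
  return { x, y };
}

// Hartford, CT: small numbers in degrees...
const p = toWebMercator(41.7658, -72.6734);
// ...but millions of meters from the origin in Web Mercator. If one
// layer is stored in degrees and another in meters, they will not line
// up on the map -- hence the consistent-CRS requirement.
```

This mismatch is exactly what you see when a newly added layer appears as a tiny dot near the map origin: its coordinates are being read in the wrong CRS.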

Once uploaded, you can add to your data, categorize it in several ways, and style it with great freedom. Historical maps and images can easily be uploaded to your project and stretched and overlaid wherever and however you want. These programs are not ideal for creating visually polished maps for internet distribution (people frequently augment them with Photoshop or Illustrator for this purpose). Despite the stylistic limitations, ArcGIS and QGIS alone are fine for making the kind of maps that can be included in printed publications. QGIS, which I have more experience with, also allows you to export maps in a format that you can integrate into a webpage through one of its many useful plugins. If you are interested in learning how to actually use GIS, the Programming Historian has a few useful tutorials.



Neatline

For mapping more focused on an extremely customizable visualization, the choices are limited unless you know JavaScript. Neatline (via Omeka) is one of the only exceptions, and as a tradeoff it has some issues uploading spreadsheets of vector data. Instead, Neatline is extremely good at creating a dynamic and interactive exhibit/story, in which the data and objects are created within Neatline. Installing Neatline is relatively simple, although it is important to know that you cannot do it on Omeka’s hosted service, but only through an Omeka installation hosted elsewhere (not a free WordPress either, meaning you would have to pay for hosting). I will not go into detail on how to install Omeka on your website here, as it is well documented here and here.

Once installed, Neatline has several plugins of its own that allow you to add timelines to your map, create an interactive text in which words are linked to points on the map, and upload images to overlay on the map, or even use a custom background. These demos really show its power. Its greatest strength is its friendly interface and plentiful documentation. Oddly enough, some of its issues emerge when doing things one would imagine to be relatively simple for it, like trying to batch transfer items with associated lat/lon from an Omeka database (luckily there is a helpful internet community for both Neatline and Omeka).


Summing Up

While I will not explore it fully here, if you have some experience with JavaScript and want to work on embedding an interactive map into your webpage, there are a few different places you can start. Leaflet (QGIS-compatible via the QGIS Web App), OpenLayers, the Google Maps API, and Timemap are all worth looking into. In fact, you will probably use some combination of these programs if you are trying to make an interactive map. As you can see, this blog post just scratches the surface of the programs available. If you are interested in analyzing old maps, take a look at MapAnalyst. If you want to make an atlas of maps you have, or want to overlay multiple historic maps onto a contemporary map, check out MapScholar, made by historians. The Library of Congress made Viewshare for mapping out a digital collection, and StoryMap has a self-explanatory name.
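If you do go the Leaflet or OpenLayers route, the common interchange format is GeoJSON, which QGIS can also read and write. As a minimal sketch (the points are invented for illustration), here is how plotted points become a GeoJSON FeatureCollection. Note the classic tripwire: GeoJSON orders coordinates longitude-first, the reverse of a lat/lon spreadsheet.

```javascript
// Hypothetical points, as they might come out of a spreadsheet.
const points = [
  { name: "Springfield", lat: 42.1015, lon: -72.5898 },
  { name: "Deerfield", lat: 42.5445, lon: -72.6073 },
];

// GeoJSON is the format Leaflet, OpenLayers, and QGIS all consume.
const featureCollection = {
  type: "FeatureCollection",
  features: points.map(p => ({
    type: "Feature",
    geometry: { type: "Point", coordinates: [p.lon, p.lat] }, // lon first!
    properties: { name: p.name },
  })),
};

// In a Leaflet page this could be added with:
//   L.geoJSON(featureCollection).addTo(map);
// Saved to a .geojson file, the same object opens directly in QGIS.
const json = JSON.stringify(featureCollection, null, 2);
```

Getting comfortable with this one format goes a long way, since it is the bridge between the desktop GIS work described above and the web-mapping libraries listed here.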

Hopefully, this post has given those of you interested in mapping a guide to where you might want to start your project. Personally, I find myself using a combination of Google Maps, QGIS, and Neatline for different aspects of my project, with the intention of eventually taking advantage of the Google Maps API and Leaflet to bring my project online. Please feel free to contact me with any questions or suggestions.

Nathan Braccio is a Ph.D. candidate in the UCONN History Department. He received his B.A. and M.A. in history from American University. His research focuses on the confluence of geography and identity in 17th- and 18th-century New England. More information on mapping and his research can be found on his webpage. Contact him at