Here at the UConn Library, one of the tenets of our Purposeful Path Forward is to help drive UConn’s ‘Scholarly Engine’, or the processes of research and knowledge creation. One of the core activities in our approach is educating our community on the importance of Open Access. Open Access (OA), as defined by SPARC (the Scholarly Publishing and Academic Resources Coalition), refers to the “free, immediate, online availability of research articles, coupled with the rights to use these articles fully in the digital environment.”
Why Open? Open changes the way we discover knowledge. It can turn ideas into reality, break down barriers to learning, and lay the groundwork for breakthrough research.
This month we are embracing the challenge provided by the 2017 International Open Access Week by answering the question, “Open in order to…” through a series of programs and initiatives.
The UConn Library is proud to announce the re-launch of the University’s institutional repository, OpenCommons@UConn, a showcase of the scholarship and creative works of the UConn community. The renaming of this service emphasizes the Library’s role in providing the tools to enable independent learning, research, and scholarship. By making the University’s diverse and unique resources openly accessible worldwide, we hope to inspire groundbreaking research and advance learning, teaching, and entrepreneurial thinking.
Open in order to…provide access to UConn’s scholarship
Open Educational Resources @ UConn Exhibit: published teaching and learning materials under an open license
October 18-31, 2017
HBL, Plaza Level
Open Access and Open Educational Resources (OER) are related but distinct, with the commonality of providing high-quality learning materials at no cost. In an academic setting, the lines of Open Access publishing for research materials and Open Educational Resources for teaching and learning overlap in significant ways. UConn’s OER Initiative began only two years ago and to date has saved our undergraduates over $500,000 in textbook costs. View some OER textbooks and learn more about the faculty who are working towards making UConn more affordable.
Open in order to…save students money
Is this open access journal any good?
Thursday, October 19, 9:30-11:00am
Homer Babbidge Library, Collaborative Learning Classroom
Faculty often struggle to identify good quality open access journals in which to publish or to serve as an editor or reviewer. Many new open access journals exist now – some are good quality, some are exploitative, and some are in-between. This workshop will include a brief discussion of faculty concerns about identifying journals. The majority of the session will be devoted to demonstrating web-based indicator tools that can help faculty appraise a journal’s quality. Please register at http://cetl.uconn.edu/seminars
Open in order to…find quality teaching materials
Paywall: A Conversation about the Business of Scholarship with Filmmaker Jason Schmitt
Wednesday, October 25, 2:30-4:00pm
Konover Auditorium, Thomas J. Dodd Research Center
Help us celebrate Open Access Week by joining award-winning filmmaker Jason Schmitt as we screen and discuss footage from his in-progress documentary Paywall: The Business of Scholarship. Schmitt will be accompanied in the discussion by a panel of UConn faculty who will share their views on making the results of academic research freely accessible online. Co-sponsored by UConn Humanities Institute
Open in order to…talk about the business of scholarship
Open Data In Action
Thursday, October 26, 11:00am-2:00pm
Hartford Public Library Atrium
Open Data In Action brings together a wide range of researchers to showcase how their work has benefited from openly and freely accessible data. Presenters from the public, private, and academic sectors will discuss how open data, ranging from historical documents to statistical analyses, is being used to create projects, change policies, and conduct research, highlighting the impact open data has in shaping the world around us.
- Tyler Kleykamp, Chief Data Officer, State of Connecticut
- Steve Batt, UConn Hartford/CT State Data Center, Tableau Public and CT Census Data
- Jason Cory Brunson, UConn Health Center, Modeling Incidence and Severity of Disease using Administrative Healthcare Data
- Stephen Busemeyer, The Hartford Courant, Journalism and the Freedom of Information
- Brett Flodine, GIS Project Leader, City of Hartford Open Data
- Rachel Leventhal-Weiner, CT Data Collaborative, CT Data Academy
- Anna Lindemann/Graham Stinnett, UConn DMD & Archives, Teaching Motion Graphics with Human Rights Archives
- Thomas Long, UConn Nursing, Dolan Collection Nursing History Blog
- Tina Panik, Avon Public Library, World War II Newsletters from the CTDA
- Jennifer Snow, UConn Library, Puerto Rico Citizenship Archives: Government Documents as Open Data
- Rebecca Sterns, Korey Stringer Institute, Athlete Sudden Death Registry
- Andrew Wolf, UConn Digital Media & Design, Omeka Everywhere
Co-sponsored by the Hartford Public Library
Open in order to…share data
Introduction to Data Visualization using Tableau Public
Monday, October 30, 3:00-4:15pm
Homer Babbidge Library, Level 2 Electronic Classroom
Tableau Public is a free version of Tableau business intelligence / visual analytics software, which allows anyone to explore and present any quantitative information in compelling, interactive visualizations. In this hands-on session you will work with different prepared datasets to create online interactive bar graphs, scatterplots, thematic maps and much more, which can be linked to or embedded in blogs or on web sites. Please register at http://workshops.lib.uconn.edu/
Open in order to…visualize research
Digital Scholarship: Partnering for the Future
Joan K. Lippincott, Associate Executive Director, Coalition for Networked Information
Tuesday, November 7, 2:00-3:30pm
Homer Babbidge Library, Heritage Room
Researchers in many disciplines are finding that they can ask new kinds of research questions as a result of the rapid growth in the availability of digital content and tools. In addition, the outputs of their research can include many more types of products such as data visualizations, geo-referenced representations, text augmented with images and audio, exhibits on the web, and virtual reality environments. Developing these projects takes a team of people who have a variety of skill sets. These individuals may come from academic departments, the library, the information technology unit, and other specialties. Graduate and undergraduate students are also often part of teams working on digital scholarship projects. In this presentation, Lippincott will provide an update on developments in digital scholarship and will describe existing programs and projects, discuss the importance of physical space, and encourage the development of a campus digital scholarship community. Co-sponsored by UConn Humanities Institute
Open in order to…develop digital scholarship
1. What initially intrigued you about research/teaching in digital humanities or media studies?
I was immersed in media studies from the very start of grad school, but not really before then. One of the first courses I took as a master’s student at Columbia focused on media studies, and later, when I was a Ph.D. student at NYU, there was much intellectual activity around media, mediation, etc.; NYU was – and still is, I would think – an exciting place to study those problems. The intrigue for me was in the tensions between literature and what is prioritized by other communications media and technologies, and I wrote a media studies-influenced dissertation on Romantic poetry. As for digital humanities: that too was becoming a part of the conversation during grad school, especially in relation to what has been the main interpretative technique of literary study, “close reading,” but also in connection with (then newish) scholarly resources like ECCO (“Eighteenth Century Collections Online”). Since then, I have participated in a UK project based at Cambridge called The Concept Lab, which involved intense – and sometimes insane – debates about how computationally to model concepts, which brought us into linguistics and other disciplines. It was a lot of fun working with that group, and we all piled into a van at one point for a brainy road trip in California. Around that time, I also sat in on a computational linguistics course at UConn, and that was very informative. A little before I joined The Concept Lab, I wrote an essay on early 20th-century word frequency counts and their unlikely ties to the advent of close reading. That was a fun essay to write, dealing with the pre-digital history of digital “distant reading,” and it drew a little bit on all of the above.
2. Has entering the DHMS realm changed your approach to research and teaching in general? If so, how?
As for research: I tend to be invested in Romantic poetry, media studies, and digital humanities (among other things) and try to keep up with all three, but I don’t ever feel that I exclusively or primarily belong to any one of them. Some of my ideas come from wandering somewhere between those three coordinates, and at other times I’m more deliberate about relating them to one another. In terms of teaching: this semester, I’m teaching two undergraduate courses: an already memorable and boisterous one on Vladimir Nabokov, and the gateway course for the English major, in which we’re reading Lydia Davis at the moment. All that to say, “DHMS” doesn’t figure too much in my teaching currently. But, a few years ago, I taught a grad seminar on “Literature, Media, and Technology,” in which we read a lot of media studies and adjacent things, including some of my favorites (e.g., Raymond Williams’s Television, which often feels like Williams saying, “Let me show you how to do it right.”) And I’ll be teaching the new “Introduction to Digital Humanities” graduate seminar next year, the first time it’s being offered as such, and I’m looking forward to that.
3. You have three (commitment-free) wishes to receive support for your research/teaching in DH or media studies: what are they?
What a cruel constraint that one is introduced to the elusive genie but the three wishes must be about “research/teaching in DH or media studies”! Time is important and so maybe something that frees up the time of faculty and grad students to learn new disciplines from scratch and be uncomfortable? See below.
4. First struggles and successes: do you have any best-practice advice?
I don’t, but I would say that I am very behind the idea of this DHMS initiative and graduate certificate, covering and combining as it does “digital humanities” and “media studies.” It’s still very early on with DHMS endeavors at UConn, but I would like to see more connections with disciplines and departments like linguistics, computer science, statistics, digital media and design, and so on. I realize that English folks, for example – I include myself – have a repertoire of things they like to say and do with literary and other cultural works (Rita Felski’s The Limits of Critique describes well this now familiar repertoire). But I think it would be exciting too if there were more attempts to go very deep into very different disciplines with very different ways of looking at things. There would be a lot of learning of new languages, terms, concepts, technical skills, etc. This is maybe my idiosyncratic view not of “best practices” but “potentially promising practices.”
5. How would you like to challenge yourself in DH or media studies? Or what is a project you most seek to realize?
Maybe one challenge for me is that I’m what Zadie Smith calls a Person 1.0, which makes certain DHMS topics difficult for me to write on. The Smith essay is now an old essay, but it touches on things like Facebook, and it still sums up how I feel, for the most part. But I know too that there are interesting questions to consider in areas like digital sociality, and that might be a new challenge for me, at some point. In the book I’m finishing writing, I talk at one point about how poetry can model a form of mediated interaction that can both encourage and discourage connection – it is another way to talk about introversion and the pressing need for both of those options – and so I might be beginning to engage some of these topics, but in my own oblique way.
As an aspiring social studies teacher, I recognize the importance of integrating digital history into the classroom. Students have grown up in the digital age, and, as such, consume a majority of their information online. Gone are the days of searching through a library for primary and secondary sources to support a historical argument or reading a newspaper to discover that day’s events. All this information, and more, can now be found online. This vast availability of information has greatly expanded the possibilities for studying history, which presents us, as educators, with a unique opportunity to integrate digital history into our classrooms. By doing so, we will enable students to utilize digital media to advance historical analysis and understanding. To do this, however, we must first provide students with models of digital history. What follows are several examples of digital history projects that could be used in classrooms (and beyond) to equip students with the skills required to contribute to our knowledge of world contexts in a digital way.
HistoryMatters is a digital history project that resulted from collaboration between George Mason University and the City University of New York. The project began in 1998 with the intent of providing teachers and students with digital resources that could improve their instruction and understanding of United States history. It was funded by the Kellogg Foundation, the Rockefeller Foundation, and the National Endowment for the Humanities. There are over a thousand primary sources on a variety of topics that range from photographs to text documents to audio files, all of which can be used with students to help them construct a narrative of the past. What’s unique about this project is that it takes full advantage of the digital space by using audio files from everyday Americans to help participants co-construct the history of the United States as well as from scholars on how to teach major aspects of US history. In addition, since there are over a thousand primary sources available, there is a “full search” feature that was developed to assist in locating resources by time, topic, or keyword. With the large number of primary sources available, this digital history project would be an excellent resource for students to use for research papers. Students could use this project to develop a research question based on an area of inquiry, examine primary sources related to their topic, arrive at conclusions based on their research, and publish their findings in order to advance our understanding of history. Doing so would expose them to conducting research digitally while also developing their ability to think critically, evaluate evidence, and articulate their thoughts clearly.
Mapping Inequality is a digital history project created through the collaboration of research teams from four universities: the University of Maryland, the University of Richmond, Virginia Tech, and Johns Hopkins University. This project showcases 150 maps that were drafted by the Home Owners’ Loan Corporation (HOLC) from 1935 to 1940. These maps were color-coded to show the credit-worthiness of different neighborhoods in each town. Mortgage lenders then used these maps to determine whether someone would qualify for a loan. This project was developed to show that, when these maps are compared to the layout of neighborhoods in the United States today, it becomes apparent that many of the racial and class inequities that exist are a direct result of the HOLC’s maps. In fact, many of these maps were produced expressly to codify racial segregation into real estate practice. This project could be used with students for multiple purposes. For example, when teaching about the New Deal, students could use the site to determine how the HOLC reflected a problematic legacy of the New Deal. Students could also be asked to cite specific examples from the map of how the HOLC’s practices led to the racial and class segregation that is seen today. For example, if they examined the areas around Hartford, Connecticut, they would observe that the HOLC deemed that West Hartford had the “best,” most creditworthy neighborhoods, whereas Hartford had the “hazardous,” least creditworthy neighborhoods. If this map is compared to today’s, it becomes evident that the HOLC’s maps led to racial and class segregation, with West Hartford and Hartford reflecting mostly unchanged neighborhoods. In addition, showcasing a digital history project of this nature in class would familiarize students with what digital history can look like.
Through this project, teachers could expose students to some of the digital tools and resources—such as mapping software and online databases—that would be required to design it. This would create incentives to work collaboratively with other scholars—especially those who could provide the digital resources for projects like this.
3) The Valley of the Shadow: Two Communities in the American Civil War
The Valley of the Shadow is a digital history project constructed by the Virginia Center for Digital History at the University of Virginia. This project narrates the countless stories of two different communities from the American Civil War—one from the North and one from the South—through letters, newspapers, diaries, speeches, and other primary sources. The project is organized through a series of image maps that direct the viewer to various search engines. This project functions similarly to the HistoryMatters project—they are both databases of primary sources that employ search engines to enable the viewer to locate information—but there is a key difference between the two worth mentioning: while HistoryMatters contains a large amount of primary source information on a wide variety of topics across United States history, this project only provides information that is relevant to a specific time and topic. The narrow focus is relevant to the work historians do on a daily basis, as most of a history scholar’s research explores questions in a specific niche of the past. As such, teachers could use this project to show students how they might approach a digital history research project. This would help transition students away from the traditional way of communicating their thoughts on history through a research paper and, instead, provide them with the opportunity to disseminate their ideas digitally. For example, rather than writing a paper about significant World War II battles, students could create an online timeline that lays out those events chronologically while also providing descriptions of the significance of each battle. Exposing students to and allowing them to engage in this sort of work would enable them to practice the craft of a historian in a very familiar context and equip them with the skills to pose their own questions about a certain niche of the world.
Matthew Ferraro is a master’s student in the Neag School of Education’s Integrated Bachelor’s/Master’s (I/BM) Program. He is currently interning at Conard High School; his research interests include how best to integrate human rights education into social studies classrooms. He is studying to become a social studies teacher at the high school level. He can be reached at firstname.lastname@example.org.
As the semester gets under way, DHMS is ready to roll out a number of updates, news, and events for the new academic year 2017/18. Welcome back, everyone!
First, a quick review of Year 1: following the creation of the brand new graduate certificate in Digital Humanities and Media Studies in February – with much-appreciated support from a number of colleagues in CLAS and the Graduate School – two students have already graduated from the program. Britta Meredith (LCL/German Studies) and Elisabeth Buzay (LCL/French & Francophone Studies) each completed their course work and DHMS portfolios in the nick of time and with great aplomb (and, incidentally, helped the director navigate the new learning curve of certification processing). Importantly, both DHMS certificate holders are off to a tight conference schedule: Elisabeth Buzay received two invitations already, with presentations directly related to her DHMS certificate work, and Britta Meredith is continuing her jam-packed presentation tour with next-phase talks on her DHMS portfolio that is now getting integrated with her dissertation. Congratulations to both of them!
On video now from Year 1 on the Humanities Institute YouTube channel: two of the events from last year, the inaugural DHMS Talks presentation by renowned University of California, Santa Barbara professor Alan Liu; and pivotal information on copyright issues (both analog and digital) for academics by University of Massachusetts lawyer/librarian Laura Quilter and our own UConn-local librarian Michael Rodriguez. Alan Liu’s talk is a must-see (my humble opinion) should you have missed his thought-provoking, à propos, and widely applicable discussion of “Toward Critical Infrastructure Studies: Digital Humanities, New Media Studies, and the Culture of Infrastructure.” Ditto for Laura Quilter’s and Michael Rodriguez’s talks, for very different reasons, of course. Both point to crucial elements concerning copyright and authors’ rights that take minutiae to a new level: watch for ALL the fine print in your contracts with book and online publishers to make sure you not only understand your intellectual copyright, but also what happens (or can happen!) once you’ve published that book or article and it all goes digital and multimedia… And, yes, there IS “Fair Use in Digital Scholarship.”
Thank you again to Jennifer Snow, one of UConn’s Digital Scholarship Librarians, for making this important event possible.
Which brings us to Year 2. This fall and spring, we will take a break from the Digital Humanities Reading Group (for further notice please check DHMS Upcoming Events), but things will rev up in other directions. The list of Scholars’ Collaborative workshops for this semester is all set, with five workshops scheduled throughout the fall. Look out for Michael Young’s presentation on “Images and Permissions for Publications” (NEW) and two workshops on the popular Tableau by Steve Batt (also NEW). Suggestions for more workshops/tool intros are always welcome.
Year 2 in DHMS will also inaugurate a new Fall/Spring rhythm with a roundtable discussion in the fall semester and the DHMS Talk in the spring. For the DHMS Roundtable, media studies scholar and NYU English professor Lisa Gitelman, interdisciplinary artist Emma Hogarth (RISD), and UConn’s own DMD department head Tom Scheinfeldt and I will gather to discuss “Interfacing Digital Humanities and Media Studies.” Please join us on October 12 at 2:30 on the 4th floor of Babbidge Library to participate in this conversation across disciplines and across media.
Another event to take part in is collaboration #2 between DHMS and the library on the occasion of Open Access Week in October. Director Jason Schmitt will come to campus to present his documentary film “Paywall” (2018), a topic that is bound to invite debate on a number of fronts and issues. The screening and Q&A will take place in Konover on October 25 from 2-4pm. More information forthcoming very soon. Bring your students.
Finally, our once-a-semester DHMS Meet&Greet luncheon will take place after Thanksgiving on November 30 from 2-3:30pm. My colleague Jacqueline Loss (LCL/Spanish) will provide a glimpse into her work on “Finotype” that has been selected as one of the first Greenhouse Studios projects. The Digital Coffee Hour (ad hoc gatherings next to fountains of hot coffee!) will continue as well – however, the venue has switched from the Humanities Institute to Scholarly Communications in Babbidge Library – exact location TBA.
The brain bytes blog (bi-weekly as of this year) will continue to post ideas, information, events and more – and readers are welcome to contribute a guest blog and/or recruit more readers who might have an interest in sharing their DHMS-related work (or quests). While there are several Q&A features in the making, you should feel free to suggest topics of potential interest or colleagues whose work deserves to be noticed. Two new items have been added to the Resources page: a Social Media Guide for Academics (JustPublics@365 Toolkit) and Guidelines for Digital Dissertations in History and Art History (GMU). If you have any resources or projects to share that need to find their place on the DHMS website, please just send an email to email@example.com. Better yet: join the DHMS mailing list or the DHMS Facebook group. Wishing everyone a productive and inspiring academic year!
In an article published online last month by The Guardian—“AI programs exhibit racial and gender biases, research reveals”—the computer scientists behind the technology were careful to emphasize that this reflects not prejudice on the part of artificial intelligence, but AI’s learning of our own prejudices as encoded within language.
Word embedding, already used in web search and machine translation, “works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.”
This tool’s ability to reproduce complex and nuanced word associations is probably not surprising to anyone familiar with digital humanities—and the fact that it returned associations that match pleasant words with whiteness and unpleasant ones with blackness, or that associate “woman” with the arts and interpretative disciplines and “man” with the STEM fields shouldn’t be surprising to anyone who has been paying attention. The distressing prospect that AI and other digital programs and platforms will only reinforce existing bias and inequality has certainly garnered the attention of scholars in media studies and DH, but one could argue that it has received equal attention in the social sciences.
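The word-vector idea described above can be sketched in a few lines. The vectors and word lists below are invented for illustration only (real embeddings such as word2vec or GloVe have hundreds of dimensions learned from large corpora); the point is that words appearing in similar contexts end up with nearby vectors, and cosine similarity is how bias studies typically measure which concepts a corpus associates:

```python
import math

# Toy 3-dimensional "word vectors" -- invented for illustration only.
# Real embeddings are learned from co-occurrence statistics in large
# corpora, which is precisely how they absorb a corpus's biases.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "insect":     [0.1, 0.9, 0.0],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical direction, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# In this toy setup, "flower" sits closer to "pleasant" than to
# "unpleasant" -- the same kind of comparison the bias research runs
# over word pairs like names, occupations, and evaluative terms.
assert cosine(vectors["flower"], vectors["pleasant"]) > \
       cosine(vectors["flower"], vectors["unpleasant"])
```

Association tests in the research literature (e.g., the word-embedding analogue of the Implicit Association Test) aggregate many such cosine comparisons, but the underlying measurement is this simple.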
As a graduate student in cultural anthropology drawn to DH, I sometimes find myself considering what exactly demarcates digital humanities from social science when apprehending these kinds of topics; somehow, with the addition of ‘digital’, the lines seem to have blurred. Both ultimately represent an investigation of how humans create meaning through or in relation to the digital universe, and the diverse methodologies at the disposal of each are increasingly overlapping. Below are just a few reasons, from my limited experience, as to why social scientists can benefit from involvement with digital humanities—and vice-versa.
1) Tools developed in DH can serve as methodologies in the social sciences.
Text mining, a process that derives patterns and trends from textual sources similar to the phenomenon described above, is particularly suited for social science analysis of primary sources. Programs like Voyant and Textalyser are free and easily available on the web, no downloads or installations required, and can pull data from PDFs, URLs, Microsoft Word files, plain text, and more. Interview transcripts can also be analyzed using these programs, and the graphs and word clouds they create provide a unique way to “see” an argument, a theme, bias, etc.
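The core of what such tools compute, word frequencies over a cleaned-up token stream, can be approximated in a few lines. This is a minimal sketch (the sample “transcript” and the stop-word list are invented for illustration); real tools like Voyant add fuller stop-word lists, stemming, and interactive visualization on top:

```python
import re
from collections import Counter

# A tiny default stop-word list -- illustrative only; real text-mining
# tools ship with much longer, language-specific lists.
STOPWORDS = frozenset({"the", "a", "of", "and", "in", "to"})

def word_frequencies(text, stopwords=STOPWORDS):
    """Lowercase the text, split into word tokens, drop stop words, count."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

# Invented sample transcript for illustration.
transcript = "The archive of the community preserves the community's stories."
print(word_frequencies(transcript).most_common(3))
```

From here, the counts feed directly into a word cloud or bar chart, which is the “seeing the argument” step the web tools provide out of the box.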
Platforms like Omeka and Scalar can provide an opportunity not only to display ethnographic information for visual anthropologists, but can give powerful form to arguments in a way that textual forms cannot (see, for example, Performing Archive: Curtis + “the vanishing race”, which turns Edward S. Curtis’ famous photos of Native Americans on their heads by visualizing the categories instead of the categorized).
2) Both fields are tackling the same issues.
Miriam Posner writes that she “would like us to start understanding markers like gender and race not as givens but as constructions…I want us to stop acting as though the data models for identity are containers to be filled in order to produce meaning and recognize instead that these structures themselves constitute data.” Drucker and Svensson echo that creating data structures that expose inequality or incorporate diversity is not as straightforward as it seems, given that “the organization of the fields and tag sets already prescribes what can be included and how these inclusions are put into signifying relations with each other” (10). Anthropologist Sally Engle Merry, in The Seductions of Quantification, expounds on this idea in the realm of Human Rights, proving that indicators can obscure as much or more than they reveal. Alliances between DHers as builders and analyzers of digital tools and platforms, and social scientists as suppliers of information on the effects of these on the ground in various cultural contexts, benefit both.
3) Emerging fields in the social sciences can learn a lot from established DH communities and scholarship.
Digital anthropology, digital sociology, cyberanthropology, digital ethnography, and virtual anthropology are all sub-disciplines emerging from the social sciences with foci and methods that often overlap with those of digital humanities. Studies of Second Life, World of Warcraft, or hacking; the ways diasporic communities use social media platforms to maintain relationships; or projects that focus on digitizing indigenous languages all have counterparts within digital humanities. Theoretically, there is much to compare: Richard Grusin’s work on mediation intersects with anthropologists leading the “ontological turn” like Philippe Descola and Eduardo Viveiros de Castro; Florian Cramer’s work on the ‘post-digital’ pairs interestingly with Shannon Lee Dawdy’s concept of “clockpunk” anthropology, influenced by thinkers both disciplines share like Walter Benjamin and Bruno Latour.
Though I am still relatively new to DH, one theme I find repeated often, and which represents much of the promise and the excitement of digital humanities for me, is the push for collaboration and the breaking down of disciplinary boundaries. Technologies like AI remind us that we all share the collective responsibility to build digital worlds that don’t simply reflect the restrictions and biases of our textual and social worlds.
Kitty O’Riordan is a doctoral student in cultural anthropology at the University of Connecticut. Her research interests include anthropology of media and public discourse, comparative science studies, and contemporary indigenous issues in New England. You can reach her at caitlin.o’firstname.lastname@example.org.
Welcome back! To pick up where my last post left off, I’d like to discuss some of the accessories and optional equipment you can use to augment your basic interview “kit,” as well as several editing programs that can be used for post-production work on your footage.
An external microphone might be a good investment if you’re interviewing multiple people at once and want to ensure you are recording clear, distinct audio for each person. Almost all of the microphones you’ll come across will fall into one of two categories: dynamic and condenser. The difference has to do with how each converts sound vibrations into electrical signals. In addition, condenser microphones require a power source, provided by batteries or whatever device they’re plugged into (this is known as phantom power). Within these two broad categories, there are a number of different patterns in which microphones record sound.
True to their name, omnidirectional mics pick up sound in every direction equally. This pattern is utilized by many lavalier (aka lapel) microphones, the “clip-on” types you’ve probably seen on TV and elsewhere. If you’re going to go with a lavalier, make sure whoever you’re working with is comfortable wearing one. It seems like a trivial concern, but it could be significant depending on the circumstances of your interview. One of my participants had never been interviewed before, and was visibly nervous before we started. In cases like that, the less invasive you are, the better.
In addition, an omnidirectional lavalier isn’t ideal for multiple-person interviews; in these circumstances, a cardioid microphone is a better choice. Named for its heart-shaped sound pattern, a cardioid will capture audio well from the front and sides, and, though usually a bit more expensive, cancels out ambient noise better than an omnidirectional mic. There are also shotgun microphones, named for the linear pattern by which they pick up sound. Like a shotgun, a shotgun mic must be pointed directly at its “target” in order to properly record it. This results in a “tighter” sound compared to a cardioid mic, but again isn’t ideal for multiple-person interviews, where you will have more than one source of audio.
There are plenty of options out there for camcorder tripods, ranging from the too-cheap to the ridiculously expensive. Unless you’re going to be conducting the interview outdoors or will be moving around with your subject while he/she talks, you don’t need anything heavy duty. Just make sure you get one that breaks down easily and is relatively compact.
Bags and cases are another instance where you don’t need to go too crazy. Overseas I was able to fit everything I needed (minus the camera tripod) in a padded laptop case. If you’re going to invest in cases, buy them for the camcorder and audio recorder, although in many instances one might be included when you buy these items.
In a perfect world, you’ll be able to have your camcorder plugged into a wall outlet for an indefinite power supply while conducting an interview. Since that won’t always be feasible, you should look into a spare battery. A tip: if you use a Canon device, purchase a decoded battery for your backup. These batteries are manufactured by a third party and don’t have the Canon microchip to track things like number of shots, battery charge, etc., but otherwise behave exactly the same as their name-brand counterparts—and cost significantly less. Make sure you read the reviews, however, as not all decoded batteries are created equal and some manufacturers are more reliable than others.
I’ve used Adobe Premiere Pro CC for most of my post-interview editing. While truthfully a bit more than
what I needed, it offers a lot in terms of manipulating audio tracks and syncing them up with video footage. Burning DVDs is easier as well (the software you need will be included in your Premiere subscription). Another upside to Adobe is the flexibility of their subscription plans. Individuals have the option of choosing which apps from the “Creative Cloud” they’d like to utilize or subscribing to the entire package, and can sign on for an entire year.
If you’re just looking to apply some simple edits like a title slide, transitions, and captions, you may be able to get away with using free video editing software like Windows Movie Maker. Here’s a short clip I put together to illustrate what can be done with that program:
If you simply need to import your audio files into a program where you can listen to them, transcribe, and do some basic editing, I would recommend Audacity. It’s free, relatively easy to use, and available on a number of operating systems.
Tech challenges notwithstanding, I found my entire project to be an incredibly worthwhile endeavor. Because the Second World War had until recently been somewhat of a taboo subject in post-war Germany, most of my participants had never discussed the topic at length with anyone. The fact that I was the first to hear, record, and preserve these stories made every ounce of effort worth it. I’m still not quite sure what I’ll do with the 5+ hours of footage I collected, but I could see using it as material for a series of small “episodes” featured on a personal website, a longer documentary, or a written collection of oral histories or narrative work.
I wish others similar success in their oral history endeavors, and I hope that these two posts will help simplify the process when purchasing the necessary equipment. Please feel free to contact me with more questions, or if you’d like to know more about anything I discussed here. Thanks again for reading!
Nick Hurley is a Research Services Assistant at UConn Archives & Special Collections, part-time Curator of the New England Air Museum, and an artillery officer in the Army National Guard. He received his B.A. and M.A. in History from the University of Connecticut, where his work focused on issues of state and society in twentieth century Europe. You can contact Nick at email@example.com and follow him on Twitter @hurley_nick.
While I, like many others, would argue that the one thing those who work in DH agree on is that they do not agree on what DH means, as I have encountered more and more digital tools and projects I have begun to think of DH work in a provocative way: DH work should be considered a form of narrative-making or storytelling. For fields such as digital storytelling or tools such as story mapping, this argument may not be that surprising. But what about other types of DH projects and tools? If we think of archival or curated sites, such as those created with Omeka, or book or network conglomerations, such as those made in Scalar, I propose that these forays are equally forms of narrative or story: we pick and choose what to include and exclude, we form paths and groupings and connections to guide or suggest methods of understanding; in other words, we give shape to a narrative. Here I will advance an initial iteration of this argument, which, I believe, ultimately provides another perspective on how DH is truly a part of the humanities.
DH and Narrative
If we take Hayden White’s description of narrative, in conversation with Barthes, in The Content of the Form: Narrative Discourse and Historical Representation, which argues that “[a]rising, as Barthes says, between our experience of the world and our efforts to describe that experience in language, narrative ‘ceaselessly substitutes meaning for the straightforward copy of the events recounted’” (1–2), as one of the basic definitions of this concept, we can see how this term could easily be used in reference to various methodologies and tools used in DH. More particularly, however, we must expand the definition by including not just language, but also image and sound. It is worth a look, for instance, at DH projects that create digital archives, such as The Digital Public Library of America or the Bibliothèque nationale de France’s Gallica, in which digital tools are used to create digitized versions of (an) actual archive(s). Or other such projects, like The Internet Archive, The Sonic Dictionary, or The Story of the Beautiful, in which a digital archive is created. Or we might think of digital editions of texts, such as the Folger Digital Texts or digitized resources such as The ARTFL Project. Or, in a slightly different direction, there are tools one can use to compare versions of texts, like Juxta or JuxtaCommons, or to annotate a text (collaboratively or not), like Annotation Studio. In these varying cases, the digital approach and tools used are the methods through which meaning is provided, whether that meaning be the coherency of an archive, the evolution or development of a text, or the preservation of narratives that themselves might otherwise be lost.
DH as Narrative
A DH approach is not, of course, limited to archival or editorial projects. In many cases, DH projects are clearly narrative in form. The case of digital storytelling is, perhaps, the most obvious such example. StoryCenter, previously known as the Center for Digital Storytelling, is a well-known entity whose basic elements of digital storytelling are often cited. And digital storytelling is also being used in a slightly different manner by teachers and students in the field of education in order to teach and learn about topics beyond those of telling personal stories, as can be seen on the University of Houston’s Educational Uses of Digital Storytelling site. Digital storytelling approaches have been expanded in other directions as well, for instance in
- tying stories to location, with the use of tools like StoryMapJS, Esri Story Maps, or Odyssey, in which specific events and places are linked,
- tying stories to timing, with the use of tools like TimeLineJS, TimeGlider, or Timetoast, in which specific events and times are linked,
- or tying stories to time and location, with the use of tools like Neatline or TimeMapper, in which specific events, places, and times are linked so that a user can follow a story geographically and/or chronologically.
In all of these cases, the digital approach is one that is explicitly used to shape a narrative or story. In other words, here DH is again a form of narrative or narrative-making.
Big data projects, such as those of the Stanford Literary Lab, or approaches such as that of Matthew L. Jockers in his Macroanalysis: Digital Methods and Literary History, may seem to present an exception to my argument; nonetheless, I suggest that even these projects and approaches create narratives or stories, in that they provide meaning to observations, calculations, or data that would otherwise be incomprehensible, given their size. How could they not?
This brief overview brings us to a final point to ponder: in their Digital_Humanities, Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp argue that the design of DH tools and projects is itself an essential aspect of the arguments they create:
The parsing of the cultural record in terms of questions of authenticity, origin, transmission, or production is one of the foundation stones of humanistic scholarship upon which all other interpretive work depends. But editing is also productive and generative, and it is the suite of rhetorical devices that make a work. Editing is the creative, imaginative activity of making, and as such, design can be also seen as a kind of editing: It is the means by which an argument takes shape and is given form. (18)
In other words, a narrative-making approach is literally embedded in form, in design. Like these authors, I wonder whether this perspective can be extended further. They write:
DESIGN EMERGES AS THE NEW FOUNDATION FOR THE CONCEPTUALIZATION AND PRODUCTION OF KNOWLEDGE.
DESIGN METHODS INFORM ALL ASPECTS OF HUMANISTIC PRACTICE, JUST AS RHETORIC ONCE SERVED AS BOTH ITS GLUE AND COMPOSITIONAL TECHNIQUE.
CONTEMPORARY ELOQUENCE, POWER, AND PERSUASION MERGE TRADITIONAL VERBAL AND ARGUMENTATIVE SKILLS WITH THE PRACTICE OF MULTIMEDIA LITERACY SHAPED BY AN UNDERSTANDING OF THE PRINCIPLE OF DESIGN. (117–118)
If we apply these points to the entire field of DH, they provide significant food for thought: if design is the foundation of DH, then isn’t the result of this design necessarily a narrative or a story? And might this not be one further aspect confirming that DH is indeed a part of the traditional humanities?
These questions invite others: are DH narratives and their design different or new or innovative in comparison to traditional narratives, and if so how? What can DH narratives tell us about ourselves and our world? To circle back to White and Barthes’ view of narrative, if we accept that DH is narrative, what new meanings can be distilled from the events DH recounts?
Elisabeth Herbst Buzay is a doctoral student in French and Francophone Studies and in the Medieval Studies Program at the University of Connecticut. Her research interests include medieval romances, contemporary fantasy, digital humanities, video games, the intersection of text and images, and translation. You can contact her at firstname.lastname@example.org.
In December I spent two days at the Folger’s Visualizing English Print seminar. It brought together people from the Folger, the University of Wisconsin, and the University of Strathclyde in Glasgow; about half of us were literature people, half computer science; a third of us were tenure-track faculty, a third grad students, and a third in other types of research positions (librarians, DH directors, etc.).
Over those two days, we worked our way through a set of custom data visualization tools that can be found here. Before we could visualize, we needed and were given data: a huge corpus of nearly 33,000 EEBO-TCP-derived simple text files that had been cleaned up and spit through a regularizing procedure so that it would be machine-readable (with loss, obviously, of lots of cool, irregular features—the grad students who wanted to do big data studies of prosody were bummed to learn that all contractions and elisions had been scrubbed out). They also gave us a few smaller, curated corpora of texts, two specifically of dramatic texts, two others of scientific texts. Anyone who wants a copy of this data, I’d be happy to hook you up.
From there, we did (or were shown) a lot of data visualization. Some of this was based on word-frequency counts, but the real novel thing was using a dictionary of sorts called DocuScope—basically a program that sorts 40 million different linguistic patterns into one of about 100 specific rhetorical/verbal categories (DocuScope was developed at CMU as a rhet/comp tool—turned out not to be good at teaching rhet/comp, but it is good at things like picking stocks). DocuScope might make a hash of some words or phrases (and you can revise or modify it; Michael Witmore tailored a DocuScope dictionary to early modern English), but it does so consistently and you’re counting on the law of averages to wash everything out.
After drinking the DocuScope Kool-Aid, we learned how to visualize the results of DocuScoped data analysis. Again, there were a few other cool features and possibilities, and I only comprehended the tip of the data-analysis iceberg, but basically this involved one of two things.
- Using something called the MetaData Builder, we derived DocuScope data for individual texts or groups of texts within a large corpus. So, for example, we could ask which of the approximately 500 plays in our subcorpus of dramatic texts is the angriest (i.e., has the greatest proportion of words/phrases DocuScope tags as relating to anger). Or, in an example we discussed at length, within the texts in our science subcorpus, who used more first-person references, Boyle or Hobbes (i.e., whose texts had the greater proportion of words/phrases DocuScope tags as first-person references)? The CS people were quite skilled at slicing, dicing, and graphing all this data in cool combinations. Here are some examples. A more polished essay using this kind of data analysis is here. So this is the distribution of DocuScope traits in texts in large and small corpora.
- We visualized the distribution of DocuScope tags within a single text using something called VEP Slim TV. Using Slim TV, you can track the rise and fall of each trait within a given text AND (and this is the key part) link directly to the text itself. So, for example, this is an image of Margaret Cavendish’s Blazing-World (1667).
Here, the blue line in the right frame charts lexical patterns that DocuScope tags as “Sense Objects.”
The red line charts lexical patterns that DocuScope tags as “Positive Standards.” You’ll see there is lots of blue (compared to red) at the beginning of Cavendish’s novel (when the Lady is interviewing various Bird-Men and Bear-Men about their scientific experiments), but one stretch in the novel where there is more red than blue (when the Lady is conversing with Immaterial Spirits about the traits of nobility). A really cool thing about Slim TV that could make it useful in the classroom: you can move through and link directly to the text itself (that horizontal yellow bar on the right shows which section of the text is currently being displayed).
So: 1) regularized EEBO-TCP texts are turned into spreadsheets using 2) the DocuScope dictionary; that data is then used to visualize either 3) individual texts as data points within a larger corpus of texts or 4) the distribution of DocuScope tags within a single text.
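For the curious, the heart of step 2, matching tokens against a category dictionary and computing each category’s share of a text, can be sketched in a few lines of Python. The dictionary below is an invented toy stand-in; the real DocuScope dictionary maps millions of multi-word patterns, not single words, onto its categories:

```python
from collections import Counter

# A toy stand-in for a DocuScope-style dictionary: each word maps to a
# rhetorical/verbal category. These entries are invented for illustration.
TOY_DICTIONARY = {
    "i": "FirstPerson",
    "my": "FirstPerson",
    "we": "FirstPerson",
    "angry": "Anger",
    "rage": "Anger",
    "the": "Article",
    "a": "Article",
}

def tag_proportions(text):
    """Return each category's share of all tagged tokens in a text."""
    tokens = text.lower().split()
    counts = Counter(
        TOY_DICTIONARY[t] for t in tokens if t in TOY_DICTIONARY
    )
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()} if total else {}

# Compare two tiny invented "texts" the way we compared Boyle and Hobbes:
boyle = "we observed the air pump and we recorded the result"
hobbes = "i say my argument and my geometry prove the point"
print(tag_proportions(boyle))
print(tag_proportions(hobbes))
```

Run over a whole corpus, proportions like these become the rows of the spreadsheet that the visualization tools then slice and graph.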
Again, the seminar leaders showed some nice examples of where this kind of research can lead and lots of cool looking graphs. Ultimately, some of the findings were, if not underwhelming, at least just whelming: we had fun discussing the finding that, relatively speaking, Shakespeare’s comedies tend to use “a” and his tragedies tend to use “the.” Do we want to live in a world where that is interesting? As we experimented with the tools they gave us, at times it felt a little like playing with a Magic 8 Ball: no matter what texts you fed it, DocuScope would give you lots of possible answers, but you just couldn’t tell if the original question was important or figure out if the answers had anything to do with the question. So formulating good research questions remains, to no one’s surprise, the real trick.
A few other key takeaways for me:
1) Learn to love csv files or, better, learn to love someone from the CS world who digs graphing software;
2) Curated data corpora might be the new graduate/honors thesis. Create a corpus (e.g., sermons, epics, travel narratives, court reports, romances), add some good metadata, and you’ve got yourself a lasting contribution to knowledge (again, the examples here are the drama corpora or the science corpora). A few weeks ago, Alan Liu told me that he requires his dissertation advisees to have at least one chapter that gets off the printed page and has some kind of digital component. A curated data collection, which could be spun through DocuScope or any other kind of textual analysis program, could be just that kind of thing.
3) For classroom use, the coolest thing was VEP Slim TV, which tracks the prominence of certain verbal/rhetorical features within a specific text and links directly to the text under consideration. It’s colorful and customizable, something students might find enjoyable.
All this stuff is publicly available as well. I’d be happy to demo what we did (or what I can do of what we did) to anyone who is interested.
Gregory Kneidel is Associate Professor of English at the Hartford Campus. He specializes in Renaissance poetry and prose, law and literature, and textual editing. He can be reached at email@example.com.
Last summer I had the pleasure of spending several weeks in southwestern Germany, visiting family and conducting interviews with five local residents who lived through the Second World War. In doing so, I fulfilled a goal I’d had in mind ever since the death of my great-grandmother in 2013. She had been one of a host of relatives and family friends that regaled me with stories from “back then” every time I’d come to visit, and her passing made me realize that I had to do more than just listen if I wanted to preserve these memories for future generations. This time around, I would sit down with each of the participants—the youngest of whom was in their late 70s—record our conversations, and eventually send each of them a copy of their edited interview on DVD. While I had a clear idea of why I was undertaking the project, and had done a lot of reading on oral history practices (including this fantastic online resource), I was less confident in just how I would go about carrying out the actual interviews. I had no experience with audiovisual equipment or video editing, and the seemingly endless number of tech-related questions I faced concerning things like cameras, microphones, and recording formats left my head spinning.
It took a significant amount of research and self-instruction before I was comfortable enough to purchase the gear I needed. These two posts are my attempt to share what I learned and hopefully save other oral history novices some of the headaches I endured putting together an interview “kit,” which, at a minimum, will consist of a camcorder (if you choose to film), an audio recorder, and a way to store your footage.
You’ll need to decide early on whether or not to record video as well as audio for your oral histories. While choosing audio only will greatly reduce the amount of equipment you’ll need to buy, the decision really depends on the nature of your project. If you do decide to film, steer clear of mini-DV and DVD camcorders, as these record on formats that are quickly becoming obsolete. Your best bet is a flash memory camcorder, which uses removable memory cards that can be inserted into your laptop for easy file transfer.
High definition (HD) camcorders are fast becoming the norm over their standard definition (SD) counterparts, and they’ve become affordable enough to make them a viable option for amateur filmmakers. In terms of capture quality, AVCHD usually means a higher quality image but a bigger file, while MP4 files are compressed to reduce size and are a bit more versatile in terms of how they can be manipulated and uploaded. Either way, you can’t go wrong, and will get a great looking picture. I’ve shot exclusively in AVCHD so far with my Canon camcorder and have had no issues.
The Audio Recorder
If you’re going to splurge on anything, it should be this. You may or may not elect to include video in your project, but you will always have audio, and the quality should be as clear as possible—especially if you plan on doing any kind of editing or transcribing. There are a few things to consider when choosing a recorder:
- Whichever model you go with should have at least one 3.5mm (1/8”) stereo line input, to give you the option of connecting an external microphone, and one 3.5mm (1/8”) output, so you can plug in a pair of headphones to monitor your audio.
- If you know you’re going to use an external microphone, having one or more XLR inputs is a plus. XLR refers to the type of connector used on some microphones; they are more robust than a 3.5mm jack and harder to accidentally unplug, making them an industry standard.
- Some recorders are meant for high-end professional use and have a plethora of features and buttons you’ll simply never use. Look for one with an easy to use interface.
- WAV and MP3 will be the most common options you’ll see format-wise, and many devices can record in either. WAV files are uncompressed, meaning they contain the entire recorded signal and are therefore much larger than MP3 recordings, which are easier to move and download but sometimes experience a slight loss in audio quality.
The three main types of memory cards that you’ll encounter are SD (Secure Digital, up to 2GB), SDHC (Secure Digital High Capacity, 4-32GB), and SDXC (Secure Digital eXtended Capacity, 64GB-2TB). Almost all cameras, computers, and other tech manufactured after 2010 should be compatible with all three types, and the cards themselves are fairly inexpensive. Useful as they are, memory cards shouldn’t be considered a means of long-term storage for your files. For one thing, you’ll run out of room fast; while things like compression and format will determine the exact amounts, for planning purposes you can expect to fit only about 5 hours of HD video on a 64GB SDXC card and 12-49 hours of WAV audio on a 16GB SDHC card. Even if you’ll only be doing one or two short interviews, you should still plan on migrating your files to a more secure storage media as soon as possible after you’re done recording. Cards can be broken or lost, and digital files, like their analog counterparts, will “decay” over time if simply left sitting.
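The audio side of that arithmetic is easy to check yourself: uncompressed WAV data accrues at sample rate times (bit depth divided by 8) times channels, in bytes per second. A short sketch like the following (the settings listed are common recorder options, not recommendations) shows why the hours-per-card figure varies so widely:

```python
def wav_hours_per_card(card_gb, sample_rate, bit_depth, channels):
    """Hours of uncompressed WAV audio that fit on a memory card.

    PCM data accrues at sample_rate * (bit_depth / 8) * channels bytes
    per second; card sizes are taken as decimal gigabytes (10**9 bytes),
    the way manufacturers label them.
    """
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return (card_gb * 10**9) / (bytes_per_second * 3600)

# A 16GB SDHC card at a few common recorder settings:
for rate, depth, ch in [(44100, 16, 1), (44100, 16, 2), (96000, 24, 2)]:
    print(f"{rate} Hz / {depth}-bit / {ch} ch: "
          f"about {wav_hours_per_card(16, rate, depth, ch):.0f} hours")
```

Mono CD-quality audio stretches a card the furthest; high-resolution stereo fills it several times faster, which is why planning figures are always given as a range.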
My raw footage is stored on two external hard drives. Any editing work is done using one of them, while the other is stored in a separate location as a backup. Edited interviews are likewise copied to both hard drives once they’re completed. (This practice of having multiple copies of the same material stored in separate locations is known as replication, and is an important aspect to any digital preservation plan; for more info, check out this great page from the Library of Congress.)
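If you want to confirm that the working copy and the backup really are identical, and to catch silent file decay early, comparing checksums is a simple way to do it. Here is a minimal sketch using Python’s standard hashlib module; you would run it periodically against each interview file on both drives:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute a file's SHA-256 checksum, reading in 1MB chunks so a
    multi-gigabyte interview file never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def copies_match(working_copy, backup):
    """True if the two copies are bit-for-bit identical."""
    return sha256_of(working_copy) == sha256_of(backup)
```

Storing the checksums alongside the files also lets you detect corruption later even if both copies have changed since the original recording.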
Again, these three pieces are the minimum you’ll need to properly record and store audio and (if you desire) video footage. Depending on the circumstances and scope of your project, however, you may want to utilize some optional gear and accessories, which I’ll bring up in Part 2. Until then, feel free to contact me with any questions, and thanks for reading!
Nick Hurley is a Research Services Assistant at UConn Archives & Special Collections, part-time Curator of the New England Air Museum, and an artillery officer in the Army National Guard. He received his B.A. and M.A. in History from the University of Connecticut, where his work focused on issues of state and society in 20th-century Europe. You can contact Nick at firstname.lastname@example.org and follow him on Twitter @hurley_nick.
There has been a lot of talk about how digital humanities scholarship has the potential to be democratizing, and the internet allows for connectivity that extends across cultural, geographical, and institutional boundaries. DH scholarship can directly reach the public outside of academia, and digital spaces allow for collaborative enterprises that have seldom been attempted by humanities scholars. But are all things digital inherently more accessible, or do we simply imagine them to be so? Are we designing for access or just assuming that access is no longer an issue?
Tara McPherson points out that exclusionary practices and ideologies (based on class, gender, race, sexuality, language, or ability) are often built into software in ways that are not always immediately visible to privileged users. This limits not only who has access to and ownership of DH work but also how diverse users can develop their work. One of these exclusionary ideologies is what disability theorist Tobin Siebers has termed the ideology of ability. This ideology assumes able-bodiedness as a “default” state. It either elides difference or else assumes that the disabled body must find a way to be “accommodated” rather than acknowledging any responsibility for designers to create spaces and environments that are inclusive to the diverse range of human ability.
Just as physical spaces are often inaccessible by design (e.g., stairs and doorways that do not permit wheelchair access or loud, brightly lit public spaces that can result in sensory overload for persons with autism), there are many ways in which digital space is constructed to include only the able-bodied, including text fields with small or difficult-to-read fonts, videos without captioning, podcasts without transcripts, images without descriptions that can be read by screen readers, web spaces that cannot be manipulated by users, and so-called “accessible” software that is built for the able-bodied and only retrofitted to “accommodate” diverse users when they complain.
Those engaging in digital humanities scholarship cannot hope to dismantle oppressive ideologies (something which is part of the core work of the humanities) while uncritically using technology that reifies these same oppressive structures. We must realize that part of digital humanities scholarship involves critical and intentional design. In order to truly encourage access, digital scholarship should incorporate principles of universal design.
How can we do this? While it’s true that no design can be said to be truly universal, the Web Accessibility Initiative offers important guidelines for more inclusive digital publishing, and Yergeau et al. lay out a theoretical groundwork for accessibility in digital and multimedia work. The National Center on Universal Design for Learning, CAST, and Jay Dolmage address concerns specific to integrating digital media and technology for access in the classroom, and Composing Access advises on how to prepare for conferences. Here are a few tips for more accessible design:
- Think critically about the implicit ideologies coded into the platforms you use, and consider the affordances of your technology before using it. As Johanna Drucker and Patrik Svensson point out, middleware incorporates various rhetorical limitations—do these constraints limit access?
- Aim for commensurability across modes. While multimodality can be a great way for users to interact with your text in different ways and with different senses, if information is not presented redundantly through different modes, it increases the chance that users may not be able to access your text. For instance, if a video delivers information both visually and aurally but doesn’t include captioning and description, then it becomes inaccessible to both blind and deaf users. And of course, delivering information through more than one mode helps all users. Captions, for example, allow hearing users to access the text in a noisy place, on an airplane with someone sleeping in the next seat, or on a device without audio capability.
- Digital projects are more accessible when they are easily manipulable by users. For example, text that cannot be copied/pasted, as is the case in an image or some publishing platforms, might not be easily read with assistive technologies such as screen readers or braille pads.
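Some of these barriers can even be caught automatically, such as images published without alternative text. Here is a minimal sketch using Python’s built-in html.parser; a real accessibility audit would check far more (captions, contrast, keyboard navigation), but it illustrates the idea:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack an alt attribute, one common
    barrier for screen reader users."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "(no src)"))

def images_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing

# A hypothetical page fragment: the first image has no alt attribute.
page = '<p>Results</p><img src="chart.png"><img src="map.png" alt="Map of sites">'
print(images_missing_alt(page))  # ['chart.png']
```

Note that an empty alt="" is left alone here, since that is the correct markup for purely decorative images.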
Though digital media can present accessibility issues, when used critically and conscientiously, multimodal affordances open up the possibility of creating content that is more accessible to all users, regardless of level of ability.
Gabe Morrison is a first-year doctoral student in Rhetoric and Composition at the University of Connecticut. His research interests include multimodal writing and graduate student writing instruction. You can contact him at email@example.com.