
UConn Celebrates Open Access Week, by Jean Nelson – cross post with Babbidge Library

Here at the UConn Library, one of the tenets of our Purposeful Path Forward is to help drive UConn’s ‘Scholarly Engine’, or the processes of research and knowledge creation. One of the core activities in our approach is educating our community on the importance of Open Access. Open Access (OA), as defined by SPARC (the Scholarly Publishing and Academic Resources Coalition), refers to the “free, immediate, online availability of research articles, coupled with the rights to use these articles fully in the digital environment.”

Why Open? Open changes the way we discover knowledge. It can turn ideas into reality, break down barriers to learning, and lay the groundwork for breakthrough research.

This month we are embracing the challenge provided by the 2017 International Open Access Week by answering the question, “Open in order to…” through a series of programs and initiatives.

OpenCommons@UConn
The UConn Library is proud to announce the re-launch of the University’s institutional repository, OpenCommons@UConn, a showcase of the scholarship and creative works of the UConn community. The renaming of this service emphasizes the Library’s role in providing the tools to enable independent learning, research, and scholarship. By making the University’s diverse and unique resources openly accessible worldwide, we hope to inspire groundbreaking research and advance learning, teaching, and entrepreneurial thinking.
Open in order to…provide access to UConn’s scholarship

 

Open Educational Resources @ UConn Exhibit: published teaching and learning materials under an open license
October 18-31, 2017

HBL, Plaza Level
Open Access and Open Educational Resources (OER) are related but distinct, with the commonality of providing high quality learning materials at no cost. In an academic setting, the lines of Open Access publishing for research materials and Open Educational Resources for teaching and learning overlap in significant ways. UConn’s OER Initiative began only 2 years ago and to date has saved our undergraduates over $500,000 in textbook costs. View some OER textbooks and learn more about the faculty who are working towards making UConn more affordable.
Open in order to…save students money

 

Is this open access journal any good?
Thursday, October 19, 9:30-11:00am
Homer Babbidge Library, Collaborative Learning Classroom
Faculty often struggle to identify good-quality open access journals in which to publish or to serve as an editor or reviewer. Many new open access journals exist now – some are good quality, some are exploitative, and some are in-between. This workshop will include a brief discussion of faculty concerns about identifying journals. The majority of the session will be devoted to identifying and demonstrating web-based indicator tools that can help faculty appraise a journal’s quality. Please register at http://cetl.uconn.edu/seminars
Open in order to…find quality teaching materials

 

Paywall: A Conversation about the Business of Scholarship with Filmmaker Jason Schmitt
Wednesday, October 25, 2:30-4:00pm

Konover Auditorium, Thomas J. Dodd Research Center
Help us celebrate Open Access Week by joining award-winning filmmaker Jason Schmitt as we screen and discuss footage from his in-progress documentary Paywall: The Business of Scholarship. Schmitt will be accompanied in the discussion by a panel of UConn faculty who will share their views on making the results of academic research freely accessible online.  Co-sponsored by UConn Humanities Institute
Open in order to…talk about the business of scholarship
Flyer in pdf
Release in pdf

Open Data In Action
Thursday, October 26, 11:00am-2:00pm

Hartford Public Library Atrium
Open Data In Action brings together a wide range of researchers to showcase how their work has benefited from openly and freely accessible data. Presenters from the public, private, and academic sectors will discuss how open data, ranging from historical documents to statistical analyses, is being used to create projects, change policies, and conduct research, and will highlight the importance of open data in shaping the world around us.

Opening Remarks:
Tyler Kleykamp, Chief Data Officer, State of Connecticut

Presenters:

  • Steve Batt, UConn Hartford/CT State Data Center, Tableau Public and CT Census Data
  • Jason Cory Brunson, UConn Health Center, Modeling Incidence and Severity of Disease using Administrative Healthcare Data
  • Stephen Busemeyer, The Hartford Courant, Journalism and the Freedom of Information
  • Brett Flodine, GIS Project Leader, City of Hartford Open Data
  • Rachel Leventhal-Weiner, CT Data Collaborative, CT Data Academy
  • Anna Lindemann/Graham Stinnett, UConn/DM&D, & Archives, Teaching Motion Graphics with Human Rights Archives
  • Thomas Long, UConn Nursing, Dolan Collection Nursing History Blog
  • Tina Panik, Avon Public Library, World War II Newsletters from the CTDA
  • Jennifer Snow, UConn Library, Puerto Rico Citizenship Archives: Government Documents as Open Data
  • Rebecca Sterns, Korey Stringer Institute, Athlete Sudden Death Registry
  • Andrew Wolf, UConn Digital Media & Design, Omeka Everywhere

Co-sponsored by the Hartford Public Library
Open in order to…share data
Flyer in pdf

Introduction to Data Visualization using Tableau Public
Monday, October 30, 3:00-4:15pm
Homer Babbidge Library, Level 2 Electronic Classroom
Tableau Public is a free version of Tableau’s business intelligence / visual analytics software, which allows anyone to explore and present quantitative information in compelling, interactive visualizations. In this hands-on session you will work with different prepared datasets to create online interactive bar graphs, scatterplots, thematic maps, and much more, which can be linked to or embedded in blogs or on websites. Please register at http://workshops.lib.uconn.edu/
Open in order to…visualize research

Digital Scholarship: Partnering for the Future
Joan K. Lippincott, Associate Executive Director, Coalition for Networked Information

Tuesday, November 7, 2:00-3:30pm
Homer Babbidge Library, Heritage Room
Researchers in many disciplines are finding that they can ask new kinds of research questions as a result of the rapid growth in the availability of digital content and tools. In addition, the outputs of their research can include many more types of products such as data visualizations, geo-referenced representations, text augmented with images and audio, exhibits on the web, and virtual reality environments. Developing these projects takes a team of people who have a variety of skill sets. These individuals may come from academic departments, the library, the information technology unit, and other specialties. Graduate and undergraduate students are also often part of teams working on digital scholarship projects. In this presentation, Lippincott will provide an update on developments in digital scholarship and will describe existing programs and projects, discuss the importance of physical space, and encourage the development of a campus digital scholarship community.  Co-sponsored by UConn Humanities Institute
Open in order to…develop digital scholarship

The original blog post is available here.

Integrating Digital History into the Classroom, by Matthew Ferraro

As an aspiring social studies teacher, I recognize the importance of integrating digital history into the classroom. Students have grown up in the digital age, and, as such, consume a majority of their information online. Gone are the days of searching through a library for primary and secondary sources to support a historical argument or reading a newspaper to discover that day’s events. All this information, and more, can now be found online. This vast availability of information has greatly expanded the possibilities for studying history, which presents us, as educators, with a unique opportunity to integrate digital history into our classrooms. By doing so, we will enable students to utilize digital media to advance historical analysis and understanding. To do this, however, we must first provide students with models of digital history. What follows are several examples of digital history projects that could be used in classrooms (and beyond) to equip students with the skills required to contribute to our knowledge of world contexts in a digital way.

1) History Matters

HistoryMatters is a digital history project that resulted from collaboration between George Mason University and the City University of New York. The project began in 1998 with the intent of providing teachers and students with digital resources that could improve their instruction and understanding of United States history. It was funded by the Kellogg Foundation, the Rockefeller Foundation, and the National Endowment for the Humanities. There are over a thousand primary sources, ranging from photographs to text documents to audio files, on a variety of topics, all of which can be used with students to help them construct a narrative of the past. What’s unique about this project is that it takes full advantage of the digital space by using audio files both from everyday Americans, who help participants co-construct the history of the United States, and from scholars discussing how to teach major aspects of US history. In addition, since there are over a thousand primary sources available, a “full search” feature was developed to assist in locating resources by time, topic, or keyword. With the large number of primary sources available, this digital history project would be an excellent resource for students to use for research papers. Students could use this project to develop a research question based on an area of inquiry, examine primary sources related to their topic, arrive at conclusions based on their research, and publish their findings in order to advance our understanding of history. Doing so would expose them to conducting research digitally while also developing their ability to think critically, evaluate evidence, and articulate their thoughts clearly.

2) Mapping Inequality  

Mapping Inequality is a digital history project that was created through the collaboration of three research teams from the University of Maryland, the University of Richmond, Virginia Tech, and Johns Hopkins University. This project showcases 150 maps that were drafted by the Home Owners’ Loan Corporation (HOLC) from 1935 to 1940. These maps were color-coded to show the creditworthiness of different neighborhoods in each town. Mortgage lenders then used these maps to determine whether someone would qualify for a loan. This project was developed to show that, when these maps are compared to the layout of neighborhoods in the United States today, it becomes apparent that many of the racial and class inequities that exist are a direct result of the HOLC’s maps. In fact, many of these maps were produced expressly to codify racial segregation into real estate practice. This project could be used with students for multiple purposes. For example, when teaching about the New Deal, students could use the site to determine how the HOLC reflected a problematic legacy of the New Deal. Students could also be asked to cite specific examples from the map of how the HOLC’s practices led to the racial and class segregation that is seen today. For example, if they examined the areas around Hartford, Connecticut, they would observe that the HOLC deemed that West Hartford had the “best,” most creditworthy neighborhoods, whereas Hartford had the “hazardous,” least creditworthy neighborhoods. If this map is compared to today’s, it becomes evident that the HOLC’s maps led to racial and class segregation, with the neighborhoods of West Hartford and Hartford remaining largely unchanged. In addition, showcasing a digital history project of this nature in class would familiarize students with what digital history can look like. Through this project, teachers could expose students to some of the digital tools and resources—such as mapping software and online databases—that would be required to design it. This would create incentives to work collaboratively with other scholars—especially those who could provide the digital resources for projects like this.

3) The Valley of the Shadow: Two Communities in the American Civil War

The Valley of the Shadow is a digital history project constructed by the Virginia Center for Digital History at the University of Virginia. This project narrates the countless stories of two different communities from the American Civil War—one from the North and one from the South—through letters, newspapers, diaries, speeches, and other primary sources. The project is organized through a series of image maps that direct the viewer to various search engines. This project functions similarly to the HistoryMatters project—they are both databases of primary sources that employ search engines to enable the viewer to locate information—but there is a key difference between the two worth mentioning: while HistoryMatters contains a large amount of primary source information on a wide variety of topics across United States history, this project only provides information that is relevant to a specific time and topic. The narrow focus is relevant to the work historians do on a daily basis, as most of a history scholar’s research explores questions in a specific niche of the past. As such, teachers could use this project to show students how they might approach a digital history research project. This would help transition students away from the traditional way of communicating their thoughts on history through a research paper and, instead, provide them with the opportunity to disseminate their ideas digitally. For example, rather than writing a paper about significant World War II battles, students could create an online timeline that lays out those events chronologically while also providing descriptions of the significance of each battle. Exposing students to and allowing them to engage in this sort of work would enable them to practice the craft of a historian in a very familiar context and equip them with the skills to pose their own questions about a certain niche of the world.

Matthew Ferraro is a master’s student in the Neag School of Education’s Integrated Bachelor’s/Master’s (IB/M) Program. He is currently interning at Conard High School, and his research interests include how best to integrate human rights education into social studies classrooms. He is studying to become a social studies teacher at the high school level. He can be reached at matthew.ferraro@uconn.edu.

Digital Humanities Is for Humans, Not Just Humanists: Social Science and DH, by Kitty O’Riordan

In an article published online last month by The Guardian—“AI programs exhibit racial and gender biases, research reveals”—the computer scientists behind the technology were careful to emphasize that this reflects not prejudice on the part of artificial intelligence, but AI’s learning of our own prejudices as encoded within language.

Word embedding, “already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.”

This tool’s ability to reproduce complex and nuanced word associations is probably not surprising to anyone familiar with digital humanities—and the fact that it returned associations that match pleasant words with whiteness and unpleasant ones with blackness, or that associate “woman” with the arts and interpretative disciplines and “man” with the STEM fields shouldn’t be surprising to anyone who has been paying attention. The distressing prospect that AI and other digital programs and platforms will only reinforce existing bias and inequality has certainly garnered the attention of scholars in media studies and DH, but one could argue that it has received equal attention in the social sciences.
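To make the quoted description a little more concrete: a word vector is just a list of numbers, and “association” is measured by how closely two such lists point in the same direction (cosine similarity). The sketch below uses tiny, invented three-dimensional vectors purely for illustration; a real embedding such as word2vec or GloVe is trained on billions of words and has hundreds of dimensions.

```python
# Toy illustration of word-vector associations. The 3-dimensional vectors
# below are invented for demonstration only; real embeddings are learned
# from large text corpora and have hundreds of dimensions.
import numpy as np

vectors = {
    "woman":   np.array([0.8, 0.1, 0.3]),
    "man":     np.array([0.1, 0.9, 0.3]),
    "arts":    np.array([0.7, 0.2, 0.4]),
    "science": np.array([0.2, 0.8, 0.5]),
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for target in ("woman", "man"):
    for attribute in ("arts", "science"):
        print(f"{target} ~ {attribute}: {cosine(vectors[target], vectors[attribute]):.2f}")
```

Bias studies of the kind reported in The Guardian compare many such similarity scores across groups of words; the arithmetic, however, is no more exotic than this.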

As a graduate student in cultural anthropology drawn to DH, I sometimes find myself considering what exactly demarcates digital humanities from social science when apprehending these kinds of topics; somehow, with the addition of ‘digital’, the lines seem to have blurred. Both ultimately represent an investigation of how humans create meaning through or in relation to the digital universe, and the diverse methodologies at the disposal of each are increasingly overlapping. Below are just a few reasons, from my limited experience, as to why social scientists can benefit from involvement with digital humanities—and vice-versa.

1) Tools developed in DH can serve as methodologies in the social sciences.

Text mining, a process similar to the phenomenon described above that derives patterns and trends from textual sources, is particularly suited for social science analysis of primary sources. Programs like Voyant and Textalyser are free and easily available on the web, no downloads or installations required, and can pull data from PDFs, URLs, Microsoft Word documents, plain text, and more. Interview transcripts can also be analyzed using these programs, and the graphs and word clouds they create provide a unique way to “see” an argument, a theme, a bias, and so on.
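Voyant and Textalyser do this work behind a point-and-click interface, but the core operation, counting word frequencies and filtering out common function words, is simple enough to sketch in a few lines of Python. The transcript filename and stopword list below are placeholders, not anything tied to a particular tool.

```python
# A bare-bones version of the word-frequency counting that text-mining
# tools perform on uploaded documents. "interview.txt" is a hypothetical
# transcript file; adjust the path and stopword list to your material.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "i"}

with open("interview.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(w for w in words if w not in STOPWORDS)
for word, n in counts.most_common(20):
    print(f"{word:15s} {n}")
```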

Platforms like Omeka and Scalar provide an opportunity not only to display ethnographic information for visual anthropologists, but also to give powerful form to arguments in a way that textual forms cannot (see, for example, Performing Archive: Curtis + “the vanishing race”, which turns Edward S. Curtis’ famous photos of Native Americans on their heads by visualizing the categories instead of the categorized).

2) Both fields are tackling the same issues.

Miriam Posner writes that she “would like us to start understanding markers like gender and race not as givens but as constructions…I want us to stop acting as though the data models for identity are containers to be filled in order to produce meaning and recognize instead that these structures themselves constitute data.” Drucker and Svensson echo that creating data structures that expose inequality or incorporate diversity is not as straightforward as it seems, given that “the organization of the fields and tag sets already prescribes what can be included and how these inclusions are put into signifying relations with each other” (10). Anthropologist Sally Engle Merry, in The Seductions of Quantification, expounds on this idea in the realm of human rights, showing that indicators can obscure as much as, or more than, they reveal. Alliances between DHers, as builders and analyzers of digital tools and platforms, and social scientists, as suppliers of information on how these tools play out on the ground in various cultural contexts, benefit both fields.

3) Emerging fields in the social sciences can learn a lot from established DH communities and scholarship.


Digital anthropology, digital sociology, cyberanthropology, digital ethnography, and virtual anthropology are all sub-disciplines emerging from the social sciences with foci and methods that often overlap with those of digital humanities. Studies of Second Life, World of Warcraft, or hacking; the ways diasporic communities use social media platforms to maintain relationships; or projects that focus on digitizing indigenous languages all have counterparts within digital humanities. Theoretically, there is much to compare: Richard Grusin’s work on mediation intersects with anthropologists leading the “ontological turn” like Philippe Descola and Eduardo Viveiros de Castro; Florian Cramer’s work on the ‘post-digital’ pairs interestingly with Shannon Lee Dawdy’s concept of “clockpunk” anthropology, influenced by thinkers both disciplines share like Walter Benjamin and Bruno Latour.

Though I am still relatively new to DH, one theme I find repeated often, and which represents much of the promise and the excitement of digital humanities for me, is the push for collaboration and the breaking down of disciplinary boundaries. Technologies like AI remind us that we all share the collective responsibility to build digital worlds that don’t simply reflect the restrictions and biases of our textual and social worlds.

 

Kitty O’Riordan is a doctoral student in cultural anthropology at the University of Connecticut. Her research interests include anthropology of media and public discourse, comparative science studies, and contemporary indigenous issues in New England. You can reach her at caitlin.o’riordan@uconn.edu.

Oral Histories and the Tech Needed to Produce Them, Part 2: Microphones, Accessories, and Editing Software, by Nick Hurley

Welcome back! To pick up where my last post left off, I’d like to discuss some of the accessories and optional equipment you can use to augment your basic interview “kit,” as well as several editing programs that can be used for post-production work on your footage.

The Microphone

An external microphone might be a good investment if you’re interviewing multiple people at once and want to ensure you are recording clear, distinct audio for each person. Almost all of the microphones you’ll come across will fall into one of two categories: dynamic and condenser. The difference has to do with how each converts sound vibrations into electrical signals. In addition, condenser microphones require a power source, provided by batteries or whatever device they’re plugged into (this is known as phantom power). Within these two broad categories, there are a number of different patterns in which microphones record sound.

True to their name, omnidirectional mics pick up sound in every direction equally. This pattern is utilized by many lavalier (aka lapel) microphones, the “clip-on” types you’ve probably seen on TV and elsewhere. If you’re going to go with a lavalier, make sure whoever you’re working with is comfortable wearing one. It seems like a trivial concern, but it could be significant depending on the circumstances of your interview. One of my participants had never been interviewed before, and was visibly nervous before we started. In cases like that, the less invasive you are, the better.
In addition, an omnidirectional lavalier isn’t ideal for multiple-person interviews; in these circumstances, a cardioid microphone is a better choice. Named for its heart-shaped sound pattern, a cardioid will capture audio well from the front and sides, and, though usually a bit more expensive, will cancel out ambient noise better than an omnidirectional mic. There are also shotgun microphones, named for the linear pattern by which they pick up sound. Like a shotgun, they must be pointed directly at their “target” in order to properly record it. This results in a “tighter” sound when compared to a cardioid mic, but again isn’t ideal for multiple-person interviews, where you will have more than one source of audio.

Accessories

There are plenty of options out there for camcorder tripods, ranging from the too-cheap to the ridiculously expensive. Unless you’re going to be conducting the interview outdoors or will be moving around with your subject while he/she talks, you don’t need anything heavy duty. Just make sure you get one that breaks down easily and is relatively compact.

Bags and cases are another instance where you don’t need to go too crazy. Overseas I was able to fit everything I needed (minus the camera tripod) in a padded laptop case. If you’re going to invest in cases, buy them for the camcorder and audio recorder, although in many instances one might be included when you buy these items.

In a perfect world, you’ll be able to have your camcorder plugged into a wall outlet for an indefinite power supply while conducting an interview. Since that won’t always be feasible, you should look into a spare battery. A tip: if you use a Canon device, purchase a decoded battery for your backup. These batteries are manufactured by a third party and don’t have the Canon microchip to track things like number of shots, battery charge, etc., but otherwise behave exactly the same as their name-brand counterparts—and cost significantly less. Make sure you read the reviews, however, as not all decoded batteries are created equal and some manufacturers are more reliable than others.

Editing Software

I’ve used Adobe Premiere Pro CC for most of my post-interview editing. While truthfully a bit more than what I needed, it offers a lot in terms of manipulating audio tracks and syncing them up with video footage. Burning DVDs is easier as well (the software you need will be included in your Premiere subscription). Another upside to Adobe is the flexibility of their subscription plans. Individuals have the option of choosing which apps from the “Creative Cloud” they’d like to utilize or subscribing to the entire package, and can sign on for an entire year.

If you’re just looking to apply some simple edits like a title slide, transitions, and captions, you may be able to get away with using free video editing software like Windows Movie Maker. Here’s a short clip I put together to illustrate what can be done with that program:

If you simply need to import your audio files into a program where you can listen to them, transcribe, and do some basic editing, I would recommend Audacity. It’s free, relatively easy to use, and available on a number of operating systems.

Future Plans

Tech challenges notwithstanding, I found my entire project to be an incredibly worthwhile endeavor. Because the Second World War had until recently been somewhat of a taboo subject in post-war Germany, most of my participants had never discussed the topic at length with anyone. The fact that I was the first to hear, record, and preserve these stories made every ounce of effort worth it. I’m still not quite sure what I’ll do with the 5+ hours of footage I collected, but I could see using it as material for a series of small “episodes” featured on a personal website, a longer documentary, or a written collection of oral histories or narrative work.

I wish others similar success in their oral history endeavors, and I hope that these two posts will help simplify the process when purchasing the necessary equipment. Please feel free to contact me with more questions, or if you’d like to know more about anything I discussed here. Thanks again for reading!

Nick Hurley is a Research Services Assistant at UConn Archives & Special Collections, part-time Curator of the New England Air Museum, and an artillery officer in the Army National Guard. He received his B.A. and M.A. in History from the University of Connecticut, where his work focused on issues of state and society in twentieth century Europe. You can contact Nick at nicholas.hurley@uconn.edu and follow him on Twitter @hurley_nick.

DH and Narrative, DH as Narrative, DH-Narrative, by Elisabeth Buzay

While both I—and many others—would argue that those who work in DH agree that they do not agree on what DH means, as I have encountered more and more digital tools and projects, I have begun to think of DH work in a provocative way: DH work should be considered a form of narrative-making or storytelling. For fields such as digital storytelling or tools such as story mapping, this argument may not be that surprising. But what about other types of DH projects and tools? If we think of archival or curated sites, such as those created with Omeka, or book or network conglomerations, such as those made in Scalar, I propose that these forays are equally forms of narrative or story: we pick and choose what to include and exclude, we form paths and groupings and connections to guide or suggest methods of understanding; in other words, we give shape to a narrative. Here I will advance an initial iteration of this argument, which, I believe, ultimately provides another perspective on how DH is truly a part of the humanities.

 

DH and Narrative

If we take Hayden White’s description of narrative, in conversation with Barthes, in The Content of the Form: Narrative Discourse and Historical Representation, which argues that “[a]rising, as Barthes says, between our experience of the world and our efforts to describe that experience in language, narrative ‘ceaselessly substitutes meaning for the straightforward copy of the events recounted’” (1–2), as one of the basic definitions of this concept, we can see how this term could easily be used in reference to various methodologies and tools used in DH. More particularly, however, we must expand the definition by including not just language, but also image and sound. It is worth a look, for instance, at DH projects that create digital archives, such as The Digital Public Library of America or the Bibliothèque nationale de France’s Gallica, in which digital tools are used to create digitized versions of (an) actual archive(s). Or other such projects, like The Internet Archive, The Sonic Dictionary, or The Story of the Beautiful, in which a digital archive is created. Or we might think of digital editions of texts, such as the Folger Digital Texts or digitized resources such as The ARTFL Project. Or, in a slightly different direction, there are tools one can use to compare versions of texts, like Juxta or JuxtaCommons, or to annotate a text (collaboratively or not), like Annotation Studio. In these varying cases, the digital approach and tools used are the methods through which meaning is provided, whether that meaning be the coherency of an archive, the evolution or development of a text, or the preservation of narratives that themselves might otherwise be lost.

 

DH as Narrative

A DH approach is not limited, of course, to archival or editorial projects, however. In many cases, DH projects are clearly narrative in form. The case of digital storytelling is, perhaps, the most obvious such example. StoryCenter, previously known as the Center for Digital Storytelling, is a well-known entity whose basic elements of digital storytelling are often cited. And digital storytelling is also being used in a slightly different manner by teachers and students in the field of education in order to teach and learn about topics beyond those of telling personal stories, as can be seen on the University of Houston’s Educational Uses of Digital Storytelling site. Digital storytelling approaches have been expanded in other directions as well, for instance in

  • tying stories to location, with the use of tools like StoryMapJS, Esri Story Maps, or Odyssey, in which specific events and places are linked,
  • tying stories to timing, with the use of tools like TimeLineJS, TimeGlider, or Timetoast, in which specific events and times are linked,
  • or tying stories to time and location, with the use of tools like Neatline or TimeMapper, in which specific events, places, and times are linked so that a user can follow a story both geographically and/or chronologically.

In all of these cases, the digital approach is one that is explicitly used to shape a narrative or story. In other words, here DH is again a form of narrative or narrative-making.

 

DH-Narrative

Big data projects, such as those of the Stanford Literary Lab or approaches, such as that of Matthew L. Jockers in his Macroanalysis: Digital Methods and Literary History, may present an exception to my argument in comparison to other DH projects and approaches mentioned thus far; nonetheless, I suggest that even projects or approaches such as these also create narratives or stories, in that they provide meaning to observations, calculations, or data that otherwise would not be comprehensible, given their size. How could they not?

This brief overview brings us to a final point to ponder: in their Digital_Humanities, Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp argue that the design of DH tools and projects are themselves essential aspects of the arguments they create:

The parsing of the cultural record in terms of questions of authenticity, origin, transmission, or production is one of the foundation stones of humanistic scholarship upon which all other interpretive work depends. But editing is also productive and generative, and it is the suite of rhetorical devices that make a work. Editing is the creative, imaginative activity of making, and as such, design can be also seen as a kind of editing: It is the means by which an argument takes shape and is given form. (18)

In other words, a narrative-making approach is literally embedded in form, in design. Like these authors, I wonder whether this perspective cannot be extended. They write:

DESIGN EMERGES AS THE NEW FOUNDATION FOR THE CONCEPTUALIZATION AND PRODUCTION OF KNOWLEDGE.

DESIGN METHODS INFORM ALL ASPECTS OF HUMANISTIC PRACTICE, JUST AS RHETORIC ONCE SERVED AS BOTH ITS GLUE AND COMPOSITIONAL TECHNIQUE.

CONTEMPORARY ELOQUENCE, POWER, AND PERSUASION MERGE TRADITIONAL VERBAL AND ARGUMENTATIVE SKILLS WITH THE PRACTICE OF MULTIMEDIA LITERACY SHAPED BY AN UNDERSTANDING OF THE PRINCIPLE OF DESIGN. (117–118)

If we apply these points to the entire field of DH, this provides insight into significant food for thought: if
design is the foundation of DH, then isn’t the result of this design necessarily a narrative or a story? And might not this be one further aspect that confirms that DH is indeed a part of the traditional humanities?

These questions invite others: are DH narratives and their design different or new or innovative in comparison to traditional narratives, and if so how? What can DH narratives tell us about ourselves and our world? To circle back to White and Barthes’ view of narrative, if we accept that DH is narrative, what new meanings can be distilled from the events DH recounts?

 

Elisabeth Herbst Buzay is a doctoral student in French and Francophone Studies and in the Medieval Studies Program at the University of Connecticut. Her research interests include medieval romances, contemporary fantasy, digital humanities, video games, the intersection of text and images, and translation. You can contact her at elisabeth.buzay@uconn.edu.

Visualizing English Print at the Folger, by Gregory Kneidel (cross-post with Ruff Draughts)

In December I spent two days at the Folger’s Visualizing English Print seminar. It brought together people from the Folger, the University of Wisconsin, and the University of Strathclyde in Glasgow; about half of us were literature people, half computer science; a third of us were tenure-track faculty, a third grad students, and a third in other types of research positions (librarians, DH directors, etc.).

Over those two days, we worked our way through a set of custom data visualization tools that can be found here. Before we could visualize, we needed and were given data: a huge corpus of nearly 33,000 EEBO-TCP-derived simple text files that had been cleaned up and run through a regularizing procedure so that they would be machine-readable (with loss, obviously, of lots of cool, irregular features—the grad students who wanted to do big data studies of prosody were bummed to learn that all contractions and elisions had been scrubbed out). They also gave us a few smaller, curated corpora of texts, two specifically of dramatic texts, two others of scientific texts. Anyone who wants a copy of this data, I’d be happy to hook you up.

From there, we did (or were shown) a lot of data visualization. Some of this was based on word-frequency counts, but the real novel thing was using a dictionary of sorts called DocuScope—basically a program that sorts 40 million different linguistic patterns into one of about 100 specific rhetorical/verbal categories (DocuScope was developed at CMU as a rhet/comp tool—turned out not to be good at teaching rhet/comp, but it is good at things like picking stocks). DocuScope might make a hash of some words or phrases (and you can revise or modify it; Michael Witmore tailored a DocuScope dictionary to early modern English), but it does so consistently and you’re counting on the law of averages to wash everything out.
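DocuScope itself is a large, hand-built dictionary, but the bookkeeping it enables (what share of each text falls into which rhetorical category) can be sketched with a toy stand-in. The two categories and word lists below are invented for illustration and are far cruder than DocuScope’s pattern matching, which works on multi-word strings rather than single words.

```python
# A toy version of dictionary-based tagging. The categories and word lists
# are invented stand-ins for DocuScope's ~100 rhetorical categories.
from collections import Counter

CATEGORIES = {
    "anger":        {"rage", "fury", "wrath", "angry", "vengeance"},
    "first_person": {"i", "me", "my", "mine", "myself"},
}

def category_proportions(text):
    """Return the share of words in the text that fall into each category."""
    words = text.lower().split()
    tally = Counter()
    for w in words:
        for cat, vocab in CATEGORIES.items():
            if w in vocab:
                tally[cat] += 1
    return {cat: tally[cat] / len(words) for cat in CATEGORIES}

# Which of several (hypothetical) plays is the "angriest"?
plays = {
    "Play A": "my wrath and fury know no bounds upon this cursed day",
    "Play B": "i think therefore i am and i wonder what i shall become",
}
for title, text in plays.items():
    print(title, category_proportions(text))
```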

After drinking the DocuScope Kool-Aid, we learned how to visualize the results of DocuScoped data analysis. Again, there were a few other cool features and possibilities, and I only comprehended the tip of the data-analysis iceberg, but basically this involved one of two things.

  • Using something called the MetaData Builder, we derived DocuScope data for individual texts or groups of texts within a large corpus of texts. So, for example, we could find out which of the approximately 500 plays in our subcorpus of dramatic texts is the angriest (i.e., has the greatest proportion of words/phrases DocuScope tags as relating to anger). Or, in an example we discussed at length, within the texts in our science subcorpus, who used more first-person references, Boyle or Hobbes (i.e., which had the greater proportion of words/phrases DocuScope tags as first-person references)? The CS people were quite skilled at slicing, dicing, and graphing all this data in cool combinations. Here are some examples. A more polished essay using this kind of data analysis is here. So this is the distribution of DocuScope traits in texts in large and small corpora.
  • We visualized the distribution of DocuScope tags within a single text using something called VEP Slim TV. Using Slim TV, you can track the rise and fall of each trait within a given text AND (and this is the key part) link directly to the text itself. So, for example, this is an image of Margaret Cavendish’s Blazing-World (1667).


Here, the blue line in the right frame charts lexical patterns that DocuScope tags as “Sense Objects.”

The red line charts lexical patterns that DocuScope tags as “Positive Standards.” You’ll see there is lots of blue (compared to red) at the beginning of Cavendish’s novel (when the Lady is interviewing various Bird-Men and Bear-Men about their scientific experiments), but there is one stretch in the novel where there is more red than blue (when the Lady is conversing with Immaterial Spirits about the traits of nobility). A really cool thing about Slim TV that could make it useful in the classroom: you can move through and link directly to the text itself (that horizontal yellow bar on the right shows which section of the text is currently being displayed).

So the workflow is: 1) regularized EEBO-TCP texts are turned into spreadsheets using 2) the DocuScope dictionary; that data is then used to visualize either 3) individual texts as data points within a larger corpus of texts or 4) the distribution of DocuScope tags within a single text.

Again, the seminar leaders showed some nice examples of where this kind of research can lead and lots of cool looking graphs. Ultimately, some of the findings were, if not underwhelming, at least just whelming: we had fun discussing the finding that, relatively speaking, Shakespeare’s comedies tend to use “a” and his tragedies tend to use “the.” Do we want to live in a world where that is interesting? As we experimented with the tools they gave us, at times it felt a little like playing with a Magic 8 Ball: no matter what texts you fed it, DocuScope would give you lots of possible answers, but you just couldn’t tell if the original question was important or figure out if the answers had anything to do with the question. So formulating good research questions remains, to no one’s surprise, the real trick.

A few other key takeaways for me:

1) Learn to love csv files or, better, learn to love someone from the CS world who digs graphing software;

2) Curated data corpora might be the new graduate/honors thesis. Create a corpus (e.g., sermons, epics, travel narratives, court reports, romances), add some good metadata, and you’ve got yourself a lasting contribution to knowledge (again, the examples here are the drama corpora or the science corpora). A few weeks ago, Alan Liu told me that he requires his dissertation advisees to have at least one chapter that gets off the printed page and has some kind of digital component. A curated data collection, which could be spun through DocuScope or any other kind of textual analysis program, could be just that kind of thing.

3) For classroom use, the coolest thing was VEP Slim TV, which tracks the prominence of certain verbal/rhetorical features within a specific text and links directly to the text under consideration. It’s colorful and customizable, something students might find enjoyable.

All this stuff is publicly available as well. I’d be happy to demo what we did (or what I can do of what we did) to anyone who is interested.

Gregory Kneidel is Associate Professor of English at the Hartford Campus. He specializes in Renaissance poetry and prose, law and literature, and textual editing. He can be reached at gregory.kneidel@uconn.edu.

Oral Histories and the Tech Needed to Produce Them, Part 1: Cameras, Audio Recorders, and Media Storage, by Nick Hurley

Last summer I had the pleasure of spending several weeks in southwestern Germany, visiting family and conducting interviews with five local residents who lived through the Second World War. In doing so, I fulfilled a goal I’d had in mind ever since the death of my great-grandmother in 2013. She had been one of a host of relatives and family friends who regaled me with stories from “back then” every time I’d come to visit, and her passing made me realize that I had to do more than just listen if I wanted to preserve these memories for future generations. This time around, I would sit down with each of the participants—the youngest of whom was in their late 70s—record our conversations, and eventually send each of them a copy of their edited interview on DVD. While I had a clear idea of why I was undertaking the project, and had done a lot of reading on oral history practices (including this fantastic online resource), I was less confident in just how I would go about carrying out the actual interviews. I had no experience with audiovisual equipment or video editing, and the seemingly endless number of tech-related questions I faced concerning things like cameras, microphones, and recording formats left my head spinning.

It took a significant amount of research and self-instruction before I was comfortable enough to purchase the gear I needed. These two posts are my attempt to share what I learned and hopefully save other oral history novices some of the headaches I endured putting together an interview “kit,” which at a minimum will consist of a camcorder (possibly), an audio recorder, and a way to store your footage.

The Camera

You’ll need to decide early on whether or not to record video as well as audio for your oral histories. While choosing audio only will greatly reduce the amount of equipment you’ll need to buy, the right choice really depends on the nature of your project. If you do decide to film, steer clear of mini-DV and DVD camcorders, as these record on formats that are quickly becoming obsolete. Your best bet is to go with a flash memory camcorder, which uses removable memory cards that can be inserted into your laptop for easy file transfer.

High definition (HD) camcorders are fast becoming the norm over their standard definition (SD) counterparts, and they’ve become affordable enough to make them a viable option for amateur filmmakers. In terms of capture quality, AVCHD usually means a higher quality image but a bigger file, while MP4 files are compressed to reduce size and are a bit more versatile in terms of how they can be manipulated and uploaded. Either way, you can’t go wrong, and will get a great looking picture. I’ve shot exclusively in AVCHD so far with my Canon camcorder and have had no issues.

The Audio Recorder

If you’re going to splurge on anything, it should be this. You may or may not elect to include video in your project, but you will always have audio, and the quality should be as clear as possible—especially if you plan on doing any kind of editing or transcribing. There are a few things to consider when choosing a recorder:

  1. Whichever model you go with should have at least one 3.5mm (1/8”) stereo line input, to give you the option of connecting an external microphone, and one 3.5mm (1/8”) output, so you can plug in a pair of headphones to monitor your audio.
  2. If you know you’re going to use an external microphone, having one or more XLR inputs is a plus. XLR refers to the type of connector used on some microphones; they are more robust than a 3.5mm jack and harder to accidentally unplug, making them an industry standard.
  3. Some recorders are meant for high-end professional use and have a plethora of features and buttons you’ll simply never use. Look for one with an easy-to-use interface.
  4. WAV and MP3 will be the most common options you’ll see format-wise, and many devices can record in either. WAV files are uncompressed, meaning they contain the entire recorded signal and are therefore much larger than MP3 recordings, which are easier to move and download but sometimes experience a slight loss in audio quality.

Media Storage

The three main types of memory cards that you’ll encounter are SD (Secure Digital, up to 2GB), SDHC (Secure Digital High Capacity, 4-32GB), and SDXC (Secure Digital eXtended Capacity, 64GB-2TB). Almost all cameras, computers, and other tech manufactured after 2010 should be compatible with all three types, and the cards themselves are fairly inexpensive. Useful as they are, memory cards shouldn’t be considered a means of long-term storage for your files. For one thing, you’ll run out of room fast; while things like compression and format will determine the exact amounts, for planning purposes you can expect to fit only about 5 hours of HD video on a 64GB SDXC card and 12-49 hours of WAV audio on a 16GB SDHC card. Even if you’ll only be doing one or two short interviews, you should still plan on migrating your files to more secure storage media as soon as possible after you’re done recording. Cards can be broken or lost, and digital files, like their analog counterparts, will “decay” over time if simply left sitting.
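Those capacity figures are just bitrate arithmetic. The quick estimate below assumes typical bitrates (roughly 24 Mbit/s for AVCHD HD video and about 1.4 Mbit/s for 16-bit/44.1 kHz stereo WAV), so treat the outputs as ballpark numbers rather than specs for any particular device.

```python
# Rough recording-time estimates from card size and bitrate.
# Bitrates are assumed typical values; actual devices and settings vary.
def hours_of_recording(card_gb, bitrate_mbps):
    card_bits = card_gb * 8e9              # gigabytes -> bits (decimal GB)
    seconds = card_bits / (bitrate_mbps * 1e6)
    return seconds / 3600

print(f"64 GB card, AVCHD HD video : {hours_of_recording(64, 24):.1f} hours")
print(f"16 GB card, stereo WAV     : {hours_of_recording(16, 1.411):.1f} hours")
```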

My raw footage is stored on two external hard drives. Any editing work is done using one of them, while the other is stored in a separate location as a backup. Edited interviews are likewise copied to both hard drives once they’re completed. (This practice of keeping multiple copies of the same material in separate locations is known as replication, and is an important aspect of any digital preservation plan; for more info, check out this great page from the Library of Congress.)
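If you want a way to confirm that the copy on the backup drive really is identical to the original, a checksum comparison does the trick. The sketch below uses Python’s standard hashlib; the file paths are placeholders for wherever your two drives mount.

```python
# Compute a SHA-256 checksum for a file so the copies on two drives can be
# compared. The paths below are placeholders for illustration.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("/Volumes/DriveA/interview01.mts")
backup   = sha256_of("/Volumes/DriveB/interview01.mts")
print("copies match" if original == backup else "MISMATCH - recopy the file")
```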

Again, these three pieces are the minimum you’ll need to properly record and store audio and (if you desire) video footage. Depending on the circumstances and scope of your project, however, you may want to utilize some optional gear and accessories, which I’ll bring up in Part 2. Until then, feel free to contact me with any questions, and thanks for reading!

Nick Hurley is a Research Services Assistant at UConn Archives & Special Collections, part-time Curator of the New England Air Museum, and an artillery officer in the Army National Guard. He received his B.A. and M.A. in History from the University of Connecticut, where his work focused on issues of state and society in 20th-century Europe. You can contact Nick at nicholas.hurley@uconn.edu and follow him on Twitter @hurley_nick.

Digital Spaces and Designing for Access, by Gabriel Morrison

There has been a lot of talk about how digital humanities scholarship has the potential to be democratizing, and the internet allows for connectivity that extends across cultural, geographical, and institutional boundaries. DH scholarship can directly reach the public outside of academia, and digital spaces allow for collaborative enterprises that have seldom been attempted by humanities scholars. But are all things digital inherently more accessible, or do we simply imagine them to be so? Are we designing for access or just assuming that access is no longer an issue?

Tara McPherson points out that exclusionary practices and ideologies (based on class, gender, race, sexuality, language, or ability) are often built into software in ways that are not always immediately visible to privileged users. This limits not only who has access to and ownership of DH work but also how diverse users can develop their work. One of these exclusionary ideologies is what disability theorist Tobin Siebers has termed the ideology of ability. This ideology assumes able-bodiedness as a “default” state. It either elides difference or else assumes that the disabled body must find a way to be “accommodated” rather than acknowledging any responsibility for designers to create spaces and environments that are inclusive to the diverse range of human ability.

Just as physical spaces are often inaccessible by design (e.g., stairs and doorways that do not permit wheelchair access, or loud, brightly lit public spaces that can result in sensory overload for persons with autism), there are many ways in which digital space is constructed to include only the able-bodied, including text fields with small or difficult-to-read fonts, videos without captioning, podcasts without transcripts, images without descriptions that can be read by screen readers, web spaces that cannot be manipulated by users, and so-called “accessible” software that is built for the able-bodied and only retrofitted to “accommodate” diverse users when they complain.
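Some of these problems can be caught with a very small script. As one illustration, the sketch below scans a saved HTML page for images that lack alt text, using only Python’s standard library; the filename is a placeholder, and dedicated accessibility checkers cover far more than this single issue.

```python
# Flag <img> elements that have no alt text in a saved HTML page.
# "project_page.html" is a placeholder filename for illustration.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                print("Image missing alt text:", attrs.get("src", "<unknown src>"))

with open("project_page.html", encoding="utf-8") as f:
    MissingAltChecker().feed(f.read())
```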

Those engaging in digital humanities scholarship cannot hope to dismantle oppressive ideologies (something which is part of the core work of the humanities) while uncritically using technology that reifies these same oppressive structures. We must realize that part of digital humanities scholarship involves critical and intentional design. In order to truly encourage access, digital scholarship should incorporate principles of universal design.

How can we do this? While it’s true that no design can be said to be truly universal, the Web Accessibility Initiative offers important guidelines for more inclusive digital publishing, and Yergeau et al. lay out a theoretical groundwork for accessibility in digital and multimedia work. The National Center on Universal Design for Learning, CAST, and Jay Dolmage address concerns specific to integrating digital media and technology for access in the classroom, and Composing Access advises on how to prepare for conferences. Here are a few tips for more accessible design:

  • Think critically about the implicit ideologies coded into the platforms you use, and consider the affordances of your technology before using it. As Johanna Drucker and Patrik BO Svensson point out, middleware incorporates various rhetorical limitations—do these constraints limit access?
  • Aim for commensurability across modes. While multimodality can be a great way for users to interact with your text in different ways and with different senses, if information is not presented redundantly through different modes, it increases the chance that users may not be able to access your text. For instance, if a video delivers information both visually and aurally but doesn’t include captioning and description, then it becomes inaccessible for both blind and deaf users. And of course, delivering information through more than one mode helps all users. Captions, for example, allow hearing users to access the text in a noisy place, on an airplane with someone sleeping in the next seat, or on a device without audio capability.
  • Digital projects are more accessible when they are easily manipulable by users. For example, text that cannot be copied/pasted, as is the case in an image or some publishing platforms, might not be easily read with assistive technologies such as screen readers or braille pads.

Though digital media can present accessibility issues, when used critically and conscientiously, multimodal affordances open up the possibility of creating content that is more accessible to all users, regardless of level of ability.

Gabe Morrison is a first-year doctoral student in Rhetoric and Composition at the University of Connecticut. His research interests include multimodal writing and graduate student writing instruction. You can contact him at gabriel.morrison@uconn.edu.

Where is the Body in Digital Activism? By Bhakti Shringarpure

During the recent protests against the Dakota Access Pipeline, staged by the Standing Rock Sioux, Facebook users were asked to “check in” to the site of the protest taking place at the reservation. Ostensibly, it was meant to throw police surveillance off track, since Facebook locations were being used to track and arrest protesters. As the post to check in went viral, it generated a counterpoint response almost simultaneously. Blogs decrying social media solidarity that appears lazy and without any actual risk or effort involved sprang up within hours of the check-in going viral. “No, Checking In at the Standing Rock Pipeline Protests on Facebook Will Not Confuse the Police. It’s a waste of time,” wrote Mother Jones. Another friend cringed on social media: “Solidarity is great, and so is media attention, but if you really want to help protesters: donate to the Sacred Stone Camp Legal Defense Fund, call elected officials, or fly your ass over there if you’re able – your body is a lot more useful than your “check-in” at Standing Rock.” Elsewhere, an article mocking the phenomenon appeared: “ISIS Flees After Millions of Americans ‘Check In’ to Mosul on Facebook.” It had not even been 12 hours.

Debates about digital activism have always questioned the significance of the protester’s body. The Arab Spring was heralded as crucial in allowing for a “digital revolution” to take root. But in hindsight, the moment of cyber euphoria seems to have yielded mixed outcomes. Though digital media was seen as a catalyzing force and resulted in an extraordinary collective convergence in Egypt and Tunisia, it was also seen as having negative effects, most notably the counter-use by governments to intercept dissent and amp up surveillance. More recently, the #BlackLivesMatter movement started as a hashtag on social media and has since been used 12 million times, according to studies conducted by the Pew Research Center. That the hashtag became a movement and that the movement is now ushering in the most resistant and radical thinking around race in the USA is not debatable. Yet the actual role of social media is moot unless the physical impact of these hashtags eventually becomes apparent, and this can generally be gauged through the number of actual protesters, or through shifts it may bring about in local or governmental spheres.

While digital activism has certainly become entrenched in individual and institutional realms, it seems to have also given rise to the narcissistic clicktivist. Detached from any real action and armed with no more than a do-gooder mentality, the clicktivist tends to like, share, tweet, tumblr and post generously but is seen as not necessarily showing up when it counts and when the going gets tough. It is argued that their contribution cannot be ignored and that if solidarity goes viral, so does the cause. The Dakota Access Pipeline check-in protests on Facebook were less interesting for the virality of the phenomenon itself; they were unusual because they illustrated how much we have come to dislike the online activist, and how quickly it has become “obvious” knowledge that there is a real dichotomy between the body and the digital. In my opinion, the significance of this particular clicktivist is yet uncertain, and I hope we can arrive at more incisive understandings of this phenomenon.

This conversation will be taken up in some depth by the Digital Humanities reading group sponsored by the UConn Humanities Institute. We will meet on Thursday, November 17th, from 12-2pm at the Homer Babbidge Library, 4th floor, room 4-153. We will be working through a variety of readings that include the Black Lives Matter syllabus; surveys that ask how social media users see, share, and discuss race, and the rise of hashtags like #BlackLivesMatter; skeptical critics such as Robert McChesney, Micah White, and Jessy Hempel, who make strong arguments against online activism; and academics who have engaged in longer and more sustained ways with the impact of these new media.

This meeting is open to all faculty and graduate students. Please email bhakti.shringarpure@uconn.edu for the full list of readings.

Bhakti Shringarpure is Assistant Professor of English and editor-in-chief of Warscapes magazine.

A Crash Course on Digital Mapping for the Moderately Technologically Savvy, by Nathan Braccio

For scholars in the humanities interested in making maps, there is a wide range of available tools. At least half a dozen programs exist that allow a scholar to upload data, visualize it, analyze it, and then share it with colleagues and the public. These tools give the enterprising scholar the ability to augment their arguments with exciting visual components, or to reveal new questions and patterns that can provide strong evidence or push research in new directions.

In this blog post I will discuss some of the options available, focusing on how each tool matches up with different kinds of projects and skill levels. While not an expert in GIS or mapping, I have been working on a mapping project on 17th-century New England that has plunged me into an overwhelming array of websites and software. I made the time-consuming mistake of experimenting with each new piece of software I came across, but hopefully, after reading this post, others can avoid this quagmire and get to making exciting and fun maps.

Before continuing, a little should be said about the different uses for maps (from my perspective as a history PhD candidate). Maps make a striking visual argument that can either stand on its own when crafted well or complement a text or webpage. For example, while I can point out that dozens of towns in New England were destroyed during King Philip’s War, actually mapping this destruction with intensity bubbles across the region makes a powerful statement. As an analytical tool, maps allow scholars to repurpose heavily used sources in order to find new patterns, or to compile relatively insignificant data from ignored sources into more useful aggregated forms. Continuing with examples I know: by plotting something as mundane as the dates of town settlement throughout New England, the chronology of English settlers breaking away from their coastal and riverine settlements becomes clear. Simply reading dates and locations would not have yielded this conclusion. Richard White has presented a particularly strong argument for spatial history.


Getting Started

Before you actually start to use any mapping programs, you will need a few things, including something to map! You will also need to know your goal. There are three types of things you can do with mapping software: you can make cool visualizations or tell stories, you can plot and analyze vector data, or you can overlay historical maps on contemporary maps (and even plot your vector data on them or extract data from them). Vector data is usually stored in a spreadsheet, although most programs also allow you to add data points internally in a time-consuming process. With your goal in mind, gather your vector data, an image of a historical map, or the information or story you want to visualize.


If you are using vector data, you probably already have a location associated with what you want to plot. If your location is expressed in lat/lon numbers, you are ready. If it is a town name or street address, you will want to convert it to lat/lon coordinates unless you are using Google Maps or Carto (they can do it automatically). A simple Google search will yield some websites that will convert your locational data, but they are somewhat clunky. A better method is to upload your spreadsheet into Google Sheets (Google’s version of Excel) and create a macro. While that may sound intimidating, it really only involves copying and pasting a line of code that can be found here.
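If you would rather script the conversion yourself instead of taking the Google Sheets macro route, a minimal Python sketch along the lines below will also work. It assumes the geopy library and its free Nominatim geocoder, plus a hypothetical "towns.csv" file with a "place" column; treat it as a starting point rather than a recipe.

```python
# Sketch: geocode place names to lat/lon with geopy's Nominatim geocoder.
# "towns.csv" and its "place" column are hypothetical examples.
import csv
import time

from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="dh-mapping-example")  # identify your script to the service

rows = []
with open("towns.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        row["lat"], row["lon"] = "", ""              # defaults in case geocoding fails
        location = geolocator.geocode(row["place"])  # e.g. "Deerfield, Massachusetts"
        if location is not None:
            row["lat"], row["lon"] = location.latitude, location.longitude
        rows.append(row)
        time.sleep(1)  # Nominatim asks for at most one request per second

with open("towns_geocoded.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```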

Google Maps

For the digital neophyte looking to either visualize data or begin an analytical project, the best place to start is Google Maps. While you may be familiar with using Google Maps to get directions or look at a street view of your house, it can also plot vector data or become a complex map embedded in a website. For the purposes of the only moderately technologically savvy, this is best done through “My Maps.” My Maps has built-in geocoding and is linked to Google Sheets, so it is easy to transfer vector data over with basic locational information and have it quickly plotted through a series of intuitive commands. Google Maps can classify your data in a few basic ways, including simple color coding. You can also upload customized images to act as icons for your data, in addition to using Google’s icon library. You can easily manipulate your data in the program, change how it is classified or displayed, and isolate specific ranges of data with a few simple clicks.

While a more skilled user may be able to use the Google Maps API, coupled with coding skill and a web page, to do some fancy things, for the normal user Google Maps has several limitations. Without the API and your own website, customizability is limited: there are few styling options, no ability to apply JavaScript (as you can with the API), and no way to build a custom interface. You are also limited to one of Google’s nine base-maps. While the base-maps can look nice, they often contain contemporary information or labels that may be anachronistic.


Carto

Using Carto avoids several of these problems while requiring some additional technical skill. In general, Carto is similar to Google’s My Maps: it is good for plotting and visualizing vector data, and you can modify uploaded spreadsheets within the webpage. While at times a little less user friendly, it also enables you to use custom base-maps and to apply limited coding to change the style and interface of your maps. Like Google Maps, it is extremely easy to share maps on social media or embed them in your own webpage. Overlaying images, while possible, is still a difficult task in this program. Additionally, it has only a limited ability to create a fully customized interface.


Arc/QGIS

Perhaps the strongest tools for analysis, although not visualization, are ArcGIS and QGIS. While the two are very similar, QGIS is open source and free, whereas ArcGIS is proprietary and very expensive for those without access through their institution. These programs provide powerful sandboxes for mapping and uploading data, but they are relatively difficult to use. When you open them, you are presented with a blank canvas.

It is up to you to upload base maps and data, or to plot data in the program. Because you have a blank canvas, anything you upload needs to be georeferenced. For a spreadsheet this means lat/lon geographical coordinates, although some formats can be georeferenced within the program. The spreadsheet also needs to be converted into .csv format first, which can be done from Excel or Google Sheets using “save as.”
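If you prefer to do that conversion in code (handy when you regenerate the spreadsheet often), a short Python sketch with the pandas library looks something like the following; the file names and the "lat"/"lon" column names are just placeholders.

```python
# Sketch: convert an Excel spreadsheet to the .csv that QGIS/ArcGIS expects.
# Assumes pandas (plus openpyxl for .xlsx); file and column names are hypothetical.
import pandas as pd

df = pd.read_excel("settlements.xlsx")       # read the original spreadsheet
df = df.dropna(subset=["lat", "lon"])        # drop rows the GIS cannot place
df.to_csv("settlements.csv", index=False)    # plain .csv ready for import
print(df.head())                             # quick sanity check of the first rows
```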


Additionally, all layers of data need to have a consistent CRS (Coordinate Reference System) value applied to them.
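To illustrate that CRS point, here is one way to reproject layers to a common EPSG code before loading them, under the assumption that you have the geopandas library installed; the shapefile names and the choice of EPSG:4326 are made up for the example.

```python
# Sketch: give two layers a consistent CRS before bringing them into Arc/QGIS.
# The shapefile names and the choice of EPSG:4326 are assumptions for the example.
import geopandas as gpd

towns = gpd.read_file("towns.shp")      # point layer, e.g. town centers
rivers = gpd.read_file("rivers.shp")    # line layer in some other projection

print(towns.crs, rivers.crs)            # inspect what each layer declares

towns = towns.to_crs(epsg=4326)         # reproject both to plain lat/lon
rivers = rivers.to_crs(epsg=4326)

towns.to_file("towns_4326.shp")         # save reprojected copies for upload
rivers.to_file("rivers_4326.shp")
```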

Once uploaded, you can add to your data, categorize it in several ways, and style it with great freedom. Historical maps and images can easily be uploaded to your project and stretched and overlaid wherever and however you want. These programs are not ideal for creating visually polished maps for internet distribution (people frequently touch them up in Photoshop or Illustrator for this purpose). Despite the stylistic limitations, Arc/QGIS alone are fine for making the kind of maps that can be included in printed publications. QGIS, which I have more experience with, also allows you to export maps in a format that you can integrate into a webpage through one of its many useful plugins. If you are interested in learning how to actually use GIS, the Programming Historian has a few useful tutorials.
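QGIS also exposes a Python console, so the spreadsheet import described above can be scripted instead of clicked through. The sketch below is a rough guess at what that looks like in a recent QGIS 3.x console; the file path and the "lon"/"lat" column names are hypothetical, and it simply mirrors the "Add Delimited Text Layer" dialog.

```python
# Sketch for the QGIS 3.x Python console: load a lat/lon .csv as a point layer.
# QgsVectorLayer and QgsProject are already available inside the console.
uri = (
    "file:///home/user/settlements.csv"
    "?delimiter=,&xField=lon&yField=lat&crs=EPSG:4326"
)
layer = QgsVectorLayer(uri, "Settlements", "delimitedtext")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)   # shows up in the Layers panel
else:
    print("Layer failed to load - check the path and column names")
```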


Neatline

For mapping more focused on an extremely customizable visualization, the choices are limited unless you know JavaScript. Neatline (via Omeka) is one of the only exceptions, and as a tradeoff it has some issues uploading spreadsheets of vector data. Instead, Neatline is extremely good at creating a dynamic and interactive exhibit or story in which the data and objects are created within Neatline itself. Installing Neatline is relatively simple, although it is important to know that you cannot do it on the Omeka-hosted Omeka.net, only on an Omeka installation hosted elsewhere (and not a free WordPress site either, meaning you would have to pay for hosting). I will not go into detail on how to install Omeka on your website here, as it is well documented here and here.

Once installed, Neatline has several plugins of its own that allow you to add timelines to your map, create an interactive text in which words are linked to points on the map, and upload images to overlay on the map or even use as a custom background. These demos really show its power. Its greatest strength is its friendly interface and plentiful documentation. Oddly enough, some of its issues emerge when doing things one would imagine would be relatively simple for it, like trying to batch-transfer items with associated lat/lon coordinates from an Omeka database (luckily there is a helpful internet community for both Neatline and Omeka).


Summing Up

While I will not explore it fully here, if you have some experience with JavaScript and want to embed an interactive map in your webpage, there are a few different places you can start. Leaflet (with QGIS compatibility through QGIS Web App), OpenLayers, the Google Maps API, and Timemap are all worth looking into. In fact, you will probably use some combination of these if you are trying to make an interactive map. As you can see, this blog post just scratches the surface of the programs available. If you are interested in analyzing old maps, take a look at MapAnalyst. If you want to make an atlas of maps you have, or want to overlay multiple historic maps onto a contemporary map, check out MapScholar, made by historians. The Library of Congress made Viewshare for plotting a digital collection on a map, and StoryMap has a self-explanatory name.
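One route not listed above, for anyone who wants a Leaflet map without writing JavaScript by hand: the Python library folium generates a self-contained Leaflet page as an HTML file. The coordinates (roughly Storrs, CT) and labels below are invented for the example.

```python
# Sketch: generate a Leaflet map as a standalone HTML file with folium.
# The coordinates and labels are placeholders for your own data.
import folium

m = folium.Map(location=[41.81, -72.25], zoom_start=8)   # center the map

folium.Marker(
    [41.81, -72.25],
    popup="Example town - replace with your own data",
).add_to(m)

m.save("new_england_map.html")   # open in a browser or embed with an <iframe>
```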

Hopefully, this post has given those of you interested in mapping a sense of where you might want to start your project. Personally, I find myself using a combination of Google Maps, QGIS, and Neatline for different aspects of my project, with the intention of eventually taking advantage of the Google Maps API and Leaflet to bring it online. Please feel free to contact me with any questions or suggestions.

Nathan Braccio is a Ph.D. candidate in the UConn History Department. He received his B.A. and M.A. in history from American University. His research focuses on the confluence of geography and identity in 17th- and 18th-century New England. More information on mapping and his research can be found on his webpage, nathanbraccio.com. Contact him at nathan.braccio@uconn.edu.