
UConn Celebrates Open Access Week, by Jean Nelson – cross-post with Babbidge Library

Here at the UConn Library, one of the tenets of our Purposeful Path Forward is to help drive UConn’s ‘Scholarly Engine’: the processes of research and knowledge creation. One of the core activities in our approach is educating our community on the importance of Open Access. Open Access (OA), as defined by SPARC (the Scholarly Publishing and Academic Resources Coalition), refers to the “free, immediate, online availability of research articles, coupled with the rights to use these articles fully in the digital environment.”

Why Open? Open changes the way we discover knowledge. It can turn ideas into reality, break down barriers to learning, and lay the groundwork for breakthrough research.

This month we are embracing the challenge provided by the 2017 International Open Access Week by answering the question, “Open in order to…” through a series of programs and initiatives.

OpenCommons@UConn
The UConn Library is proud to announce the re-launch of the University’s institutional repository, OpenCommons@UConn, a showcase of the scholarship and creative works of the UConn community. The renaming of this service emphasizes the Library’s role in providing the tools to enable independent learning, research, and scholarship. By making the University’s diverse and unique resources openly accessible worldwide, we hope to inspire groundbreaking research and advance learning, teaching, and entrepreneurial thinking.
Open in order to…provide access to UConn’s scholarship


Open Educational Resources @ UConn Exhibit: teaching and learning materials published under an open license
October 18-31, 2017

Homer Babbidge Library, Plaza Level
Open Access and Open Educational Resources (OER) are related but distinct, with the commonality of providing high-quality learning materials at no cost. In an academic setting, the lines of Open Access publishing for research materials and Open Educational Resources for teaching and learning overlap in significant ways. UConn’s OER Initiative began only two years ago and to date has saved our undergraduates over $500,000 in textbook costs. View some OER textbooks and learn more about the faculty who are working to make UConn more affordable.
Open in order to…save students money


Is this open access journal any good?
Thursday, October 19, 9:30-11:00am
Homer Babbidge Library, Collaborative Learning Classroom
Faculty often struggle to identify good-quality open access journals in which to publish or to serve as an editor or reviewer. Many new open access journals exist now – some are good quality, some are exploitative, and some are in between. This workshop will open with a brief discussion of faculty concerns about identifying journals. The majority of the session will be devoted to demonstrating web-based tools that can help faculty appraise a journal’s quality. Please register at http://cetl.uconn.edu/seminars
Open in order to…identify quality journals


Paywall: A Conversation about the Business of Scholarship with Filmmaker Jason Schmitt
Wednesday, October 25, 2:30-4:00pm

Konover Auditorium, Thomas J. Dodd Research Center
Help us celebrate Open Access Week by joining award-winning filmmaker Jason Schmitt as we screen and discuss footage from his in-progress documentary Paywall: The Business of Scholarship. Schmitt will be accompanied in the discussion by a panel of UConn faculty who will share their views on making the results of academic research freely accessible online.  Co-sponsored by UConn Humanities Institute
Open in order to…talk about the business of scholarship

Open Data In Action
Thursday, October 26, 11:00am-2:00pm

Hartford Public Library Atrium
Open Data In Action brings together a wide range of researchers to showcase how their work has benefited from openly and freely accessible data. Presenters from the public, private, and academic sectors will discuss how open data, ranging from historical documents to statistical analyses, is being used to create projects, change policies, and conduct research, and will highlight the importance of open data in shaping the world around us.

Opening Remarks:
Tyler Kleykamp, Chief Data Officer, State of Connecticut

Presenters:

  • Steve Batt, UConn Hartford/CT State Data Center, Tableau Public and CT Census Data
  • Jason Cory Brunson, UConn Health Center, Modeling Incidence and Severity of Disease using Administrative Healthcare Data
  • Stephen Busemeyer, The Hartford Courant, Journalism and the Freedom of Information
  • Brett Flodine, GIS Project Leader, City of Hartford Open Data
  • Rachel Leventhal-Weiner, CT Data Collaborative, CT Data Academy
  • Anna Lindemann and Graham Stinnett, UConn Digital Media & Design and Archives, Teaching Motion Graphics with Human Rights Archives
  • Thomas Long, UConn Nursing, Dolan Collection Nursing History Blog
  • Tina Panik, Avon Public Library, World War II Newsletters from the CTDA
  • Jennifer Snow, UConn Library, Puerto Rico Citizenship Archives: Government Documents as Open Data
  • Rebecca Sterns, Korey Stringer Institute, Athlete Sudden Death Registry
  • Andrew Wolf, UConn Digital Media & Design, Omeka Everywhere

Co-sponsored by the Hartford Public Library
Open in order to…share data

Introduction to Data Visualization using Tableau Public
Monday, October 30, 3:00-4:15pm
Homer Babbidge Library, Level 2 Electronic Classroom
Tableau Public is a free version of Tableau’s business intelligence and visual analytics software, which allows anyone to explore and present quantitative information in compelling, interactive visualizations. In this hands-on session you will work with prepared datasets to create online interactive bar graphs, scatterplots, thematic maps, and much more, all of which can be linked to or embedded in blogs or websites. Please register at http://workshops.lib.uconn.edu/
Open in order to…visualize research
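
Tableau Public itself is point-and-click, but the workflow the session teaches (load a table, map columns to visual encodings, publish an interactive chart) has a rough code analogue. Here is a minimal sketch in Python using the plotly library and its bundled sample data; the library, dataset, and output file name are illustrative choices, not anything the workshop itself uses:

    # Rough Python analogue of the Tableau Public workflow: load a table,
    # map columns to visual encodings, and export an interactive HTML chart
    # that can be embedded in a blog or website.
    import plotly.express as px

    df = px.data.gapminder()                # sample dataset bundled with plotly
    snapshot = df[df["year"] == 2007]

    fig = px.scatter(
        snapshot,
        x="gdpPercap", y="lifeExp",         # columns mapped to axes
        size="pop", color="continent",      # columns mapped to size and color
        hover_name="country",               # hover tooltip, as in Tableau
        log_x=True,
    )
    fig.write_html("gapminder_2007.html")   # self-contained, embeddable file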

Digital Scholarship: Partnering for the Future
Joan K. Lippincott, Associate Executive Director, Coalition for Networked Information

Tuesday, November 7, 2:00-3:30pm
Homer Babbidge Library, Heritage Room
Researchers in many disciplines are finding that they can ask new kinds of research questions as a result of the rapid growth in the availability of digital content and tools. In addition, the outputs of their research can include many more types of products such as data visualizations, geo-referenced representations, text augmented with images and audio, exhibits on the web, and virtual reality environments. Developing these projects takes a team of people who have a variety of skill sets. These individuals may come from academic departments, the library, the information technology unit, and other specialties. Graduate and undergraduate students are also often part of teams working on digital scholarship projects. In this presentation, Lippincott will provide an update on developments in digital scholarship and will describe existing programs and projects, discuss the importance of physical space, and encourage the development of a campus digital scholarship community.  Co-sponsored by UConn Humanities Institute
Open in order to…develop digital scholarship

The original blog post is available here.

Integrating Digital History into the Classroom, by Matthew Ferraro

As an aspiring social studies teacher, I recognize the importance of integrating digital history into the classroom. Students have grown up in the digital age, and, as such, consume a majority of their information online. Gone are the days of searching through a library for primary and secondary sources to support a historical argument or reading a newspaper to discover that day’s events. All this information, and more, can now be found online. This vast availability of information has greatly expanded the possibilities for studying history, which presents us, as educators, with a unique opportunity to integrate digital history into our classrooms. By doing so, we will enable students to utilize digital media to advance historical analysis and understanding. To do this, however, we must first provide students with models of digital history. What follows are several examples of digital history projects that could be used in classrooms (and beyond) to equip students with the skills required to contribute to our knowledge of world contexts in a digital way.

1) History Matters

History Matters is a digital history project that resulted from a collaboration between George Mason University and the City University of New York. The project began in 1998 with the intent of providing teachers and students with digital resources that could improve their instruction and understanding of United States history. It was funded by the Kellogg Foundation, the Rockefeller Foundation, and the National Endowment for the Humanities. There are over a thousand primary sources on a variety of topics, ranging from photographs to text documents to audio files, all of which can be used with students to help them construct a narrative of the past. What is unique about this project is that it takes full advantage of the digital space: it uses audio files both from everyday Americans, who help participants co-construct the history of the United States, and from scholars, who discuss how to teach major aspects of US history. In addition, since there are over a thousand primary sources available, a “full search” feature was developed to assist in locating resources by time, topic, or keyword. With this large number of primary sources, the project would be an excellent resource for student research papers. Students could use it to develop a research question based on an area of inquiry, examine primary sources related to their topic, arrive at conclusions based on their research, and publish their findings in order to advance our understanding of history. Doing so would expose them to conducting research digitally while also developing their ability to think critically, evaluate evidence, and articulate their thoughts clearly.
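
The “full search” feature is worth pausing on, since filtering records by time, topic, or keyword is exactly the kind of mechanism students could prototype themselves. A minimal sketch in Python; the records and fields are invented placeholders, not actual History Matters data:

    # Toy sketch of a "full search" over primary-source records: filter by
    # keyword, topic, or time span. Records are invented placeholders.
    from dataclasses import dataclass

    @dataclass
    class Record:
        title: str
        year: int
        topic: str
        text: str

    RECORDS = [
        Record("Mill worker interview", 1938, "labor",
               "Testimony about conditions on the factory floor."),
        Record("Dust Bowl letter", 1935, "migration",
               "A farm family describes leaving Oklahoma."),
        Record("Suffrage speech", 1917, "politics",
               "An address on extending the vote to women."),
    ]

    def search(records, keyword=None, topic=None, start=None, end=None):
        """Return records that satisfy every filter actually supplied."""
        results = []
        for r in records:
            if keyword and keyword.lower() not in (r.title + " " + r.text).lower():
                continue
            if topic and r.topic != topic:
                continue
            if start and r.year < start:
                continue
            if end and r.year > end:
                continue
            results.append(r)
        return results

    for r in search(RECORDS, keyword="farm", start=1930, end=1940):
        print(r.year, r.title)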

2) Mapping Inequality  

Mapping Inequality is a digital history project created through the collaboration of three research teams based at four universities: the University of Maryland, the University of Richmond, Virginia Tech, and Johns Hopkins University. The project showcases 150 maps that were drafted by the Home Owners’ Loan Corporation (HOLC) from 1935 to 1940. These maps were color-coded to show the creditworthiness of different neighborhoods in each town, and mortgage lenders then used them to determine whether someone would qualify for a loan. The project was developed to show that, when these maps are compared to the layout of neighborhoods in the United States today, many of the racial and class inequities that exist are a direct result of the HOLC’s maps. In fact, many of these maps were produced in ways that codified racial segregation into real estate practice. This project could be used with students for multiple purposes. For example, when teaching about the New Deal, students could use the site to examine how the HOLC reflects a problematic legacy of that era. Students could also be asked to cite specific examples from the maps of how the HOLC’s practices led to the racial and class segregation that is seen today. If they examined the areas around Hartford, Connecticut, they would observe that the HOLC deemed West Hartford to have the “best,” most creditworthy neighborhoods, whereas Hartford had the “hazardous,” least creditworthy ones. Compared against today’s layout, with the neighborhoods of West Hartford and Hartford mostly unchanged, the maps make the HOLC’s role in racial and class segregation evident. In addition, showcasing a digital history project of this nature in class would familiarize students with what digital history can look like. Through this project, teachers could expose students to some of the digital tools and resources—such as mapping software and online databases—that would be required to design it. This would create incentives to work collaboratively with other scholars—especially those who could provide the digital resources for projects like this.

3) The Valley of the Shadow: Two Communities in the American Civil War

The Valley of the Shadow is a digital history project constructed by the Virginia Center for Digital History at the University of Virginia. This project narrates the countless stories of two different communities in the American Civil War—one from the North and one from the South—through letters, newspapers, diaries, speeches, and other primary sources. The project is organized through a series of image maps that direct the viewer to various search engines. It functions similarly to History Matters—both are databases of primary sources that employ search engines to enable the viewer to locate information—but there is a key difference worth mentioning: while History Matters contains a large amount of primary source information on a wide variety of topics across United States history, this project only provides information relevant to a specific time and topic. The narrow focus mirrors the work historians do on a daily basis, as most of a history scholar’s research explores questions in a specific niche of the past. As such, teachers could use this project to show students how they might approach a digital history research project. This would help transition students away from the traditional way of communicating their thoughts on history through a research paper and, instead, provide them with the opportunity to disseminate their ideas digitally. For example, rather than writing a paper about significant World War II battles, students could create an online timeline that lays out those events chronologically while also providing descriptions of the significance of each battle. Exposing students to and allowing them to engage in this sort of work would enable them to practice the craft of a historian in a very familiar context and equip them with the skills to pose their own questions about a certain niche of the world.
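
The timeline idea in that last example is small enough for students to prototype directly: sort events by date and emit a simple chronological page. A minimal sketch in Python, with illustrative sample entries:

    # Minimal sketch of a student-built online timeline: sort events by
    # date and write them out as a simple chronological HTML list.
    # The entries are illustrative samples.
    from datetime import date

    events = [
        (date(1944, 6, 6), "D-Day", "Allied landings in Normandy."),
        (date(1942, 6, 4), "Battle of Midway", "Turning point in the Pacific."),
        (date(1943, 2, 2), "End of Stalingrad", "German Sixth Army surrenders."),
    ]

    rows = "\n".join(
        f"<li><strong>{d.isoformat()}</strong> {title}: {note}</li>"
        for d, title, note in sorted(events)
    )
    with open("timeline.html", "w", encoding="utf-8") as f:
        f.write(f"<html><body><h1>WWII Battles</h1><ol>{rows}</ol></body></html>")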

Matthew Ferraro is a master’s student in the Neag School of Education’s Integrated Bachelor’s/Master’s (IB/M) program. He is currently interning at Conard High School, and his research interests include how best to integrate human rights education into social studies classrooms. He is studying to become a social studies teacher at the high school level. He can be reached at matthew.ferraro@uconn.edu.

DH and Narrative, DH as Narrative, DH-Narrative, by Elisabeth Buzay

I, like many others, would argue that the only thing those who work in DH agree on is that they do not agree on what DH means. Yet as I have encountered more and more digital tools and projects, I have begun to think of DH work in a provocative way: DH work should be considered a form of narrative-making or storytelling. For fields such as digital storytelling or tools such as story mapping, this argument may not be that surprising. But what about other types of DH projects and tools? If we think of archival or curated sites, such as those created with Omeka, or book or network conglomerations, such as those made in Scalar, I propose that these forays are equally forms of narrative or story: we pick and choose what to include and exclude, and we form paths and groupings and connections to guide or suggest methods of understanding; in other words, we give shape to a narrative. Here I will advance an initial iteration of this argument, which, I believe, ultimately provides another perspective on how DH is truly a part of the humanities.


DH and Narrative

Consider Hayden White’s description of narrative, in conversation with Barthes, in The Content of the Form: Narrative Discourse and Historical Representation: “[a]rising, as Barthes says, between our experience of the world and our efforts to describe that experience in language, narrative ‘ceaselessly substitutes meaning for the straightforward copy of the events recounted’” (1–2). If we take this as one of the basic definitions of the concept, we can see how the term could easily be used in reference to various methodologies and tools employed in DH. We must, however, expand the definition to include not just language but also image and sound. It is worth a look, for instance, at DH projects that create digital archives, such as The Digital Public Library of America or the Bibliothèque nationale de France’s Gallica, in which digital tools are used to make digitized versions of actual archives. Or consider projects like The Internet Archive, The Sonic Dictionary, or The Story of the Beautiful, in which a digital archive is created. Or we might think of digital editions of texts, such as the Folger Digital Texts, or digitized resources such as The ARTFL Project. Or, in a slightly different direction, there are tools one can use to compare versions of texts, like Juxta or JuxtaCommons, or to annotate a text (collaboratively or not), like Annotation Studio. In these varying cases, the digital approach and tools used are the methods through which meaning is provided, whether that meaning be the coherency of an archive, the evolution or development of a text, or the preservation of narratives that might otherwise be lost.


DH as Narrative

A DH approach is not, of course, limited to archival or editorial projects. In many cases, DH projects are clearly narrative in form. Digital storytelling is perhaps the most obvious example. StoryCenter, previously known as the Center for Digital Storytelling, is a well-known entity whose basic elements of digital storytelling are often cited. Digital storytelling is also being used in a slightly different manner by teachers and students in the field of education to teach and learn about topics beyond personal stories, as can be seen on the University of Houston’s Educational Uses of Digital Storytelling site. Digital storytelling approaches have been expanded in other directions as well, for instance in

  • tying stories to location, with the use of tools like StoryMapJS, Esri Story Maps, or Odyssey, in which specific events and places are linked,
  • tying stories to timing, with the use of tools like TimeLineJS, TimeGlider, or Timetoast, in which specific events and times are linked,
  • or tying stories to both time and location, with the use of tools like Neatline or TimeMapper, in which specific events, places, and times are linked so that a user can follow a story geographically, chronologically, or both.

In all of these cases, the digital approach is one that is explicitly used to shape a narrative or story. In other words, here DH is again a form of narrative or narrative-making.


DH-Narrative

Big data projects, such as those of the Stanford Literary Lab, or approaches such as Matthew L. Jockers’s in his Macroanalysis: Digital Methods and Literary History, may seem an exception to my argument in comparison to the other DH projects and approaches mentioned thus far; nonetheless, I suggest that even projects or approaches such as these create narratives or stories, in that they provide meaning to observations, calculations, or data that would otherwise be incomprehensible, given their size. How could they not?

This brief overview brings us to a final point to ponder: in their Digital_Humanities, Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp argue that the design of DH tools and projects is itself an essential aspect of the arguments they create:

The parsing of the cultural record in terms of questions of authenticity, origin, transmission, or production is one of the foundation stones of humanistic scholarship upon which all other interpretive work depends. But editing is also productive and generative, and it is the suite of rhetorical devices that make a work. Editing is the creative, imaginative activity of making, and as such, design can be also seen as a kind of editing: It is the means by which an argument takes shape and is given form. (18)

In other words, a narrative-making approach is literally embedded in form, in design. Like these authors, I wonder whether this perspective cannot be extended. They write:

DESIGN EMERGES AS THE NEW FOUNDATION FOR THE CONCEPTUALIZATION AND PRODUCTION OF KNOWLEDGE.

DESIGN METHODS INFORM ALL ASPECTS OF HUMANISTIC PRACTICE, JUST AS RHETORIC ONCE SERVED AS BOTH ITS GLUE AND COMPOSITIONAL TECHNIQUE.

CONTEMPORARY ELOQUENCE, POWER, AND PERSUASION MERGE TRADITIONAL VERBAL AND ARGUMENTATIVE SKILLS WITH THE PRACTICE OF MULTIMEDIA LITERACY SHAPED BY AN UNDERSTANDING OF THE PRINCIPLE OF DESIGN. (117–118)

If we apply these points to the entire field of DH, we arrive at significant food for thought: if design is the foundation of DH, then isn’t the result of this design necessarily a narrative or a story? And might this not be one further aspect confirming that DH is indeed a part of the traditional humanities?

These questions invite others: are DH narratives and their design different or new or innovative in comparison to traditional narratives, and if so how? What can DH narratives tell us about ourselves and our world? To circle back to White and Barthes’ view of narrative, if we accept that DH is narrative, what new meanings can be distilled from the events DH recounts?


Elisabeth Herbst Buzay is a doctoral student in French and Francophone Studies and in the Medieval Studies Program at the University of Connecticut. Her research interests include medieval romances, contemporary fantasy, digital humanities, video games, the intersection of text and images, and translation. You can contact her at elisabeth.buzay@uconn.edu.

Visualizing English Print at the Folger, by Gregory Kneidel (cross-post with Ruff Draughts)

In December I spent two days at the Folger’s Visualizing English Print seminar. It brought together people from the Folger, the University of Wisconsin, and the University of Strathclyde in Glasgow; about half of us were literature people, half computer science; a third of us were tenure-track faculty, a third grad students, and a third in other types of research positions (e.g., librarians, DH directors).

Over those two days, we worked our way through a set of custom data visualization tools that can be found here. Before we could visualize, we needed and were given data: a huge corpus of nearly 33,000 EEBO-TCP-derived simple text files that had been cleaned up and spit through a regularizing procedure so that they would be machine-readable (with the loss, obviously, of lots of cool, irregular features—the grad students who wanted to do big data studies of prosody were bummed to learn that all contractions and elisions had been scrubbed out). They also gave us a few smaller, curated corpora of texts: two of dramatic texts, two of scientific texts. Anyone who wants a copy of this data, I’d be happy to hook you up.

From there, we did (or were shown) a lot of data visualization. Some of this was based on word-frequency counts, but the really novel thing was using a dictionary of sorts called DocuScope—basically a program that sorts 40 million different linguistic patterns into one of about 100 specific rhetorical/verbal categories (DocuScope was developed at CMU as a rhet/comp tool—it turned out not to be good at teaching rhet/comp, but it is good at things like picking stocks). DocuScope might make a hash of some words or phrases (and you can revise or modify it; Michael Witmore tailored a DocuScope dictionary to early modern English), but it does so consistently, and you’re counting on the law of averages to wash everything out.
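
DocuScope itself is a massive proprietary dictionary, but its core mechanic, matching text against categorized patterns and counting, is easy to sketch. A toy Python illustration; the three-category mini-dictionary is invented and bears no relation to DocuScope’s real categories or patterns:

    # Toy sketch of dictionary-based tagging in the DocuScope spirit:
    # match tokens against categorized word lists and report each
    # category's share of the text. The mini-dictionary is invented.
    import re
    from collections import Counter

    DICTIONARY = {
        "anger": {"furious", "rage", "wrath"},
        "first_person": {"i", "me", "my", "we", "our"},
        "sense_objects": {"light", "sound", "color", "stone"},
    }

    def tag_text(text):
        """Return each category's proportion of all tokens in text."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter()
        for token in tokens:
            for category, words in DICTIONARY.items():
                if token in words:
                    counts[category] += 1
        total = len(tokens) or 1          # avoid dividing by zero
        return {cat: counts[cat] / total for cat in DICTIONARY}

    print(tag_text("I was furious at the sound and light of the storm."))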

After drinking the DocuScope Kool-Aid, we learned how to visualize the results of DocuScoped data analysis. Again, there were a few other cool features and possibilities, and I only comprehended the tip of the data-analysis iceberg, but basically this involved one of two things.

  • Using something called the MetaData Builder, we derived DocuScope data for individual texts or groups of texts within a large corpus. So, for example, we could find out which of the approximately 500 plays in our subcorpus of dramatic texts is the angriest (i.e., has the greatest proportion of words/phrases DocuScope tags as relating to anger). Or, in an example we discussed at length, we could ask who used more first-person references in our science subcorpus, Boyle or Hobbes (i.e., whose texts had the greater proportion of words/phrases DocuScope tags as first-person references). The CS people were quite skilled at slicing, dicing, and graphing all this data in cool combinations. Here are some examples. A more polished essay using this kind of data analysis is here. In short, this approach shows the distribution of DocuScope traits across texts in large and small corpora (a toy sketch of this kind of ranking follows this list).
  • We visualized the distribution of DocuScope tags within a single text using something called VEP Slim TV. Using Slim TV, you can track the rise and fall of each trait within a given text AND (and this is the key part) link directly to the text itself. So, for example, the image below shows Margaret Cavendish’s Blazing-World (1667).
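
Continuing the toy tagger sketched above, the “angriest play” query from the first bullet reduces to scoring each text on one category’s proportion and sorting. Titles and texts here are invented:

    # Sketch of the "angriest play" query: score each text by the share
    # of anger-tagged tokens (reusing tag_text from the sketch above)
    # and rank the results. Titles and texts are invented.
    corpus = {
        "Play A": "The king, furious, swore wrath upon the court.",
        "Play B": "We walked in light and sound by the shore.",
        "Play C": "Rage, rage, and still more rage in every furious line.",
    }

    scores = {title: tag_text(text)["anger"] for title, text in corpus.items()}
    for title, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{title}: {score:.3f}")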

[Image: VEP Slim TV view of Cavendish’s Blazing-World]

Here, the blue line in the right frame charts lexical patterns that DocuScope tags as “Sense Objects.”

[Images: VEP Slim TV trait lines in Blazing-World, with “Sense Objects” in blue and “Positive Standards” in red]

The red line charts lexical patterns that DocuScope tags as “Positive Standards.” You’ll see there is lots of blue (compared to red) at the beginning of Cavendish’s novel (when the Lady is interviewing various Bird-Men and Bear-Men about their scientific experiments), but there is one stretch of the novel where there is more red than blue (when the Lady is conversing with Immaterial Spirits about the traits of nobility). A really cool thing about Slim TV that could make it useful in the classroom: you can move through and link directly to the text itself (the horizontal yellow bar on the right shows which section of the text is currently being displayed).

To recap: 1) regularized EEBO-TCP texts are turned into spreadsheets using 2) the DocuScope dictionary; that data is then used to visualize either 3) individual texts as data points within a larger corpus or 4) the distribution of DocuScope tags within a single text.
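
Step 4, the within-text view that Slim TV draws, is essentially a rolling-window tag density. A minimal sketch of that computation, again with an invented word set rather than DocuScope’s real dictionary; plotting the returned list gives a trait line like the ones above:

    # Sketch of a Slim TV-style trait line: slide a window across the
    # token stream and record the local density of one tag category.
    import re

    def trait_line(text, category_words, window=40):
        tokens = re.findall(r"[a-z']+", text.lower())
        hits = [1 if t in category_words else 0 for t in tokens]
        step = max(window // 2, 1)        # half-overlapping windows
        densities = []
        for start in range(0, max(len(hits) - window, 0) + 1, step):
            chunk = hits[start:start + window]
            densities.append(sum(chunk) / (len(chunk) or 1))
        return densities                  # plot this to see rise and fall

    sample = ("light and sound filled the hall " * 20
              + "the court debated honor and rank " * 20)
    print(trait_line(sample, {"light", "sound", "color", "stone"}))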

Again, the seminar leaders showed some nice examples of where this kind of research can lead and lots of cool-looking graphs. Ultimately, some of the findings were, if not underwhelming, at least just whelming: we had fun discussing the finding that, relatively speaking, Shakespeare’s comedies tend to use “a” and his tragedies tend to use “the.” Do we want to live in a world where that is interesting? As we experimented with the tools they gave us, at times it felt a little like playing with a Magic 8 Ball: no matter what texts you fed it, DocuScope would give you lots of possible answers, but you just couldn’t tell if the original question was important or figure out if the answers had anything to do with the question. So formulating good research questions remains, to no one’s surprise, the real trick.

A few other key takeaways for me:

1) Learn to love csv files or, better, learn to love someone from the CS world who digs graphing software;

2) Curated data corpora might be the new graduate/honors thesis. Create a corpus (e.g., sermons, epics, travel narratives, court reports, romances), add some good metadata, and you’ve got yourself a lasting contribution to knowledge (again, the examples here are the drama corpora or the science corpora). A few weeks ago, Alan Liu told me that he requires his dissertation advisees to have at least one chapter that gets off the printed page and has some kind of digital component. A curated data collection, which could be spun through DocuScope or any other kind of textual analysis program, could be just that kind of thing. (A minimal sketch of a corpus manifest follows these takeaways.)

3) For classroom use, the coolest thing was VEP Slim TV, which tracks the prominence of certain verbal/rhetorical features within a specific text and links directly to the text under consideration. It’s colorful and customizable, something students might find enjoyable.
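
Tying takeaways 1 and 2 together, the metadata half of corpus curation is mostly bookkeeping: one row per text in a csv manifest. A minimal sketch; the file names and fields are hypothetical placeholders:

    # Sketch of a curated-corpus manifest: one csv row of metadata per
    # text. File names and fields are hypothetical placeholders.
    import csv

    manifest = [
        {"file": "sermon_001.txt", "author": "J. Donne",
         "year": 1622, "genre": "sermon"},
        {"file": "sermon_002.txt", "author": "L. Andrewes",
         "year": 1610, "genre": "sermon"},
    ]

    with open("corpus_manifest.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "author", "year", "genre"])
        writer.writeheader()
        writer.writerows(manifest)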

All this stuff is publicly available as well. I’d be happy to demo what we did (or what I can do of what we did) to anyone who is interested.

Gregory Kneidel is Associate Professor of English at the Hartford Campus. He specializes in Renaissance poetry and prose, law and literature, and textual editing. He can be reached at gregory.kneidel@uconn.edu.