digital scholarship

UConn Celebrates Open Access Week, by Jean Nelson – cross-post with Babbidge Library

Here at the UConn Library, one of the tenets of our Purposeful Path Forward is to help drive UConn’s ‘Scholarly Engine’, the processes of research and knowledge creation. One of the core activities in our approach is educating our community on the importance of Open Access. Open Access (OA), as defined by SPARC (the Scholarly Publishing and Academic Resources Coalition), refers to the “free, immediate, online availability of research articles, coupled with the rights to use these articles fully in the digital environment.”

Why Open? Open changes the way we discover knowledge. It can turn ideas into reality,  break down barriers to learning, and lay the groundwork for breakthrough research.

This month we are embracing the challenge provided by the 2017 International Open Access Week by answering the question, “Open in order to…” through a series of programs and initiatives.

OpenCommons@UConn
The UConn Library is proud to announce the re-launch of the University’s institutional repository, OpenCommons@UConn, a showcase of the scholarship and creative works of the UConn community. The renaming of this service emphasizes the Library’s role in providing the tools to enable independent learning, research, and scholarship. By making the University’s diverse and unique resources openly accessible worldwide, we hope to inspire groundbreaking research and advance learning, teaching, and entrepreneurial thinking.
Open in order to…provide access to UConn’s scholarship

 

Open Educational Resources @ UConn Exhibit: published teaching and learning materials under an open license
October 18-31, 2017

HBL, Plaza Level
Open Access and Open Educational Resources (OER) are related but distinct, with the commonality of providing high-quality learning materials at no cost. In an academic setting, the lines of Open Access publishing for research materials and Open Educational Resources for teaching and learning overlap in significant ways. UConn’s OER Initiative began only two years ago and to date has saved our undergraduates over $500,000 in textbook costs. View some OER textbooks and learn more about the faculty who are working towards making UConn more affordable.
Open in order to…save students money

 

Is this open access journal any good?
Thursday, October 19, 9:30-11:00am
Homer Babbidge Library, Collaborative Learning Classroom
Faculty often struggle to identify good-quality open access journals in which to publish or for which to serve as an editor or reviewer. Many new open access journals exist now – some are good quality, some are exploitative, and some are in-between. This workshop will include a brief discussion of faculty concerns about identifying journals. The majority of the session will be devoted to identifying and demonstrating web-based indicator tools that can help faculty appraise a journal’s quality. Please register at http://cetl.uconn.edu/seminars
Open in order to…find quality teaching materials

 

Paywall: A Conversation about the Business of Scholarship with Filmmaker Jason Schmitt
Wednesday, October 25, 2:30-4:00pm

Konover Auditorium, Thomas J. Dodd Research Center
Help us celebrate Open Access Week by joining award-winning filmmaker Jason Schmitt as we screen and discuss footage from his in-progress documentary Paywall: The Business of Scholarship. Schmitt will be accompanied in the discussion by a panel of UConn faculty who will share their views on making the results of academic research freely accessible online.  Co-sponsored by UConn Humanities Institute
Open in order to…talk about the business of scholarship
Flyer in pdf
Release in pdf

Open Data In Action
Thursday, October 26, 11:00am-2:00pm

Hartford Public Library Atrium
Open Data In Action brings together a wide range of researchers to showcase how their work has benefited from openly and freely accessible data. Presenters from the public, private, and academic sectors will discuss how open data, ranging from historical documents to statistical analyses, is being used to create projects, change policies, and conduct research, and they will highlight the importance of open data in shaping the world around us.

Opening Remarks:
Tyler Kleykamp, Chief Data Officer, State of Connecticut

Presenters:

  • Steve Batt, UConn Hartford/CT State Data Center, Tableau Public and CT Census Data
  • Jason Cory Brunson, UConn Health Center, Modeling Incidence and Severity of Disease using Administrative Healthcare Data
  • Stephen Busemeyer, The Hartford Courant, Journalism and the Freedom of Information
  • Brett Flodine, GIS Project Leader, City of Hartford Open Data
  • Rachel Leventhal-Weiner, CT Data Collaborative, CT Data Academy
  • Anna Lindemann & Graham Stinnett, UConn Digital Media & Design / Archives, Teaching Motion Graphics with Human Rights Archives
  • Thomas Long, UConn Nursing, Dolan Collection Nursing History Blog
  • Tina Panik, Avon Public Library, World War II Newsletters from the CTDA
  • Jennifer Snow, UConn Library, Puerto Rico Citizenship Archives: Government Documents as Open Data
  • Rebecca Sterns, Korey Stringer Institute, Athlete Sudden Death Registry
  • Andrew Wolf, UConn Digital Media & Design, Omeka Everywhere

Co-sponsored by the Hartford Public Library
Open in order to…share data
Flyer in pdf

Introduction to Data Visualization using Tableau Public
Monday, October 30, 3:00-4:15pm
Homer Babbidge Library, Level 2 Electronic Classroom
Tableau Public is a free version of Tableau’s business intelligence / visual analytics software, which allows anyone to explore and present quantitative information in compelling, interactive visualizations. In this hands-on session you will work with different prepared datasets to create online interactive bar graphs, scatterplots, thematic maps, and much more, which can be linked to or embedded in blogs or websites. Please register at http://workshops.lib.uconn.edu/
Open in order to…visualize research

Digital Scholarship: Partnering for the Future
Joan K. Lippincott, Associate Executive Director, Coalition for Networked Information

Tuesday, November 7, 2:00-3:30pm
Homer Babbidge Library, Heritage Room
Researchers in many disciplines are finding that they can ask new kinds of research questions as a result of the rapid growth in the availability of digital content and tools. In addition, the outputs of their research can include many more types of products such as data visualizations, geo-referenced representations, text augmented with images and audio, exhibits on the web, and virtual reality environments. Developing these projects takes a team of people who have a variety of skill sets. These individuals may come from academic departments, the library, the information technology unit, and other specialties. Graduate and undergraduate students are also often part of teams working on digital scholarship projects. In this presentation, Lippincott will provide an update on developments in digital scholarship and will describe existing programs and projects, discuss the importance of physical space, and encourage the development of a campus digital scholarship community.  Co-sponsored by UConn Humanities Institute
Open in order to…develop digital scholarship

The original blog post is available here.

Digital Humanities Is for Humans, Not Just Humanists: Social Science and DH, by Kitty O’Riordan

In an article published online last month by The Guardian—“AI programs exhibit racial and gender biases, research reveals”—the computer scientists behind the technology were careful to emphasize that this reflects not prejudice on the part of artificial intelligence, but AI’s learning of our own prejudices as encoded within language.

Word embedding, “already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.”
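To make the quoted description a bit more concrete, here is a toy sketch in Python of the intuition behind word vectors: represent each word by counts of the words that appear near it, then compare words by the similarity of those count vectors. This is only an illustration of the idea, not the method used in the study; production systems such as word2vec or GloVe learn dense vectors from corpora of billions of words.

```python
# Toy sketch of the "word vector" idea quoted above: represent each word by
# counts of the words that appear near it, then compare words by the
# similarity of those count vectors.
from collections import Counter, defaultdict
from math import sqrt

corpus = ("the nurse helped the patient . the doctor helped the patient . "
          "the engineer built the bridge . the programmer built the software").split()

window = 2
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            vectors[word][corpus[j]] += 1   # count co-occurring words

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b) or 1)

print(cosine(vectors["nurse"], vectors["doctor"]))   # higher: similar contexts
print(cosine(vectors["nurse"], vectors["bridge"]))   # lower: different contexts
```

In this tiny corpus "nurse" and "doctor" occur in similar contexts, so their vectors come out more alike than those of "nurse" and "bridge"; at scale, the same mechanism absorbs the cultural associations, and biases, that the article describes.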

This tool’s ability to reproduce complex and nuanced word associations is probably not surprising to anyone familiar with digital humanities—and the fact that it returned associations that match pleasant words with whiteness and unpleasant ones with blackness, or that associate “woman” with the arts and interpretative disciplines and “man” with the STEM fields shouldn’t be surprising to anyone who has been paying attention. The distressing prospect that AI and other digital programs and platforms will only reinforce existing bias and inequality has certainly garnered the attention of scholars in media studies and DH, but one could argue that it has received equal attention in the social sciences.

As a graduate student in cultural anthropology drawn to DH, I sometimes find myself considering what exactly demarcates digital humanities from social science when apprehending these kinds of topics; somehow, with the addition of ‘digital’, the lines seem to have blurred. Both ultimately represent an investigation of how humans create meaning through or in relation to the digital universe, and the diverse methodologies at the disposal of each are increasingly overlapping. Below are just a few reasons, from my limited experience, as to why social scientists can benefit from involvement with digital humanities—and vice-versa.

1) Tools developed in DH can serve as methodologies in the social sciences.

Text mining, a process that derives patterns and trends from textual sources similar to the phenomenon described above, is particularly suited for social science analysis of primary sources. Programs like Voyant and Textalyser are free and easily available on the web, no downloads or installations required, and can pull data from PDFs, URLs, Microsoft Word documents, plain text, and more. Interview transcripts can also be analyzed using these programs, and the graphs and word clouds they create provide a unique way to “see” an argument, a theme, bias, etc.
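For readers curious about what such tools are doing under the hood, here is a minimal, hypothetical sketch of the word-frequency counting that underlies features like word clouds. The file name is made up for the example; tools such as Voyant also ingest PDFs and URLs and offer far richer visualization.

```python
# Minimal sketch of the word-frequency counting behind features like word
# clouds and frequency graphs. The file name is hypothetical.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "that", "it"}

with open("interview_transcript.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

freq = Counter(w for w in words if w not in STOPWORDS)
for word, count in freq.most_common(20):   # the data a word cloud is built from
    print(f"{word:15} {count}")
```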

Platforms like Omeka and Scalar can provide an opportunity not only to display ethnographic information for visual anthropologists, but can give powerful form to arguments in a way that textual forms cannot (see, for example, Performing Archive: Curtis + “the vanishing race”, which turns Edward S. Curtis’ famous photos of Native Americans on their heads by visualizing the categories instead of the categorized).

2) Both fields are tackling the same issues.

Miriam Posner writes that she “would like us to start understanding markers like gender and race not as givens but as constructions…I want us to stop acting as though the data models for identity are containers to be filled in order to produce meaning and recognize instead that these structures themselves constitute data.” Drucker and Svensson echo that creating data structures that expose inequality or incorporate diversity is not as straightforward as it seems, given that “the organization of the fields and tag sets already prescribes what can be included and how these inclusions are put into signifying relations with each other” (10). Anthropologist Sally Engle Merry, in The Seductions of Quantification, expounds on this idea in the realm of Human Rights, showing that indicators can obscure as much as, or more than, they reveal. Alliances between DHers, as builders and analyzers of digital tools and platforms, and social scientists, as suppliers of information on how these tools play out on the ground in various cultural contexts, benefit both fields.

3) Emerging fields in the social sciences can learn a lot from established DH communities and scholarship.


Digital anthropology, digital sociology, cyberanthropology, digital ethnography, and virtual anthropology are all sub-disciplines emerging from the social sciences with foci and methods that often overlap with those of digital humanities. Studies of Second Life, World of Warcraft, or hacking; the ways diasporic communities use social media platforms to maintain relationships; or projects that focus on digitizing indigenous languages all have counterparts within digital humanities. Theoretically, there is much to compare: Richard Grusin’s work on mediation intersects with anthropologists leading the “ontological turn,” like Philippe Descola and Eduardo Viveiros de Castro; Florian Cramer’s work on the ‘post-digital’ pairs interestingly with Shannon Lee Dawdy’s concept of “clockpunk” anthropology, influenced by thinkers both disciplines share, like Walter Benjamin and Bruno Latour.

Though I am still relatively new to DH, one theme I find repeated often, and which represents much of the promise and the excitement of digital humanities for me, is the push for collaboration and the breaking down of disciplinary boundaries. Technologies like AI remind us that we all share the collective responsibility to build digital worlds that don’t simply reflect the restrictions and biases of our textual and social worlds.

 

Kitty O’Riordan is a doctoral student in cultural anthropology at the University of Connecticut. Her research interests include anthropology of media and public discourse, comparative science studies, and contemporary indigenous issues in New England. You can reach her at caitlin.o’riordan@uconn.edu.

DH and Narrative, DH as Narrative, DH-Narrative, by Elisabeth Buzay

While both I—and many others—would argue that those who work in DH agree that they do not agree on what DH means, as I have encountered more and more digital tools and projects, I have begun to think of DH work in a provocative way: DH work should be considered a form of narrative-making or storytelling. For fields such as digital storytelling or tools such as story mapping, this argument may not be that surprising. But what about other types of DH projects and tools? If we think of archival or curated sites, such as those created with Omeka, or book or network conglomerations, such as those made in Scalar, I propose that these forays are equally forms of narrative or story: we pick and choose what to include and exclude, we form paths and groupings and connections to guide or suggest methods of understanding; in other words, we give shape to a narrative. Here I will advance an initial iteration of this argument, which, I believe, ultimately provides another perspective on how DH is truly a part of the humanities.

 

DH and Narrative

If we take Hayden White’s description of narrative, in conversation with Barthes, in The Content of the Form: Narrative Discourse and Historical Representation, which argues that “[a]rising, as Barthes says, between our experience of the world and our efforts to describe that experience in language, narrative ‘ceaselessly substitutes meaning for the straightforward copy of the events recounted’” (1–2), as one of the basic definitions of this concept, we can see how this term could easily be used in reference to various methodologies and tools used in DH. More particularly, however, we must expand the definition by including not just language, but also image and sound. It is worth a look, for instance, at DH projects that create digital archives, such as The Digital Public Library of America or the Bibliothèque nationale de France’s Gallica, in which digital tools are used to create digitized versions of (an) actual archive(s). Or other such projects, like The Internet Archive, The Sonic Dictionary, or The Story of the Beautiful, in which a digital archive is created. Or we might think of digital editions of texts, such as the Folger Digital Texts or digitized resources such as The ARTFL Project. Or, in a slightly different direction, there are tools one can use to compare versions of texts, like Juxta or JuxtaCommons, or to annotate a text (collaboratively or not), like Annotation Studio. In these varying cases, the digital approach and tools used are the methods through which meaning is provided, whether that meaning be the coherency of an archive, the evolution or development of a text, or the preservation of narratives that themselves might otherwise be lost.

 

DH as Narrative

A DH approach is not, of course, limited to archival or editorial projects. In many cases, DH projects are clearly narrative in form. The case of digital storytelling is, perhaps, the most obvious such example. StoryCenter, previously known as the Center for Digital Storytelling, is a well-known entity whose basic elements of digital storytelling are often cited. And digital storytelling is also being used in a slightly different manner by teachers and students in the field of education in order to teach and learn about topics beyond those of telling personal stories, as can be seen on the University of Houston’s Educational Uses of Digital Storytelling site. Digital storytelling approaches have been expanded in other directions as well, for instance in

  • tying stories to location, with the use of tools like StoryMapJS, Esri Story Maps, or Odyssey, in which specific events and places are linked,
  • tying stories to timing, with the use of tools like TimeLineJS, TimeGlider, or Timetoast, in which specific events and times are linked,
  • or tying stories to time and location, with the use of tools like Neatline or TimeMapper, in which specific events, places, and times are linked so that a user can follow a story geographically, chronologically, or both.

In all of these cases, the digital approach is one that is explicitly used to shape a narrative or story. In other words, here DH is again a form of narrative or narrative-making.

 

DH-Narrative

Big data projects, such as those of the Stanford Literary Lab, or approaches such as that of Matthew L. Jockers in his Macroanalysis: Digital Methods and Literary History, may seem to present an exception to my argument in comparison to the other DH projects and approaches mentioned thus far; nonetheless, I suggest that even projects or approaches such as these create narratives or stories, in that they provide meaning to observations, calculations, or data that would otherwise not be comprehensible, given their size. How could they not?

This brief overview brings us to a final point to ponder: in their Digital_Humanities, Anne Burdick, Johanna Drucker, Peter Lunenfeld, Todd Presner, and Jeffrey Schnapp argue that the design of DH tools and projects is itself an essential aspect of the arguments they create:

The parsing of the cultural record in terms of questions of authenticity, origin, transmission, or production is one of the foundation stones of humanistic scholarship upon which all other interpretive work depends. But editing is also productive and generative, and it is the suite of rhetorical devices that make a work. Editing is the creative, imaginative activity of making, and as such, design can be also seen as a kind of editing: It is the means by which an argument takes shape and is given form. (18)

In other words, a narrative-making approach is literally embedded in form, in design. Like these authors, I wonder whether this perspective cannot be extended. They write:

DESIGN EMERGES AS THE NEW FOUNDATION FOR THE CONCEPTUALIZATION AND PRODUCTION OF KNOWLEDGE.

DESIGN METHODS INFORM ALL ASPECTS OF HUMANISTIC PRACTICE, JUST AS RHETORIC ONCE SERVED AS BOTH ITS GLUE AND COMPOSITIONAL TECHNIQUE.

CONTEMPORARY ELOQUENCE, POWER, AND PERSUASION MERGE TRADITIONAL VERBAL AND ARGUMENTATIVE SKILLS WITH THE PRACTICE OF MULTIMEDIA LITERACY SHAPED BY AN UNDERSTANDING OF THE PRINCIPLE OF DESIGN. (117–118)

If we apply these points to the entire field of DH, they offer significant food for thought: if design is the foundation of DH, then isn’t the result of this design necessarily a narrative or a story? And might not this be one further aspect that confirms that DH is indeed a part of the traditional humanities?

These questions invite others: are DH narratives and their design different or new or innovative in comparison to traditional narratives, and if so how? What can DH narratives tell us about ourselves and our world? To circle back to White and Barthes’ view of narrative, if we accept that DH is narrative, what new meanings can be distilled from the events DH recounts?

 

Elisabeth Herbst Buzay is a doctoral student in French and Francophone Studies and in the Medieval Studies Program at the University of Connecticut. Her research interests include medieval romances, contemporary fantasy, digital humanities, video games, the intersection of text and images, and translation. You can contact her at elisabeth.buzay@uconn.edu.

Visualizing English Print at the Folger, by Gregory Kneidel (cross-post with Ruff Draughts)

In December I spent two days at the Folger’s Visualizing English Print seminar. It brought together people from the Folger, the University of Wisconsin, and the University of Strathclyde in Glasgow; about half of us were literature people, half computer science; a third of us were tenure-track faculty, a third grad students, and a third in other types of research positions (e.g., librarians, DH directors, etc.).

Over those two days, we worked our way through a set of custom data visualization tools that can be found here. Before we could visualize, we needed and were given data: a huge corpus of nearly 33,000 EEBO-TCP-derived simple text files that had been cleaned up and spit through a regularizing procedure so that they would be machine-readable (with loss, obviously, of lots of cool, irregular features—the grad students who wanted to do big data studies of prosody were bummed to learn that all contractions and elisions had been scrubbed out). They also gave us a few smaller, curated corpora of texts, two specifically of dramatic texts, two others of scientific texts. If anyone wants a copy of this data, I’d be happy to hook you up.

From there, we did (or were shown) a lot of data visualization. Some of this was based on word-frequency counts, but the real novel thing was using a dictionary of sorts called DocuScope—basically a program that sorts 40 million different linguistic patterns into one of about 100 specific rhetorical/verbal categories (DocuScope was developed at CMU as a rhet/comp tool—turned out not to be good at teaching rhet/comp, but it is good at things like picking stocks). DocuScope might make a hash of some words or phrases (and you can revise or modify it; Michael Witmore tailored a DocuScope dictionary to early modern English), but it does so consistently and you’re counting on the law of averages to wash everything out.
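As a rough, invented illustration (not DocuScope’s actual implementation, whose dictionary is vastly larger and pattern-based rather than single-word), dictionary tagging of this kind boils down to mapping words to categories and reporting each category’s share of a text. The category word lists below are made up for the example.

```python
# Much-simplified illustration of dictionary-based tagging in the spirit of
# DocuScope: map words to rhetorical/verbal categories and report each
# category's share of a text. These category lists are invented.
from collections import Counter

CATEGORIES = {
    "anger":        {"rage", "fury", "wrath", "angry"},
    "first_person": {"i", "me", "my", "mine", "we", "our"},
    "sense_object": {"light", "colour", "sound", "stone", "gold"},
}

def tag_proportions(text):
    """Return each category's share of the tokens in `text`."""
    words = text.lower().split()
    counts = Counter()
    for w in words:
        for category, vocab in CATEGORIES.items():
            if w in vocab:
                counts[category] += 1
    total = len(words) or 1
    return {c: counts[c] / total for c in CATEGORIES}

print(tag_proportions("I saw the light and my fury turned to gold"))
```

Working with proportions rather than raw counts is what lets texts of very different lengths be compared, which matters for the corpus-level questions described next.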

After drinking the DocuScope Kool-Aid, we learned how to visualize the results of DocuScoped data analysis. Again, there were a few other cool features and possibilities, and I only comprehended the tip of the data-analysis iceberg, but basically this involved one of two things.

  • Using something called the MetaData Builder, we derived DocuScope data for individual texts or groups of texts within a large corpus of texts. So, for example, we could find out which of the approximately 500 plays in our subcorpus of dramatic texts is the angriest (i.e., has the greatest proportion of words/phrases DocuScope tags as relating to anger). Or, in an example we discussed at length, we could ask who used more first-person references within our science subcorpus, Boyle or Hobbes (i.e., whose texts had the greater proportion of words/phrases DocuScope tags as first-person references). The CS people were quite skilled at slicing, dicing, and graphing all this data in cool combinations. Here are some examples. A more polished essay using this kind of data analysis is here. So this is the distribution of DocuScope traits in texts in large and small corpora; a rough code sketch of this kind of comparison follows the list below.
  • We visualized the distribution of DocuScope tags within a single text using something called VEP Slim TV. Using Slim TV, you can track the rise and fall of each trait within a given text AND (and this is the key part) link directly to the text itself. So, for example, this is an image of Margaret Cavendish’s Blazing-World (1667).
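Here is a hedged sketch of the two analyses above, reusing the toy tag_proportions() and CATEGORIES from the earlier example: first, ranking a corpus of texts by one category’s share (the “angriest play” question); second, tracing a category’s rise and fall across a single text with a rolling window, roughly the kind of line Slim TV draws. Titles, texts, and category names are placeholders, not the actual VEP corpora or DocuScope tags.

```python
# Hedged sketch of the two analyses above, reusing the toy tag_proportions()
# and CATEGORIES from the earlier example. Inputs are placeholders.

def angriest(texts_by_title, category="anger"):
    """Return the title whose text has the highest share of `category` words."""
    return max(texts_by_title,
               key=lambda t: tag_proportions(texts_by_title[t])[category])

def rolling_density(text, category, window=200):
    """Share of `category` words in successive windows of `window` tokens:
    a crude, static version of the rise-and-fall lines Slim TV draws."""
    words = text.lower().split()
    vocab = CATEGORIES[category]
    step = window // 2
    return [
        sum(w in vocab for w in words[i:i + window]) / window
        for i in range(0, max(1, len(words) - window + 1), step)
    ]
```

Plotting rolling_density() for two categories against position in the text gives roughly the paired traces described in the screenshots below.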


Here, the blue line in the right frame charts lexical patterns that DocuScope tags as “Sense Objects.”

The red line charts lexical patterns that DocuScope tags as “Positive Standards.” You’ll see there is lots of blue (compared to red) at the beginning of Cavendish’s novel (when the Lady is interviewing various Bird-Men and Bear-Men about their scientific experiments), but there is one stretch in the novel where there is more red than blue (when the Lady is conversing with Immaterial Spirits about the traits of nobility). A really cool thing about Slim TV that could make it useful in the classroom: you can move through and link directly to the text itself (that horizontal yellow bar on the right shows which section of the text is currently being displayed).

So: 1) regularized EEBO-TCP texts are turned into spreadsheets using 2) the DocuScope dictionary; then that data is used to visualize either 3) individual texts as data points within a larger corpus of texts or 4) the distribution of DocuScope tags within a single text.

Again, the seminar leaders showed some nice examples of where this kind of research can lead and lots of cool looking graphs. Ultimately, some of the findings were, if not underwhelming, at least just whelming: we had fun discussing the finding that, relatively speaking, Shakespeare’s comedies tend to use “a” and his tragedies tend to use “the.” Do we want to live in a world where that is interesting? As we experimented with the tools they gave us, at times it felt a little like playing with a Magic 8 Ball: no matter what texts you fed it, DocuScope would give you lots of possible answers, but you just couldn’t tell if the original question was important or figure out if the answers had anything to do with the question. So formulating good research questions remains, to no one’s surprise, the real trick.

A few other key takeaways for me:

1) Learn to love csv files or, better, learn to love someone from the CS world who digs graphing software;

2) Curated data corpora might be the new graduate/honors thesis. Create a corpus (e.g., sermons, epics, travel narratives, court reports, romances), add some good metadata, and you’ve got yourself a lasting contribution to knowledge (again, the examples here are the drama corpora or the science corpora). A few weeks ago, Alan Liu told me that he requires his dissertation advisees to have at least one chapter that gets off the printed page and has some kind of digital component. A curated data collection, which could be spun through DocuScope or any other kind of textual analysis program, could be just that kind of thing.

3) For classroom use, the coolest thing was VEP Slim TV, which tracks the prominence of certain verbal/rhetorical features within a specific text and links directly to the text under consideration. It’s colorful and customizable, something students might find enjoyable.

All this stuff is publicly available as well. I’d be happy to demo what we did (or what I can do of what we did) to anyone who is interested.

Gregory Kneidel is Associate Professor of English at the Hartford Campus. He specializes in Renaissance poetry and prose, law and literature, and textual editing. He can be reached at gregory.kneidel@uconn.edu.

Jennifer Snow, Digital Scholarship Librarian

1. What initially intrigued you about research/teaching in digital humanities or media studies?

I began my work at UConn as the History Librarian six years ago, and I have slowly grown my skills and interests from there.  I have a Master’s in History, although I was trained in the traditional research methodologies.  Digital humanities didn’t really feature in my education.  However, as I worked with scholars and colleagues on various projects, I saw key ways that the Library could be more involved in digital humanities.  As research and scholarship change, the Library must adapt as well to remain relevant.  My skills and knowledge in this area are mostly self-taught, and I enjoy teaching others and seeing students become excited over the research possibilities opened up by a digital approach. 

2. Has entering the DHMS realm changed your approach to research and teaching in general? If so, how?

Absolutely!  I find my research and teaching to be much more collaborative now.  I’ve learned as much from students and scholars as they have from me.  We each bring our own expertise to the table, whether it’s a technological skill or subject knowledge.  I also actively seek out from others what they would like to learn, so I can tailor workshops and research consultations to their specific needs.  Whenever I work on a new project, I immediately think about who else might be interested and have something to contribute.  It’s a very different experience from individual work on an article for publication.  The projects I work on are multidisciplinary, and I have grown as a researcher from these collaborative opportunities.

3. You have three (commitment-free) wishes to receive support for your research/teaching in DH or media studies: what are they?

First, I would love to have more staff in the library dedicated to DH. Web developers, graphic designers, coders! We are always trying to do more with less. It would be nice to never worry about finding time to work on a project because there are plenty of people to work on it. Second, the opportunity to offer student internships or assistantships would be great. I think this will be forthcoming in the future, though, so I am very much looking forward to that. It would be a wonderful opportunity for students to learn more about DHMS and to work on interesting projects. And third, more time is always welcome! There are so many fantastic projects out there that I want to be a part of, but unfortunately, there are only so many hours in a day, and I have other responsibilities.

4. First struggles and successes: do you have any best-practice advice?

My advice is really to just dive in!  If there’s something you’re interested in learning about, whether it’s a new tool, platform, or something else, don’t hesitate to start working with it.  Try and find other people who have a similar interest, and you can help each other.  Look for workshops, seminars, and meet-and-greets related to digital scholarship.  DH is collaborative by nature, so networking is hugely important.  There will definitely be struggles.  You may not master a particular tool as quickly or easily as you had hoped.  You will have other things competing for your time.  My advice is to not get discouraged and keep plugging away.  Don’t be afraid to ask for help when you need it, whether from the library or from your own departments. 

 5. How would you like to challenge yourself in DH or media studies? Or what is a project you most seek to realize? 

As the Digital Scholarship Librarian, I am tasked with working beyond the humanities and branching out into the social sciences and sciences.  This is certainly a challenge for me as my background is squarely in the humanities.  However, I am working on developing skills in areas such as data visualization that can be of benefit to people in the sciences.  I would absolutely love to work with a researcher outside of the humanities who is new to digital scholarship.  We can educate each other and become more well-rounded researchers because of our collaboration.  I somewhat actively avoided the sciences in my academic career (to this day, I have never set foot inside the science buildings at my alma mater!) so this is definitely a new area for me.  The silos between the disciplines have begun to break down as research becomes more multidisciplinary, and I’m very excited to be part of that.

Jennifer Snow has a BA in History from Vassar College and an MA in History and a Master of Library Science from Florida State University. She currently serves as the Digital Scholarship/Humanities and Social Sciences Librarian for UConn. Her academic background is in early modern French history, and she has worked on a number of digital scholarship projects on a variety of subjects. She has published articles and a book chapter on topics related to digital scholarship and critical pedagogy.