Find the full announcement below:
eTRAP (Electronic Text Reuse Acquisition Project) is an Early Career Research Group funded by the German Federal Ministry of Education and Research (BMBF) and based at the Göttingen Centre for Digital Humanities at the University of Göttingen. The research group, which started on 1st March 2015, was awarded €1.6 million and runs for four years. As the name suggests, this interdisciplinary team studies the linguistic and literary phenomenon of text reuse, with a particular focus on historical languages. More specifically, we look at how ancient authors copied, alluded to, paraphrased and translated each other as they spread their knowledge in writing. This early career research group seeks to provide a basic understanding of (historical) text reuse methodology (which is distinct from plagiarism), and so to study what defines text reuse, why some people reuse information, how text is reused and how this practice has changed over history.
The Hackathon week is over and, looking back on it, the eTRAP team agrees… it was a hit!
23 participants from 15 different institutions and 8 countries hacked away at research questions on their laptops, all working towards the same goal, albeit with different datasets. And the goal was achieved. Our hackers were humanists with a desire to find text reuse across different works of the same author, or across several authors from different times and locations. They brought data in English, German, Latin, Sanskrit, Hebrew and even Arabic and Estonian, spanning many genres – from folkloristic poetry to narratives and letters, from lists of citations to biblical texts. From day one they were led by computer scientist and leader of eTRAP, Marco Büchler, through each of the six steps required by the TRACER tool (1) to scan the texts in search of reuse. Using the command line like pros, hackers preprocessed their data and set the parameters needed to guarantee the most informative outcome. The week culminated with a tutorial on TRAViz (2), an open source variant graph visualisation tool created and presented by Stefan Jänicke (3), which allows users to create a swish visualisation from the results yielded by the TRACER tool.
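To give a feel for what "detecting reuse" means, here is a deliberately tiny sketch of one common building block of such pipelines: word n-gram overlap between two passages. This is not TRACER's actual algorithm (TRACER's six-step pipeline is far more sophisticated, with configurable preprocessing, featuring and scoring), just a minimal, hypothetical illustration:

```python
# Toy illustration of a common building block of text reuse detection:
# word n-gram overlap between two passages. NOT the TRACER algorithm,
# just a minimal sketch of the underlying idea.

def ngrams(text, n=3):
    """Return the set of word n-grams in a lower-cased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(a, b, n=3):
    """Jaccard overlap of word n-grams between two passages (0.0–1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Invented example passages for illustration:
source = "in the beginning god created the heaven and the earth"
reuse = "when in the beginning god created the heaven and the sea"
unrelated = "the quick brown fox jumps over the lazy dog"

print(reuse_score(source, reuse))      # high: most trigrams shared
print(reuse_score(source, unrelated))  # 0.0: no trigrams shared
```

In practice the "set the parameters" step mentioned above is about exactly this kind of choice – the size of the comparison unit, how aggressively to normalise word forms (crucial for highly inflected historical languages), and the threshold above which an overlap counts as reuse rather than coincidence.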
We present an overview of the last ten years of research on visualizations that support close and distant reading of textual data in the digital humanities. We look at various works published within both the visualization and digital humanities communities. We provide a taxonomy of applied methods for close and distant reading, and illustrate approaches that combine both reading techniques to provide a multifaceted view of the data. Furthermore, we list toolkits and potentially beneficial visualization approaches for research in the digital humanities. Finally, we summarize collaboration experiences when developing visualizations for close and distant reading, and give an outlook on future challenges in that research area.
As the first series of the Göttingen Dialog in Digital Humanities (GDDH) has just come to a close (sob!), it’s time for us to take a few minutes to reflect on its outcome and on the things we’d like to bring to the next series.
GDDH turned out to be a great success! Not only did we accept 14 full papers from 11 institutions in 5 countries, but we have also secured a deal with Digital Humanities Quarterly to publish each contribution in a special issue. The series touched upon numerous different fields, joined by the thread that is Digital Humanities: Digital Classics, Topic Modelling, Text Visualisation, Digital Editions, 3D Motion Capture, Social Networks, Television Media, Web History, Digital Collections, Geographic Information Systems and Text Mining… (*catches breath*) WOW! We’re also currently busy evaluating the best paper and presentation – the winner, who will receive a 500€ cash prize, will be announced very soon.
Greta’s book review of Digital Critical Editions by Daniel Apollon, Claire Bélisle and Philippe Régnier (2014) has just been published in Oxford University Press’ journal Digital Scholarship in the Humanities! You can read the review in advance access here.
Greta’s latest co-authored article “The Linked Fragment: TEI and the Encoding of Text Reuses of Lost Authors” has just been published by the Journal of the Text Encoding Initiative! Here is the abstract:
This paper presents a joint project of the Humboldt Chair of Digital Humanities at the University of Leipzig, the Perseus Digital Library at Tufts University, and the Harvard Center for Hellenic Studies to produce a new open series of Greek and Latin fragmentary authors. Such authors are lost and their works are preserved only thanks to quotations and text reuses in later texts. The project is undertaking two tasks: (1) the digitization of paper editions of fragmentary works with links to the source texts from which the fragments have been extracted; (2) the production of born-digital editions of fragmentary works. The ultimate goals are the creation of open, linked, machine-actionable texts for the study and advancement of the field of Classical textual fragmentary heritage and the development of a collaborative environment for crowdsourced annotations. These goals are being achieved by implementing the Perseids Platform and by encoding the Fragmenta Historicorum Graecorum, one of the most important and comprehensive collections of fragmentary authors.
The article can be accessed and downloaded for free here.
Hosted by the Göttingen Centre for Digital Humanities (GCDH), Georg-August-Universität Göttingen, Germany
Organised by: Greta Franzini and Maria Moritz
The Göttingen Centre for Digital Humanities will host a Hackathon targeted at students and researchers with a humanities background who wish to improve their computer skills by working with their own dataset. Rather than teaching everything there is to know about algorithms, the Hackathon will assist participants with their specific data-related problems, so that they can take away the knowledge needed to tackle the issue(s) at hand. The Hackathon focuses on automatic text re-use detection and aims to engage participants in intensive collaboration. Participants will be introduced to technologies representing the state of the art in the field and shown the potential of text re-use detection. Participants will also be able to equip themselves with the necessary knowledge to make sense of the output generated by algorithms detecting text re-use, and will gain an understanding of which algorithms best fit certain types of textual data. Finally, participants will be introduced to some text re-use visualisations.
Click here for further information on text re-use.