JDMDH Call for Contribution: Special Issue on Computer-Aided Processing of Intertextuality in Ancient Languages

“Europe’s future is digital”. This was the headline of a speech given at the Hannover exhibition in April 2015 by Günther Oettinger, EU Commissioner for Digital Economy and Society. While businesses and industries have already made major advances in digital ecosystems, the digital transformation of texts stretching over a period of more than two millennia is far from complete. On the one hand, mass digitisation leads to an “information overload” of digitally available data; on the other, the “information poverty” embodied by the loss of books and the fragmentary state of ancient texts yields an incomplete and biased view of our past. In a digital ecosystem, this coexistence of data overload and data poverty adds considerable complexity to scholarly research.

Continue reading

DH Estonia 2015: attend the eTRAP Workshop!

We will be giving a workshop on Text Reuse at the
Translingual and Transcultural Digital Humanities Conference in Estonia, 19–21 October 2015!
Day of workshop: Wednesday 21 October 2015

Find the full announcement below:
—————————————————————————————————————————————–

eTRAP (Electronic Text Reuse Acquisition Project) is an Early Career Research Group funded by the German Federal Ministry of Education and Research (BMBF) and based at the Göttingen Centre for Digital Humanities at the University of Göttingen. The research group, which started on 1 March 2015, was awarded €1.6 million and runs for four years. As the name suggests, this interdisciplinary team studies the linguistic and literary phenomenon of text reuse, with a particular focus on historical languages. More specifically, we look at how ancient authors copied, alluded to, paraphrased and translated each other as they spread their knowledge in writing. This early career research group seeks to provide a basic understanding of (historical) text reuse methodology, as distinct from plagiarism, and so to study what defines text reuse, why some people reuse information, how text is reused, and how this practice has changed over history.

Continue reading

REFLECTING on the recent Digital Humanities Hackathon on Text Re-Use – “Don’t leave your data problems at home!”

The Hackathon week is over and, looking back on it, the eTRAP team agrees… it was a hit!
23 participants from 15 different institutions and 8 countries hacked away at research questions on their laptops to achieve the same goal, albeit with different datasets. And the goal was achieved. Our hackers were humanists with a desire to find text reuse across different works of the same author, or across several authors from different times and locations. They brought data in English, German, Latin, Sanskrit, Hebrew and even Arabic and Estonian, spanning many genres: from folkloristic poetry to narratives and letters, from lists of citations to biblical texts. From day one they were led by computer scientist and eTRAP leader Marco Büchler through each of the six steps required by the TRACER tool (1) to scan the texts in search of reuse. Using the command line like pros, our hackers preprocessed their data and set the parameters they needed to guarantee the most informative outcome. The week culminated with a tutorial on TRAViz (2), an open-source variant graph visualisation tool created and presented by Stefan Jänicke (3), which allows users to create a swish visualisation of the results yielded by TRACER.

Continue reading

Proceedings: On Close and Distant Reading in DH

Greta’s latest conference paper, co-authored with Stefan Jänicke, Muhammad Faisal Cheema and Gerik Scheuermann, “On Close and Distant Reading in Digital Humanities: A Survey and Future Challenges” is out!

Here’s the abstract:

We present an overview of the last ten years of research on visualizations that support close and distant reading of textual data in the digital humanities. We look at various works published within both the visualization and digital humanities communities. We provide a taxonomy of applied methods for close and distant reading, and illustrate approaches that combine both reading techniques to provide a multifaceted view of the data. Furthermore, we list toolkits and potentially beneficial visualization approaches for research in the digital humanities. Finally, we summarize collaboration experiences when developing visualizations for close and distant reading, and give an outlook on future challenges in that research area.

You can download a pre-print here.

GDDH 2015: Conclusions

As the first series of the Göttingen Dialog in Digital Humanities (GDDH) has just come to a close (sob!), it’s time for us to take a few minutes to reflect on its outcome and on the things we’d like to bring to the next series.

GDDH turned out to be a great success! Not only did we accept 14 full papers from 11 institutions in 5 countries, but we also secured a deal with Digital Humanities Quarterly to publish each contribution in a special issue. The series touched upon numerous different fields, joined by the common thread that is Digital Humanities: Digital Classics, Topic Modelling, Text Visualisation, Digital Editions, 3D Motion Capture, Social Networks, Television Media, Web History, Digital Collections, Geographic Information Systems and Text Mining… (*catches breath*) WOW! We’re also currently busy evaluating the best paper and presentation; the winner, who will receive a €500 cash prize, will be announced very soon.

GDDH 2015 speakers: dots correspond to the speakers’ affiliations; dot colour represents gender. [Click the image to view the interactive version, where you can find more detailed information.]

Continue reading

Article: TEI and the Encoding of Text Reuses of Lost Authors

Greta’s latest co-authored article “The Linked Fragment: TEI and the Encoding of Text Reuses of Lost Authors” has just been published by the Journal of the Text Encoding Initiative! Here is the abstract:

This paper presents a joint project of the Humboldt Chair of Digital Humanities at the University of Leipzig, the Perseus Digital Library at Tufts University, and the Harvard Center for Hellenic Studies to produce a new open series of Greek and Latin fragmentary authors. Such authors are lost and their works are preserved only thanks to quotations and text reuses in later texts. The project is undertaking two tasks: (1) the digitization of paper editions of fragmentary works with links to the source texts from which the fragments have been extracted; (2) the production of born-digital editions of fragmentary works. The ultimate goals are the creation of open, linked, machine-actionable texts for the study and advancement of the field of Classical textual fragmentary heritage and the development of a collaborative environment for crowdsourced annotations. These goals are being achieved by implementing the Perseids Platform and by encoding the Fragmenta Historicorum Graecorum, one of the most important and comprehensive collections of fragmentary authors.

The article can be accessed and downloaded for free here.