The latest Digital Humanities monograph published by Open Book Publishers, Digital Scholarly Editing: Theories and Practices, includes a chapter written by Greta together with Melissa Terras and Simon Mahony from the UCL Centre for Digital Humanities. The chapter is entitled A Catalogue of Digital Editions and reports on the ongoing project of the same name, which collects and analyses digital editions in an attempt to identify best practice in digital scholarly editing.
As an Open Access publication, you can download the entire volume for free!
Greta will be presenting the Catalogue of Digital Editions project in the form of a poster at the upcoming Text Encoding Initiative conference in Vienna.
Greta’s latest article “Visual Text Analysis in Digital Humanities”, co-authored with Stefan Jänicke, Muhammad Faisal Cheema and Gerik Scheuermann, has just been published by the Computer Graphics Forum! Here is the abstract:
In 2005, Franco Moretti introduced Distant Reading to analyze entire literary text collections. This was a rather revolutionary idea compared to the traditional Close Reading, which focuses on the thorough interpretation of an individual work. Both reading techniques are the primary means of Visual Text Analysis. We present an overview of the research conducted since 2005 on supporting text analysis tasks with close and distant reading visualizations in the digital humanities. We classify the observed papers according to a taxonomy of text analysis tasks, categorize applied close and distant reading techniques to support the investigation of these tasks, and illustrate approaches that combine both reading techniques in order to provide a multi-faceted view of the textual data. In addition, we take a look at the text sources used and at the typical data transformation steps required for the proposed visualizations. Finally, we summarize collaboration experiences when developing visualizations for close and distant reading, and we give an outlook on future challenges in that research area.
eTRAP’s article “Sentence Shortening via Morpho-Syntactic Annotated Data in Historical Language Learning”, authored by Maria Moritz, Barbara Pavlek, Greta Franzini and Gregory Crane, is now published in the current issue of the ACM Journal on Computing and Cultural Heritage (JOCCH). The work was supported by the Federal Ministry of Education and Research (BMBF) and the European Social Fund (ESF). Here is the abstract:
We present an approach to shorten Ancient Greek sentences by using morpho-syntactic information attached to each word in a sentence. This work underpins the content of our eLearning application, AncientGeek, whose unique teaching technique draws from primary Greek sources. By applying a technique that skips the clausal dependents of a main verb, we reached a well-formedness rate of 89% of the sentences.
Greta’s latest article “TRAViz: A Visualization for Variant Graphs”, co-authored with Stefan Jänicke, Annette Geßner, Melissa Terras, Simon Mahony and Gerik Scheuermann, has been published by the Digital Scholarship in the Humanities journal! Here is the abstract:
This article describes the development and application of an innovative tool, Text Re-use Alignment Visualization (TRAViz), whose aim is to visualize variation between editions of both historical and modern texts. Reading different editions of a text empowers research in literary studies and linguistics, where one can study a text’s reception or follow the development of its language over time. One of the purposes of a text edition is to trace or reconstruct a possible archetype or something that might be considered to be an original version of the text in order to better understand its evolution over time. To do so, the textual scholar examines and records the similarities and the differences between a number of exemplars in what is known as a ‘critical apparatus’. The result of this variant analysis can be visually represented as a ‘Variant Graph’, where the relationships between these exemplars can be more easily studied. Variant Graphs can be, in turn, visualized in order to facilitate reading and interaction with the source data. Borrowing from existing digital tools, TRAViz assists the scholar in the collation process by specifically focusing on design and user engagement, concurrently seeking to simplify interaction as a means of encouraging humanists to adopt the tool. The article will describe the needs and rationale behind the creation of TRAViz by exploring existing research, describing its functionality through examples, and by finally discussing how its application can influence future development of this tool in particular and of the field in general.
“Europe’s future is digital”. This was the headline of a speech given at the Hannover exhibition in April 2015 by Günther Oettinger, EU-Commissioner for Digital Economy and Society. While businesses and industries have already made major advances in digital ecosystems, the digital transformation of texts stretching over a period of more than two millennia is far from complete. On the one hand, mass digitisation leads to an “information overload” of digitally available data; on the other, the “information poverty” embodied by the loss of books and the fragmentary state of ancient texts forms an incomplete and biased view of our past. In a digital ecosystem, this coexistence of data overload and poverty adds considerable complexity to scholarly research.
Greta’s latest conference paper, “On Close and Distant Reading in Digital Humanities: A Survey and Future Challenges”, co-authored with Stefan Jänicke, Muhammad Faisal Cheema and Gerik Scheuermann, is out!
Here’s the abstract:
We present an overview of the last ten years of research on visualizations that support close and distant reading of textual data in the digital humanities. We look at various works published within both the visualization and digital humanities communities. We provide a taxonomy of applied methods for close and distant reading, and illustrate approaches that combine both reading techniques to provide a multifaceted view of the data. Furthermore, we list toolkits and potentially beneficial visualization approaches for research in the digital humanities. Finally, we summarize collaboration experiences when developing visualizations for close and distant reading, and give an outlook on future challenges in that research area.
You can download a pre-print here.
Greta’s book review of Digital Critical Editions by Daniel Apollon, Claire Bélisle and Philippe Régnier (2014) has just been published in Oxford University Press’ journal Digital Scholarship in the Humanities! You can read the review in advance access here.
Greta’s latest co-authored article “The Linked Fragment: TEI and the Encoding of Text Reuses of Lost Authors” has just been published by the Journal of the Text Encoding Initiative! Here is the abstract:
This paper presents a joint project of the Humboldt Chair of Digital Humanities at the University of Leipzig, the Perseus Digital Library at Tufts University, and the Harvard Center for Hellenic Studies to produce a new open series of Greek and Latin fragmentary authors. Such authors are lost and their works are preserved only thanks to quotations and text reuses in later texts. The project is undertaking two tasks: (1) the digitization of paper editions of fragmentary works with links to the source texts from which the fragments have been extracted; (2) the production of born-digital editions of fragmentary works. The ultimate goals are the creation of open, linked, machine-actionable texts for the study and advancement of the field of Classical textual fragmentary heritage and the development of a collaborative environment for crowdsourced annotations. These goals are being achieved by implementing the Perseids Platform and by encoding the Fragmenta Historicorum Graecorum, one of the most important and comprehensive collections of fragmentary authors.
The article can be accessed and downloaded for free here.
Marco’s latest article “Is it Research or is it Spying? Thinking-Through Ethics in Big Data AI and Other Knowledge Sciences” has just been published online! Here is the abstract:
“How to be a knowledge scientist after the Snowden revelations?” is a question we all have to ask as it becomes clear that our work and our students could be involved in the building of an unprecedented surveillance society. In this essay, we argue that this affects all the knowledge sciences such as AI, computational linguistics and the digital humanities. Asking the question calls for dialogue within and across the disciplines. In this article, we will position ourselves with respect to typical stances towards the relationship between (computer) technology and its uses in a surveillance society, and we will look at what we can learn from other fields. We will propose ways of addressing the question in teaching and in research, and conclude with a call to action.