Draft proceedings available
9:00-9:15
Doors open
9:15-9:30
Opening
9:30-10:30
Marion Botella
According to psychology, creativity is the ability to produce ideas that are both original and appropriate (Lubart et al., 2015). In line with this definition, the creative process is the sequence of thoughts and actions that results in an original and adapted production. This process can be described either through a macro approach, detailing the stages that make it up, or through a micro approach, detailing the mechanisms within each stage. In this presentation, we will define creativity and, more specifically, the creative process according to psychology, and then look at the methods used to evaluate or observe it.
10:30-11:00
Coffee break
11:00-11:30
Lieve Macken, Joke Daems and Paola Ruffo
This study investigates the impact of different translation workflows and underlying machine translation technologies on the translation strategies used in literary translations. We compare human translation, translation within a computer-assisted translation (CAT) tool, and machine translation post-editing (MTPE), alongside neural machine translation (NMT) and large language models (LLMs). Using three short stories translated from English into Dutch, we annotated translation difficulties and strategies employed to overcome them. Our analysis reveals differences in translation solutions across modalities, highlighting the influence of technology on the final translation. The findings suggest that while MTPE tends to produce more literal translations, human translators and CAT tools exhibit greater creativity and employ more non-literal translation strategies. Additionally, LLMs reduced the number of literal translation solutions compared to traditional NMT systems. While our study provides valuable insights, it is limited by the use of only three texts and a single language pair. Further research is needed to explore these dynamics across a broader range of texts and languages, to better understand the full impact of translation workflows and technologies on literary translation.
11:30-12:00
Xiaoye Li and Joke Daems
With quality improvements in neural machine translation (NMT), scholars have argued that human translation revision and MT post-editing are becoming more alike, which would have implications for translator training. This study contributes to this growing body of work by exploring the ability of student translators (ZH-EN) to distinguish between NMT and human translation (HT) for news text and literary text and analyses how text type and student perceptions influence their subsequent revision process. We found that participants were reasonably adept at distinguishing between NMT and HT, particularly for literary texts. Participants' revision quality was dependent on the text type as well as the perceived source of translation. The findings also highlight student translators' limited competence in revision and post-editing, emphasizing the need to integrate NMT, revision, and post-editing into translation training programmes.
12:00-12:30
Judith Brenner and Julia Othlinghaus-Wulhorst
In this empirical study we examine three different translation modes with varying involvement of machine translation (MT) post-editing (PE) when translating video game texts. The three translation modes are translation from scratch without MT, full PE of MT output in a static way, and flexible PE as a combination of translation from scratch and post-editing of only those machine-translated sentences deemed useful by the translator. Data generation took place at the home offices of freelance game translators. In a mixed-methods approach, quantitative data was generated through keylogging, eye tracking, error annotation, and user experience questionnaires as well as qualitative data through interviews. Results show a negative perception of PE and suggest that translators’ user experience is positive when translating from scratch, neutral with a positive tendency when doing flexible PE of domain-adapted MT output and negative with static PE of generic MT output.
12:30-13:30
Lunch break
13:30-14:30
Tim Van de Cruys
Literary translation poses unique challenges for computational systems: not only preserving meaning, but also conveying tone, imagery, and style. Creativity plays a central role, especially when translating texts that resist straightforward alignment. In this talk, I present the ERC project TENACITY, which explores unsupervised models of linguistic creativity using tensor-based semantic representations and neural network architectures. These models do not merely replicate language patterns, but aim to understand and generate language with creative intent. I explore how such models can contribute to the task of literary translation, particularly when dealing with metaphor, ambiguity, or stylistic shifts, offering computational techniques that complement the work of human translators in capturing linguistic nuance.
14:30-15:30
Moderated by Paola Ruffo and Damien Hansen
15:30-16:00
Coffee break
16:00-16:30
Bojana Mikelenić, Antoni Oliver and Sergi Àlvarez Vidal
This paper explores the fine-tuning and evaluation of neural machine translation (NMT) models for literary texts using RomCro v.2.0, an expanded multilingual and multidirectional parallel corpus. RomCro v.2.0 is based on RomCro v.1.0, but includes additional literary works, as well as texts in Catalan, making it a valuable resource for improving MT in underrepresented language pairs. Given the challenges of literary translation, where style, narrative voice, and cultural nuances must be preserved, fine-tuning on high-quality domain-specific data is essential for enhancing MT performance.
We fine-tune existing NMT models with RomCro v.2.0 and evaluate their performance for six different language combinations using automatic metrics, and for Spanish-Croatian and French-Catalan using manual evaluation. Results indicate that fine-tuned models outperform general-purpose systems, achieving greater fluency and stylistic coherence. These findings support the effectiveness of corpus-driven fine-tuning for literary translation and highlight the importance of curated, high-quality corpora.
16:30-17:00
Delu Kong and Lieve Macken
This study focuses on evaluating the performance of machine translations (MTs) compared to human translations (HTs) in children's literature translation (CLT) from a stylometric perspective. The research constructs a "Peter Pan" corpus comprising 21 translations: 7 human translations (HTs), 7 translations produced by large language models (LLMs), and 7 neural machine translation outputs (NMTs). The analysis employs a generic feature set (including lexical, syntactic, readability, and n-gram features) and a creative text translation (CTT-specific) feature set, which captures repetition, rhyme, translatability, and miscellaneous levels, yielding 447 linguistic features in total.
Using classification and clustering techniques in machine learning, we conduct a stylometric analysis of these translations. Results reveal that, for the generic features, HTs and MTs exhibit significant differences in conjunction word distributions and in the ratio of the 1-word-gram 一样 ("the same"), while NMTs and LLMs show significant variation in descriptive word usage and adverb ratios. Regarding the CTT-specific features, LLMs outperform NMTs, aligning more closely with HTs in stylistic characteristics and demonstrating the potential of LLMs in CLT.
17:00-17:15
Closing