Call for abstracts & papers: Machine and Computer-assisted Interpreting - LANS-TTS issue 24, publication year 2025

05-09-2023


Guest editors

  • Lu Xinchao, Beijing Foreign Studies University (China)
  • Claudio Fantinuoli, Mainz University (Germany)

Lu Xinchao is the Director of the Center for Research on Interpreting Practice and Pedagogy (CRIPP), a platform boasting an interdisciplinary team of interpreting practitioners, trainers, researchers, and experts in natural language processing, computational linguistics, and machine translation, as well as machine interpreting engineers and machine interpreting system developers from leading companies. By engaging researchers from the interpreting community and beyond, CRIPP aims to explore the most relevant themes related to interpreting, ranging from interpreting competence, processes, and products to the interpreting profession and its associated pedagogy, with a special interest in technology-motivated research themes such as machine interpreting.

Claudio Fantinuoli is a researcher and lecturer at Mainz University (Germany) and Head of Innovation at KUDO Inc. He conducts research in the field of natural language processing applied to computer-assisted interpreting and automatic speech translation. He also teaches conference interpreting. In the past, he taught Technology and Interpreting at the University of Innsbruck and at the Postgraduate Center of the University of Vienna. He is the founder of InterpretBank, an AI-based tool for conference interpreters.

Machine and computer-assisted interpreting

Over the last few decades, information technology has played a central role in the domain of spoken language translation. Currently, there are two major lines of research in this area: machine interpreting and computer-assisted interpreting.


Machine interpreting (also called automatic interpreting, speech translation, or speech-to-speech translation) refers to the practice, process, or product of real-time automatic or automated speech translation by a computerized system combining components of automatic speech recognition, machine translation, speech synthesis, and subtitling (Cho et al., 2013; Fügen et al., 2007; Horváth, 2022; Müller et al., 2016; Stüker et al., 2012). It explores how machine systems perform interpreting, and studies in this field, most often conducted by scientists and engineers, deal with interpreting system design, improvement, and evaluation. The areas explored include interpreting system design (e.g., Fügen et al., 2007; Jekat & Klein, 1996; Luperfoy, 1996; Sakamoto et al., 2013; Wang et al., 2016); evaluation of machine interpreting quality (e.g., Hamon et al., 2009; Le et al., 2018; Stewart et al., 2018); improvement of interpreting rules or models (e.g., He et al., 2015; Siahbani et al., 2018); key issues and processes, such as sentence segmentation (e.g., Siahbani et al., 2018), incremental processing and latency (e.g., Fujita et al., 2013; Grissom II et al., 2014), and facial expression-based affective speech translation (Székely et al., 2014); corpus construction; and machine learning (e.g., Murata et al., 2010; Shimizu et al., 2013).


Unlike machine translation research and development, which spans more than half a century, machine interpreting is an emerging field that is far less explored. Machine interpreting was first tested in the 1980s and implemented in the 1990s; by the early 2000s, researchers and developers in fields such as computer science, linguistics, speech processing, and artificial intelligence had made it possible for machine systems to interpret dialogues for reservations and scheduling, travel conversations, broadcast news, parliamentary speeches, improvised speeches, and lectures (Nakamura, 2009; Pöchhacker, 2015, pp. 239–242, 2016, p. 194; Waibel & Fügen, 2008). In 2004, the MIT Technology Review listed it as one of the “10 emerging technologies that will change your world.”

Computer-assisted interpreting (CAI) tools are programs designed to support professional interpreters during the different phases of the interpreting workflow; they encompass all tools that aim to integrate the latest advances in natural language processing and artificial intelligence into the interpreter’s workstation. Among other things, CAI tools support interpreters and interpreter managers in creating terminological resources, managing and reusing event information for future tasks, sharing such information among different stakeholders, accessing information in real time during the delivery of the interpreting service, and, more recently, performing activities such as quality assurance.


By leveraging key technologies or components of machine interpreting systems, such as automatic speech recognition, transcription and transcript display, and machine translation, students and professional interpreters can improve their interpreting accuracy, particularly with numbers and terms (e.g., Defrancq & Fantinuoli, 2020; Desmet et al., 2018; Fantinuoli, 2017; Sun et al., 2021; Zhang et al., 2018).


In the last two decades, impressive progress in automatic speech recognition, natural language processing, artificial intelligence, deep learning, and neural machine translation has given a major boost to the development of machine and computer-assisted interpreting systems, improving their robustness to increasingly uncertain and diversified source language features and environments and extending their domains, modes, and scenarios of application through enhanced acceptability, affordability, portability, and usability. Machine systems, whether assisting human interpreters or working alone, are reshaping and will continue to reshape the global ecosystem of interpreting: its practices, processes, products, profession, and pedagogy.


With most of the existing literature consisting of general introductions or theoretical explorations, there has been a dearth of empirical research (cf. Tripepi Winteringham, 2010; Fantinuoli, 2018; Ortiz & Cavallo, 2018), and particularly of applied research, conducted by researchers, trainers, and practitioners in the interpreting community. Given this underrepresentation, many fundamental questions remain to be answered (cf. Mellinger, 2019; Prandi, 2023).


What is the state of the art of machine and computer-assisted interpreting development? What are the major bottlenecks and challenges in developing quality machine interpreting systems? What are the latest developments and innovations in machine interpreting processes (e.g., from three components to end-to-end, multimodal information processing and integration)? How do machine systems compare to human interpreters in terms of interpreting competences, processes, and products? How do computer-assisted systems interact or collaborate with human interpreters? What are the potential areas (domains, modes, patterns, etc.) of machine–human complementarity? How do machine interpreting and computer-assisted interpreting tools redefine interpreters’ roles and competences while reshaping interpreting pedagogy and practice and the language industry as a whole? What are the potential risks or ethical issues related to machine interpreting?


To respond to these questions, we need to examine and explore certain key themes, including (but not limited to) the following:


  • machine interpreting system design (e.g., end-to-end interpreting systems; domain- or mode-specific vs. general interpreting systems; configurable systems adapted to communicative situations; working modes, domains, language combinations, and interpreting directions; and source language variables)
  • innovative models, processes, or mechanisms of machine interpreting (e.g., processing of disfluencies, prosodic, pragmatic, and visual information; and low latency/concurrency of processing)
  • interpreting corpus construction for machine learning
  • machine interpreting quality evaluation
  • evaluation of computer-assisted interpreting
  • cognitive implications of the use of computer-assisted interpreting tools in simultaneous interpreting
  • use of computer-assisted interpreting tools in underexplored settings, such as liaison interpreting
  • comparison of human and machine interpreting competences
  • comparison of human and machine interpreting processes
  • comparison of human and machine interpreting products
  • machine-aided human interpreting (system design; operational procedures and mechanisms; and products and performances specific to different language combinations, event types, interpreting modes, domains, themes, source language variables, etc.)
  • human-aided machine interpreting (system design; operational procedures and mechanisms; and products and performances specific to different language combinations, event types, interpreting modes, domains, themes, source language variables, etc.)
  • advances of computer-assisted interpreting tools and their effectiveness
  • emerging forms of hybridization in the delivery of interpreting services
  • implications of machine and computer-assisted interpreting for the interpreting profession and the language industry (e.g., competition and collaboration between interpreters and machine systems, roles of machine systems and interpreters in linguistic/cultural mediation, new professional profiles and working conditions and remuneration)
  • implications of machine and computer-assisted interpreting for interpreter training
  • the ethics of artificial intelligence applied to machine interpreting and computer-assisted interpreting (e.g., training data bias and data quality, interpreting data ownership and privacy, and the transparency of machine system development and decision-making processes in machine interpreting)

Selected papers will undergo double-blind peer review, as required by LANS-TTS.

Practical information and deadlines

Proposals: Please submit abstracts of approximately 500–1000 words in English, including relevant references (not included in the word count), to both Lu Xinchao (luxinchao@bfsu.edu.cn) and Claudio Fantinuoli (fantinuoli@uni-mainz.de) in the same email.

  • Abstract deadline: 1 April 2024
  • Acceptance of abstract proposals: 1 June 2024
  • Submission of papers: 1 November 2024
  • Acceptance of the papers: 1 March 2025
  • Submission of final versions of papers: 1 June 2025
  • Editorial work (proofreading and APA check): June to November 2025
  • Publication: December 2025

For all submissions (abstracts and full papers), authors must use APA style (7th edition).

Style guides (apa.org):
  • APA Style reference guide for journal articles, books, and edited book chapters (APA Style, 7th edition)
  • APA Style common reference examples guide (APA Style, 7th edition)

References (APA 7th edition)

Cho, E., Fügen, C., Herrmann, T., Kilgour, K., Mediani, M., Mohr, C., Niehues, J., Rottmann, K., Saam, C., Stüker, S., & Waibel, A. (2013). A real-world system for simultaneous translation of German lectures. INTERSPEECH, 13, 3473–3477. https://doi.org/10.21437/Interspeech.2013-612

Defrancq, B., & Fantinuoli, C. (2020). Automatic speech recognition in the booth: Assessment of system performance, interpreters’ performances and interactions in the context of numbers. Target, 33 (1), 73–102. https://doi.org/10.1075/target.19166.def

Desmet, B., Vandierendonck, M., & Defrancq, B. (2018). Simultaneous interpretation of numbers and the impact of technological support. In C. Fantinuoli (Ed.), Interpreting and technology (pp. 13–27). Language Science Press.

Fantinuoli, C. (2017). Speech recognition in the interpreter workstation. Proceedings of Translating and the Computer, 39, 25–34.

Fantinuoli, C. (2018). Interpreting and technology: The upcoming technological turn. In C. Fantinuoli (Ed.), Interpreting and technology (pp. 1–12). Language Science Press.

Fügen, C., Waibel, A., & Kolss, M. (2007). Simultaneous translation of lectures and speeches. Machine Translation, 21 (4), 209–252. https://doi.org/10.1007/s10590-008-9047-0

Fujita, T., Neubig, G., Sakti, S., Toda, T., & Nakamura, S. (2013). Simple, lexicalized choice of translation timing for simultaneous speech translation. INTERSPEECH, 13, 3487–3491. https://doi.org/10.21437/Interspeech.2013-615

Grissom II, A. C., Boyd-Graber, J., He, H., Morgan, J., & Daumé III, H. (2014). Don’t until the final verb wait: Reinforcement learning for simultaneous machine translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1342–1352. https://doi.org/10.3115/v1/D14-1140

Hamon, O., Fügen, C., Mostefa, D., Arranz, V., Kolss, M., Waibel, A., & Choukri, K. (2009). End-to-end evaluation in simultaneous translation. Proceedings of the 12th Conference of the European Chapter of the ACL, 345–353. https://doi.org/10.3115/1609067.1609105

He, H., Grissom II, A., Boyd-Graber, J., & Daumé III, H. (2015). Syntax-based rewriting for simultaneous machine translation. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 55–64. https://doi.org/10.18653/v1/D15-1006

Horváth, I. (2022). AI in interpreting: Ethical considerations. Across Languages and Cultures, 23 (1), 1–13. https://doi.org/10.1556/084.2022.00108

Jekat, S. J., & Klein, A. (1996). Machine interpretation: Open problems and some solutions. Interpreting, 1 (1), 7–20. https://doi.org/10.1075/intp.1.1.02jek

Le, N., Lecouteux, B., & Besacier, L. (2018). Automatic quality estimation for speech translation using joint ASR and MT features. Machine Translation, 32 (4), 325–351. https://doi.org/10.1007/s10590-018-9218-6

Luperfoy, S. (1996). Machine interpretation of bilingual dialogue. Interpreting, 1 (2), 213–233. https://doi.org/10.1075/intp.1.2.03lup

Mellinger, C. D. (2019). Computer-assisted interpreting technologies: A product and process-oriented perspective. Revista Tradumàtica, 17, 33–44.

Müller, M., Nguyen, T. S., Niehues, J., Cho, E., Krüger, B., Ha, T. L., Kilgour, K., Sperber, M., Mediani, M., Stüker, S., & Waibel, A. (2016). Speech translation framework for simultaneous lecture translation. Proceedings of NAACL-HLT, 82–86. https://doi.org/10.18653/v1/N16-3017

Murata, M., Ohno, T., Matsubara, S., & Inagaki, Y. (2010). Construction of chunk-aligned bilingual lecture corpus for simultaneous machine translation. Proceedings of the International Conference on Language Resources and Evaluation, LREC 2010.

Nakamura, S. (2009). Overcoming the language barrier with speech translation technology. Quarterly Review, 31, 35–48.

Ortiz, L., & Cavallo, P. (2018). Computer-assisted interpreting tools (CAI) and options for automation with automatic speech recognition. TradTerm, 32, 9–31. https://doi.org/10.11606/issn.2317-9511.v32i0p9-31

Pöchhacker, F. (Ed.). (2015). Routledge encyclopedia of interpreting studies. Routledge. https://doi.org/10.4324/9781315678467

Prandi, B. (2023). Computer-assisted simultaneous interpreting: A cognitive-experimental study on terminology. Language Science Press.

Sakamoto, A., Watanabe, N., Kamatani, S., & Sumita, K. (2013). Development of a simultaneous interpretation system for face-to-face services and its evaluation experiment in real situation. Proceedings of the XIV Machine Translation Summit, 85–92.

Shimizu, H., Neubig, G., Sakti, S., Toda, T., & Nakamura, S. (2013). Constructing a speech translation system using simultaneous interpretation data. Proceedings of the 10th International Workshop on Spoken Language Translation, 212–218.

Siahbani, M., Shavarani, H.S., Alinejad, A., & Sarkar, A. (2018). Simultaneous translation using optimized segmentation. Proceedings of AMTA 2018 (1): MT Research Track, 154–167.

Stewart, C., Vogler, N., Hu, J.J., Boyd-Graber, J., & Neubig, G. (2018). Automatic estimation of simultaneous interpreter performance. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), 662–666. https://doi.org/10.18653/v1/P18-2105

Stüker, S., Herrmann, T., Kolss, M., Niehues, J., & Wölfel, M. (2012). Research opportunities in automatic speech-to-speech translation. IEEE Potentials, 31 (3), 26–33. https://doi.org/10.1109/MPOT.2011.2178192

Sun, H., Li, K., & Lu, J. (2021). AI-assisted simultaneous interpreting: An experiment and its implications. Computer-Assisted Foreign Language Education in China, 6, 75–80+86+12.

Székely, É., Steiner, I., Ahmed, Z., & Carson-Berndsen, J. (2014). Facial expression-based affective speech translation. Journal on Multimodal User Interfaces, 8 (1), 87–96. https://doi.org/10.1007/s12193-013-0128-x

Tripepi Winteringham, S. (2010). The usefulness of ICTs in interpreting practice. The Interpreters’ Newsletter, 15, 87–99.

Waibel, A., & Fügen, C. (2008). Spoken language translation enabling cross-lingual human–human communication. IEEE Signal Processing Magazine, May, 70–79. https://doi.org/10.1109/MSP.2008.918415

Wang, X. L., Finch, A., Utiyama, M., & Sumita, E. (2016). An efficient and effective online sentence segmenter for simultaneous interpretation. Proceedings of the 3rd Workshop on Asian Translation, 139–148.

Zhang, A., Yang, Z., Liu, C., & Li, S. (2018). A tentative proposal for translation & interpreting based on human-computer collaboration through developments in artificial intelligence. Computer-Assisted Foreign Language Education in China, 3, 88–94.