Translationese and Post-editese: How comparable is comparable quality?

Authors

Joke Daems, Orphée De Clercq, Lieve Macken

DOI:

https://doi.org/10.52034/lanstts.v16i0.434

Keywords:

translationese, post-editese, human translation, translation quality, post-editing, machine learning, text analysis

Abstract

Whereas post-edited texts have been shown to be of comparable quality to human translations, or even better, one study shows that people still seem to prefer human-translated texts. The idea that texts can be inherently different yet still of high quality is not new. Translated texts, for example, also differ from original texts, a phenomenon referred to as ‘Translationese’. Research into Translationese has shown that, whereas humans cannot distinguish translated from original text, computers can be trained to detect Translationese successfully. It remains to be seen whether the same can be done for what we call Post-editese. We first establish whether humans are capable of distinguishing post-edited texts from human translations, and then investigate whether it is possible to build a supervised machine-learning model that can distinguish between translated and post-edited text.
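
To make the classification set-up concrete, the sketch below illustrates the general idea of such a supervised model: a classifier is trained on sentences labelled as human-translated (HT) or post-edited (PE) and evaluated with cross-validation. It is a minimal illustration in Python using scikit-learn with tf-idf word-n-gram features and a linear SVM; it is not the authors' actual feature set or pipeline, and the sentences and labels are placeholder data.

# Minimal sketch: supervised detection of 'Post-editese', in the spirit of
# earlier machine-learning work on Translationese (e.g. Baroni & Bernardini 2006).
# Illustrative only; the data, features and model are placeholders, not the study's set-up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: one label per sentence, "HT" = human translation,
# "PE" = post-edited machine translation. A real experiment would load
# a full labelled corpus here.
sentences = [
    "De vergadering werd uitgesteld tot volgende week.",
    "Het verslag is beschikbaar op de website van de commissie.",
    "De resultaten worden in het najaar gepubliceerd.",
    "Het voorstel werd door de raad goedgekeurd.",
    "De deelnemers ontvingen de documenten per e-mail.",
    "Het project loopt nog tot het einde van het jaar.",
]
labels = ["HT", "HT", "HT", "PE", "PE", "PE"]

# Word-unigram/bigram tf-idf features feeding a linear SVM: a common
# baseline for translationese-style text classification.
classifier = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    LinearSVC(),
)

# Cross-validated accuracy; consistently above-chance scores would suggest
# that post-edited text is machine-detectable even if humans cannot tell
# it apart from human translation.
scores = cross_val_score(classifier, sentences, labels, cv=3)
print("Mean accuracy:", scores.mean())

In practice, richer feature sets (for example lexical richness, cognate use or readability measures) and a much larger corpus would be needed before drawing any conclusions from such a model.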

References

Aharoni, R., Koppel, M., & Goldberg, Y. (2014, June). Automatic detection of machine translated text and translation quality estimation. Paper presented at the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), Baltimore, MD.

Al-Shabab, O. (1996). Interpretation and the language of translation: Creativity and conventions in translation. Edinburgh: Janus.

Baker, M. (1993). Corpus linguistics and translation studies: Implications and applications. In M. Baker, G. Francis, & E. Tognini-Bonelli (Eds.), Text and technology: In honour of John Sinclair (pp. 233–252). Amsterdam: John Benjamins.

Baroni, M., & Bernardini, S. (2006). A new approach to the study of translationese: Machine-learning the difference between original and translated text. Literary and Linguistic Computing, 21(3), 259–274.

Bowker, L. (2009). Can Machine Translation meet the needs of official language minority communities in Canada?: A recipient evaluation. Linguistica Antverpiensia New Series – Themes in Translation Studies, 8, 123–155.

Bowker, L., & Buitrago Ciro, J. (2015). Investigating the usefulness of machine translation for newcomers at the public library. Translation and Interpreting Studies, 10(2), 165–186.

Čulo, O., & Nitzke, J. (2016). Patterns of terminological variation in post-editing and of cognate use in machine translation in contrast to human translation. Baltic Journal of Modern Computing, 4(2), 106–114.

Daelemans, W., Zavrel, J., Van der Sloot, K., & Van den Bosch, A. (2010). TiMBL: Tilburg Memory Based Learner, version 6.3, Reference Guide.

Daems, J. (2016). A translation robot for each translator?: A comparative study of manual translation and post-editing of machine translations: Process, quality and translator attitude (Doctoral dissertation). Ghent University, Faculty of Arts and Philosophy, Ghent, Belgium.

Daems, J., Macken, L., & Vandepitte, S. (2013). Quality as the sum of its parts: A two-step approach for the identification of translation problems and translation quality assessment for HT and MT+PE. Proceedings of MT Summit XIV Workshop on Post-editing Technology and Practice, Nice, France, 63–71.

De Clercq, O., & Hoste, V. (2016). All mixed up?: Finding the optimal feature set for general readability prediction and its application to English and Dutch. Computational Linguistics, 42(3), 457–490.

Denturck, K. (2014). Et pour cause...: la traduction de connecteurs causaux à la lumière des universaux de traduction: Une étude de corpus (français–néerlandais, néerlandais–français) (Doctoral dissertation). Ghent University, Faculty of Arts and Philosophy, Ghent, Belgium.

Desmet, B., Hoste, V., Verstraeten, D., & Verhasselt, J. (2013). Gallop Documentation. Retrieved from https://www.lt3.ugent.be/publications/gallop-documentation/

Fiederer, R., & O’Brien, S. (2009). Quality and Machine Translation: A realistic objective? The Journal of Specialised Translation, 11, 52–74.

François, T., & Miltsakaki, E. (2012, June). Do NLP and machine learning improve traditional readability formulas? Paper presented at the 1st Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR2012), Montreal, QC.

Garcia, I. (2010). Is machine translation ready yet? Target, 22(1), 7–21.

Gellerstam, M. (1986, June). Translationese in Swedish novels translated from English. Paper presented at the Scandinavian Symposium on Translation Theory, Lund.

Green, S., Heer, J., & Manning, C. (2013, May). The efficacy of human post-editing for language translation. Paper presented at the ACM Human Factors in Computing Systems (CHI), Paris.

Halliday, M., & Hasan, R. (1976). Cohesion in English. London: Longman.

Ilisei, I., Inkpen, D., Corpas Pastor, G., & Mitkov, R. (2010). Identification of Translationese: A machine learning approach. In A. Gelbukh (Ed.), Computational linguistics and intelligent text processing: 11th International Conference, CICLing 2010, Iaşi, Romania, 21–27 March 2010. Proceedings (pp. 503–511). Berlin: Springer.

Koponen, M. (2016). Is machine translation post-editing worth the effort?: A survey of research into post-editing and effort. The Journal of Specialised Translation, 25, 131–148.

Koppel, M., & Ordan, N. (2011, June). Translationese and its dialects. Paper presented at the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR.

Lapshinova-Koltunski, E. (2013, August). VARTRA: A comparable corpus for analysis of translation variation. Paper presented at the 6th Workshop on Building and Using Comparable Corpora, Sofia, Bulgaria.

Laviosa, S. (1998). Core patterns of lexical use in a comparable corpus of English narrative prose. Meta, 43(4), 557–570.

Mitchell, M. (1996). An introduction to genetic algorithms. Cambridge, MA: MIT Press.

O’Curran, E. (2014, October). Translation quality in post-edited versus human-translated segments: A case study. Paper presented at the AMTA 2014 3rd Workshop on Post-editing Technology and Practice (WPTP-3), Vancouver, BC.

Oostdijk, N., Reynaert, M., Hoste, V., & Schuurman, I. (2013). The construction of a 500-million-word reference corpus of contemporary written Dutch. In P. Spyns & J. Odijk (Eds.), Essential speech and language technology for Dutch: Theory and applications of natural language processing (pp. 219–247). Berlin: Springer.

Plitt, M., & Masselot, F. (2010). A productivity test of statistical machine translation: Post-editing in a typical localisation context. The Prague Bulletin of Mathematical Linguistics, 93, 7–16.

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.

Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.

Rabinovich, E., & Wintner, S. (2015). Unsupervised identification of translationese. Transactions of the Association for Computational Linguistics, 3, 419–432.

Rayson, P., & Garside, R. (2000, October). Comparing corpora using frequency profiling. Paper presented at the 38th Annual Meeting of the Association for Computational Linguistics Workshop on Comparing Corpora, Hong Kong.

Salton, G. (1989). Automatic text processing: The transformation, analysis and retrieval of information by computer. Reading, MA: Addison-Wesley Longman.

Staphorsius, G. (1994). Leesbaarheid en leesvaardigheid: De ontwikkeling van een domeingericht meetinstrument (Doctoral dissertation). Universiteit Twente, Enschede.

Tirkkonen-Condit, S. (2002). Translationese – a myth or an empirical fact?: A study into the linguistic identifiability of translated language. Target, 14(2), 207–220.

van den Bosch, A., Busser, B., Daelemans, W., & Canisius, S. (2007, December). An efficient memory-based morphosyntactic tagger and parser for Dutch. Paper presented at the Seventeenth Computational Linguistics in the Netherlands (CLIN), Nijmegen.

van Noord, G. J. M., Bouma, G., van Eynde, F., de Kok, D., van der Linde, J., Schuurman, I., Tjong Kim Sang, E., & Vandeghinste, V. (2013). Large scale syntactic annotation of written Dutch: LASSY. In P. Spyns & J. Odijk (Eds.), Essential speech and language technology for Dutch: Theory and applications of natural language processing (pp. 231–254). Heidelberg: Springer.

van Oosten, P., Tanghe, D., & Hoste, V. (2010, May). Towards an improved methodology for automated readability prediction. Paper presented at the 7th International Conference on Language Resources and Evaluation (LREC-2010), Valletta.

Volansky, V., Ordan, N., & Wintner, S. (2015). On the features of translationese. Digital Scholarship in the Humanities, 30(1), 98–118.

White, A. P., & Liu, W. Z. (1994). Bias in information-based measures in decision tree induction. Machine Learning, 15(3), 321–329.

Published

29-01-2018

How to Cite

Daems, J., De Clercq, O., & Macken, L. (2018). Translationese and Post-editese: How comparable is comparable quality? Linguistica Antverpiensia, New Series – Themes in Translation Studies, 16. https://doi.org/10.52034/lanstts.v16i0.434