Methodological contributions of Social Neuroscience for the study of multimodality in translation

Antonio Javier Chica Núñez

Abstract


This study analyses the interplay of communication modes that enables emotions to be transmitted efficiently from source text (ST) to target text (TT) in audio description (AD) as a multimodal text. It draws on existing experimental designs featuring neutral or emotional conditions based on the congruency of stimuli across modes (images, the semantic content or prosody of film dialogue, and the semantic content of the AD). The article reviews the methodological contribution that Social Neuroscience could make to the study of multimodal translation. To this end, it cites neurobiological models and studies of multimodal emotional information processing (Brück, Kreifelts, & Wildgruber, 2011), the impact of multimodal emotional processing on subjects’ empathy (Regenbogen et al., 2012), and the dynamics of the neural networks involved in human empathy and communication during the presentation of multimodal stimuli (Regenbogen, Habel, & Kellermann, 2013). Finally, an experimental design focusing on the transfer of feelings and emotions in film AD, suitable for a potential pilot study, is presented.


Keywords


multimodal translation; audio description; neuroscience; multimodal emotional processing; experimental design


References


AENOR (2005). Norma UNE: 153020. Audiodescripción para personas con discapacidad visual. Requisitos para la audiodescripción y elaboración de audioguías. Madrid: AENOR.

Belin, P., Zatorre, R. J., Lafaille, P., Ahad, P., & Pike, B. (2000). Voice-selective areas in human auditory cortex. Nature, 403(6767), 309–312.

Braun, S. (2008). Audiodescription research: State of the art and beyond. Translation Studies in the New Millennium, 6, 14–30.

Braun, S. (2011). Creating coherence in audio description. Meta: Journal des traducteurs/Meta: Translators’ Journal, 56(3), 645–662.

Brück, C., Kreifelts, B., & Wildgruber, D. (2011). Emotional voices in context: A neurobiological model of multimodal affective information processing. Physics of Life Reviews, 8(4), 383–403.

Chica Núñez, A. J. (2015). Multimodality and multi-sensoriality as basis for access to knowledge in translation: The case of audio description of colour and movement. Procedia - Social and Behavioral Sciences, 212, 210–217.

Chica Núñez, A. J. (2016). La traducción de la imagen dinámica en contextos multimodales. Granada: Tragacanto.

Damasio, A. R. (1999). The feeling of what happens. New York: Harcourt Brace.

De Vignemont, F., & Singer, T. (2006). The empathic brain: How, when and why? Trends in Cognitive Sciences, 10(10), 435–441.

Ethofer, T., Van De Ville, D., Scherer, K., & Vuilleumier, P. (2009). Decoding of emotional information in voice-sensitive cortices. Current Biology, 19(12), 1028–1033.

Fecteau, S., Armony, J. L., Joanette, Y., & Belin, P. (2004). Is voice processing species-specific in human auditory cortex?: An fMRI study. Neuroimage, 23(3), 840–848.

Fernández Iglesias, E., Martínez Martínez, S., & Chica Núñez, A. J. (2015). Cross-fertilization between reception studies in audio description and interpreting quality assessment: The role of the describer’s voice. In R. Baños Piñero & J. Díaz Cintas (Eds.), Audiovisual translation in a global context (pp. 72–95). London: Palgrave Macmillan.

Grossmann, T., Oberecker, R., Koch, S. P., & Friederici, A. D. (2010). The developmental origins of voice processing in the human brain. Neuron, 65(6), 852–858.

Jiménez Hurtado, C., & Seibel, C. (2012). Multisemiotic and multimodal corpus analysis in audio description: TRACCE. In A. Remael, P. Orero, & M. Carroll (Eds.), AVT and media accessibility at the crossroads. Media for All 3 (pp. 409–425). Amsterdam: Rodopi.

Jiménez Hurtado, C. (2010a). Fundamentos teóricos del análisis de la AD. In C. Jiménez Hurtado, A. Rodríguez Domínguez, & C. Seibel (Eds.), Un corpus de cine: Teoría y práctica de la audiodescripción (pp. 13–56). Granada: Tragacanto.

Jiménez Hurtado, C. (2010b). Fundamentos metodológicos del análisis de la AD. In C. Jiménez Hurtado, A. Rodríguez Domínguez, & C. Seibel (Eds.), Un corpus de cine: Teoría y práctica de la audiodescripción (pp. 57–110). Granada: Tragacanto.

Johnstone, T., & Scherer, K. R. (2000). Vocal communication of emotion. Handbook of Emotions, 2, 220–235.

Kruger, H., & Kruger, J. L. (2017). Cognition and reception. In J. W. Schwieter & A. Ferreira (Eds.), The handbook of translation and cognition (pp. 71–89). Malden, MA: Wiley Blackwell.

Lachaud, C. M. (2013). Conceptual metaphors and embodied cognition: EEG coherence reveals brain activity differences between primary and complex conceptual metaphors during comprehension. Cognitive Systems Research, 22, 12–26.

LeDoux, J. (1998). The emotional brain: The mysterious underpinnings of emotional life. New York, NY: Simon and Schuster.

OFCOM (2000). ITC guidance on standards for audio description. Retrieved from http://www.ofcom.org.uk/tv/ifi/guidance/tv_access_serv/archive/audio_description_stnds/ (Accessed November 2017).

Orero, P., & Vilaró, A. (2012). Eye tracking analysis of minor details in films for audio description. In R. Agost, P. Orero, & E. di Giovanni (Eds.), Multidisciplinarity in audiovisual translation. MonTI 4, 295–319.

Peirce, J. W. (2007). PsychoPy: Psychophysics software in Python. Journal of Neuroscience Methods, 162(1), 8–13.

Rai, S., Greening, J., & Leen, P. (2010). A comparative study of audio description guidelines prevalent in different countries. Retrieved from http://www.rnib.org.uk/professionals/Documents/International_AD_Standards_comparative%20study_2010.doc (Accessed July 2017).

Ramos Caro, M. (2013). El impacto emocional de la audiodescripción (Doctoral thesis). Universidad de Murcia.

Regenbogen, C., Habel, U., & Kellermann, T. (2013). Connecting multimodality in human communication. Frontiers in Human Neuroscience, 7(774), 1–10.

Regenbogen, C., Schneider, D. A., Gur, R. E., Schneider, F., Habel, U., & Kellermann, T. (2012). Multimodal human communication: Targeting facial expressions, speech content and prosody. Neuroimage, 60(4), 2346–2356.

Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one!: Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77.

Scherer, K. R., Johnstone, T., & Klasmeyer, G. (2003). Vocal expression of emotion. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 433–456). Oxford: Oxford University Press.

Schwieter, J. W., & Ferreira, A. (Eds.) (2017). The handbook of translation and cognition. New Jersey: John Wiley & Sons.

Smith, E. E., & Kosslyn, S. M. (2008). Procesos cognitivos: modelos y bases neurales. Madrid: Pearson.

Snyder, J. (2005). Audio description: The visual made verbal across arts disciplines – across the globe. Translating Today, 4, 15–17.

Tymoczko, M. (2012). The neuroscience of translation. Target. International Journal of Translation Studies, 24(1), 83–102.

Vercauteren, G. (2012). Narratological approach to content selection in audio description: Towards a strategy for the description of narratological time. MonTI: Monografías de traducción e interpretación, 4, 207–230.

Walczak, A., & Fryer, L. (2017). Creative description: The impact of audio description style on presence in visually impaired audiences. British Journal of Visual Impairment, 35(1), 6–17.

Wiethoff, S., Wildgruber, D., Kreifelts, B., Becker, H., Herbert, C., Grodd, W., & Ethofer, T. (2008). Cerebral processing of emotional prosody: Influence of acoustic parameters and arousal. Neuroimage, 39(2), 885–893.

Zeki, S., & Bartels, A. (1999). Toward a theory of visual consciousness. Consciousness and Cognition, 8, 225–259.