Subtitle Secrets: An Interview with Hanneke Loerts

Learning your mother tongue comes naturally, but learning a new language is a tough challenge, especially when you are older. Dr. Hanneke Loerts is a lecturer in the master’s programmes Applied Linguistics and Multilingualism, and she investigates second language acquisition. Using eye-tracking methodology, she shows that TV subtitles can improve both language learning and information transfer.

 

AdRvH: Let’s go back in time: in 2012 you defended your dissertation called ‘Uncommon gender: Eyes and brains, native and second language learners, & grammatical gender’ at the University of Groningen. What was your PhD research about?

HL: During my PhD I investigated differences in online language comprehension between native speakers of Dutch and late (post-puberty) Polish learners of Dutch. I used ERP and eye-tracking methodology to see whether late learners of Dutch could process grammatical gender the same way native speakers do. Late learners of Dutch almost never fully acquire grammatical gender (the fact that some Dutch words are preceded by ‘de’ and others by ‘het’), as it is very difficult to learn.

 

“If you have multiple sources of the same information it’s not distracting but helping each other.”

 

In the eye-tracking experiment, participants heard spoken sentences and I wanted to see whether they could anticipate what was coming next. For example, if a Dutch speaker sees a chair (‘de stoel’) and a house (‘het huis’) and hears ‘de’, they know they should click on the chair. While Dutch speakers indeed looked at the chair before hearing the word itself, late learners of Dutch could not use ‘de’ to determine which word would come next. In the ERP experiment, the spoken sentences contained gender errors (e.g., ‘het stoel’). The goal of this experiment was to see whether the late learners would notice these mistakes. The ERP results showed a P600 effect (a brain response to noticing a grammatical error) for the Dutch speakers as well as for some of the more proficient late Polish learners of Dutch. This suggests that the learners do process the errors subconsciously, but they cannot use gender information to anticipate upcoming words; their processing is not fast and automatic enough.

 

AdRvH: Now – four years later – I heard you study subtitles in relation to language learning and information transfer.

HL: The Netherlands is one of the countries that use subtitles a lot. It is estimated that 30% of Dutch television is non-Dutch and thus subtitled. In my research, I focus on two questions: first, do subtitles help language proficiency? And second, is it possible to process all modalities (the text, the soundtrack, and the scene) at the same time? Beyond that, it’s important to know what people pay attention to, what they do with the information they see and hear, and when information is transferred best.

 

AdRvH: There is this claim that Dutch people are good at English because of subtitles. What do your results say about the relationship between subtitles and language proficiency?

HL: It is very difficult to test whether Dutch people are good at English because of subtitles. In one of the eye-tracking experiments that I conducted with an MA student, Elena Lazareva, Dutch speakers watched a Russian animated video. Because they knew nothing about Russian, double subtitles were used, with the Dutch subtitles displayed below the Russian ones. We wanted to know whether they learnt the words, or whether they were too distracted by the scene and the displayed text in both Dutch and Russian. Afterwards, participants had to arrange the clips in their order of appearance, to see whether they had actually processed them. It turned out that the participants who were better at reading the subtitles were also better at arranging the clips in the correct order. Apparently, focusing more on one thing helps you to focus on the other. Also, they learnt quite a few words, especially with the double subtitles. If you want to learn a new language, double subtitles are very useful because you hear and see the word in the foreign language while also seeing the word in your native language. Thus, you can link everything.

 

 

AdRvH: What about your second research question? Can there be too many stimuli to process them all?

HL: A common example of this is the gorilla video, in which you see people playing basketball. When you are asked to count the number of times they pass the ball, you may not notice the gorilla walking across the court. Based on this inattentional blindness, some have suggested that if you need to focus on the subtitles, you miss a lot of other information, which might impair your comprehension. However, my results so far show otherwise. I haven’t published anything about this yet, but in a recent eye-tracking experiment I was interested in how much time people spend reading subtitles when hearing different foreign languages, and in how they process the related events on the screen. Often, when you watch an English movie, you read the subtitles even though you don’t really need them, simply because they are there and attract attention. In this experiment, I used different languages – English, Spanish, and Swahili – to compare within one person how we read subtitles. The result was surprising: participants spent the most time reading the subtitles when they were more or less familiar with the language. So if you compare a known (English), a familiar (Spanish), and an unknown (Swahili) language, people spend the most time reading the subtitles when hearing the familiar language. I thought they would spend the least time reading in the English condition and the most in the Swahili condition, but they actually spent the most time reading the subtitles when hearing Spanish audio.

 

AdRvH: Why do you think that is?

HL: I think people are trying to link the audio to the Dutch words: they hear foreign words that aren’t completely unfamiliar and think ‘Oh! Could that word mean that in Dutch?’. So they try to relate the printed words to the words they hear.


 

AdRvH: So what about the processing difficulties that might or might not arise when there are too many stimuli?

HL: Yes, that was also part of the second experiment. After watching several scenes with and without subtitles, participants did a scene recognition task (which also included scenes they had not seen), and it turned out that the scenes with subtitles on the screen were remembered better than the scenes without subtitles. That suggests that if you have multiple sources of the same information, it’s not distracting but actually helping each other.

 

AdRvH: How do you explain that?

HL: Dual coding theory suggests that if you receive the same information in verbal and non-verbal form, it is processed more deeply than when you only hear or see it. In the case of subtitled television, you see things in the scene that are linked to the subtitles, so maybe that’s why people remembered it better. In the future, I would like to investigate whether the overlap in information also matters. Sometimes you are watching the news while they are talking about the stock market, and you see someone walking across the screen eating an apple. That doesn’t match at all, so maybe it’s the overlap in information that counts.

 

AdRvH: Knowing when and how subtitles can be used to improve information transfer seems very applicable.

HL: Yes! If you look at the learning component, you can design different types of educational videos with subtitles to teach people new languages. That’s natural input, better (and more fun) than learning the rules of a language in a classroom. You can think of subtitles on television, but also of subtitles on information boards in hospitals or other places. I now see subtitles everywhere! And they are becoming more and more important as more and more people have different language backgrounds, so we should know how to use them in the best way possible. There is still a lot to discover!

 

NOTES:
Text by Annelot de Rechteren van Hemert
Photos by Sander Martens

 

This interview first appeared in the BCN Newsletter
