Most of us have experienced mishearing song lyrics. For example, in Bob Dylan’s song, instead of The answer, my friend, is blowin’ in the wind, we might hear The ants are my friends, they’re blowin’ in the wind. It should come as no surprise that, in our listening classes, what our listeners hear is often similarly surreal! And yet many such mishearings probably go unnoticed. After all, we cannot see into our listeners’ minds. If they choose the right answer in the listening task, we assume they’ve decoded everything correctly. If they don’t, we rarely discover why. How can we ever teach listening if we never diagnose the learner’s problems? We will be condemned to simply testing it over and over again instead.
In this article, I will give a very brief history of the teaching of listening and then outline a simple procedure I use to get to the nitty-gritty of my students’ listening difficulties: dictation. I will briefly explain how this traditional, time-honoured classroom activity can be repurposed to focus on developing students’ decoding skills. I will then go on to present some real examples from my students’ transcriptions, and outline some insights that can be gleaned about their listening processes. Finally, I offer a summary of some of the advantages of using dictation in the listening lesson.
A brief history of the teaching of listening
Much has been said about testing versus teaching listening. Indeed, if we venture back more than 30 years, to 1987, we find an article in ELT Journal which begins: ‘Listening comprehension lessons are all too often a series of listening tests in which tapes are played, comprehension exercises are attempted by the learners, and feedback is given in the form of the “right” answer. In lessons such as this, listening is not taught but tested’ (Sheerin, 1987). Now, some 33 years on, we might update Sheerin’s description of the teaching of listening by simply replacing the word ‘tape’ with the word ‘audio’ (in this case, a catch-all term for various forms of aural input): the statement still holds water in many face-to-face and online classrooms today.
Cauldwell (2018a), writing 31 years after Sheerin’s claim, refers to the phenomenon of ‘the false belief that continuous practice in coping (the hope-to-cope listening comprehension method) is the only way to teach listening’. A quick survey of listening activities in a random selection of teaching materials spanning the last 33 years bears out Cauldwell’s observation: we still ‘teach’ listening by offering students scaffolding in the form of pre-listening activities, and we advise them to use short-cut, work-around compensation strategies such as listening for key words and, by implication, to ignore small function words like pronouns, articles and prepositions.
Following the seminal work of John Field (2008), in which he challenges the orthodox approach to teaching listening, and Richard Cauldwell’s work, which is concerned with nudging teachers to add a decoding dimension to their teaching, I gradually changed my classroom approach from testing to teaching listening by investing more in developing students’ decoding skills. In other words, I began to spend more time focussing my students’ attention on, for example, how phonemes, syllables and chunks of language sound in various acoustic contexts.
Dictation as a window into the listener’s mind
Sheppard and Butler (2017) observe that ‘greater knowledge about what learners perceive when they listen could help language teachers better tailor their instruction to student needs’, with dictation being one method which can provide access to what students believe they heard. There are various ways of carrying out dictation activities. For example, we can: divide a short text into a series of small chunks or tone groups, read the text ourselves and pause for students to write what they think was said; pause audio recordings periodically for students to write the last four or five words they heard; or insert pauses, say after every four or five words, using an audio editing tool such as Audacity, again for students to write what they heard.
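For teachers who would rather script that last step than insert each pause by hand in Audacity, the sketch below shows one possible approach. It is only an illustration, not something drawn from the procedures above: it assumes the Python pydub library (with ffmpeg installed), a hypothetical file name, and fixed-length chunks as a rough stand-in for ‘every four or five words’, since a simple script cannot detect word boundaries.

```python
# A rough sketch (not from the article): insert writing pauses into an audio file.
# Assumes the third-party pydub library and an ffmpeg installation.
# "dictation_clip.mp3" and the chunk length are hypothetical placeholders;
# fixed 3-second chunks only approximate "every four or five words" of speech.
from pydub import AudioSegment

audio = AudioSegment.from_file("dictation_clip.mp3")
pause = AudioSegment.silent(duration=4000)   # 4 seconds of silence for writing time
chunk_ms = 3000                              # cut the recording every 3 seconds

paused_version = AudioSegment.empty()
for start in range(0, len(audio), chunk_ms):
    paused_version += audio[start:start + chunk_ms] + pause

paused_version.export("dictation_clip_paused.mp3", format="mp3")
```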
I usually write a short text, topically related to lesson content, and divide it into small chunks before dictating it, repeating each chunk if students want me to. Students write on paper or mobile devices, and give or send their transcripts to me. Dictation activities which involve delivering a text in chunks or word clusters provide the opportunity to raise awareness of how words blend together and create features of connected speech.
When I return students’ dictation transcripts along with the actual audio script, we talk through differences between what was written (anonymously) and what was said. For me, and evidently for my students too, this is the point at which the teaching and learning kick in, as we discuss selected features of spoken language. Wherever possible, I model the original word or chunk alongside students’ often ‘plausible hearings’ (Cauldwell, 2018b). It’s amazing how similar the words ‘cinema’ and ‘seminar’ can sound when pronounced quickly, and teacher modelling can demonstrate to students that they might not have been so far from the acoustic truth when deciphering one for the other after all.
Student transcriptions provide tangible evidence of their perception of what they had heard (and hadn’t), and can subsequently be used for analysis and discussion. Post-dictation discussions serve to raise students’ awareness of various features of spoken language, for example, how the pronunciation of words in connected speech might differ from the way they are pronounced in isolation. This is something most students seem to be unaware of. Indeed, when presented with this, a French learner spontaneously exclaimed, ‘Ah, now I understand why I don’t understand,’ much to the amusement and agreement of others in the group.
Decoding difficulties
Field (2008) points out that non-expert listeners need to work with considerable uncertainty when it comes to processing words: they might not be confident that they have heard a group of phonemes correctly, or be able to recall them in the same order as they were said. There is no doubt that word recognition presents formidable challenges, and in traditional listening classes such nitty-gritty listener problems are rarely worked on. The following transcriptions, by students of different L1 backgrounds, offer a flavour of the types of decoding challenge learners face when presented with chunks of language to transcribe. These include: not correctly recognising or hearing phonemes (examples 2, 4, 5), confusing similar-sounding structures and words (examples 3, 5, 9, 11, 12, 13, 15), not hearing the -ed and -er morphemes (examples 3, 8, 14), not hearing or mishearing unstressed function words (examples 3, 7, 10, 16), and having difficulty recognising words and phrases in connected speech as a result of linking (examples 1, 3), assimilation (example 1) and elision (examples 6, 16).
said... → transcribed as...
1. gets on well → get some well
2. that, in short, is what → that, insure it’s was
3. a lot smaller than → lots more than
4. we’re not short of → when not sort of
5. we’ll be → will be
6. I want to go → I won’t go
7. was at university → was in the university; went to university
8. I only needed → I only need
9. reaching → which in
10. forty years of songs → four years a song; forty year songs
11. seminar → cinema
12. journey → Jenny
13. float plane → flute plane
14. democratically elected → democratic elected
15. night → nice
16. I would do all → I do my
Students’ transcriptions commonly reveal phoneme-level problems requiring focus in listening classrooms, and they very often evidence small slips of the ear. Imagine, for example, that you believe you heard ‘brain’ for ‘vein’ or ‘Jenny’ for ‘journey’. Or what if you heard /s/ for /t/ and so decoded ‘nice’ for ‘night’? Would it be possible to make sense of the overall meaning of what was being said? The brain/vein confusion was experienced by a Spanish speaker, who couldn’t distinguish between the /b/ and /v/ consonants. Another example, hearing ‘Jenny’ for ‘journey’, involved three Chinese learners in a class with a student who went by the English name of Jenny. Two of the students reported not having recognised the long vowel sound /ɜː/, whereas for one the problem was a vocabulary issue: she didn’t know the word ‘journey’, and in her frantic search to tentatively match what she thought she had heard to a best fit, her limited mental lexicon threw up the name of her classmate as the closest candidate. In all cases, the mishearings derailed understanding.
Decoding problems regularly show up as a result of dictation, and they cry out for some form of on-the-spot teaching. Guided ear-training tasks such as minimal pair discrimination can be used to focus on specific examples, before students listen again to hear the word or words in context (see Field, 2008: 168, for a range of ear-training exercise types).
Students regularly have problems recognising the beginnings and endings of words, perhaps by failing to notice final consonants or syllables such as the plural and tense markers -es and -ed, and their ability to follow a speaker’s meaning is (momentarily, at least) compromised. Similar problems apply to other word endings, like -ing, -ee or -er, along with prefixes and suffixes in general. In order to recognise a word in the stream of speech, students need to be able to pick out where it starts and where it ends, and teacher modelling at various speeds can provide practice and gradually help students recognise a word. Practice in recognising frequent word beginnings and endings, by getting students to raise a finger when they hear the beginning of a given word, or the beginning of the following word, can help chip away at the hope-to-cope method.
The ability to locate word boundaries in connected speech tends to be taken for granted in the listening lesson. Students are inadvertently left with the ‘unspoken myth’ (Cauldwell, 2018a) that there are white spaces (or pauses) between spoken words, that words will always sound the way they do in citation form, and that chunks or words which co-occur will be carefully articulated. However, with word boundaries hard to determine, and students not necessarily alerted to this, spoken word recognition is difficult. One consequence is that students invariably attribute failure to understand to their own poor listening, rather than to the inherent difficulties created by continuous speech.
As well as paying attention to the features of connected speech and the challenges they present for listeners, we also need to pay more attention to helping non-expert listeners pick out function words, which are often reduced. According to Cauldwell (2018a), many teachers hold a ‘function-word fallacy’: the belief that function words in the stream of speech can be ignored. Clearly, we need to find a way of redressing this faulty reasoning: goodness knows how a listener can follow meaning if s/he is unable to pick out a pronoun! Periodic dictations of chunks, at different speeds and with students simply saying how many words they heard, can help them develop their ability to hear function words.
The cumulative effects of the difficulties of decoding phonemes, syllables, function words and known vocabulary create an enormous challenge for non-expert listeners attempting to attribute meaning to what a speaker is saying. The whole is often greater than the sum of the parts and, in post-listening discussions, students often report problems understanding specific information, detail, the main idea, or even the gist of what they had listened to.
The benefits of using dictation
With a window into the listener’s mind, we lift some of the mystery surrounding listening processes and bring the results into the classroom, where they are open to inspection by both teachers and students. Dictation enables us to identify listening problems and to teach economically and efficiently. With knowledge of which areas need attention, we can create useful and meaningful classroom practice which students see as relevant and valuable. Class discussion also offers the opportunity to raise awareness of appropriate listening strategies. All in all, we can help to nudge students in the direction of decoding automaticity through repeated and recycled exposure to common features of spoken language.
Dictation activities give learners the opportunity to compare concrete examples of mishearings from their own listening with the audio script, so they can start to reflect on and discover reasons for their listening difficulties. Class discussion can show that listening difficulty often resides with the speaker rather than with the learner. It also provides the opportunity to encourage students to move away from expecting to understand every word, and to develop a tolerance of ambiguity as they listen.
At some point in my teaching career, I’d intuited that ‘hope-to-cope listening comprehension’, along with pre-listening activities which betrayed an exclusively meaning-building focus, were somehow not entirely hitting the spot. I certainly didn’t feel that I’d taught anybody anything in the same way as I would have done had I been focussing on elements of language or on the other skills. I would have been unable to say what students were taking from one particular listening experience on to the next. Dictation affords me a rich resource with which to discover my learners’ listening needs, which might otherwise have been left blowing in the wind.
References
Cauldwell, R. T. (2018a, November 30). Insights into student listening. Speech in Action. https://www.speechinaction.org/47-insights-into-student-listening
Cauldwell, R. T. (2018b). A Syllabus for Listening – Decoding. Birmingham: Speech in Action.
Field, J. (2008). Listening in the Language Classroom. Cambridge: Cambridge University Press.
Sheerin, S. (1987). Listening comprehension: teaching or testing? ELT Journal, 41(2), 126-131.
Sheppard, B. & Butler, B. (2017). Insights into Student Listening from Paused Transcription. CATESOL Journal, 29(2), 81-107.