High-Variability Phonetic Training (HVPT) has been shown to be effective in improving the perception of the hardest non-native sounds. However, it remains unclear whether such training can enhance phonological processing at the lexical level. The present study tested whether HVPT also improves word recognition. Late French learners of English completed eight online sessions of HVPT on the perception of English word-initial /h/. This sound does not exist in French and has been shown to cause difficulty both at the prelexical (Mah, Goad & Steinhauer, 2016) and the lexical level of processing (Melnik & Peperkamp, 2019). At pretest and posttest, participants were administered a prelexical identification task and a lexical decision task. Results demonstrate that after training the learners’ accuracy improved in both tasks. Moreover, these improvements were retained four months after posttest. This is the first evidence that short training can enhance not only prelexical perception but also word recognition.
This study investigated the contribution of second-language (L2) phonetic categorization abilities and vocabulary size to the phonolexical encoding of challenging non-native phonological contrasts into the L2 lexicon. Two groups of German learners of English differing in L2 proficiency (advanced vs. intermediate) participated in an English lexical decision task including words and nonwords with /ɛ/ and /æ/ (/æ/ does not exist in German), an /ɛ/-/æ/ phonetic categorization task and an English vocabulary test. Results showed that the effects of phonetic categorization and vocabulary size on lexical decision performance were modulated by proficiency: categorization predicted /ɛ/-/æ/ nonword rejection accuracy for intermediate learners, whereas vocabulary did so for advanced learners. This suggests that sufficient phonetic identification ability is key for an accurate phonological representation of difficult L2 phones, but that, for learners in whom robust phonetic identification is already in place, ultimate success is tightly linked to L2 vocabulary size.
Speech perception involves both conceptual cues and perceptual cues. Each has individually been shown to guide bilinguals’ speech perception, but their potential interaction has been ignored: bilinguals have typically been given perceptual cues that could be predicted from the conceptual cues. Therefore, to target the perceptual–conceptual interaction, we created a restricted range of perceptual cues that either matched or mismatched bilinguals’ conceptual predictions based on the language context. Specifically, we designed an active speech perception task that concurrently collected electrophysiological data from Spanish–English bilinguals and English monolinguals to address the extent to which this cue interaction uniquely affects bilinguals’ speech sound perception and allocation of attentional resources. Bilinguals’ larger MMN-N2b in the mismatched context aligns with the Predictive Coding Hypothesis, suggesting that bilinguals use their diverse perceptual routines to best allocate cognitive resources to perceive speech.
Although bilinguals benefit from semantic context while perceiving speech-in-noise in their native language (L1), the extent to which bilinguals benefit from semantic context in their second language (L2) is unclear. Here, 57 highly proficient English–French/French–English bilinguals, who varied in L2 age of acquisition, performed a speech-perception-in-noise task in both languages while event-related brain potentials were recorded. Participants listened to and repeated the final word of sentences high or low in semantic constraint, in quiet and with a multi-talker babble mask. Overall, our findings indicate that bilinguals do benefit from semantic context while perceiving speech-in-noise in both their languages. Simultaneous bilinguals showed evidence of processing semantic context similarly to monolinguals. Early sequential bilinguals recruited additional neural resources, suggesting more effective use of semantic context in L2, compared to late bilinguals. Semantic context use was not associated with bilingual language experience or working memory.
This study aimed to investigate the benefit of Bonebridge devices in patients with single-sided deafness.
Method
Five patients with single-sided deafness who were implanted with Bonebridge devices were recruited in a single-centre study. Participants’ speech perception and horizontal sound localisation abilities were assessed at 6 and 12 months post-operatively. Speech intelligibility in noisy environments was measured in three testing conditions: speech and noise presented from the front; speech presented from the front with noise from the contralateral (normal ear) side; and speech presented from the ipsilateral (implanted Bonebridge) side with noise from the contralateral side. Sound localisation was evaluated in Bonebridge-aided and Bonebridge-unaided conditions at different stimulus levels (65, 70 and 75 dB SPL).
Results
All participants showed better speech intelligibility in quiet environments with the Bonebridge device. The speech recognition threshold with the Bonebridge device was significantly decreased at both short- and long-term follow-up in the condition with speech presented from the ipsilateral (implanted Bonebridge) side and noise from the contralateral side (p < 0.05). Additionally, participants maintained similar levels of sound localisation between the Bonebridge-aided and unaided conditions (p > 0.05). However, localisation accuracy showed some improvement at 70 dB SPL and 75 dB SPL post-operatively.
Conclusion
The Bonebridge device provides the benefit of improved speech perception performance in patients with single-sided deafness. Sound localisation abilities were neither improved nor worsened with Bonebridge implantation at the follow-up assessments.
Infants struggle to understand familiar words spoken in unfamiliar accents. Here, we examine whether accent exposure facilitates accent-specific adaptation. Two types of pre-exposure were examined: video-based (i.e., listening to pre-recorded stories; Experiment 1) and live interaction (reading books with an experimenter; Experiments 2 and 3). After video-based exposure, Canadian English-learning 15- to 18-month-olds failed to recognize familiar words spoken in an unfamiliar accent. However, after face-to-face interaction with a Mandarin-accented talker, infants showed enhanced recognition for words produced in Mandarin English compared to Australian English. Infants with live exposure to an Australian talker were not similarly facilitated, perhaps due to the lower vocabulary scores of the infants assigned to the Australian exposure condition. Thus, live exposure can facilitate accent adaptation, but this ability is fragile in young infants and is likely influenced by vocabulary size and the specific mapping between the speaker and the listener's phonological system.
The majority of the world’s population is believed to speak more than one language. Moreover, given current demographic trends, older adults make up a significant portion of our population. In this chapter, we review what is known about the intersection between cognitive aging and language processing in one’s first and second language. We review current research findings concerning speech and language processing in older bilinguals at the level of words, sentences, and discourse. We review the implications of being bilingual for nonlinguistic cognitive functions and cognitive reserve. We close by highlighting the need for models of auditory and visual language processing to accommodate age-related changes in sensation, perception and cognition, and to account for important individual differences in language history and use.
The current study investigates how second language auditory word recognition, in early and highly proficient Spanish–Basque (L1-L2) bilinguals, is influenced by crosslinguistic phonological-lexical interactions and semantic priming. Phonological overlap between a word and its translation equivalent (phonological cognate status), and semantic relatedness of a preceding prime were manipulated. Experiment 1 examined word recognition performance in noisy listening conditions that introduce a high degree of uncertainty, whereas Experiment 2 employed clear listening conditions, with low uncertainty. Under noisy listening conditions, semantic priming effects interacted with phonological cognate status: for word recognition accuracy, a related prime overcame inhibitory effects of phonological overlap between target words and their translations. These findings are consistent with models of bilingual word recognition that incorporate crosslinguistic phonological-lexical-semantic interactions. Moreover, they suggest an interplay between L2-L1 interactions and the integration of information across acoustic and semantic levels of processing in flexibly mapping the speech signal onto the spoken words, under adverse listening conditions.
This study investigated how Korean toddlers’ perception of stop categories develops along the acoustic dimensions of VOT and F0. To examine the developmental trajectory of VOT and F0 in toddlers’ perceptual space, a perceptual identification test with natural and synthesized sound stimuli was conducted with 58 Korean monolingual children (aged 2–4 years). The results revealed that toddlers’ perceptual mapping relies mainly on VOT in the high-pitch environment, resulting in more accurate perception of fortis and aspirated stops than of lenis stops. F0 development is correlated with the perceptual distinction of lenis from aspirated stops, but no consistent categorical perception for F0 was found before four years of age. The findings suggest that multi-parametric control in perceptual development guides the order of acquisition of Korean stop phonemes and that tonal development is significantly related to the acquisition of Korean phonemic contrasts.
Can children tell how different a speaker's accent is from their own? In Experiment 1 (N = 84), four- and five-year-olds heard speakers with different accents and indicated where they thought each speaker lived relative to a reference point on a map that represented their current location. Five-year-olds generally placed speakers with stronger accents (as judged by adults) at more distant locations than speakers with weaker accents. In contrast, four-year-olds did not show differences in where they placed speakers with different accents. In Experiment 2 (N = 56), the same sentences were low-pass filtered so that only prosodic information remained. This time, children judged which of five possible aliens had produced each utterance, given a reference speaker. Children of both ages showed differences in which alien they chose based on accent, and generally rated speakers with foreign accents as more different from their native accent than speakers with regional accents. Together, the findings show that preschoolers perceive accent distance, that children may be sensitive to the distinction between foreign and regional accents, and that preschoolers likely use prosody to differentiate among accents.
Recent findings demonstrate a bilingual advantage for voice processing in children, but the mechanism supporting this advantage is unknown. Here we examined whether a bilingual advantage for voice processing is observed in adults and, if so, if it reflects enhanced pitch perception or inhibitory control. Voice processing was assessed for monolingual and bilingual adults using an associative learning identification task and a discrimination task in English (a familiar language) and French (an unfamiliar language). Participants also completed pitch perception, flanker, and auditory Stroop tasks. Voice processing was improved for the familiar compared to the unfamiliar language and reflected individual differences in pitch perception (both tasks) and inhibitory control (identification task). However, no bilingual advantage was observed for either voice task, suggesting that the bilingual advantage for voice processing becomes attenuated during maturation, with performance in adulthood reflecting knowledge of linguistic structure in addition to general auditory and inhibitory control abilities.
Substantial individual differences exist in regard to type and amount of experience with variable speech resulting from foreign or regional accents. Whereas prior experience helps with processing familiar accents, research on how experience with accented speech affects processing of unfamiliar accents is inconclusive, ranging from perceptual benefits to processing disadvantages. We examined how experience with accented speech modulates mono- and bilingual children's (mean age: 9;10) ease of speech comprehension for two unfamiliar accents in German, one foreign and one regional. More experience with regional accents helped children repeat sentences correctly in the regional condition and in the standard condition. More experience with foreign accents did not help in either accent condition. The results suggest that type and amount of accent experience co-determine processing ease of accented speech.
Listening to speech entails adapting to vast amounts of variability in the signal. The present study examined the relationship between flexibility for adaptation in a second language (L2) and robustness of L2 phonolexical representations. Phonolexical encoding and phonetic flexibility for German learners of English were assessed by means of a lexical decision task containing nonwords with sound substitutions and a distributional learning task, respectively. Performance was analyzed for an easy (/i/-/ɪ/) and a difficult contrast (/ε/-/æ/, where /æ/ does not exist in German). Results showed that for /i/-/ɪ/ listeners were quite accurate in lexical decision, and distributional learning consistently triggered shifts in categorization. For /ε/-/æ/, lexical decision performance was poor but individual participants’ scores related to performance in distributional learning: the better learners were in their lexical decision, the smaller their categorization shift. This suggests that, for difficult L2 contrasts, rigidity at the phonetic level relates to better lexical performance.
Spanish speakers tend to perceive an illusory [e] preceding word-initial [s]-consonant sequences, e.g., perceiving [stið] as [estið] (Cuetos, Hallé, Domínguez & Segui, 2011), but this illusion is weaker for Spanish speakers who know English, which lacks the illusion (Carlson, Goldrick, Blasingame & Fink, 2016). The present study aimed to shed light on why this occurs by assessing how a brief interval spent using English impacts performance in Spanish auditory discrimination and lexical decision. Late Spanish–English bilinguals’ pattern of responses largely matched that of monolinguals, but their response times revealed significant differences between monolinguals and bilinguals, and between bilinguals who had just completed tasks in English vs. Spanish. These results suggest that late bilinguals do not simply learn to perceive initial [s]-consonant sequences veridically, but that elements of both their phonotactic systems interact dynamically during speech perception, as listeners work to identify what it was they just heard.
In this article, I present a selective review of research on speech perception development and its relation to reference, word learning, and other aspects of language acquisition, focusing on the empirical and theoretical contributions that have come from my laboratory over the years. Discussed are the biases infants have at birth for processing speech, the mechanisms by which universal speech perception becomes attuned to the properties of the native language, and the extent to which changing speech perception sensitivities contribute to language learning. These issues are reviewed from the perspective of both monolingual and bilingual learning infants. Two foci will distinguish this from my previous reviews: first and foremost is the extent to which contrastive meaning and referential intent are not just shaped by, but also shape, changing speech perception sensitivities, and second is the extent to which infant speech perception is multisensory and its implications for both theory and methodology.
It has long been debated whether speech production and perception remain flexible in adulthood. The current study investigates the effects of language dominance switch in Galician new speakers (neofalantes) who are raised with Spanish as a primary language and learn Galician at an early age in a bilingual environment, but in adolescence, decide to switch to using Galician almost exclusively, for ideological reasons. Results showed that neofalantes pattern with Spanish-dominants in their perception and production of mid-vowel and fricative contrasts, but with Galician-dominants in their realisation of unstressed word-final vowels, a highly salient feature of Galician. These results are taken to suggest that despite early exposure to Galician, high motivation and almost exclusive Galician language use post-switch, there are limitations to what neofalantes can learn in both production and perception, but that the hybrid categories they appear to develop may function as opportunities to mark identity within a particular community.
School-age children's understanding of unfamiliar accents is not adult-like, and the age at which this ability fully matures is unknown. To address this gap, eight- to fifteen-year-old children's (n = 74) understanding of native- and non-native-accented sentences in quiet and noise was assessed. Children's performance was adult-like by eleven to twelve years for the native accent in noise and by fourteen to fifteen years for the non-native accent in quiet. However, fourteen- to fifteen-year-olds' performance was not adult-like for the non-native accent in noise. Thus, adult-like comprehension of unfamiliar accents may require greater exposure to linguistic variability or additional cognitive–linguistic growth.
During the first two years of life, infants concurrently refine native-language speech categories and word learning skills. However, in the Switch Task, 14-month-olds do not detect minimal contrasts in a novel object–word pairing (Stager & Werker, 1997). We investigate whether presenting infants with acoustically salient contrasts (liquids) facilitates success in the Switch Task. The first two experiments demonstrate that acoustic differences boost infants’ detection of contrasts. However, infants cannot detect the contrast when the segments are digitally shortened. Thus, not all minimal contrasts are equally difficult, and the acoustic properties of a contrast matter in word learning.
This study investigates how second language (L2) listeners match an unexpected accented form to their stored form of a word. The phonetic-to-lexical mapping for L2 as compared to L1 regional varieties was examined with early and late Italian-L2 speakers who were all L1-Australian English speakers. AXB discrimination and lexical decision tasks were conducted in both languages, using unfamiliar regional accents that minimize (near-merge) consonant contrasts maintained in their own L1-L2 accents. Results reveal that in the L2, early bilinguals’ recognition of accented variants depended on their discrimination capacity. Late bilinguals, for whom the accented variants were not represented in their L2 lexicon, instead mapped standard and accented exemplars to the same lexical representations (i.e., dual mapping: Samuel & Larraza, 2015). By comparison, both groups showed the same broad accommodation to L1 accented variants. Results suggest qualitatively different yet similarly effective phonetic-to-lexical mapping strategies for L2 versus L1 regional accents.
A bilingual advantage has been found in both cognitive and social tasks. In the current study, we examine whether there is a bilingual advantage in how children process information about who is talking (talker-voice information). Younger and older groups of monolingual and bilingual children completed the following talker-voice tasks with bilingual speakers: a discrimination task in English and German (an unfamiliar language), and a talker-voice learning task in which they learned to identify the voices of three unfamiliar speakers in English. Results revealed effects of age and bilingual status. Across the tasks, older children performed better than younger children and bilingual children performed better than monolingual children. Improved talker-voice processing by the bilingual children suggests that a bilingual advantage exists in a social aspect of speech perception, where the focus is not on processing the linguistic information in the signal, but instead on processing information about who is talking.