
Children perceive speech onsets by ear and eye*

Published online by Cambridge University Press:  11 January 2016

SUSAN JERGER*
Affiliation:
School of Behavioral and Brain Sciences, GR4·1, University of Texas at Dallas, and Callier Center for Communication Disorders, Richardson, Texas
MARKUS F. DAMIAN
Affiliation:
School of Experimental Psychology, University of Bristol
NANCY TYE-MURRAY
Affiliation:
Department of Otolaryngology–Head and Neck Surgery, Washington University School of Medicine
HERVÉ ABDI
Affiliation:
School of Behavioral and Brain Sciences, GR4·1, University of Texas at Dallas
*Address for correspondence: Susan Jerger, School of Behavioral and Brain Sciences, GR4·1, University of Texas at Dallas, 800 W. Campbell Rd, Richardson, TX 75080. tel: 512-216-2961; e-mail: sjerger@utdallas.edu

Abstract

Adults use vision to perceive low-fidelity speech; yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see dynamic face articulate consonant/rhyme b/ag; hear non-intact onset/rhyme: –b/ag) vs. auditorily (see still face; hear exactly same auditory input). Audiovisual speech produced greater priming from four to fourteen years, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children – like adults – perceive speech onsets multimodally. Findings are critical for incorporating visual speech into developmental theories of speech perception.

Type: Articles

Copyright: © Cambridge University Press 2016


Footnotes

[*]

This research was supported by the National Institute on Deafness and Other Communication Disorders, grant DC-00421. Dr Abdi would like to acknowledge the support of an EURIAS fellowship at the Paris Institute for Advanced Studies (France), with the support of the European Union's 7th Framework Program for research, and funding from the French state managed by the Agence Nationale de la Recherche (program: Investissements d'avenir, ANR-11-LABX-0027-01 Labex RFIEA+). Sincere appreciation to (i) speech science colleagues for their guidance and advice to adopt a perceptual criterion for editing the non-intact stimuli and (ii) Dr Peter Assmann for generously giving of his time, talents, and software to prepare Figure 1. We thank Dr Brent Spehar for recording the audiovisual stimuli. We thank the children and parents who participated and the research staff who assisted, namely Aisha Aguilera, Carissa Dees, Nina Dinh, Nadia Dunkerton, Alycia Elkins, Brittany Hernandez, Cassandra Karl, Demi Krieger, Michelle McNeal, Jeffrey Okonye, Rachel Parra, and Kimberly Periman of UT-Dallas (data collection, analysis, presentation), and Derek Hammons and Scott Hawkins of UT-Dallas and Brent Spehar of Washington University School of Medicine (computer programming).


Supplementary material

Jerger supplementary material S1: Appendices A, B, and C (File, 20.3 KB)

Jerger supplementary material S2: Appendices A, B, and C (File, 15.5 KB)

Jerger supplementary material S3: Appendices A, B, and C (File, 15.1 KB)