We investigate whether child-directed speech (CDS) contains a higher proportion of canonical pronunciations compared to adult-directed speech (ADS), focusing on Korean noun stem-final obstruent variation. In a word-teaching task, we observed that mothers use a higher rate of canonical pronunciation when addressing infants than when addressing adults. In a follow-up experiment, adults exhibited a higher rate of canonical pronunciation for high- than low-frequency words. Additional analyses conducted with only the high-frequency monosyllabic words from the two experiments found no evidence for simplified phonology in CDS when lexical frequency was controlled for. Our findings suggest that the higher rate of canonical forms in CDS, with respect to Korean morphophonological rules, is mediated by the frequency of word usage. Thus, the didactic function of CDS phonology appears to be a byproduct of mothers using familiar words with children. These results highlight the importance of considering word usage in investigating the nature of CDS.
Previous research on infant-directed speech (IDS) and its role in infants’ language development has largely focused on mothers, with fathers rarely investigated. Here we examine the acoustics of IDS, as compared to adult-directed speech (ADS), in Norwegian mothers’ and fathers’ speech to their 8-month-old infants, and whether these acoustics relate to direct (eye-tracking) and indirect (parental report) measures of infants’ word comprehension. Forty-five parent-infant dyads participated in the study. Parents (24 mothers, 21 fathers) were recorded reading a picture book to their infant (IDS) and to an experimenter (ADS), ensuring identical linguistic context across speakers and registers. Results showed that both mothers’ and fathers’ IDS had exaggerated prosody, expanded vowel spaces, and more variable and less distinct vowels. We found no evidence that acoustic features of parents’ speech were associated with infants’ word comprehension. Potential reasons for the lack of such a relationship are discussed.
The ability to process plural marking of nouns is acquired early: at a very young age, children can understand whether a noun refers to one item or more than one. However, little is known about how the segmental characteristics of plural marking are used in this process. Using eye-tracking, we aim to understand how five- to twelve-year-old children use the phonetic, phonological, and morphological information available to process noun plural marking in German (a highly complex system) compared to adults. We expected differences with stem vowels, stem-final consonants, or different suffixes, alone or in combination, reflecting different processing of their segmental information. Our results show that for plural processing: 1) a suffix is the most helpful cue, an umlaut the least helpful, and voicing does not play a role; 2) one cue can be sufficient; and 3) school-age children have not reached adult-like processing of plural marking.
This study reports on the feasibility of using the Test of Complex Syntax-Electronic (TECS-E) as a self-directed app to measure sentence comprehension in children aged 4 to 5½ years; how testing apps might be adapted for effective independent use; and agreement levels between face-to-face supported computerized testing and independent computerized testing with this cohort. A pilot phase was completed with children aged 4;00 to 4;06 to determine the functional app features required to facilitate independent test completion. Following the integration of the identified features, children completed the app independently or with adult support (4;00–4;05, n = 22; 4;06–4;11, n = 55; 5;00–5;05, n = 113), and test-retest reliability was examined. Independent test completion posed problems for children under 5 years; for those over 5, however, the TECS-E is a reliable method for assessing children’s understanding of complex sentences when used independently.
We compare two frameworks for the segmentation of words in child-directed speech, PHOCUS and MULTICUE. PHOCUS is driven by lexical recognition, whereas MULTICUE combines sub-lexical properties to make boundary decisions, representing differing views of speech processing. We replicate these frameworks, perform novel benchmarking, and confirm that both achieve competitive results. We then develop a new framework for segmentation, the DYnamic Programming MULTIple-cue framework (DYMULTI), which combines the strengths of PHOCUS and MULTICUE by considering both sub-lexical and lexical cues when making boundary decisions. DYMULTI achieves state-of-the-art results and outperforms PHOCUS and MULTICUE on 15 of 26 languages in a cross-lingual experiment. These results validate DYMULTI, a model built on psycholinguistic principles, as a robust model for speech segmentation and a contribution to the understanding of language acquisition.
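The abstract describes DYMULTI only at a high level. As a rough illustration of the general technique rather than the published implementation, the following minimal Python sketch segments an unsegmented phoneme string by dynamic programming, weighting a lexical cue against a sub-lexical boundary cue; the lexicon, the boundary-scoring function, the weight, and the unknown-word penalty are all illustrative assumptions.

import math

def segment(utterance, lexicon, boundary_score, alpha=0.5):
    # best[i] holds (score of best segmentation of utterance[:i], backpointer).
    n = len(utterance)
    best = [(-math.inf, 0)] * (n + 1)
    best[0] = (0.0, 0)
    for end in range(1, n + 1):
        for start in range(end):
            word = utterance[start:end]
            # Lexical cue: log relative frequency of a known word; unknown
            # substrings get a fixed penalty so novel words can still win.
            lex = math.log(lexicon[word]) if word in lexicon else -10.0
            # Sub-lexical cue: how boundary-like the position after this word is.
            sub = math.log(max(boundary_score(utterance, end), 1e-9))
            score = best[start][0] + alpha * lex + (1 - alpha) * sub
            if score > best[end][0]:
                best[end] = (score, start)
    # Walk the backpointers to recover the winning word sequence.
    words, i = [], n
    while i > 0:
        start = best[i][1]
        words.append(utterance[start:i])
        i = start
    return list(reversed(words))

# Toy usage with a made-up lexicon and a flat sub-lexical cue:
lexicon = {"the": 0.5, "dog": 0.3, "dogs": 0.2}
print(segment("thedog", lexicon, lambda u, i: 0.5))  # -> ['the', 'dog']

In this sketch the two cues trade off through a single weight; a richer boundary-scoring function (e.g., one based on transitional probabilities or utterance-final phoneme statistics) would stand in for the multiple sub-lexical cues that the framework combines.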
Parents are often a good source of information, introducing children to how the world around them is described and explained in terms of cause-and-effect relations. Parents also vary in their speech, and these variations can predict children’s later language skills. Being born preterm may also shape such parent-child interactions. The present longitudinal study investigated parental causal language use in Turkish, a language with dedicated causative morphology, across three time points, when preterm and full-term children were 14, 20, and 26 months old. Although preterm children heard fewer words overall, there were no differences between the preterm and full-term groups in the proportion of causal language input. Parental causal language input increased from 20 to 26 months, while the amount of overall verbal input remained the same. These findings suggest that neonatal status can influence the amount of overall parental talk, but not parental use of causal language.
Indirect answers are a common type of non-literal language that do not provide an explicit “yes” or “no” to a question (e.g., “I have to work late” as a negative answer to “Are you going to the party?”). In the current study, we examined the developmental trajectory of comprehension of indirect answers among 5- to 10-year-old children with typical development. Forty-eight children, 23 boys and 25 girls, between the ages of 5;0 and 10;11 (M = 8;2, SD = 19.77 months) completed an experimental task in which they judged whether a verbally presented indirect answer meant yes or no (Comprehension Task) and then explained their choice (Explanation Task). Responses were scored for accuracy and coded for error analysis. On the Comprehension Task, the 5- to 8-year-olds performed with approximately 85% accuracy, while the 9- and 10-year-olds achieved 95% accuracy. On the Explanation Task, the cross-sectional trajectory revealed three stages: the 5- and 6-year-olds adequately explained indirect answers 32% of the time, the 7- and 8-year-olds performed significantly higher at 55%, and the 9- and 10-year-olds made further significant gains, reaching 66%. Error analysis revealed that when children failed to interpret speaker intentions appropriately, they repeated the speaker’s utterance or provided an insufficient explanation 80% of the time. Other responses, such as those irrelevant to the context, “I don’t know” or no response, or made-up interpretations, each accounted for 2%-10% of the total inadequate explanations. Study findings indicate discrepancies between the two task performances and offer two separate sets of baseline data for future comparisons investigating comprehension or explanation of indirect answers by children with different cultural and linguistic backgrounds and by those with varying cognitive and language profiles.
One approach to studying how children acquire language is to simulate language acquisition through computational modelling. Computational models implement theories of language acquisition, and simulation outcomes can then be tested against existing real-world data or in new empirical research. More than ten years have passed since the Journal of Child Language published a special issue on the topic, edited and introduced by Brian MacWhinney (MacWhinney, 2010). Now is thus a good time to take stock of recent developments by bringing together a collection of articles that explore recent research and insights from computational modelling of child language acquisition.
Many Aboriginal Australian communities are undergoing language shift from traditional Indigenous languages to contact varieties such as Kriol, an English-lexified creole. Kriol is reportedly characterised by lexical items with highly variable phonological specifications and by variable implementation of voicing and manner contrasts in obstruents (Sandefur, 1986). A language such as Kriol, characterised by this unusual degree of variability, presents Kriol-acquiring children with a potentially difficult language-learning task, and one which challenges prevalent theories of acquisition. To examine stop consonant acquisition in this unusual language environment, we present a study of Kriol stop and affricate production, followed by a mispronunciation detection study, with Kriol-speaking children (ages 4-7) from a Northern Territory community where Kriol is the lingua franca. In contrast to previous claims, the results suggest that Kriol-speaking children acquire a stable phonology and lexemes with canonical phonemic specifications, and that this stability does not appear to be induced by English experience.
The present study explored developmental differences in preschoolers’ use of reported speech and internal state language in personal narratives. Three-, four-, and five-year-olds attending a laboratory preschool shared 204 stories about ‘a time when you were happy/sad’. Stories were audio-recorded, transcribed, and coded for reported speech (direct, indirect, narrativized) and internal state language (cognitive states, total emotion terms, unique emotion terms). Personal narratives told by five-year-olds included more cognitive states and more narrativized speech than those told by three- and four-year-olds, even when accounting for children’s vocabulary skills; in addition, reported speech (narrativized, direct) was positively correlated with cognitive state talk. These findings highlight distinct shifts in children’s use of cognitive state talk and reported speech in personal narratives told at age five. Associations between reported speech and internal state language are both informed by and support Vygotsky’s (1978) fundamental claim that psychological processes are socially mediated by language.
This study investigated links between the development of children’s understanding of ironic comments and their metapragmatic knowledge. Forty-six 8-year-olds completed the short version of the Irony Comprehension Task, during which they were presented with ironic comments in three stories and asked to explain why the speaker in each story uttered an ironic comment. We coded their responses and compared the results to similar data collected previously from 5-year-olds. Results showed that, compared to the younger children, 8-year-olds referred more frequently to interlocutors’ emotions and intentions and to metapragmatics. These results support the view that comprehension of verbal irony is an emerging skill in children.
In this study, we report an extensive investigation of the structural language and acoustic specificities of the spontaneous speech of ten three- to five-year-old verbal autistic children. The autistic children were compared to a group of ten typically developing (TD) children, matched pairwise on chronological age, nonverbal IQ, and socioeconomic status, and groupwise on verbal IQ and gender, on various measures of structural language (phonetic inventory, lexical diversity, and morpho-syntactic complexity) and a series of acoustic measures of speech (mean and range of fundamental frequency, a formant dispersion index, syllable duration, jitter, and shimmer). Results showed that, overall, the structure and acoustics of the verbal autistic children’s speech were highly similar to those of the TD children. The few remaining atypicalities in the speech of the autistic children lay in a restricted use of different vocabulary items, a somewhat diminished morpho-syntactic complexity, and a slightly exaggerated syllable duration.
The current study investigated whether vocabulary relates to phonetic categorization at the neural level in early childhood. Electroencephalogram (EEG) responses were collected from 53 Dutch 20-month-old children in a passive oddball paradigm, in which they were presented with two nonwords, “giep” [ɣip] and “gip” [ɣɪp], contrasted solely by the vowel. In the multiple-speaker condition, both nonwords were produced by twelve different speakers, while in the single-speaker condition a single token of each word served as the stimulus. Infant positive mismatch responses (p-MMR) were elicited in both conditions, with no significant amplitude differences. When the infants were median-split by vocabulary level, the large- and small-vocabulary groups showed comparable p-MMR amplitudes but different scalp distributions in both conditions. These results suggest successful phonetic categorization of similar-sounding native vowels at 20 months, and a close relationship between speech categorization and vocabulary development.
Current understanding of word-finding (WF) difficulties in children and their underlying language processing deficits is poor. Authors have proposed that different underlying deficits may result in different profiles. The current study aimed to better understand WF difficulties by identifying which tasks are difficult for children with WF difficulties and by focusing on semantic vs. phonological profiles. Twenty-four French-speaking children with WF difficulties and 22 children without, all aged 7 to 12 years, participated. They were compared on a range of measures covering the overall mechanism of WF and the quality of semantic and phonological representations. The largest differences were found on a parent questionnaire and a word definition task. Cluster analyses revealed “high performance” and “low performance” clusters, with intermediary groups. These clusters did not match the expected semantic vs. phonological profiles derived from models of lexical access, suggesting that WF difficulties may be linked to both semantic and phonological deficits.
Cochlear implants (CIs) enhance linguistic skills in children who are deaf or hard of hearing (D/HH). However, the benefits of CIs have not been sufficiently studied, especially with regard to communicative-pragmatic ability, i.e., the ability to communicate appropriately in a specific context using different expressive means, such as language and extralinguistic or paralinguistic cues. The study aimed to assess the development of communicative-pragmatic ability, through the Assessment Battery for Communication (ABaCo), in school-aged children with CIs, to compare their performance to that of a group of children with typical auditory development (TA), and to investigate whether a CI received under the age of 24 months promotes the typical development of this ability. Results showed that children with CIs performed significantly worse than TA children on the paralinguistic and contextual scales of the ABaCo. Moreover, age at first implantation played a significant role in the development of communicative-pragmatic ability.
We examined how noun frequency and the typicality of surrounding linguistic context contribute to children’s real-time comprehension. Monolingual English-learning toddlers viewed pairs of pictures while hearing sentences with typical or atypical sentence frames (Look at the… vs. Examine the…), followed by nouns that were higher- or lower-frequency labels for a referent (horse vs. pony). Toddlers showed no significant differences in comprehension of nouns in typical and atypical sentence frames. However, they were less accurate in recognizing lower-frequency nouns, particularly among toddlers with smaller vocabularies. We conclude that toddlers can recognize nouns in diverse sentence contexts, but their representations develop gradually.
Linguistic input in multilingual, multicultural contexts is highly variable. We examined the production of English and Malay laterals by fourteen early bilingual preschoolers in Singapore who were exposed to several allophones of coda laterals: Malay caregivers use predominantly clear-l in English and Malay, but their English coda laterals can also be l-less (vocalised/deleted) and, in formal contexts, velarised. By contrast, the English coda laterals of the Chinese majority are typically l-less. Findings show that, as in their caregivers’ speech, the children’s English coda laterals were overall more likely to be l-less than their Malay laterals, and English coda laterals produced by children with close Chinese peer(s) were more likely to be l-less than those produced by children without. All children produced English coda clear-l, demonstrating the transmission of an ethnic marker that had emerged from long-term contact. In diverse settings, variation is intrinsic to the acquisition process, and input properties and language experience are important considerations in predicting language outcomes.
Infant-directed speech often has hyperarticulated features, such as point vowels whose formants are further apart than in adult-directed speech. This increased “vowel space” may reflect the caretaker’s effort to speak more clearly to infants, thus benefiting language processing. However, hyperarticulation may also result from the more positive valence (e.g., speaking with positive vocal emotion) often found in mothers’ speech to infants. This study was designed to replicate earlier findings of hyperarticulation in mothers’ speech to their 6-month-olds, but also to examine their speech to a non-human infant (i.e., a puppy). We rated both kinds of maternal speech for emotional valence and also recorded mothers’ speech to a human adult. We found that mothers produced more positively valenced utterances and some hyperarticulation in both their infant- and puppy-directed speech, compared to their adult-directed speech. This finding supports examining maternal speech from a multi-faceted perspective that includes emotional state.
Infant-directed speech (IDS) produced in laboratory settings contains acoustic cues, such as pauses, pitch changes, and vowel lengthening, that could facilitate breaking speech into smaller units, such as syntactically well-formed utterances, and the noun- and verb-phrases within them. It is unclear whether these cues are present in speech produced in more natural contexts outside the lab. We captured LENA recordings of caregiver speech to 12-month-old infants in daylong interactions (N = 49) to address this question. We found that the final positions of syntactically well-formed utterances contained greater vowel lengthening and pitch changes, and were followed by longer pauses, relative to non-final positions. However, we found no evidence that these cues were present at utterance-internal phrase boundaries. Results suggest that acoustic cues marking the boundaries of well-formed utterances are salient in everyday speech to infants and highlight the importance of characterizing IDS in a large sample of naturally-produced speech to infants.
This study is a validation of the LENA system for the Italian language. In Study 1, to test LENA’s accuracy, seventy-two 10-minute samples extracted from daylong LENA recordings were manually transcribed for 12 children observed longitudinally at 1;0 and 2;0. We found strong correlations between LENA and human estimates for the Adult Word Count (AWC) and Child Vocalisation Count (CVC), and a weak correlation for the Conversational Turn Count (CTC). In Study 2, to test concurrent validity, direct and indirect language measures were considered for a sample of 54 recordings (19 children). Correlational analyses showed that LENA’s CVC and CTC were significantly related to the children’s vocal production, to a parent-report measure of prelexical vocalizations, and to vocal reactivity scores. These results confirm that the automatic analyses performed by the LENA device are reliable and powerful tools for studying language development in Italian-speaking infants.