We investigate the frequency diverse array (FDA) for joint radar and communication systems. The basic idea is to use the transmitter/receiver modules of the radar system for communication purposes during the listening mode as a secondary function, while the radar performs its routine functions during the active mode as a primary function. An FDA at the transmitter side is used to produce an orthogonal frequency division multiplexed signal, which is proposed for the communication system. The directivity of the radar antenna, the FDA in this case, provides the additional advantage of mitigating interference arriving from outside the direction of interest (DoI). The proposed technique allows two beampatterns to be transmitted sequentially from the same FDA structure. Because the communication signal is transmitted in the mainlobe of the second beampattern, the achieved bit error rate is better than that of existing techniques that use sidelobe transmission for communications. At the receiver, the incoming radar and communication signals arrive from different spatial angles. Simulation results demonstrate the ability of the proposed scheme to suppress interference outside the DoI. Furthermore, we analyze the signal-to-interference ratio and the Cramér–Rao lower bounds for angle and range estimation for the proposed technique.
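The range- and time-dependent beampattern that distinguishes an FDA from a conventional phased array can be illustrated with a minimal sketch. All parameter values and the normalization below are illustrative assumptions, not figures from the paper: each element radiates at a slightly offset frequency, so the array factor depends jointly on angle, range, and time.

```python
import numpy as np

# Minimal FDA array-factor sketch; all parameter values are
# illustrative assumptions, not taken from the paper.
c = 3e8            # speed of light (m/s)
f0 = 10e9          # carrier frequency (Hz)
df = 3e3           # inter-element frequency increment (Hz)
N = 16             # number of array elements
d = c / (2 * f0)   # half-wavelength element spacing (m)

def fda_array_factor(theta, R, t):
    """Normalized magnitude of the FDA array factor at angle theta (rad),
    range R (m), and time t (s). The phase of element n combines the
    angle-dependent term (as in a phased array) with the range- and
    time-dependent terms introduced by the frequency offset n*df."""
    n = np.arange(N)
    phase = 2 * np.pi * n * (f0 * d * np.sin(theta) / c + df * t - df * R / c)
    return np.abs(np.exp(1j * phase).sum()) / N

# All per-element phases align at broadside with zero range and time offsets.
print(fda_array_factor(0.0, 0.0, 0.0))  # -> 1.0
```

Sweeping `theta` and `R` in this sketch reproduces the characteristic slanted FDA beampattern in the angle–range plane, which is what allows the two sequential beampatterns described above to separate radar and communication directions.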
Seven half-day regional listening sessions were held between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide-resistance management. The objective of the listening sessions was to connect with stakeholders and hear their challenges and recommendations for addressing herbicide resistance. The coordinating team hired Strategic Conservation Solutions, LLC, to facilitate all the sessions. They and the coordinating team used in-person meetings, teleconferences, and email to communicate and coordinate the activities leading up to each regional listening session. The agenda was the same across all sessions and included small-group discussions followed by reporting to the full group for discussion. The planning process was the same across all the sessions, although the selection of venue, time of day, and stakeholder participants differed to accommodate the differences among regions. The listening-session format required a great deal of work and flexibility on the part of the coordinating team and regional coordinators. Overall, the participant evaluations from the sessions were positive, with participants expressing appreciation that they were asked for their thoughts on the subject of herbicide resistance. This paper details the methods and processes used to conduct these regional listening sessions and provides an assessment of the strengths and limitations of those processes.
Herbicide resistance is ‘wicked’ in nature; therefore, results of the many educational efforts to encourage diversification of weed control practices in the United States have been mixed. It is clear that we do not sufficiently understand the totality of the grassroots obstacles, concerns, challenges, and specific solutions needed for varied crop production systems. Weed management issues and solutions vary with such variables as management styles, regions, cropping systems, and available or affordable technologies. Therefore, to help the weed science community better understand the needs and ideas of those directly dealing with herbicide resistance, seven half-day regional listening sessions were held across the United States between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide resistance management. The major goals of the sessions were to gain an understanding of stakeholders and their goals and concerns related to herbicide resistance management, to become familiar with regional differences, and to identify decision maker needs to address herbicide resistance. The messages shared by listening-session participants could be summarized by six themes: we need new herbicides; there is no need for more regulation; there is a need for more education, especially for others who were not present; diversity is hard; the agricultural economy makes it difficult to make changes; and we are aware of herbicide resistance but are managing it. The authors concluded that more work is needed to bring a community-wide, interdisciplinary approach to understanding the complexity of managing weeds within the context of the whole farm operation and for communicating the need to address herbicide resistance.
In 2015, the American College of Obstetricians and Gynecologists issued a recommendation to screen women for depression and anxiety symptoms at least once during the perinatal period. Nevertheless, many identified women will not receive care from a behavioral health specialist. Listening Visits (LV), developed for delivery by nurses and validated in the United Kingdom, have recently been evaluated in a US-based randomized controlled trial (RCT), which recruited research participants from three home-visiting programs and an urban OB/GYN practice. RCT results indicated statistically and clinically significant improvement in depression symptoms. To bridge the gap between evidence and practice, and based on experiences garnered at the OB/GYN site during the RCT, this development paper proposes a strategy for implementing depression screening and LV into routine clinical care in this practice setting.
This study investigates the effect of metacognitive instruction through dialogic interaction in a joint activity on advanced Iranian English as a foreign language (EFL) learners’ multimedia listening and their metacognitive awareness in listening comprehension. The data were collected from 180 male and female Iranian advanced learners (N = 180), ranging from 16 to 24 years of age, assigned to three groups. The first two groups were experimental (n = 60 each), trained through a structured intervention program focusing on metacognitive instruction through dialogic interaction (MIDI) or metacognitive instruction (MI) alone for 10 sessions. The learners in the experimental groups were involved in 60 minutes of practice twice a week. The third group was a control group (n = 60), trained through regular classroom listening activities without the structured intervention program. Multimedia listening tests and the Metacognitive Awareness Listening Questionnaire (MALQ) were used to track the advanced learners’ multimedia listening comprehension and metacognitive awareness. The results showed that metacognitive instruction through dialogic interaction improved both the advanced learners’ multimedia listening comprehension and their metacognitive awareness in listening.
This study investigates the effects of multimedia glosses on text recall and incidental vocabulary learning in a mobile-assisted L2 listening task. A total of 88 participants with a low level of proficiency in English were randomly assigned to one of four conditions that involved single channel (textual-only, pictorial-only) and dual-channel (textual-plus-pictorial) glosses as well as a control condition where no glosses were provided. The participants listened to a story through their mobile phones and were engaged in an immediate free recall task and unannounced vocabulary tests after listening. The findings indicated that access to glosses facilitated recognition and production of vocabulary with the type of gloss having no effect. On the other hand, glosses had no effect on text recall.
As a reliable and valid measure of perceptual auditory laterality, dichotic listening has been successfully applied in studies across many countries and languages. However, languages differ in the linguistic relevance of a change in the initial phoneme of words (e.g., for word identification). In the present cross-language study, we examine the effect of these differences on dichotic-listening task performance to establish how characteristics of one’s native language affect the perception of nonnative phonological features. We compared 33 native speakers of Norwegian, a language characterized by a clear distinction between voiced and unvoiced initial plosive consonants, with 30 native speakers of Estonian, a language that has exclusively unvoiced initial phonemes. Using a free-report dichotic-listening paradigm with pairs of voiced (/ba/, /da/, /ga/) and unvoiced (/pa/, /ta/, /ka/) stop consonant–vowel syllables as stimulus material, the Norwegian native speakers were found to be more sensitive to the voicing of the initial plosive than the Estonian group. “Voicing” explained 69% and 18% of the variance in perceptual auditory laterality in the Norwegian and the Estonian samples, respectively. This indicates that experiential differences, likely arising during acquisition of the mother tongue in early development, permanently shape sensitivity to the voicing contrast.
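The per-listener laterality scores that feed such variance analyses are conventionally summarized with a laterality index computed from correct right- and left-ear reports. The formula below is the standard one from the dichotic-listening literature, not a detail taken from this study.

```python
def laterality_index(right_correct, left_correct):
    """Conventional dichotic-listening laterality index in percent:
    positive values indicate a right-ear advantage, negative a left-ear one."""
    total = right_correct + left_correct
    return 100.0 * (right_correct - left_correct) / total

# A listener correctly reporting 30 right-ear and 20 left-ear items:
print(laterality_index(30, 20))  # -> 20.0
```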
This paper introduces a novel captioning method, partial and synchronized captioning (PSC), as a tool for developing second language (L2) listening skills. Unlike conventional full captioning, which provides the full text and allows comprehension of the material merely by reading, PSC promotes listening to the speech by presenting a selected subset of words, where each word is synched to its corresponding speech signal. In this method, word-level synchronization is realized by an automatic speech recognition (ASR) system, dedicated to the desired corpora. This feature allows the learners to become familiar with the correspondences between words and their utterances. Partialization is done by automatically selecting words or phrases likely to hinder listening comprehension. In this work we presume that the incidence of infrequent or specific words and fast delivery of speech are major barriers to listening comprehension. The word selection criteria are thus based on three factors: speech rate, word frequency and specificity. The thresholds for these features are adjusted to the proficiency level of the learners. The selected words are presented to aid listening comprehension while the remaining words are masked in order to keep learners listening to the audio. PSC was evaluated against no-captioning and full-captioning conditions using TED videos. The results indicate that PSC leads to the same level of comprehension as the full-captioning method while presenting less than 30% of the transcript. Furthermore, compared with the other methods, PSC can serve as an effective medium for decreasing dependence on captions and preparing learners to listen without any assistance.
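The three-factor word selection described above (speech rate, word frequency, specificity, with proficiency-adjusted thresholds) can be sketched as a simple filter. The function name, threshold values, and scoring dictionaries below are hypothetical illustrations, not the authors’ implementation.

```python
# Hypothetical sketch of PSC-style word selection: a word is shown (rather
# than masked) if it is infrequent, domain-specific, or delivered quickly.
# Thresholds and scores are illustrative assumptions, not the paper's values.

def select_caption_words(words, freq_rank, specificity, speech_rate,
                         rank_threshold=3000, spec_threshold=0.5,
                         rate_threshold=5.0):
    """Return the sublist of words to display; the rest stay masked."""
    shown = []
    for w in words:
        infrequent = freq_rank.get(w, 10**6) > rank_threshold  # unknown = rare
        specific = specificity.get(w, 0.0) > spec_threshold
        fast = speech_rate.get(w, 0.0) > rate_threshold        # syllables/s
        if infrequent or specific or fast:
            shown.append(w)
    return shown

# "quantum" is absent from the frequency list, so it is treated as rare.
print(select_caption_words(["the", "quantum", "cat"],
                           {"the": 1, "cat": 800}, {}, {}))  # -> ['quantum']
```

Raising or lowering the thresholds per learner group is how the sketch mirrors the paper’s adjustment of word selection to proficiency level.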
Learning how to comprehend while listening to a second language is often considered by learners to be a difficult process that can lead to anxiety when trying to communicate (Graham, 2006; Graham & Macaro, 2008). Computer-mediated communication (CMC) can be used to assist in increasing access to native speakers and opportunities to listen. This study investigates the effectiveness of the use of Second Life and Skype as part of facilitation techniques and the affordances of these online tools for developing listening comprehension. Participants in the study were learning either English or Croatian and were located in Sydney and Brisbane in Australia, Split in Croatia, and Mostar in Bosnia and Hercegovina. A mixed-methods approach was utilised incorporating pre-tests and post-tests (quantitative data) to gain information on the effectiveness of the techniques for developing listening comprehension and in-depth interviews (qualitative data) to gain the participants’ views on the perceived effectiveness of the techniques and the affordances of Second Life or Skype. The results of the study indicate that both techniques resulted in positive gains in the development of listening comprehension. Based on the analysis of the interview data, a more in-depth perspective on the affordances of each online tool was developed, which informed the creation of a new facilitation technique utilising both tools. The study demonstrates how online tools can be used to facilitate interaction between learners and illustrates the need for the selection of online tools for language learning to be based on pedagogy. It is recommended that the selection of tools should be carefully considered in alignment with task aims and the affordances of online tools.
Bilinguals are known to perform worse than monolinguals on speech-in-noise tests. However, the mechanisms underlying this difference are unclear. By varying the amount of linguistic information available in the target stimulus across five auditory-perception-in-noise tasks, we tested if differences in language-independent (sensory/cognitive) or language-dependent (extracting linguistic meaning) processing could account for this disadvantage. We hypothesized that language-dependent processing differences underlie the bilingual disadvantage and predicted that it would manifest on perception-in-noise tasks that use linguistic stimuli. We found that performance differences between bilinguals and monolinguals varied with the linguistic processing demands of each task: early, high-proficiency, Spanish–English bilingual adolescents performed worse than English monolingual adolescents when perceiving sentences, similarly when perceiving words, and better when perceiving tones in noise. This pattern suggests that bottlenecks in language-dependent processing underlie the bilingual disadvantage while language-independent perception-in-noise processes are enhanced.
There is ample evidence that native and non-native listeners use lexical knowledge to retune their native phonetic categories following ambiguous pronunciations. The present study investigates whether a non-native ambiguous sound can retune non-native phonetic categories. After a brief exposure to an ambiguous British English [l/ɹ] sound, Dutch listeners demonstrated retuning. This retuning was, however, asymmetrical: the non-native listeners seemed to show (more) retuning of the /ɹ/ category than of the /l/ category, suggesting that non-native listeners can retune non-native phonetic categories. This asymmetry is argued to be related to the large phonetic variability of /r/ in both Dutch and English.
This investigation aimed to develop and collect psychometric data for two tests assessing the listening comprehension of Portuguese students in primary school: the Test of Listening Comprehension of Narrative Texts (TLC-n) and the Test of Listening Comprehension of Expository Texts (TLC-e). Two studies were conducted. The purpose of study 1 was to construct four forms of each test to assess first-, second-, third- and fourth-grade students of primary school. The TLC-n was administered to 1042 students, and the TLC-e was administered to 848 students. The purpose of study 2 was to test the psychometric properties of new items for the TLC-n form for fourth graders, given that the results of study 1 indicated a severe lack of difficult items. The participants were 260 fourth graders. The data were analysed using the Rasch model. Thirty items were selected for each test form. The results provided support for the model assumptions: unidimensionality and local independence of the items. The reliability coefficients were higher than .70 for all test forms. The TLC-n and the TLC-e present good psychometric properties and represent an important contribution to the learning disabilities assessment field.
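The dichotomous Rasch model used in the item analysis models the probability of a correct response as a logistic function of the difference between person ability and item difficulty. A minimal sketch of that standard formula (not code from the study):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person with ability
    theta answers an item of difficulty b correctly (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An item exactly matched to the student's ability is answered
# correctly half the time.
print(rasch_p(0.0, 0.0))  # -> 0.5
```

The “lack of difficult items” reported in study 1 corresponds, in these terms, to too few items with difficulty `b` near or above the abilities `theta` of the strongest fourth graders.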
This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary-level English as a Second Language (ESL) classes. Classes were assigned either to a control group (n=31) or to one of two alternative treatment levels which used a web-based computer application enabling self-determined opportunities to repeatedly listen to and reconstruct spoken target text into its written form. Treatment group one (n=30) received text feedback after each of their efforts at target text reconstruction, whereas treatment group two (n=35) did not. Results indicated that word recognition gain scores of those who used the application, regardless of treatment level, were significantly higher than those of the control group. The relationship between the quantity of self-determined exposure to input and word recognition improvements was moderate but not linear, with those choosing moderate levels of speech input deriving the greatest measurable improvement. Neither increased levels of modified output nor the provision of text feedback was associated with significant improvements in word recognition gain scores. Implications for computer-mediated approaches to the development of L2 WRS are described and areas for future empirical research are suggested.
The aim of this study was to provide adaptive assistance to improve the listening comprehension of eleventh grade students. This study developed a video-based language learning system for handheld devices, using three levels of caption filtering adapted to student needs. Elementary level captioning excluded 220 English sight words (see Section 1 for definition), but provided captions and Chinese translations for the remaining words. Intermediate level excluded 1000 high frequency English words, but provided captions for the remaining words, and 2200 high frequency English words were excluded at the high intermediate caption filtering level. The result was that the viewers were provided with captions for words that were likely to be unfamiliar to them. Participants in the experimental group were assigned bilingual caption modes according to their pre-test results, while those in the control group were assigned standard caption modes. Our results indicate that students in the experimental group preferred adaptive captions, enjoyed the exercises more, and gained greater intrinsic motivation compared to those in the control group. The results confirm that different students require different quantities of information to balance listening comprehension and indicate that the proposed adaptive caption filtering approach may be an effective way to improve the skills required for listening proficiency.
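The adaptive filtering idea above (words inside the learner-level frequency list are assumed known and left uncaptioned; the rest receive captions) can be sketched as follows. The list sizes mirror the study’s three levels (220 sight words, 1000 and 2200 high-frequency words), but the function name and the tiny vocabulary are made-up stand-ins.

```python
# Hypothetical sketch of adaptive caption filtering: words within the
# learner-level frequency list are treated as known and left uncaptioned;
# the remaining words are captioned. Vocabulary below is a stand-in.

LEVEL_SIZES = {"elementary": 220, "intermediate": 1000,
               "high_intermediate": 2200}

def words_to_caption(transcript_words, frequency_ordered_vocab, level):
    """Return the transcript words that should be captioned for this level."""
    known = set(frequency_ordered_vocab[:LEVEL_SIZES[level]])
    return [w for w in transcript_words if w.lower() not in known]

vocab = ["the", "is", "cat", "run", "big"]  # stand-in frequency list
print(words_to_caption(["the", "heuristic", "cat"], vocab, "elementary"))
# -> ['heuristic']
```

Assigning a larger `LEVEL_SIZES` entry to a more proficient learner captions fewer words, which is the mechanism by which the system adapts the quantity of support to the pre-test result.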
Despite the hundreds of Mobile-Assisted Language Learning (MALL) publications over the past twenty years, statistically reliable measures of learning outcomes are few and far between. In part, this is due to the fact that well over half of all MALL-related studies report no objectively quantifiable learning outcomes, either because they did not involve MALL implementation projects, or if they did, learning gains were only based on subjective teacher assessments and/or student self-evaluations. Even more so, the paucity of statistically reliable learning outcome data stems from the short duration of projects and small numbers of students involved. Of the 291 distinct studies examined in this review only 35 meet minimal conditions of duration and sample size, i.e., ten experimental subjects over a period of at least a month. Sixteen of these suffer from serious design shortcomings, leaving only nineteen MALL studies that can reliably serve as a basis for determining the learning outcomes of mobile-based language applications. Of these studies, fifteen can be considered to report unequivocal positive results, with those focusing on reading, listening and speaking without exception evidencing a MALL application advantage. Four studies, all focusing on vocabulary, reported no significant differences.
The aim of this study was twofold: we investigated (a) the effect of two types of captioned video (i.e., on-screen text in the same language as the video) on listening comprehension; (b) L2 learners’ perception of the usefulness of captions while watching L2 video. The participants, 226 university-level students from a Flemish university, watched three short French clips in one of three conditions: the control group watched the clips without captions (N = 70), the second group had fully captioned clips (N = 81), the third group had keyword captioned clips (N = 75). After each clip, all participants took a listening comprehension test, which consisted of global and detailed questions. To answer the detailed questions, participants had access to an audio passage of the corresponding clip. At the end of the experiment, participants completed a questionnaire and open-ended survey questions about their perception of captions. Our findings revealed that the full captioning group outperformed both the no captioning and the keyword captioning group on the global comprehension questions. However, no difference was found between the keyword captioning and the no captioning group. Results of the detailed comprehension questions (with audio) revealed no differences between the three conditions. A content-analysis approach to the questionnaire indicated that learners’ perceived need for full captions is strong. Participants consider captions useful for speech decoding and meaning-making processes. Surprisingly, keyword captions were considered highly distracting. These findings suggest that full rather than keyword captioning should be considered when proposing video-based listening comprehension activities to L2 learners.
For many EFL learners, listening poses a grave challenge. The difficulty of segmenting a stream of speech and the limited capacity of short-term memory are common weaknesses for language learners. In particular, reduced forms, which frequently appear in authentic informal conversations, compound the challenges of listening comprehension. Numerous interventions have been implemented to assist EFL learners, and of these, the application of captions has been found highly effective in promoting learning. However, few studies have examined how different modes of captions may enhance listening comprehension. This study proposes three modes of captions: full, keyword-only, and annotated keyword captions, and investigates their contribution to the learning of reduced forms and overall listening comprehension. Forty-four EFL university students participated in the study and were randomly assigned to one of the three groups. The results revealed that all three groups improved from pre-test to post-test, with the annotated keyword caption group exhibiting the best performance with the highest mean score. Comparing performances between groups, the annotated keyword caption group also outperformed both the full caption and the keyword-only caption groups, particularly in the ability to recognize reduced forms. The study sheds light on the potential of annotated keyword captions to enhance reduced-forms learning and overall listening comprehension.
Listening comprehension in a second language (L2) is a complex and particularly challenging task for learners. Because of this, L2 learners and instructors alike employ different learning supports as assistance. Captions in multimedia instruction readily provide such support and have thus been an ever-increasing focus of many studies. However, captions must eventually be removed, as the goal of language learning is participation in the target language, where captions are not typically available. Consequently, this creates a dilemma, particularly for language instructors, as to the use of captioning supports: early removal may cause frustration, while late removal may create learning interference. Accordingly, the goal of the current study was to propose and employ a testing instrument, the Caption Reliance Test (CRT), which evaluates individual learners’ reliance on captioning in second language learning environments, giving a clear indication of that reliance and mirroring their support needs. The CRT comprises an auditory track accompanied by congruent textual captions, as well as particular incongruent textual words, to provide a means of testing. It was subsequently employed in an empirical study involving English as a Foreign Language (EFL) high school students. The results exhibited individual variance in the degree of reliance and, more importantly, exposed a negative correlation between caption reliance and L2 achievement. In other words, learners’ reliance on captions varies individually, and lower-level achievers rely on captions for listening comprehension more than their higher-level counterparts, indicating that learners at various comprehension levels require different degrees of caption support. Thus, through employment of the CRT, instructors are able to evaluate the degree to which learners rely on caption supports and make informed decisions regarding learners’ requirements and utilization of captions as a multimedia learning support.