This article presents reflections from 12 experts on language learner strategy (LLS) research. Each was asked to offer reflections on one of their domains of expertise, linking research into LLS with successful language learning and use practices. In essence, they were called upon to review recent scholarship by identifying areas where research results had already led to the enhancement of learner strategy use, as well as to describe ongoing and future research efforts intended to advance the strategy domain. The LLS areas dealt with include theory building, the dynamics of delivering strategy instruction (SI), meta-analyses of SI, learner diversity, SI for young language learners, SI for fine-tuning the comprehension and production of academic-level language, grammar strategies at the macro and micro levels, lessons learned from many years of LLS research in Greece, the past and future roles of technology aimed at enhancing language learning, and applications of LLS in content instruction. This review is intended to provide the field with an updated statement as to where we have been, where we are now, and where we need to go. Ideally, it will provide ideas for future studies.
For many researchers in the social sciences, including those in applied linguistics, the term ethics evokes the bureaucratic process of fulfilling the requirements of an ethics review board (e.g., in the US, an Institutional Review Board, or IRB) as a preliminary step in conducting human subjects research. The expansion of ethics review boards into the social sciences in the early 2000s has led applied linguistics as a field to experience what Haggerty (2004) termed ethics creep, a simultaneous expansion and intensification of external regulation of research activities. The aims of these ethics review boards are: (a) to evaluate the types and risk of harm to participants as a result of research activities, (b) to ensure that participants can give informed consent to be part of the research activities, and (c) to provide oversight of researcher procedures to maintain participant anonymity/confidentiality (Haggerty, 2004).
Language learning can be very emotional, as anyone who has ever tried to learn or use another language (L2) will attest. The range of emotions varies widely in both type and intensity, from the thrill of successfully expressing yourself, for example, to the anxiety of navigating a high-stakes encounter in an L2. It is not surprising, therefore, that there is a longstanding tradition of research on emotion in the context of L2 learning (e.g., Horwitz et al., 1986). In fact, more than 40 years ago, Scovel (1978) reviewed the accumulated evidence on the role of just one emotion: anxiety.
Elicited imitation tasks (EITs) have been proposed and examined as a practical measure of second language (L2) proficiency. This study aimed to provide an updated and comprehensive view of the relationship between EITs and other proficiency measures. Toward that end, 46 reports were retrieved contributing 60 independent effect sizes (Pearson’s r) that were weighted and averaged. Several EIT features were also examined as potential moderators. The results portray EIT as a generally consistent measure of L2 proficiency (r = .66). Among other moderators, EIT stimuli length was positively associated with stronger correlations. Overall, the findings provide support for the use of EITs as a means to greater consistency and practicality in measuring L2 proficiency. In our Discussion section, we highlight the need for more transparent reporting and provide empirically grounded recommendations for EIT design and for further research into EIT development.
Meta-analysis overcomes a number of the limitations of traditional literature reviews (Norris & Ortega, 2006). Consequently, meta-analysis has been applied as a synthetic technique across a range of scientific disciplines in recent decades. This paper seeks to formally introduce the potential of meta-analysis to the field of bilingualism. In doing so, we first describe several advantages of the meta-analytic approach, such as greater systematicity, objectivity, and transparency relative to narrative reviews. We also outline the major stages in conducting a meta-analysis, highlighting critical considerations encountered at each stage. These include (a) domain definition, (b) coding scheme development and implementation, (c) analysis, and (d) interpretation. The focus, however, is on providing a conceptual introduction rather than a full-length tutorial. Meta-analyses in bilingualism and nearby fields are referred to throughout in order to illustrate the points being made.
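To make the analysis stage concrete, here is a minimal Python sketch of the standard approach to aggregating per-study Pearson correlations: transform each r to Fisher's z, weight by inverse variance (n − 3), average, and back-transform. The study data and function names are hypothetical, for illustration only; they are not drawn from any study described here.

```python
import math

def fisher_z(r):
    """Fisher's z transformation, which stabilizes the sampling variance of r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a mean z to the correlation scale."""
    return math.tanh(z)

def weighted_mean_r(studies):
    """Aggregate (r, n) pairs weighted by n - 3, the inverse
    variance of Fisher's z, then return the pooled correlation."""
    numerator = sum((n - 3) * fisher_z(r) for r, n in studies)
    denominator = sum(n - 3 for r, n in studies)
    return inverse_fisher_z(numerator / denominator)

# Hypothetical studies: (observed correlation, sample size)
studies = [(0.55, 40), (0.70, 120), (0.62, 80)]
pooled = weighted_mean_r(studies)
print(round(pooled, 3))
```

Note that larger studies pull the pooled estimate toward their effect, which is the core advantage of weighting over a simple unweighted average of correlations.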
Data from self-paced reading (SPR) tasks are routinely checked for statistical outliers (Marsden, Thompson, & Plonsky, 2018). Such data points can be handled in a variety of ways (e.g., trimming, data transformation), each of which may influence study results in a different manner. This two-phase study sought, first, to systematically review outlier handling techniques found in studies that involve SPR and, second, to re-analyze raw data from SPR tasks to understand the impact of those techniques. Toward these ends, in Phase I, a sample of 104 studies that employed SPR tasks was collected and coded for different outlier treatments. As found in Marsden et al. (2018), wide variability was observed across the sample in terms of selection of time and standard deviation (SD)-based boundaries for determining what constitutes a legitimate reading time (RT). In Phase II, the raw data from the SPR studies in Phase I were requested from the authors. Nineteen usable datasets were obtained and re-analyzed using data transformations, SD boundaries, trimming, and winsorizing, in order to test their relative effectiveness for normalizing SPR reaction time data. The results suggested that, in the vast majority of cases, logarithmic transformation circumvented the need for SD boundaries, which blindly eliminate or alter potentially legitimate data. The results also indicated that choice of SD boundary had little influence on the data and revealed no meaningful difference between trimming and winsorizing, implying that blindly removing data from SPR analyses might be unnecessary. Suggestions are provided for future research involving SPR data and the handling of outliers in second language (L2) research more generally.
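The outlier treatments compared above can be illustrated with a short Python sketch. The reaction times, the 2.5-SD cutoff, and the function names are hypothetical and chosen for illustration; this is not the re-analysis code from the study itself.

```python
import math
import statistics

def log_transform(rts):
    """Log-transform reaction times to reduce right skew, which can
    circumvent the need for hard SD-based cutoffs."""
    return [math.log(rt) for rt in rts]

def sd_trim(rts, n_sd=2.5):
    """Trimming: remove RTs beyond n_sd standard deviations of the mean."""
    m, sd = statistics.mean(rts), statistics.stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= n_sd * sd]

def winsorize(rts, n_sd=2.5):
    """Winsorizing: pull extreme RTs back to the boundary
    instead of deleting them, preserving the number of observations."""
    m, sd = statistics.mean(rts), statistics.stdev(rts)
    lo, hi = m - n_sd * sd, m + n_sd * sd
    return [min(max(rt, lo), hi) for rt in rts]

# Hypothetical reading times in ms, with one extreme value
rts = [320, 350, 410, 380, 295, 360, 330, 400, 310, 2400]
trimmed = sd_trim(rts)        # drops the extreme RT
capped = winsorize(rts)       # keeps it, but pulled to the boundary
```

The contrast between `sd_trim` and `winsorize` mirrors the study's finding that the two differ mainly in whether extreme-but-possibly-legitimate data are discarded or retained in altered form.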
First, we trace the history of second language acquisition (SLA) from early stages in the mid-twentieth century to today. We next consider the status of the field in today's research world with a particular focus on all aspects of methodology and, finally, we take a look at the future and discuss issues related to scientific rigor in light of Open Science.
Second language (L2) anxiety has been the object of constant empirical and theoretical attention for several decades. As a matter of both theoretical and practical interest, much of the research in this domain has examined the relationship between anxiety and L2 achievement. The present study meta-analyzes this body of research. Following a comprehensive search, a sample of 97 reports was identified, contributing a total of 105 independent samples (N = 19,933) from 23 countries. In the aggregate, the 216 effect sizes (i.e., correlations) reported in the primary studies yielded a mean of r = −.36 for the relationship between L2 anxiety and language achievement. Moderator analyses revealed effect sizes to vary across different types of language achievement measures, educational levels, target languages, and anxiety types. Overall, this study provides firm evidence for both the negative role of L2 anxiety in L2 learning and the moderating effects of a number of (non)linguistic variables. We discuss the findings in the context of theoretical and practical concerns, and we provide direction for future research.
Self-paced reading (SPR) tasks are being increasingly adopted by second language (L2) researchers. Using SPR with L2 populations presents specific challenges, and its use is still evolving in L2 research (as well as in first language research, in many respects). Although SPR has been the topic of several narrative overviews (Keating & Jegerski, 2015; Roberts, 2016), we do not have a comprehensive picture of its usage in L2 research. Building on the growing body of systematic reviews of research practices in applied linguistics (e.g., Liu & Brown, 2015; Plonsky, 2013), we report a methodological synthesis of the rationales, study contexts, and methodological decision making in L2 SPR research. Our comprehensive search yielded 74 SPR instruments used in L2 research. Each instrument was coded along 121 parameters, including: reported rationales and study characteristics, indicating the scope and nature of L2 SPR research agendas; design and analysis features and reporting practices, determining instrument validity and reliability; and materials transparency, affecting reproducibility and systematicity of agendas. Our findings indicate an urgent need to standardize the use and reporting of this technique, requiring empirical investigation to inform methodological decision making. We also identify several areas (e.g., study design, sample demographics, instrument construction, data analysis, and transparency) where SPR research could be improved to enrich our understanding of L2 processing, reading, and learning.
Second language (L2) research relies heavily and increasingly on ANOVA (analysis of variance)-based results as a means to advance theory and practice. This fact alone should merit some reflection on the utility and value of ANOVA. It is possible that we could use this procedure more appropriately and, as argued here, other analyses such as multiple regression may prove to be more illuminating in certain research contexts. We begin this article with an overview of problems associated with ANOVA; some of them are inherent to the procedure, and others are tied to the way it is applied in L2 research. We then present three rationales for when researchers might turn to multiple regression in place of ANOVA. Output from ANOVA and multiple regression analyses based on published and mock-up studies are used to illustrate major points.
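One way to see why multiple regression can stand in for ANOVA: with a dummy-coded predictor, a simple regression recovers exactly the group means that a two-group ANOVA compares, while generalizing naturally to continuous predictors that ANOVA would force researchers to dichotomize. The following sketch uses hypothetical data and invented values purely for illustration.

```python
import statistics

def slope_intercept(x, y):
    """Ordinary least squares for one predictor: b = cov(x, y) / var(x)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

# Hypothetical scores for a control (0) and treatment (1) group
group = [0, 0, 0, 0, 1, 1, 1, 1]
score = [61, 64, 58, 65, 72, 75, 70, 69]

b, a = slope_intercept(group, score)

# With dummy coding, the intercept equals the control-group mean and the
# slope equals the mean difference that a two-group ANOVA/t-test evaluates.
ctrl_mean = statistics.mean(score[:4])    # 62.0
treat_mean = statistics.mean(score[4:])   # 71.5
```

Because the two analyses are equivalent in this simple case, the argument for regression rests on the cases where they are not: regression accepts continuous independent variables as-is, whereas ANOVA requires grouping them, discarding information in the process.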
Tasks are frequently used to elicit learner language in second language (L2) research. The purposes for doing so, however, vary widely, covering a range of theoretical models, designs, and analyses. For example, task-based researchers have examined a range of linguistic and interactional features (e.g., accuracy, language-related episodes) that are found in learner production and that vary as a function of task conditions (e.g., +/− complex), modes (oral, written, computer-mediated), and settings (second vs. foreign language). This article presents a synthesis of substantive interests and methodological practices in this area. We first collected a sample of 85 primary studies of task-based language production published from 2006 to 2015. Each study was then coded for the target features it analyzed as well as other contextual and demographic variables. We also coded for methodological features related to study designs, sampling, analyses, and reporting practices. The results indicate a strong preference toward analyses of grammar, vocabulary, accuracy, and different features of L2 interaction, and very little interest in task-induced pronunciation, pragmatics, and the quality of task performance. More fundamentally, this domain may be hindered by a lack of theoretical and operational consistency. The data also point to a number of concerns related to research and reporting practices (e.g., low statistical power; missing data). Based on our findings, we outline a number of pointed recommendations for future research in this domain.
This study assesses research and reporting practices in quantitative second language (L2) research. A sample of 606 primary studies, published from 1990 to 2010 in Language Learning and Studies in Second Language Acquisition, was collected and coded for designs, statistical analyses, reporting practices, and outcomes (i.e., effect sizes). The results point to several systematic strengths as well as many flaws, such as a lack of control in experimental designs, incomplete and inconsistent reporting practices, and low statistical power. I discuss these trends, strengths, and weaknesses in comparison with methodological reviews of L2 research (e.g., Plonsky & Gass, 2011) as well as reviews from other fields (e.g., education, Skidmore & Thompson, 2010). On the basis of the findings, I also make a number of suggestions for methodological reforms in applied linguistics.