This chapter underscores the implicit messages about argument-based validity expressed in the volume. It highlights characteristics of validation illuminated by how the validation research was designed, carried out, interpreted, and presented by each researcher. The chapters show that argument-based validity applies not only to large-scale or high-stakes testing but is relevant to a range of contexts where assessments are trusted, and that it is called on at varying stages of test development. Argument-based validity offers sufficiently detailed concepts for guiding research on technology-assisted testing methods, and it provides terms for defining different types of constructs. Argument-based validity frames research goals that are well suited to mixed-methods designs, as illustrated in the chapters of this volume. The chapter ends by clarifying the limits of argument-based validation research by reviewing the facts about validation: validity is not a yes-no decision about a test; validity is not an objective, deterministically derived result; and validity is not the sole responsibility of the experts. Argument-based validity does not change these facts, but rather provides a detailed and logical means of working within these parameters, despite the desire of many test users for tests that have been validated by experts and can be adopted uncritically.
Validity Argument in Language Testing: Case Studies of Validation Research introduces argument-based validation and illustrates how the framework is used to conceptualize, design, implement, and interpret validation research for language tests and assessments. The first section introduces the principal concepts and key terms required to understand argument-based validity in language testing, and it identifies argument-based validation studies in language testing. The second section contains chapters reporting argument-based validity research that investigates score interpretation in six language assessments, addressing such issues as the reliability of scores, rating quality, the constructs assessed, and the abilities required in the domain of interest. The third section contains three chapters reporting studies of test score use, including its consequences. By presenting each of these studies with reference to a consistent but customizable framework for test interpretation and use, the chapters show the contribution of multiple types of investigations and the use of mixed-methods research. The volume demonstrates the importance of argument-based validation of assessments for varying purposes and at different stages of test development, for technology-mediated language assessment, and for clarifying construct definition. It also notes the limits of argument-based validity.
This chapter presents argument-based validation research to evaluate the interpretation of scores from a test of English collocational ability. The argument-based validity framework guided the development of an interpretation/use argument that helped identify the types and amount of research needed to evaluate the plausibility of the claims about test score interpretation. The research presented in this chapter focuses on the explanation inference, which is made when test users interpret the score as having substantive meaning about the construct assessed, specifically the construct of collocational ability in academic writing. The construct was defined by specifying its nature and scope following an interactionalist construct definition, which consists of three parts: 1) the knowledge, skills, and abilities of a trait; 2) the types of contexts that delimit the scope of applicability of the trait; and 3) the metacognitive strategies that put the trait into use in those contexts. The target collocational ability was identified and defined on the basis of applied linguistics theory and research, analysis of test takers' responses to items on the test, and statistical analysis of test scores. The relationships between collocational ability and other constructs of language ability were hypothesized in a nomological network to provide a basis for interpreting observed statistical relationships among sets of test scores reflecting those constructs. Evidence from screen-capture analysis, responses on a post-test survey, and post-test interviews provided backing for claims about strategy use. Data were collected in two phases of an embedded, sequential explanatory design: results were first obtained from quantitative analysis of test takers' responses, and these results were then explained with results from the supplementary qualitative data.
The evidence collected in this study supported the construct of collocational ability underlying the explanation inference and demonstrated how argument-based validity can be used to lay the foundation for test score interpretation that is essential to score meaning.