This chapter underscores the implicit messages about argument-based validity expressed in the volume. It highlights characteristics of validation illuminated by how the validation research was designed, carried out, interpreted, and presented by each researcher. The chapters show that argument-based validity applies not only to large-scale or high-stakes testing but to a range of contexts where assessments are trusted, and that it is called on at varying stages of test development. Argument-based validity offers concepts detailed enough to guide research on technology-assisted testing methods, and it provides terms for defining different types of constructs. It frames research goals that are well suited to mixed-methods designs, as illustrated in the chapters of this volume. The chapter ends by clarifying the limits of argument-based validation research by reviewing the facts about validation: validity is not a yes-no decision about a test; validity is not an objective, deterministically derived result; and validity is not the sole responsibility of experts. Argument-based validity does not change these facts; rather, it provides a detailed and logical means of working within these parameters, despite the desire of many test users for tests that have been validated by experts and can be adopted uncritically.
Argument-based validity has evolved in response to the need of language testing researchers for a systematic approach to investigating the validity of language tests. Based on a collection of 51 recent books, articles, and research reports in language assessment, this chapter describes the fundamental characteristics of an argument-based approach to validity, which has been operationalized in various ways in language assessment. These characteristics demonstrate how argument-based validity operationalizes the ideals for validation presented by Messick (1989) and accepted by most language testers: that a validity argument should be a unitary but multifaceted means of integrating a variety of evidence in an ongoing validation process. The chapter describes how validity arguments serve the multiple functions that language testers demand of their validation tools while taking into account the concepts that are important in language testing. It distinguishes between two formulations of argument-based validity that appear in language testing in order to introduce the conventions used throughout the papers in the volume.
Validity Argument in Language Testing: Case Studies of Validation Research introduces argument-based validation and illustrates how the framework is used to conceptualize, design, implement, and interpret validation research for language tests and assessments. The first section introduces the principal concepts and key terms required to understand argument-based validity in language testing, and it identifies argument-based validation studies in language testing. The second section contains chapters reporting argument-based validity research investigating score interpretation in six language assessments, addressing such issues as the reliability of scores, rating quality, the constructs assessed, and the abilities required in the domain of interest. The third section contains three chapters reporting studies of test score use, including its consequences. By presenting each of these studies with reference to a consistent but customizable framework for test interpretation and use, the chapters show the contribution of multiple types of investigations and the use of mixed-methods research. The volume demonstrates the importance of argument-based validation of assessments for varying purposes and at different stages of test development, for technology-mediated language assessment, and for clarifying construct definition. It also notes the limits of argument-based validity.