
A Failed Challenge to Validity Generalization: Addressing a Fundamental Misunderstanding of the Nature of VG

Published online by Cambridge University Press: 30 August 2017

Frank L. Schmidt
Affiliation:
University of Iowa
Chockalingam Viswesvaran*
Affiliation:
Florida International University
Deniz S. Ones*
Affiliation:
University of Minnesota
Huy Le
Affiliation:
University of Texas at San Antonio
Correspondence concerning this article should be addressed to Chockalingam Viswesvaran, Florida International University, Department of Psychology, University Park, Miami, FL 33199. E-mail: Vish@fiu.edu
Deniz S. Ones, University of Minnesota, Department of Psychology, 75 East River Road, Minneapolis, MN 55455. E-mail: Deniz.S.Ones-1@tc.umn.edu

Extract

The lengthy and complex focal article by Tett, Hundley, and Christiansen (2017) is based on a fundamental misunderstanding of the nature of validity generalization (VG): It rests on the assumption that what is generalized in VG is the estimated value of mean rho ($\bar{\rho}$). This erroneous assumption is stated repeatedly throughout the article. A conclusion of validity generalization does not imply that $\bar{\rho}$ is identical across all situations. If VG is present, most, if not all, validities in the validity distribution are positive and useful, even if there is some variation in that distribution. What is generalized is the entire distribution of rho ($\rho$), not just the estimated $\bar{\rho}$ or any other specific value of validity included in the distribution. This distribution is described by its mean ($\bar{\rho}$) and standard deviation ($SD_{\rho}$). A helpful concept based on these parameters (assuming $\rho$ is normally distributed) is the credibility interval, which reflects the range within which most of the values of $\rho$ fall. The lower end of the 80% credibility interval (the 90% credibility value, $\mathrm{CV} = \bar{\rho} - 1.28 \times SD_{\rho}$) facilitates understanding of this distribution by indicating, for practitioners using VG, the statistical "worst case" for validity: Validity has an estimated 90% chance of lying above this value. This concept has long been recognized in the literature (see Hunter & Hunter, 1984, for an example; see also Schmidt, Law, Hunter, Rothstein, Pearlman, & McDaniel, 1993, and hundreds of VG articles that have appeared in the literature over the past 40 years since the invention of psychometric meta-analysis as a means of examining VG [Schmidt & Hunter, 1977]). The value $\bar{\rho}$ has the highest likelihood of occurring in the distribution (although often by only a small amount), but it is the whole distribution that is generalized. Tett et al. (2017) state that some meta-analysis articles claim to generalize only $\bar{\rho}$. If true, this is inappropriate. Because $\bar{\rho}$ has the highest likelihood in the $\rho$ distribution, discussion often focuses on that value as a matter of convenience, but $\bar{\rho}$ is not what is generalized in VG. What is generalized is the conclusion that there is validity throughout the credibility interval. The false assumption that it is $\bar{\rho}$, and not the $\rho$ distribution as a whole, that is generalized in VG is the basis for the Tett et al. article and is its Achilles' heel. In this commentary, we examine the target article's basic arguments and point out the errors and omissions that led Tett et al. to falsely conclude that VG is a "myth."
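To make the credibility interval concrete, consider a worked example with hypothetical values (chosen for illustration only, not taken from any particular meta-analysis). Suppose a psychometric meta-analysis estimates $\bar{\rho} = .40$ and $SD_{\rho} = .15$. Then

$$\mathrm{CV} = \bar{\rho} - 1.28 \times SD_{\rho} = .40 - 1.28 \times .15 \approx .21,$$

and the 80% credibility interval is approximately $[.21, .59]$. An estimated 90% of true validities lie above .21; because the entire interval lies above zero, the conclusion that validity is positive and useful generalizes across situations, even though $\rho$ itself varies within the interval.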

Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2017 

Footnotes

The order of authorship following the lead author is by seniority. We thank Philip Roth and In-Sue Oh for suggestions and comments on an earlier draft of this manuscript.

References

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. Chichester, UK: Wiley.
Burke, M. J., & Landis, R. (2003). Methodological and conceptual issues in applications of meta-analysis. In Murphy, K. (Ed.), Validity generalization: A critical review (pp. 287–310). Hillsdale, NJ: Lawrence Erlbaum.
Cooper, H., & Koenka, A. C. (2012). The overview of reviews: Unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. American Psychologist, 67, 446–462.
Cortina, J. M. (2003). Apples and oranges (and pears, oh my!): The search for moderators in meta-analysis. Organizational Research Methods, 6, 415–439.
Harris, W. G., Jones, J. W., Klion, R., Arnold, D., Camara, W., & Cunningham, M. R. (2012). Test publishers' perspective on "An updated meta-analysis": Comment on Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012). Journal of Applied Psychology, 97 (3), 531–536.
Hezlett, S. A., Kuncel, N. R., Vey, M. A., Ahart, A., Ones, D. S., Campbell, J. P., & Camara, W. (2001, April). The predictive validity of the SAT: A comprehensive meta-analysis. In Ones, D. S., & Hezlett, S. A. (Chairs), Predicting performance: The interface of I-O psychology and educational research. Symposium presented at the 16th Annual Conference of the Society for Industrial and Organizational Psychology, San Diego, CA.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96 (1), 72–98. doi: 10.1037/0033-2909.96.1.72
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.). Newbury Park, CA: Sage.
Hunter, J. E., Schmidt, F. L., & Hunter, R. (1979). Differential validity of employment tests by race: A comprehensive review and analysis. Psychological Bulletin, 86 (4), 721–735. doi: 10.1037/0033-2909.86.4.721
Ones, D. S., Dilchert, S., Deller, J., Albrecht, A.-G., Duehr, E. E., & Paulus, F. M. (2012). Cross-cultural generalization: Using meta-analysis to test hypotheses about cultural variability. In Ryan, A. M., Leong, F. T. L., & Oswald, F. L. (Eds.), Conducting multinational research projects in organizational psychology: Challenges and opportunities (pp. 91–122). Washington, DC: American Psychological Association.
Ones, D. S., Mount, M. K., Barrick, M. R., & Hunter, J. E. (1994). Personality and job performance: A critique of the Tett, Jackson, and Rothstein (1991) meta-analysis. Personnel Psychology, 47, 147–156.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology [Monograph], 78, 679–703.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (2012). Integrity tests predict counterproductive work behaviors and job performance well: A comment on Van Iddekinge et al. Journal of Applied Psychology, 97 (3), 537–542. doi: 10.1037/a0024825
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (2017). Realizing the full potential of psychometric meta-analysis for a cumulative science and practice of human resource management. Human Resource Management Review, 27 (1), 201–215.
Roth, P. L., Le, H., Oh, I.-S., Van Iddekinge, C., Buster, M. A., Robbins, S. B., & Campion, M. A. (2014). Differential validity for cognitive ability tests in employment and educational settings: Not much more than range restriction? Journal of Applied Psychology, 99, 1–20.
Roth, P. L., Le, H., Oh, I.-S., Van Iddekinge, C. H., & Robbins, S. B. (2017). Who r u? On the (in)accuracy of incumbent-based estimates of range restriction in criterion-related and differential validity research. Journal of Applied Psychology, 102 (5), 802–828. doi: 10.1037/apl0000193
Schmidt, F. L., & Hunter, J. E. (1977). Development of a general solution to the problem of validity generalization. Journal of Applied Psychology, 62, 529–540.
Schmidt, F. L., & Hunter, J. E. (2015). Methods of meta-analysis: Correcting error and bias in research findings (3rd ed.). Thousand Oaks, CA: Sage.
Schmidt, F. L., Law, K., Hunter, J. E., Rothstein, H. R., Pearlman, K., & McDaniel, M. (1993). Refinements in validity generalization methods: Implications for the situational specificity hypothesis. Journal of Applied Psychology, 78, 3–13.
Schmidt, F. L., & Oh, I.-S. (2013). Methods for second order meta-analysis and illustrative applications. Organizational Behavior and Human Decision Processes, 121, 204–218.
Tett, R. P., Hundley, N. A., & Christiansen, N. D. (2017). Meta-analysis and the myth of generalizability. Industrial and Organizational Psychology: Perspectives on Science and Practice, 10 (3), 421–456.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703–742.
Viswesvaran, C., Ones, D. S., & Schmidt, F. L. (1996). Comparative analysis of the reliability of job performance ratings. Journal of Applied Psychology, 81, 557–574.
Viswesvaran, C., Ones, D. S., & Schmidt, F. L. (2016). Comparing rater groups: How to disentangle rating reliability from construct-level disagreements. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9, 800–806.
Viswesvaran, C., Ones, D. S., Schmidt, F. L., Le, H., & Oh, I.-S. (2014). Measurement error obfuscates scientific knowledge: Path to cumulative knowledge requires corrections for unreliability and psychometric meta-analysis. Industrial and Organizational Psychology: Perspectives on Science and Practice, 7, 507–518.