
Selection tests work better than we think they do, and have for years

Published online by Cambridge University Press:  20 August 2024

Jeff L. Foster* (Missouri State University, Springfield, MO, USA)
Piers Steel (University of Calgary, Calgary, AB, Canada)
Peter D. Harms (University of Alabama, Tuscaloosa, AL, USA)
Thomas A. O’Neill (University of Calgary, Calgary, AB, Canada)
Dustin Wood (University of Alabama, Tuscaloosa, AL, USA)

*Corresponding author: Jeff L. Foster; Email: jfoster@missouristate.edu

Abstract

We can make better decisions when we have a better understanding of the different sources of variance that impact job performance ratings. A failure to do so can lead not only to inaccurate conclusions when interpreting job performance ratings but also to misguided efforts aimed at improving our ability to explain and predict them. In this paper, we outline six recommendations relating to the interpretation of predictive validity coefficients and efforts aimed at predicting job performance ratings. The first three focus on the need to evaluate the effectiveness of selection instruments and systems based only on the variance they can possibly account for. When doing so, we find that it is not only possible to account for the majority of the variance in job performance ratings that most selection systems can possibly predict, but that we have been able to account for this variance for years. Our last three recommendations focus on the need to incorporate components related to additional sources of variance in our predictive models. We conclude with a discussion of their implications for both research and practice.
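To make the ceiling argument concrete, consider a minimal back-of-the-envelope sketch (the reliability figure below is a commonly cited meta-analytic estimate, assumed here for illustration rather than a value reported in this abstract). Under the classical attenuation formula, an observed validity coefficient is bounded by the reliabilities of both the predictor and the criterion:

\[
r_{xy} \;=\; \rho_{xy}\,\sqrt{r_{xx}\,r_{yy}} \;\le\; \sqrt{r_{yy}}.
\]

Taking the interrater reliability of supervisory job performance ratings to be roughly \(r_{yy} \approx .52\), the maximum attainable observed validity is about \(\sqrt{.52} \approx .72\), so even a perfect selection system could account for only about 52% of the variance in single-rater performance ratings. Against that ceiling, accounting for the majority of the variance that selection systems can possibly predict is a far more attainable standard than accounting for the majority of total rating variance.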

Type: Focal Article
Copyright: © The Author(s), 2024. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

