
A Call for Conceptual Models of Technology in I-O Psychology: An Example From Technology-Based Talent Assessment

  • Neil Morelli (a1), Denise Potosky (a2), Winfred Arthur (a3) and Nancy Tippins (a4)

Abstract

The rate of technological change is quickly outpacing today's methods for understanding how new advancements are applied within industrial-organizational (I-O) psychology. To further complicate matters, specific attempts to explain observed differences or measurement equivalence across devices are often atheoretical or fail to explain why a technology should (or should not) affect the measured construct. As a typical example, understanding how technology influences construct measurement in personnel testing and assessment is critical for explaining or predicting other practical issues such as accessibility, security, and scoring. Therefore, theory development is needed to guide research hypotheses, manage expectations, and address these issues at this intersection of technology and I-O psychology. This article is an extension of a Society for Industrial and Organizational Psychology (SIOP) 2016 panel session, which (re)introduces conceptual frameworks that can help explain how and why measurement equivalence or nonequivalence is observed in the context of selection and assessment. We outline three potential conceptual frameworks as candidates for further research, evaluation, and application, and argue for a similar conceptual approach for explaining how technology may influence other psychological phenomena.

Corresponding author

Correspondence concerning this article should be addressed to Neil Morelli, The Cole Group – R & D, 300 Brannan St., Suite 304, San Francisco, CA 94107. E-mail: neil.morelli@gmail.com

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 411–423.
Armstrong, M., Landers, R. N., & Collmus, A. (2015, April). Game-thinking in human resource management. Poster session presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Arthur, W. Jr., Doverspike, D., Kinney, T. B., & O'Connell, M. (2017). The impact of emerging technologies on selection models and research: Mobile devices and gamification as exemplars. In Farr, J. L. & Tippins, N. T. (Eds.), Handbook of employee selection (2nd ed., pp. 967–986). New York: Taylor & Francis/Psychology Press.
Arthur, W. Jr., Glaze, R. M., Jarrett, S. M., White, C. D., Schurig, I., & Taylor, J. E. (2014). Comparative evaluation of three situational judgment test response formats in terms of construct-related validity, subgroup differences, and susceptibility to response distortion. Journal of Applied Psychology, 99, 335–345.
Arthur, W. Jr., Keiser, N., & Doverspike, D. (2017). An information processing-based conceptual framework of the effects of unproctored Internet-based testing devices on scores on employment-related assessments and tests. Manuscript submitted for publication.
Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. New York: Free Press.
Bank, J., Collins, L., Hartog, S., Hardesty, S., O'Shea, P., & Dapra, R. (2015, April). In Bank, J. (Chair), High-fidelity simulations: Refining leader assessment and leadership development. Symposium presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Becker, G. (2000). How important is transient error in estimating reliability? Going beyond simulation studies. Psychological Methods, 5, 370–379.
Bennett, R. E., & Zhang, M. (2016). Validity and automated scoring. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 142–173). New York: Routledge.
Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478–494.
Chamorro-Premuzic, T., Winsborough, D., Sherman, R. A., & Hogan, R. (2016). New talent signals: Shiny new objects or a brave new world? Industrial and Organizational Psychology: Perspectives on Science and Practice, 9 (3), 1–20. doi:10.1017/iop.2016.6
Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82, 143–159.
Coovert, M. D., & Thompson, L. F. (2014a). Toward a synergistic relationship between psychology and technology. In Coovert, M. D. & Thompson, L. F. (Eds.), The psychology of workplace technology (pp. 1–21). New York: Routledge.
Coovert, M. D., & Thompson, L. F. (2014b). The psychology of workplace technology. New York: Routledge.
Coyne, I., Warszta, T., Beadle, S., & Sheehan, N. (2005). The impact of mode of administration on the equivalence of a test battery: A quasi-experimental design. International Journal of Selection and Assessment, 13, 220–224.
Ferran, C., & Watts, S. (2008). Videoconferencing in the field: A heuristic processing model. Management Science, 54, 565–578.
Foster, D. (2016). Testing technology and its effects on test security. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 235–255). New York: Routledge.
Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. New York: W. H. Freeman & Co.
Gierl, M. J., Lai, H., Fung, K., & Zheng, B. (2016). Using technology-enhanced processes to generate test items in multiple languages. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 109–126). New York: Routledge.
Gray, C., Morelli, N. A., & McLane, W. (2015, April). Does use context affect selection assessments via mobile devices? In Morelli, N. A. (Chair), Mobile devices in talent assessment: The next chapter. Symposium presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill.
Gulliksen, H. (1950). Theory of mental tests. New York: Wiley.
Hong, E. (1999). Test anxiety, perceived test difficulty, and test performance: Temporal patterns of their effects. Learning and Individual Differences, 11, 431–447.
Huang, J., & Yuan, J. (in press). Bayesian dynamic mediation analysis. Psychological Methods. doi:10.1037/met0000073
King, D. D., Ryan, A. M., Kantrowitz, T., Grelle, D., & Dainis, A. (2015). Mobile Internet testing: An analysis of equivalence, individual differences, and reactions. International Journal of Selection and Assessment, 23, 382–394.
Landers, R. N. (2016). An introduction plus a crash course in R. The Industrial-Organizational Psychologist, 54 (1). Retrieved from http://www.siop.org/tip/july16/crash.aspx
Leonardi, P. M. (2012). Materiality, sociomateriality, and socio-technical systems: What do these terms mean? How are they related? Do we need them? In Leonardi, P. M., Nardi, B. A., & Kallinikos, J. (Eds.), Materiality and organizing: Social interaction in a technological world (pp. 25–48). Oxford, UK: Oxford University Press.
Luecht, R. M. (2016). Computer-based test delivery models, data, and operational implementation issues. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 179–205). New York: Routledge.
McCornack, R. L. (1956). A criticism of studies comparing item-weighting methods. Journal of Applied Psychology, 40, 343–344.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114, 449–459.
Mead, A. D., Olson-Buchanan, J. B., & Drasgow, F. (2014). Technology-based selection. In Coovert, M. D. & Thompson, L. F. (Eds.), The psychology of workplace technology (pp. 21–43). New York: Routledge.
Morelli, N., Adler, S., Arthur, W. Jr., Potosky, D., & Tippins, N. (2016, April). Developing a conceptual model of technology applied to I-O psychology. Panel discussion presented at the 31st Annual Conference of the Society for Industrial and Organizational Psychology, Anaheim, CA.
Orlikowski, W. J. (2007). Sociomaterial practices: Exploring technology at work. Organization Studies, 28, 1435–1448.
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of technology, work, and organization. The Academy of Management Annals, 2, 433–474.
Potosky, D. (2008). A conceptual framework for the role of the administration medium in the personnel assessment process. Academy of Management Review, 33, 629–648.
Ryan, A. M., & Ployhart, R. E. (2014). A century of selection. Annual Review of Psychology, 65, 693–717.
Schmitt, N., & Kuljanin, G. (2008). Measurement invariance: Review of practice and implications. Human Resource Management Review, 18, 210–222.
Scott, J. C., & Mead, A. D. (2011). Foundations for measurement. In Tippins, N. & Adler, S. (Eds.), Technology-enhanced assessment of talent (pp. 1–18). San Francisco: John Wiley & Sons, Inc.
Seiler, S., McEwen, D., Benavidez, J., O'Shea, P., Popp, E., & Sydell, E. (2015, April). Under the hood: Practical challenges in developing technology-enhanced assessments. Panel discussion presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Spearman, C. (1904). The proof and measurement of the association between two things. American Journal of Psychology, 15, 72–101.
Spearman, C. (1910). Correlation calculated from faulty data. British Journal of Psychology, 3, 271–295.
Stone, E., Laitusis, C. C., & Cook, L. L. (2016). Increasing the accessibility of assessments through technology. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 217–234). New York: Routledge.
Tay, L., Meade, A. W., & Cao, M. (2014). An overview and practical guide to IRT measurement equivalence analysis. Organizational Research Methods, 18, 3–46.
Thurstone, L. L. (1931). The reliability and validity of tests: Derivation and interpretation of fundamental formulae concerned with reliability and validity of tests and illustrative problems. Ann Arbor, MI: Edwards Bros.
Tippins, N. (2011). Overview of technology-enabled assessments. In Tippins, N. & Adler, S. (Eds.), Technology-enhanced assessment of talent (pp. 1–18). San Francisco: John Wiley & Sons, Inc.
To fly, to fall, to fly again. (2015, July). The Economist. Retrieved from http://www.economist.com/news/briefing/21659722-tech-boom-may-get-bumpy-it-will-not-end-repeat-dotcom-crash-fly
Tonidandel, S., Quiñones, M. A., & Adams, A. A. (2002). Computer-adaptive testing: The impact of test characteristics on perceived performance and test takers' reactions. Journal of Applied Psychology, 87, 320–332.
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3, 4–70.
Vandenberg, R. J., & Morelli, N. A. (2016). A contemporary update on testing for measurement equivalence and invariance. In Meyer, J. P. (Ed.), The handbook of employee commitment (pp. 449–461). Cheltenham, UK: Edward Elgar.
Zickar, M. J., Cortina, J., & Carter, N. T. (2010). Evaluation of measures: Sources of sufficiency, error, and contamination. In Farr, J. L. & Tippins, N. (Eds.), Handbook of employee selection (pp. 399–416). New York: Routledge.
