Chapter 9 - Technology-Driven Developments in Psychometrics

from Part III - Advances, Trends, and Issues

Published online by Cambridge University Press: 18 December 2017

John C. Scott, APT Metrics
Dave Bartram, CEB-SHL
Douglas H. Reynolds, Development Dimensions International

Information

Type: Chapter
Book: Next Generation Technology-Enhanced Assessment: Global Perspectives on Occupational and Workplace Testing, pp. 239–264
Publisher: Cambridge University Press
Print publication year: 2017


