
6 - Applying Adaptive Approaches to Talent Management Practices

from Part II - Technology in Staffing

Published online by Cambridge University Press: 18 February 2019

Richard N. Landers
Affiliation:
University of Minnesota

Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2019


