
Chapter 19 - After Admissions: What Comes Next in Higher Education?

from Part IV - Rethinking Higher Education Admissions

Published online by Cambridge University Press: 09 January 2020

María Elena Oliveri, Educational Testing Service, Princeton, New Jersey
Cathy Wendler, Educational Testing Service, Princeton, New Jersey

Summary

This chapter provides an overview of changes in the higher education environment that inform admissions and placement decisions. It discusses the factors that must be considered when reconceptualizing current admission and placement practices. An expanded assessment framework based on two models – the multilevel design model and the complementarity model – is described. These models aim to better support diverse students’ learning by strengthening the connection between assessment and instruction once students are admitted to higher education institutions. Finally, the contributions of technological advancements, the measurement of noncognitive skills, and innovations in task design are examined.

Type: Chapter
In: Higher Education Admissions Practices: An International Perspective, pp. 347–375
Publisher: Cambridge University Press
Print publication year: 2020

