
Chapter 26 - Quantitative and Mixed Methods Research


Introduction

A concern across the field of education, and within engineering education in particular, is the identification of effective instructional approaches. “Effective” can be defined in many ways, including increased learning gains, improved attitudes, and greater appeal of a subject or topic to students. Determining the effectiveness of an approach often requires measuring changes in student constructs over time or capturing a snapshot of student performance at a given point. In addition, teachers and researchers may want to determine whether their approaches are equally effective across different student populations.
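As a minimal illustration of the kinds of comparisons described above, the sketch below applies two common quantitative tests to hypothetical concept-inventory scores: a paired test of pre/post change within one cohort, and an independent-samples test comparing two student populations. The data and group labels are invented for illustration and are not drawn from the chapter.

```python
# Minimal sketch with hypothetical data (not from the chapter).
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for one cohort (the same students measured twice).
pre = np.array([12, 15, 9, 14, 11, 16, 13, 10])
post = np.array([16, 18, 12, 17, 15, 19, 14, 13])

# Paired t-test: did scores change from pre to post for the same students?
t_gain, p_gain = stats.ttest_rel(post, pre)
print(f"Learning gain: t = {t_gain:.2f}, p = {p_gain:.3f}")

# Hypothetical post-test scores for two different student populations.
group_a = np.array([16, 18, 12, 17, 15, 19, 14, 13])
group_b = np.array([14, 15, 13, 16, 12, 17, 15, 11])

# Welch's t-test: is the approach equally effective across the two groups?
t_diff, p_diff = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Group comparison: t = {t_diff:.2f}, p = {p_diff:.3f}")
```

In practice, the choice of test depends on the design and the measurement scale; nonparametric alternatives or effect-size measures may be more appropriate for small samples or non-normal score distributions.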

In engineering education, each of these assessment purposes receives added emphasis at the program and student level because of the accreditation body ABET, Inc. (formerly the Accreditation Board for Engineering and Technology, or ABET; see www.abet.org/history.shtml). ABET, Inc. requires each accredited program to demonstrate that its graduating seniors have achieved a defined set of program outcomes, which are listed on the referenced website.
