
Validity and reliability of an in-training evaluation report to measure the CanMEDS roles in emergency medicine residents

Published online by Cambridge University Press: 04 March 2015

Aliya Kassam*
Affiliation:
Office of Postgraduate Medical Education and Medical Education Research Unit, Department of Community Health Sciences, University of Calgary, Calgary, AB
Tyrone Donnon
Affiliation:
Office of Postgraduate Medical Education and Medical Education Research Unit, Department of Community Health Sciences, University of Calgary, Calgary, AB
Ian Rigby
Affiliation:
Department of Emergency Medicine, University of Calgary, Calgary, AB
*Correspondence to: Office of Postgraduate Medical Education, Faculty of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB T2N 4N1; kassama@ucalgary.ca

Abstract

Background:

It remains unclear whether a single assessment tool can assess the key competencies of residents mandated by the Royal College of Physicians and Surgeons of Canada's CanMEDS roles framework.

Objective:

The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER).

Method:

ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program; in total, 172 ITERs were completed on residents from their first to fifth year of training. An exploratory factor analysis with varimax rotation was used to examine the construct validity of the ITER.
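An exploratory factor analysis of this kind can be reproduced in standard statistical software. The sketch below is illustrative only, not the authors' analysis: it uses Python's factor_analyzer package and assumes the completed ITERs are loaded as a cases-by-items matrix; the file name iter_items.csv and the item columns are hypothetical.

```python
# Exploratory factor analysis with varimax rotation -- a minimal sketch,
# not the original analysis (file and item names are hypothetical).
import pandas as pd
from factor_analyzer import FactorAnalyzer

# One row per completed ITER (n = 172), one column per ITER item (24 items).
items = pd.read_csv("iter_items.csv")  # hypothetical file of item ratings

# Extract five factors, as in the reported solution, with varimax rotation.
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)

# Rotated loadings: rows are items, columns are factors. Items that load
# together on one factor suggest a common construct (e.g., a CanMEDS role).
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))

# Proportion and cumulative proportion of variance explained per factor;
# the paper reports 79% cumulative variance for the five-factor solution.
_, prop_var, cum_var = fa.get_factor_variance()
print("Cumulative variance explained:", cum_var.round(2))
```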

Results:

The combined 24-item ITER yielded a five-factor solution with subscales measuring the CanMEDS roles of Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate, and Manager. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for the individual subscales, with α = 0.97 overall. The combined 24-item ITER used to assess residents' competencies in the EM residency program thus showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles.
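For reference, the reported reliability coefficients follow the standard formula α = k/(k − 1) × (1 − Σs²ᵢ/s²_total), where k is the number of items, s²ᵢ the variance of item i, and s²_total the variance of the summed scale. The NumPy sketch below (not the authors' code; the subscale column names are hypothetical) implements this directly:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # per-item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# e.g., alpha for a Medical Expert/Scholar subscale (hypothetical columns):
# alpha = cronbach_alpha(items[["me1", "me2", "me3", "me4", "me5"]].to_numpy())
```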

Conclusions:

Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.

Type
Education
Copyright
Copyright © Canadian Association of Emergency Physicians 2014
