Introduction: The use of free open access medicine, particularly open educational resources (OERs), by medical educators and learners continues to increase. As OERs, especially blogs and podcasts, rise in popularity, their ease of dissemination raises concerns about their quality. While critical appraisal of primary research and journal articles is formally taught, no comparable training exists for the assessment of OERs. Thus, the ability of educators and learners to effectively assess the quality of OERs using gestalt alone has been questioned. Our goal was to determine whether gestalt is sufficient for emergency medicine (EM) learners and physicians to consistently rate OERs and reliably recommend them to their colleagues. We hypothesized that EM physicians and learners would differ substantively in their assessments of the same resources.
Methods: Participants included 31 EM learners and 23 EM attending physicians from Canada and the United States. A modified Dillman technique was used to administer four survey blocks of 10 blog posts per subject between April and August 2015. Participants were asked whether they would recommend each OER to (1) a learner or (2) an attending physician. Rating reliability within each group was assessed using single-measures intraclass correlations (ICCs), and agreement between the groups was assessed using Spearman's rho. Family-wise adjustments for multiple comparisons were made using the Bonferroni technique.
Results: Learners demonstrated poor reliability when recommending resources for other learners (ICC = 0.21, 95% CI 0.13-0.39) and for attending physicians (ICC = 0.16, 95% CI 0.09-0.30). Similarly, attendings demonstrated poor reliability when recommending resources for learners (ICC = 0.27, 95% CI 0.18-0.41) and for other attendings (ICC = 0.22, 95% CI 0.14-0.35). Learners and attendings demonstrated moderate consistency with each other when recommending resources for learners (rs = 0.494, p < .01) and for attendings (rs = 0.491, p < .01).
Conclusion: A gestalt-based rating system is neither reliable nor consistent when used to recommend OERs to learners and attending physicians. Learners' gestalt ratings were especially unreliable, whether recommending resources for other learners or for attendings. Our findings suggest the need for structured rating systems to rate OERs.
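The two statistics named in the Methods can be sketched in code. The following is a minimal illustrative implementation, not the study's actual analysis: the abstract does not state which ICC model was used, so a one-way random-effects single-measures ICC (ICC(1,1)) is assumed here, and all data in the example are hypothetical.

```python
# Illustrative sketch: single-measures ICC (assumed ICC(1,1) model) and
# Spearman's rho, implemented in pure Python. Data below are hypothetical.

def rank(xs):
    """Return average ranks (ties share the mean of their rank positions)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # group consecutive equal values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based rank positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranked data."""
    return pearson(rank(x), rank(y))

def icc_1_1(ratings):
    """One-way random-effects, single-measures ICC.

    ratings: list of targets (e.g. blog posts), each a list of k ratings,
    one per rater.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    ss_between = k * sum((sum(r) / k - grand) ** 2 for r in ratings)
    ss_within = sum(sum((x - sum(r) / k) ** 2 for x in r) for r in ratings)
    ms_b = ss_between / (n - 1)          # between-targets mean square
    ms_w = ss_within / (n * (k - 1))     # within-targets mean square
    return (ms_b - ms_w) / (ms_b + (k - 1) * ms_w)
```

For instance, raters in perfect agreement yield an ICC of 1.0 (e.g. `icc_1_1([[1, 1], [2, 2], [3, 3]])`), while the low ICCs reported above (0.16-0.27) indicate that most rating variance came from the raters rather than the resources being rated.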