
Does E-learning Facilitate Medical Education in Pediatric Neurology?

Published online by Cambridge University Press:  27 February 2023

Brittany Curry
Affiliation:
Schulich School of Medicine and Dentistry, Western University, ON, Canada Children’s Hospital of Eastern Ontario (CHEO), Ottawa, ON, Canada
Sarah Buttle
Affiliation:
Children’s Hospital of Eastern Ontario (CHEO), Ottawa, ON, Canada University of Ottawa Faculty of Medicine, Ottawa, ON, Canada
Hugh J. McMillan
Affiliation:
Montreal Children’s Hospital, McGill University, Montreal, QC, Canada
Richard Webster
Affiliation:
Children’s Hospital of Eastern Ontario (CHEO), Ottawa, ON, Canada
Deepti Reddy
Affiliation:
Children’s Hospital of Eastern Ontario (CHEO), Ottawa, ON, Canada
Aneesh Karir
Affiliation:
Division of Plastic and Reconstructive Surgery, Department of Surgery, University of Manitoba, Winnipeg, Canada
Stewart Spence
Affiliation:
The Ottawa Hospital, Ottawa, ON, Canada
Aleksandra Mineyko
Affiliation:
Alberta Children’s Hospital, Calgary, AB, Canada
Hilary Writer
Affiliation:
Children’s Hospital of Eastern Ontario (CHEO), Ottawa, ON, Canada University of Ottawa Faculty of Medicine, Ottawa, ON, Canada
Heather MacLean
Affiliation:
University of Ottawa Faculty of Medicine, Ottawa, ON, Canada The Ottawa Hospital, Ottawa, ON, Canada
Daniela Pohl*
Affiliation:
Children’s Hospital of Eastern Ontario (CHEO), Ottawa, ON, Canada University of Ottawa Faculty of Medicine, Ottawa, ON, Canada
*
Corresponding author: Daniela Pohl, Children’s Hospital of Eastern Ontario (CHEO), 401 Smyth Road, Ottawa, ON, K1H 8L1, Canada. Email: dpohl@cheo.on.ca

Abstract:

Background:

E-learning has become commonplace in medical education. Incorporation of multimedia, clinical cases, and interactive elements has increased its attractiveness over textbooks. Although there has been an expansion of e-learning in medicine, the feasibility of e-learning in pediatric neurology is unclear. This study evaluates knowledge acquisition and satisfaction using pediatric neurology e-learning compared to conventional learning.

Methods:

Residents of Canadian pediatrics, neurology, and pediatric neurology programs and medical students from Queen's University, Western University, and the University of Ottawa were invited to participate. Learners were randomly assigned two review papers and two ebrain modules in a four-topic crossover design. Participants completed pre-tests, experience surveys, and post-tests. We calculated the median change in score from pre-test to post-test and constructed a mixed-effects model to determine the effect of variables on post-test scores.

Results:

In total, 119 individuals participated (53 medical students; 66 residents). Ebrain produced a larger positive change from pre-test to post-test score than review papers for the pediatric stroke topic, but a smaller positive change for Duchenne muscular dystrophy, childhood absence epilepsy, and acute disseminated encephalomyelitis. Learning topic showed a statistically significant relationship to post-test scores (p = 0.04). Depending on topic, 57–92% (N = 59–66) of respondents favored e-learning over review article learning.

Conclusions:

Ebrain users scored higher on post-tests than review paper users; however, the effect was small and its educational significance is unclear. Although the difference in scores may not be educationally meaningful, most learners preferred e-learning. Future projects should focus on improving the quality and efficacy of e-learning modules.

Résumé:

Does E-learning Facilitate Medical Education in Pediatric Neurology?

Background:

E-learning has become commonplace in medical education. The integration of multimedia, clinical cases, and interactivity has made this mode of learning more attractive than textbooks. Although e-learning has expanded in medicine, its feasibility in pediatric neurology is not well established. This study therefore aimed to evaluate knowledge acquisition in pediatric neurology through e-learning, along with learner satisfaction, and to compare them with conventional learning.

Methods:

Residents enrolled in pediatrics, neurology, and pediatric neurology programs in Canada, as well as medical students from Queen's University, Western University, and the University of Ottawa, were invited to participate. Learners were randomly assigned two review articles and two ebrain learning modules in a four-topic crossover design. Participants completed pre-tests and post-tests as well as a survey on their experience. The research team then calculated the median change between pre-test and post-test scores and built a mixed-effects model to determine the effect of variables on post-test scores.

Results:

In total, 119 participants (53 medical students and 66 residents) took part in the study. Post-test scores relative to pre-test scores showed a larger favorable change for ebrain than for review articles for pediatric stroke, but a smaller favorable change for Duchenne muscular dystrophy, childhood absence epilepsy, and acute disseminated encephalomyelitis. A statistical relationship was found between learning topics and post-test scores (p = 0.04). Depending on the topic, 57 to 92% (n = 59–66) of respondents preferred e-learning to learning from review articles.

Conclusion:

Ebrain users achieved higher post-test scores than those who learned from review articles. However, the difference is small and may not be educationally meaningful. Despite these modest differences, most learners indicated a preference for e-learning. Future efforts should therefore focus on improving the quality and efficacy of e-learning modules.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Canadian Neurological Sciences Federation

Introduction

E-learning is an increasingly used teaching modality in medical education. The use of e-learning has become even more valuable with the COVID-19 pandemic, which has limited in-person learning for many medical students and residents [1]. In addition, with the continuous rise of neurology's complexity and diminishing clinical opportunities for residents resulting from increasing resident numbers and shortened lengths of patient hospitalization, e-learning provides the unique opportunity to complement clinical learning [2]. E-learning grants access at the user's convenience, holds potential for frequent updates to reflect current guidelines, and can provide virtual clinical exposure to rare diseases not seen frequently in clinical practice [3–7]. It can contain multimedia to adapt to various learning styles and has the potential to provide equivalent learning opportunities for trainees no matter their location [8]. Limitations of e-learning include the skill and time necessary for educators to create these tools as well as the cost for design and maintenance [9]. E-learning has been shown to be equally or more effective than conventional learning through research primarily conducted in surgical specialties [10–13]. However, the effectiveness of e-learning has been shown to vary across medical disciplines and e-learning types and has not been examined in pediatric neurology [14,15].

Ebrain (ebrain.net) is a not-for-profit, web-based training resource and the world's largest in the domain of clinical neuroscience [16,17]. It comprises over 650 short lessons and has been used mainly across Europe since 2011.

Although there has been an expansion in the use of e-learning in medical education, the feasibility and benefits of these tools within the discipline of pediatric neurology are unclear. Our study evaluates the learning outcomes and satisfaction of e-learning compared to conventional review paper learning on four pediatric neurology topics, with the aim of determining the value of pediatric neurology e-learning. Ultimately, our results may help medical educators to tailor the curriculum to learners’ needs.

Methods

Recruitment

Medical students and residents from Canadian universities were invited to participate. All medical students from the University of Ottawa, Queen's University, and Western University were approached about the study via email from their institution and were asked to participate if they had an interest in pursuing neurology, pediatric neurology, or pediatrics. They were invited to participate from June to November 2020. Pediatric, pediatric neurology, and neurology residents in postgraduate years 1–5 from the University of Ottawa and the University of Calgary received email invitations to participate in the study between July 2019 and January 2021. Furthermore, all Canadian residents in these specialties were eligible to participate after informally hearing about the study from peer residents and reaching out to our team. REB approval was obtained at all participating sites (CHEO REB #16/89X). The target number of participants to complete the study was 60, estimated by data simulations to be the number required to detect the hypothesized difference in learning gains between conventional and e-learning with 90% power.

E-learning Modules

The topics for the learning sessions included pediatric stroke, childhood absence epilepsy (CAE), acute disseminated encephalomyelitis (ADEM), and Duchenne muscular dystrophy (DMD). These topics were chosen as they were considered to be highly relevant for pediatrics, pediatric neurology, and neurology. We created a four-topic crossover design with participants randomly assigned to two conventional learning tools and two ebrain modules. Conventional learning was in the form of pre-selected review articles. The time to complete each article set was estimated by a medical student to take between 20 and 40 minutes. Expert pediatric neurologists created the four ebrain learning modules, utilizing information from the peer-reviewed review articles. Each module was approximately 20 minutes in length. The ebrain content incorporated the use of multimedia, practice questions, and cases.
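For illustration, a minimal sketch in R of how such a balanced crossover allocation could be generated is shown below; the object and function names are hypothetical and do not reflect the study's actual allocation procedure.

# Hypothetical sketch: assign each participant two of the four topics to
# ebrain modules and the remaining two to review papers.
topics <- c("pediatric stroke", "CAE", "ADEM", "DMD")

assign_formats <- function(id) {
  ebrain_topics <- sample(topics, 2)   # two topics delivered via ebrain
  data.frame(
    id = id,
    topic = topics,
    format = ifelse(topics %in% ebrain_topics, "ebrain", "review paper")
  )
}

set.seed(2020)                         # reproducibility of the example only
allocation <- do.call(rbind, lapply(1:119, assign_formats))
head(allocation, 8)                    # first two participants' assignments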

Evaluations

Participants received pre-tests via the survey tool REDCap™ for each of the four topics and then completed their respective learning sessions and a survey on their experience [18]. The survey included Likert scale questions on participant engagement, applicability of concepts learned, further questions or feedback participants had, the use of the tool in the future, and which method of learning they preferred. Participants received a post-test for each topic 1 week after completing the respective learning session.

We developed a bank of 30 multiple-choice case-based questions per learning topic to create pre- and post-tests. Experts in pediatric neurology created the questions, and an expert with knowledge in multiple-choice question development reviewed the questions (HW). The team grouped the questions into pairs that were of similar difficulty and covered comparable topics. We randomized one question from each pair to the pre-test for each participant, with the remaining question delivered in the post-test.
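A simple way to implement this pair-wise split is sketched below in R; it assumes the 30 questions for a topic are already grouped into 15 matched pairs, and all names are illustrative rather than taken from the study materials.

# Hypothetical sketch: for one participant, send one question from each
# matched pair to the pre-test and its partner to the post-test.
question_bank <- data.frame(
  pair = rep(1:15, each = 2),
  qid  = paste0("Q", sprintf("%02d", 1:30))
)

split_pairs <- function(bank) {
  pre  <- unlist(lapply(split(bank$qid, bank$pair),
                        function(q) sample(q, 1)))  # one question of each pair
  post <- setdiff(bank$qid, pre)                    # the remaining partner
  list(pre_test = sort(pre), post_test = sort(post))
}

tests <- split_pairs(question_bank)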

Data Analysis

The primary outcome variable in this study was the median pre-post change in test score (%). We calculated this across learning formats (ebrain versus review paper) and topics. To test whether there was a statistical difference in performance between learning formats, we constructed a mixed-effects model. This allowed us to control for fixed effects (e.g., pre-test score and the lag time between learning sessions and testing) and random effects (e.g., differences between individuals). Further data exploration compared the learning experience between the ebrain and review paper approaches using the Likert scale survey questions. We performed all analyses in the R statistical programming language [18].
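A minimal sketch of a model with this structure, fitted with the lme4 package, might look as follows; the data frame and column names are assumptions rather than the authors' actual code.

# Hypothetical sketch: post-test score (%) modeled with fixed effects for
# pre-test score, learning format, topic, learner level, and the lag between
# learning session and post-test, plus a random intercept per participant.
library(lme4)

fit <- lmer(
  post_score ~ pre_score + format + topic + level + lag_days + (1 | participant),
  data = scores
)
summary(fit)  # fixed-effect estimates, e.g., the ebrain vs. review paper contrast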

Results

Demographics

A total of 119 individuals consented to participate in the study. Of these, 53 were medical students and 66 were residents. Among medical student participants, 5 were from Queen's University, 23 from the University of Ottawa, and 25 from Western University. There were 6 participants in their first year of medical school, 20 in their second, 20 in their third, and 7 in their fourth year. Among residents, 14 were from the University of Calgary and 43 from the University of Ottawa. There were two residents from Queen's University. There was one participant from each of Dalhousie University, McGill University, McMaster University, the University of Alberta, the University of British Columbia, and the University of Toronto. These participants were residents who reached out to our research team directly to participate after learning about the project from their resident colleagues. Forty-one participants were in a pediatrics residency program, 14 were in neurology, and 10 were in pediatric neurology. Demographics can be found in Table 1.

Table 1: Participant demographics including school, year of undergraduate medicine training, and residency program

Pre- and Post-Test Scores

There was statistical evidence of a difference in pre-learning and post-learning test scores for each learning topic (p < 0.05). The median [interquartile range (IQR)] change in score in the pediatric stroke topic was 6.7 (−6.6, 20.0) among review paper learners and 20.0 (6.7, 33.3) among ebrain module learners. There was a median (IQR) change in score of 10.0 (0.0, 20.0) for review paper learners and 13.4 (6.7, 26.6) for ebrain learners for the DMD topic. In the ADEM topic, the median (IQR) change in score among review paper learners was 26.7 (−6.6, 36.7) and was 13.3 (0.0, 23.3) among ebrain learners. The median change in score for CAE was 13.4 (0.0, 21.7) among review paper learners and 13.3 (6.7, 20.0) among ebrain module learners.

A mixed-effects model accounting for the effects of individual participants, pre-test score, module topic, learning tool, and the time between the end of the learning session and the post-test showed that ebrain users scored 4.21% higher on post-tests than review paper users (p = 0.03). Pre-test score showed a statistically significant relationship to post-test score (p = 0.01), although the effect was modest: a 1% increase in pre-test score corresponded to a 0.19% increase in post-test score. Learning topic also showed a statistically significant relationship to post-test score (p = 0.04). Compared with the ADEM module, participants scored 6.99% higher on the pediatric stroke post-test; similarly, they scored 5.15% and 1.93% higher on the CAE and DMD post-tests, respectively. There is some evidence that residents perform better than medical students (β = 5.34%; 95% CI: −0.45%, 11.1%); that is, residents' post-test scores were estimated to be 5.34% higher than medical students', but the confidence interval around this estimate is wide. The time elapsed between completing the learning session and completing the post-test did not have a significant effect on the post-test score (p > 0.05). The full mixed-effects model is shown in Table 2.
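Written out with the coefficients reported above, and assuming ADEM, review papers, and medical students as the reference categories (the intercept is not reported, and the non-significant lag-time term is omitted), the fixed-effects portion of the fitted model takes approximately the form:

\[
\widehat{\text{post}}\;(\%) \approx \beta_0 + 0.19\,\text{pre}\;(\%) + 4.21\,[\text{ebrain}] + 6.99\,[\text{stroke}] + 5.15\,[\text{CAE}] + 1.93\,[\text{DMD}] + 5.34\,[\text{resident}]
\]

where the bracketed terms are 0/1 indicators and each participant additionally contributes a random intercept.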

Table 2: Mixed-effects model on the effect of individuals, pre-test score, module, learning tool, and time between the end of the learning session and the post-test on the post-test score

CI = confidence interval.

Subjective Learning Experience (Likert Scales)

Depending on the module topic, 57–92% (n = 59–66) of survey respondents favored e-learning over review articles (Likert response 4 or 5). Between 84 and 87% of e-learning users agreed that their experience was engaging, whereas 7–39% of review paper users agreed (Likert response 4 or 5). Between 62 and 82% of e-learning users felt comfortable applying the concepts covered in the learning tool, compared with 23–53% of review paper users (Likert response 4 or 5). Among e-learning users, 22–39% had questions about the learning topics that were not answered by the learning material, compared with 13–61% of review paper users (Likert response 4 or 5). Lastly, 83–92% of e-learning users agreed that they would use the learning tool in the future to refresh their understanding of these concepts, compared with 44–85% of review paper learners (Likert response 4 or 5). The percentage of responses to the learning experience questions for each topic can be found in Figure 1.

Figure 1: Likert scale responses post-learning session for a) pediatric stroke (32 review paper responses, 37 ebrain responses), b) Duchenne muscular dystrophy (30 review paper responses, 29 ebrain responses), c) childhood absence epilepsy (31 review paper responses, 28 ebrain responses), and d) acute disseminated encephalomyelitis (21 review paper responses, 39 ebrain responses).

Discussion

Our results demonstrate that most participants preferred e-learning. Ebrain users scored higher on post-tests than review paper learners. These findings are consistent with a study by Cook et al., who found that among internal medicine residents at the Mayo School of Graduate Medical Education there was no difference in knowledge-test-score change between web-based and paper-based formats, but residents preferred learning with web-based modules [19]. In our study, the increase in post-test scores with ebrain relative to review papers was small and may not be educationally meaningful. A systematic review of plastic surgery e-learning found that most participants showed higher satisfaction and greater knowledge gains with e-learning than with conventional learning, with novice learners benefiting more than senior learners [20]. In contrast, our mixed-effects model provided some evidence that residents had a larger learning gain than medical students. This could be due to the complexity of the topics covered. Further evaluation of knowledge acquisition among novice compared with senior learners in undergraduate and postgraduate medical education could prove beneficial for medical educators and provide an additional factor to consider when implementing e-learning. In addition, further analysis of why users enjoy e-learning more than conventional review papers may help improve the medical education curriculum. In our study, ebrain participants more commonly described their experience as engaging and felt more comfortable applying concepts from the modules than review paper learners, which may have influenced their learning preference.

The module topic was found to have a significant effect on post-test score in our study. This suggests that e-learning modules may increase users' knowledge, but the amount of learning depends on the module topic. This result is similar to a systematic review of internet-based learning in health professions education, which found that learning efficacy depended on the nature of the module [14]. Our study reinforces the importance of undergraduate and postgraduate medical education programs dedicating resources and time to creating online learning modules for students and piloting them to ensure educational efficacy. However, the current literature demonstrates a lack of consensus on which indicators to use to evaluate the efficacy of these modules in postgraduate medical education and the need for a homogeneous way to evaluate e-learning [21]. Our study also suggests that a given learning topic may be more suitable for a specific learning modality, which is important to consider when creating these modules. Upon examination, each of our modules was similarly organized into epidemiology/risk factors, pathogenesis and clinical features, diagnostic workup, management/future directions, and key points to consider. A short test was provided to consolidate learning. However, the modules differed slightly in the learning tools they incorporated, including visual algorithms for the DMD module, diagnostic imaging for the pediatric stroke, CAE, and ADEM modules, and videos of clinical presentations for the CAE module. The use of these features may have contributed to the inter-topic differences in learning and appreciation. The length of the review papers (ADEM and CAE being the shortest and DMD the longest) and the use of visual media in certain papers may also have affected learning and enjoyment.

Among both e-learning and conventional learning users, there was a significant change between pre-learning and post-learning test scores for each learning topic. However, the median change in score for each topic was only approximately a 13% improvement, smaller than anticipated. Upon data review, although participants stated in the module completion surveys that they had read the assigned review paper or ebrain module, a proportion of participants in both groups scored lower on the post-tests than on the pre-tests. These data suggest that learning modules and review papers may not be the most effective learning method for all residents and their individual learning styles. This underscores the need to incorporate other methods of learning, such as in-person clinical learning, into medical and residency education. The increasingly popular blended education method, which combines online and in-person learning, has been favored among students [22].

There are limitations to our study. One is the relatively small number of questions in the knowledge tests, chosen to limit the time required to complete the study given the busy schedules of residents. Another is that only 49 participants (41% of the 119 enrolled) completed all modules of the study, slightly below our target of 60, which data simulations had estimated to be the number required to detect the hypothesized difference in learning gains between conventional and e-learning with 90% power. This limitation was partly offset by utilizing data from residents who completed some but not all of the learning topics. Residents completed individual learning topics at a lower rate than medical students. Because participants reported difficulty allotting time to the study owing to their busy schedules, the study was extended over a period of 19.5 months and additional participants were recruited in an attempt to increase study completion. Owing to this limitation in sample size, further subgroup analyses comparing medical students with residents and comparing specialty programs were not feasible.

Conclusion

This study highlights that, although e-learning produced only a small increase in test scores compared with conventional learning, of unclear educational significance, it is the preferred learning modality for most medical students and residents in pediatric neurology. Learning acquisition varies across module topics. E-learning should be increasingly incorporated into pediatric neurology residency and medical education, given learners' preference for it and its non-inferiority compared with conventional learning via review papers.

Acknowledgements

We recognize financial support from the University of Ottawa Educational Initiatives in Residency Education in 2016-2017.

Conflict of Interest

The authors declare no conflicts of interest.

Statement of Authorship

BC contributed to the conception, design, and analysis of this study and provided equal authorship of the manuscript. SB, HM, AK, SS, AM, HM, and DP contributed to the creation of the e-learning modules, study design, and critical revision of the manuscript. RW and DR contributed to the study design, data analysis, and critical revision of the study. HW contributed to the study design and critical revision of the manuscript.

References

1. Merzouk A, Kurosinski P, Kostikas K. e-Learning for the medical team: the present and future of ERS Learning Resources. Breathe. 2014;10:296–304. DOI 10.1183/20734735.008814.
2. Elkind MSV. Teaching the next generation of neurologists. Neurology. 2009;72:657–63. DOI 10.1212/01.wnl.0000342516.08077.55.
3. Lewis KO, Cidon MJ, Seto TL, Chen H, Mahan JD. Leveraging e-learning in medical education. Curr Probl Pediatr Adolesc Health Care. 2014;44:150–63. DOI 10.1016/j.cppeds.2014.01.004.
4. Davies A, Macleod R, Bennett-Britton I, Mcelnay P, Bakhbakhi D, Sansom J. E-learning and near-peer teaching in electrocardiogram education: a randomised trial. Clin Teach. 2016;13:227–30. DOI 10.1111/tct.12421.
5. Ruiz JG, Mintzer MJ, Leipzig RM. The impact of e-learning in medical education. Acad Med. 2006;81:207–12. DOI 10.1097/00001888-200603000-00002.
6. Jones O, Saunders H, Mires G. The e-learning revolution in obstetrics and gynaecology. Best Pract Res Clin Obstet Gynaecol. 2010;24:731–46. DOI 10.1016/j.bpobgyn.2010.04.009.
7. Masic I. E-learning as new method of medical education. Acta Inform Medica. 2008;16:102–17. DOI 10.5455/aim.2008.16.102-117.
8. Letterie GS. Medical education as a science: the quality of evidence for computer-assisted instruction. Am J Obstet Gynecol. 2003;188:849–53. DOI 10.1067/mob.2003.168.
9. Tarpada SP, Morris MT, Burton DA. E-learning in orthopedic surgery training: a systematic review. J Orthop. 2016;13:425–30. DOI 10.1016/j.jor.2016.09.004.
10. O'Doherty D, Dromey M, Lougheed J, Hannigan A, Last J, McGrath D. Barriers and solutions to online learning in medical education: an integrative review. BMC Med Educ. 2018;18:130. DOI 10.1186/s12909-018-1240-0.
11. Jayakumar N, Brunckhorst O, Dasgupta P, Khan MS, Ahmed K. E-learning in surgical education: a systematic review. J Surg Educ. 2015;72:1145–57. DOI 10.1016/j.jsurg.2015.05.008.
12. Burford C, Guni A, Rajan K, et al. Designing undergraduate neurosurgical e-learning: medical students' perspective. Br J Neurosurg. 2019;33:797–9. DOI 10.1080/02688697.2018.1520806.
13. Lee LA, Chao YP, Huang CG, et al. Cognitive style and mobile e-learning in emergent otorhinolaryngology-head and neck surgery disorders for millennial undergraduate medical students: randomized controlled trial. J Med Internet Res. 2018;20:e56. DOI 10.2196/jmir.8987.
14. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Instructional design variations in internet-based learning for health professions education: a systematic review and meta-analysis. Acad Med. 2010;85:909–22. DOI 10.1097/ACM.0b013e3181d6c319.
15. Cook DA, Levinson AJ, Garside S. Time and learning efficiency in Internet-based learning: a systematic review and meta-analysis. Adv Health Sci Educ. 2010;15:755–70. DOI 10.1007/s10459-010-9231-x.
16. Holmes D. ebrain brings the e-learning revolution to the neurosciences. Lancet Neurol. 2012;11:126–7. DOI 10.1016/S1474-4422(12)70009-1.
17. Thomson S. ebrain: the electronic learning platform for clinical neuroscience. Br J Neurosurg. 2013;5:577–9. DOI 10.3109/02688697.2013.771731.
18. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2019. https://www.R-project.org/.
19. Cook DA, Dupras DM, Thompson WG, Pankratz VS. Web-based learning in residents' continuity clinics: a randomized, controlled trial. Acad Med. 2005;80:90–7. DOI 10.1097/00001888-200501000-00022.
20. Lin IC, Lee A, Mauch JT. Does e-learning improve plastic surgery education? A systematic review of asynchronous resources. Ann Plast Surg. 2021;87:S40–51. DOI 10.1097/SAP.0000000000002806.
21. de Leeuw R, de Soet A, van der Horst S, Walsh K, Westerman M, Scheele F. How we evaluate postgraduate medical e-learning: systematic review. JMIR Med Educ. 2019;5:e13128. DOI 10.2196/13128.
22. Rajab MH, Gazal AM, Alkattan K. Challenges to online medical education during the COVID-19 pandemic. Cureus. 2020;12:e8966. DOI 10.7759/cureus.8966.