Published online by Cambridge University Press: 24 May 2024
Background: The integration of Artificial Intelligence (AI) into medical education is an area of growing importance. While AI models have been evaluated extensively on multiple-choice question formats, their proficiency on written examinations remains to be explored.

Methods: Four AI models—GPT-4 (OpenAI), Claude-2.1 (Anthropic), Gemini Pro (Google), and Perplexity 70B (Perplexity)—were tested using the Canadian Royal College Sample Neurosurgery Exam. The written exam covered diagnostic reasoning, knowledge of neurosurgical conditions, and understanding of radiographic imaging techniques.

Results: GPT-4 and Perplexity 70B both achieved a score of 68.42%, followed by Claude-2.1 with 60.53% and Gemini Pro with 57.89%. The models showed proficiency in answering questions that required factual knowledge, such as identifying pathogens in spinal epidural abscess. However, they struggled with more complex diagnostic reasoning tasks, particularly in explaining the pathophysiology behind a sudden rise in blood pressure during surgery and in interpreting the radiographic characteristics of intracranial abscesses on MRI.

Conclusions: The findings indicate that while AI models like GPT-4 and Perplexity 70B are adept at handling factual neurosurgical questions, their performance on complex diagnostic reasoning in a written format is less consistent. This underscores the need for more advanced and specialized AI training, particularly in the nuances of medical diagnostics and decision-making.