
P.110 Evaluating AI performance in written neurosurgery exams: a comparative analysis of large language models

Published online by Cambridge University Press:  24 May 2024

E Guo (Calgary), R Sanguinetti (Calgary)*, R Ramchandani (Ottawa), S Lama (Calgary), GR Sutherland (Calgary)

Abstract


Background: The integration of artificial intelligence (AI) in medical education is an area of growing importance. While AI models have been evaluated extensively on multiple-choice question formats, their proficiency on written exams remains to be explored.

Methods: Four AI models were tested using the Canadian Royal College Sample Neurosurgery Exam: GPT-4 (OpenAI), Claude-2.1 (Anthropic), Gemini Pro (Google), and Perplexity 70B (Perplexity). The written exam covered diagnostic reasoning, knowledge of neurosurgical conditions, and understanding of radiographic imaging techniques.

Results: GPT-4 and Perplexity 70B each achieved a score of 68.42%, followed by Claude-2.1 at 60.53% and Gemini Pro at 57.89%. The models were proficient at questions requiring factual knowledge, such as identifying pathogens in spinal epidural abscess. However, they struggled with more complex diagnostic reasoning tasks, particularly explaining the pathophysiology behind a sudden intraoperative rise in blood pressure and interpreting the radiographic characteristics of intracranial abscesses on MRI.

Conclusions: The findings indicate that while AI models such as GPT-4 and Perplexity 70B are adept at handling factual neurosurgical questions, their performance on complex diagnostic reasoning in a written format is less consistent. This underscores the need for more advanced and specialized AI training, particularly in the nuances of medical diagnostics and decision-making.
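To make the percentage scores above concrete, the following is a minimal sketch in Python of how per-model exam percentages could be tallied from graded written answers. The model name, sample questions, point values, and grading scale are illustrative placeholders and assumptions, not the authors' actual exam materials or rubric.

    # Hypothetical scoring sketch: tally a percentage score per model
    # from graded written-exam answers. All names and point values
    # below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class GradedAnswer:
        question: str
        points_awarded: float
        points_possible: float

    def exam_score(answers: list[GradedAnswer]) -> float:
        """Percentage score across all graded written answers."""
        awarded = sum(a.points_awarded for a in answers)
        possible = sum(a.points_possible for a in answers)
        return 100.0 * awarded / possible

    # Example: two graded answers for one model (placeholder data).
    results = {
        "GPT-4": [
            GradedAnswer("Identify pathogens in spinal epidural abscess", 2.0, 2.0),
            GradedAnswer("Explain intraoperative hypertension pathophysiology", 1.0, 2.0),
        ],
    }

    for model, answers in results.items():
        print(f"{model}: {exam_score(answers):.2f}%")

Running the sketch on the placeholder data prints "GPT-4: 75.00%"; applying the same tally to each of the four models' graded answers would yield a comparison of the kind reported in the Results.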

Type: Abstracts

Copyright: © The Author(s), 2024. Published by Cambridge University Press on behalf of Canadian Neurological Sciences Federation