We provide an updated estimate of adult stroke event rates by age group, sex, and stroke type using Canadian administrative data. In the 2017–2018 fiscal year, there were an estimated 81,781 hospital or emergency department visits for stroke events in Canada, excluding Quebec. Our findings show that, overall, the event rate of stroke is similar between women and men. There were slight differences in stroke event rates at various ages by sex and stroke type, and emerging patterns warrant attention in future studies. Our findings emphasize the importance of continuous surveillance to monitor the epidemiology of stroke in Canada.
Government policy guidance in Victoria, Australia, encourages schools to provide affordable, healthy foods in canteens. This study analysed the healthiness and price of items available in canteens in Victorian primary schools and associations with school characteristics.
Dietitians classified menu items (main, snack and beverage) using the red, amber and green traffic light system defined in the Victorian government’s School Canteens and Other School Food Services Policy. This system also included a black category for confectionery and high-sugar-content soft drinks, which should not be supplied. Descriptive statistics and regressions were used to analyse differences in the healthiness and price of main meals, snacks and beverages offered, according to school remoteness, sector (government and Catholic/independent), size, and socio-economic position.
State of Victoria, Australia
A convenience sample of canteen menus drawn from three previous obesity prevention studies in forty-eight primary schools between 2016 and 2019.
On average, school canteen menus comprised 21 % ‘green’ (most healthy – everyday), 53 % ‘amber’ (select carefully), 25 % ‘red’ (occasional) and 2 % ‘black’ (banned) items, demonstrating low adherence with government guidelines. ‘Black’ items were more common in schools in regional population centres. ‘Red’ main meal items were cheaper than ‘green’ (mean difference –$0·48 (95 % CI –0·85, –0·10)) and ‘amber’ (–$0·91 (–1·27, –0·57)) main meal items. In about 50 % of schools, the mean price of ‘red’ main meal, beverage and snack items was lower than that of ‘green’ items, or no ‘green’ alternative items were offered.
In this sample of Victorian canteen menus, there was no evidence of associations between healthiness or pricing and school characteristics, except that regional centres had the highest proportion of ‘black’ (banned) items of all remoteness categories examined. There was low adherence with state canteen menu guidelines. Many schools offered a high proportion of ‘red’ food options and ‘black’ (banned) options, particularly in regional centres. Unhealthier options were cheaper than healthy options. More needs to be done to bring Victorian primary school canteen menus in line with guidelines.
The cornerstone of obesity treatment is behavioural weight management, resulting in significant improvements in cardio-metabolic and psychosocial health. However, there is ongoing concern that dietary interventions used for weight management may precipitate the development of eating disorders. Systematic reviews demonstrate that, while for most participants medically supervised obesity treatment improves risk scores related to eating disorders, a subset of people who undergo obesity treatment may have poor outcomes for eating disorders. This review summarises the background and rationale for the formation of the Eating Disorders In weight-related Therapy (EDIT) Collaboration. The EDIT Collaboration will explore the complex risk factor interactions that precede changes to eating disorder risk following weight management. In this review, we also outline the programme of work and design of studies for the EDIT Collaboration, including expected knowledge gains. The EDIT studies explore risk factors and the interactions between them using individual-level data from international weight management trials. Combining all available data on eating disorder risk from weight management trials will allow sufficient sample size to interrogate our hypothesis: that individuals undertaking weight management interventions will vary in their eating disorder risk profile, on the basis of personal characteristics and intervention strategies available to them. The collaboration includes the integration of health consumers in project development and translation. An important knowledge gain from this project is a comprehensive understanding of the impact of weight management interventions on eating disorder risk.
Although age-standardized stroke occurrence has been decreasing, the absolute number of stroke events globally, and in Canada, is increasing. Stroke surveillance is necessary for health services planning, informing research design, and public health messaging. We used administrative data to estimate the number of stroke events resulting in hospital or emergency department presentation across Canada in the 2017–18 fiscal year.
Hospitalization data were obtained from the Canadian Institute for Health Information (CIHI) Discharge Abstract Database and the Ministry of Health and Social Services in Quebec. Emergency department data were obtained from the CIHI National Ambulatory Care Reporting System (Alberta and Ontario). Stroke events were identified using ICD-10 coding. Data were linked into episodes of care to account for readmissions and interfacility transfers. Projections for emergency department visits for provinces/territories outside of Alberta and Ontario were generated based upon age and sex-standardized estimates from Alberta and Ontario.
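The projection step described above can be illustrated with a small direct-standardization sketch: stratum-specific rates from the reporting provinces are applied to the population of a non-reporting province and summed. All rates and population counts below are hypothetical, not the study's actual figures.

```python
# Age/sex-specific ED-visit rates per 100,000, as would be estimated from
# the reporting provinces (hypothetical values)
reference_rates = {
    ("F", "45-64"): 55.0,
    ("M", "45-64"): 60.0,
    ("F", "65+"): 310.0,
    ("M", "65+"): 340.0,
}

# Population of a non-reporting province, by the same strata (hypothetical)
target_population = {
    ("F", "45-64"): 120_000,
    ("M", "45-64"): 118_000,
    ("F", "65+"): 90_000,
    ("M", "65+"): 80_000,
}

def project_events(rates, population):
    """Apply stratum-specific rates to the target population and sum."""
    return sum(rates[s] / 100_000 * population[s] for s in population)

projected = project_events(reference_rates, target_population)
print(round(projected))
```

Summing such projections over all non-reporting provinces/territories yields a national estimate of the events not directly observed in the data.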
In the 2017–18 fiscal year, there were 108,707 stroke events resulting in hospital or emergency department presentation across the country. This was made up of 54,357 events resulting in hospital admission and 54,350 events resulting in only emergency department presentation. The events resulting in only emergency department presentation consisted of 25,941 events observed in Alberta and Ontario and a projection of 28,409 events across the rest of the country.
We estimate a stroke event resulting in hospital or emergency department presentation occurs every 5 minutes in Canada.
The 2022 update of the Canadian Stroke Best Practice Recommendations (CSBPR) for Acute Stroke Management, 7th edition, is a comprehensive summary of current evidence-based recommendations, appropriate for use by an interdisciplinary team of healthcare providers and system planners caring for persons with an acute stroke or transient ischemic attack. These recommendations are a timely opportunity to reassess current processes to ensure efficient access to acute stroke diagnostics, treatments, and management strategies, proven to reduce mortality and morbidity. The topics covered include prehospital care, emergency department care, intravenous thrombolysis and endovascular thrombectomy (EVT), prevention and management of inhospital complications, vascular risk factor reduction, early rehabilitation, and end-of-life care. These recommendations pertain primarily to an acute ischemic vascular event. Notable changes in the 7th edition include recommendations pertaining to the use of tenecteplase, thrombolysis as a bridging therapy prior to mechanical thrombectomy, dual antiplatelet therapy for stroke prevention, the management of symptomatic intracerebral hemorrhage following thrombolysis, acute stroke imaging, care of patients undergoing EVT, medical assistance in dying, and virtual stroke care. An explicit effort was made to address sex and gender differences wherever possible. The theme of the 7th edition of the CSBPR is building connections to optimize individual outcomes, recognizing that many people who present with acute stroke often also have multiple comorbid conditions, are medically more complex, and require a coordinated interdisciplinary approach for optimal recovery. Additional materials to support timely implementation and quality monitoring of these recommendations are available at www.strokebestpractices.ca.
The Passive Surveillance Stroke Severity (PaSSV) Indicator was derived to estimate stroke severity from variables in administrative datasets but has not been externally validated.
We used linked administrative datasets to identify patients with a first hospitalization for acute stroke between 2007 and 2018 in Alberta, Canada. We used the PaSSV indicator to estimate stroke severity. We used Cox proportional hazard models and evaluated the change in hazard ratios and model discrimination for 30-day and 1-year case fatality with and without PaSSV. Similar comparisons were made for 90-day home time thresholds using logistic regression. We also linked with a clinical registry to obtain National Institutes of Health Stroke Scale (NIHSS) and compared estimates from models without stroke severity, with PaSSV, and with NIHSS.
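The discrimination comparison above rests on the C-statistic. For a binary outcome such as 30-day case fatality, it is the proportion of event/non-event patient pairs in which the event patient received the higher predicted risk (ties count half), so adding an informative covariate like stroke severity should raise it. A minimal sketch, with entirely hypothetical risk scores:

```python
from itertools import product

def c_statistic(risks, events):
    """Concordance between predicted risks and a binary outcome (0/1)."""
    cases = [r for r, e in zip(risks, events) if e == 1]
    controls = [r for r, e in zip(risks, events) if e == 0]
    concordant = sum(
        1.0 if rc > rn else 0.5 if rc == rn else 0.0
        for rc, rn in product(cases, controls)   # every case/control pair
    )
    return concordant / (len(cases) * len(controls))

# Hypothetical predicted risks for the same six patients from a model
# without (left) and with (right) a stroke-severity term
events = [1, 1, 1, 0, 0, 0]
without_severity = [0.6, 0.4, 0.5, 0.5, 0.3, 0.6]
with_severity = [0.8, 0.6, 0.7, 0.4, 0.2, 0.5]
print(c_statistic(without_severity, events),
      c_statistic(with_severity, events))
```

In this toy example the severity-augmented model separates cases from controls perfectly, mirroring in miniature the 0.72-to-0.80 improvement reported in the results.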
There were 28,672 patients with acute stroke in the full sample. In comparison to no stroke severity, addition of PaSSV to the 30-day case fatality models resulted in improvement in model discrimination (C-statistic 0.72 [95%CI 0.71–0.73] to 0.80 [0.79–0.80]). After adjustment for PaSSV, admission to a comprehensive stroke center was associated with lower 30-day case fatality (adjusted hazard ratio changed from 1.03 [0.96–1.10] to 0.72 [0.67–0.77]). In the registry sample (N = 1328), model discrimination for 30-day case fatality improved with the inclusion of stroke severity. Results were similar for 1-year case fatality and home time outcomes.
Addition of PaSSV improved model discrimination for case fatality and home time outcomes. The validity of PaSSV in two Canadian provinces suggests that it is a useful tool for baseline risk adjustment in acute stroke.
Many triage algorithms exist for use in mass-casualty incidents (MCIs) involving pediatric patients. Most of these algorithms have not been validated for reliability across users.
Investigators sought to compare inter-rater reliability (IRR) and agreement among five MCI algorithms used in the pediatric population.
A dataset of 253 pediatric (<14 years of age) trauma activations from a Level I trauma center was used to obtain prehospital information and demographics. Three raters were trained on five MCI triage algorithms: Simple Triage and Rapid Treatment (START) and JumpSTART, as appropriate for age (combined as J-START); Sort Assess Life-Saving Intervention Treatment (SALT); Pediatric Triage Tape (PTT); CareFlight (CF); and Sacco Triage Method (STM). Patient outcomes were collected but not available to raters. Each rater triaged the full set of patients into Green, Yellow, Red, or Black categories with each of the five MCI algorithms. The IRR was reported as weighted kappa scores with 95% confidence intervals (CI). Descriptive statistics were used to describe inter-rater and inter-MCI algorithm agreement.
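The weighted kappa reported below credits raters for near-misses between adjacent ordinal triage categories rather than treating every disagreement equally. A minimal sketch of the linear-weighted version for two raters, with hypothetical ratings (Green=0, Yellow=1, Red=2, Black=3):

```python
from collections import Counter

def weighted_kappa(a, b, k):
    """Linear-weighted Cohen's kappa for two raters over k ordered categories."""
    n = len(a)
    obs = Counter(zip(a, b))          # observed joint counts
    pa, pb = Counter(a), Counter(b)   # marginal counts per rater
    w = lambda i, j: abs(i - j) / (k - 1)   # linear disagreement weight
    observed = sum(w(i, j) * obs[(i, j)] / n
                   for i in range(k) for j in range(k))
    expected = sum(w(i, j) * pa[i] * pb[j] / n ** 2
                   for i in range(k) for j in range(k))
    return 1 - observed / expected

rater1 = [0, 0, 1, 2, 3, 1, 2, 0]
rater2 = [0, 1, 1, 2, 3, 1, 2, 0]   # one adjacent-category disagreement
print(round(weighted_kappa(rater1, rater2, 4), 3))
```

A single adjacent-category disagreement in eight patients still yields a high kappa, which is why the weighted statistic suits ordered triage levels; perfect agreement returns exactly 1.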
Of the 253 patients, 247 had complete triage assignments among the five algorithms and were included in the study. The IRR was excellent for a majority of the algorithms; however, J-START and CF had the highest reliability, with a kappa of 0.94 or higher (95% CI for overall weighted kappa, 0.9-1.0). The greatest variability was in SALT among Green and Yellow patients. Overall, J-START and CF had the highest inter-rater and inter-MCI algorithm agreements.
The IRR was excellent for a majority of the algorithms. The SALT algorithm, which contains subjective components, had the lowest IRR when applied to this dataset of pediatric trauma patients. Both J-START and CF demonstrated the best overall reliability and agreement.
We examined the accuracy of International Classification of Disease 10th iteration (ICD-10) diagnosis codes within Canadian administrative data in identifying cerebral venous thrombosis (CVT). Of 289 confirmed cases of CVT admitted to our comprehensive stroke center between 2008 and 2018, 239/289 were new diagnoses and 204/239 were acute events, with only 75/204 representing symptomatic CVTs not provoked by trauma or structural processes. Using ICD-10 codes in any position, sensitivity was 39.1% and positive predictive value was 94.2% for patients with current or prior CVT, and 84.0% and 52.5%, respectively, for acute, symptomatic CVTs not provoked by trauma or structural processes.
Mass-casualty incident (MCI) algorithms are used to sort large numbers of patients rapidly into four basic categories based on severity. To date, there is no consensus on the best method to test the accuracy of an MCI algorithm in the pediatric population, nor on the agreement between different tools designed for this purpose.
The aim of this study was to compare agreement of the Criteria Outcomes Tool (COT) with previously published outcome tools in assessing the triage category applied to a simulated set of pediatric MCI patients.
An MCI triage category (black, red, yellow, and green) was applied to patients from a pre-collected retrospective cohort of pediatric patients under 14 years of age brought in as a trauma activation to a Level I trauma center from July 2010 through November 2013 using each of the following outcome measures: COT, modified Baxt score, modified Baxt combined with mortality and/or length-of-stay (LOS), ambulatory status, mortality alone, and Injury Severity Score (ISS). Descriptive statistics were applied to determine agreement between tools.
A total of 247 patients were included, ranging from 25 days to 13 years of age. The outcome of mortality had 100% agreement with the COT black. The “modified Baxt positive and alive” outcome had the highest agreement with COT red (65%). All yellow outcomes had 47%-53% agreement with COT yellow. “Modified Baxt negative and <24 hours LOS” had the highest agreement with the COT green at 89%.
Assessment of algorithms for triaging pediatric MCI patients is complicated by the lack of a gold standard outcome tool and variability between existing measures.
It is a cliché of self-help advice that there are no problems, only opportunities. The rationale and actions of the BSHS in creating its Global Digital History of Science Festival may be a rare genuine confirmation of this mantra. The global COVID-19 pandemic of 2020 meant that the society's usual annual conference – like everyone else's – had to be cancelled. Once the society decided to go digital, we had a hundred days to organize and deliver our first online festival. In the hope that this will help, inspire and warn colleagues around the world who are also trying to move online, we here detail the considerations, conversations and thinking behind the organizing team's decisions.
The Eating Assessment in Toddlers FFQ (EAT FFQ) has been shown to have good reliability and comparative validity for ranking nutrient intakes in young children. With the addition of food items (n 4), we aimed to re-assess the validity of the EAT FFQ and estimate calibration factors in a sub-sample of children (n 97) participating in the Growing Up Milk – Lite (GUMLi) randomised controlled trial (2015–2017). Participants completed the ninety-nine-item GUMLi EAT FFQ and record-assisted 24-h recalls (24HR) on two occasions. Energy and nutrient intakes were assessed at months 9 and 12 post-randomisation and calibration factors calculated to determine predicted estimates from the GUMLi EAT FFQ. Validity was assessed using Pearson correlation coefficients, weighted kappa (κ) and exact quartile categorisation. Calibration was calculated using linear regression models on 24HR, adjusted for sex and treatment group. Nutrient intakes were significantly correlated between the GUMLi EAT FFQ and 24HR at both time points. Energy-adjusted, de-attenuated Pearson correlations ranged from 0·3 (fibre) to 0·8 (Fe) at 9 months and from 0·3 (Ca) to 0·7 (Fe) at 12 months. Weighted κ for the quartiles ranged from 0·2 (Zn) to 0·6 (Fe) at 9 months and from 0·1 (total fat) to 0·5 (Fe) at 12 months. Exact agreement ranged from 30 to 74 %. Calibration factors predicted up to 56 % of the variation in the 24HR at 9 months and 44 % at 12 months. The GUMLi EAT FFQ remained a useful tool for ranking nutrient intakes with similar estimated validity compared with other FFQ used in children under 2 years.
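The calibration described above amounts to regressing the reference measure (24HR) on the FFQ estimate and using the fitted line to rescale FFQ values. A bare-bones sketch with hypothetical intakes (the study's models additionally adjusted for sex and treatment group):

```python
def fit_calibration(ffq, recall):
    """Ordinary least squares for: recall = a + b * ffq."""
    n = len(ffq)
    mx, my = sum(ffq) / n, sum(recall) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(ffq, recall))
         / sum((x - mx) ** 2 for x in ffq))
    a = my - b * mx
    return a, b

# Hypothetical nutrient intakes (mg/d) from the FFQ and matched 24HR
ffq = [400, 500, 600, 700, 800]
recall = [380, 430, 520, 560, 640]   # FFQ tends to over-report here

a, b = fit_calibration(ffq, recall)
calibrated = [a + b * x for x in ffq]   # predicted intakes from the FFQ
```

The slope and intercept are the calibration factors; the share of 24HR variance the line explains corresponds to the "up to 56 %" figure reported for the real data.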
Emergency Medical Services (EMS) systems have developed protocols for prehospital activation of the cardiac catheterization laboratory for patients with suspected ST-elevation myocardial infarction (STEMI) to decrease first-medical-contact-to-balloon time (FMC2B). The rate of “false positive” prehospital activations is high. In order to decrease this rate and expedite care for patients with true STEMI, the American Heart Association (AHA; Dallas, Texas USA) developed the Mission Lifeline PreAct STEMI algorithm, which was implemented in Los Angeles County (LAC; California USA) in 2015. The hypothesis of this study was that implementation of the PreAct algorithm would increase the positive predictive value (PPV) of prehospital activation.
This is an observational pre-/post-study of the effect of the implementation of the PreAct algorithm for patients with suspected STEMI transported to one of five STEMI Receiving Centers (SRCs) within the LAC Regional System. The primary outcome was the PPV of cardiac catheterization laboratory activation for percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG). The secondary outcome was FMC2B.
A total of 1,877 patients were analyzed for the primary outcome in the pre-intervention period and 405 patients in the post-intervention period. There was an overall decrease in cardiac catheterization laboratory activations, from 67% in the pre-intervention period to 49% in the post-intervention period (95% CI for the difference, -14% to -22%). The overall rate of cardiac catheterization declined in the post-intervention period compared with the pre-intervention period, from 34% to 30% (95% CI for the difference, -7.6% to 0.4%), but actually increased for subjects who had activation (48% versus 58%; 95% CI, 4.6%-15.0%). Implementation of the PreAct algorithm was associated with an increase in the PPV of activation for PCI or CABG from 37.9% to 48.6%. The overall odds ratio (OR) associated with the intervention was 1.4 (95% CI, 1.1-1.8). The effect of the intervention was to decrease variability between medical centers. There was no associated change in average FMC2B.
The implementation of the PreAct algorithm in the LAC EMS system was associated with an overall increase in the PPV of cardiac catheterization laboratory activation.
The Sort, Assess, Life-saving interventions, Treatment and/or Triage (SALT) mass-casualty incident (MCI) algorithm is unique in that it includes two subjective questions during the triage process: “Is the victim likely to survive given the resources?” and “Is the injury minor?”
Given this subjectivity, it was hypothesized that as casualties increase, the inter-rater reliability (IRR) of the tool would decline, due to an increase in the number of patients triaged as Minor and Expectant.
A pre-collected dataset of pediatric trauma patients age <14 years from a single Level 1 trauma center was used to generate “patients.” Three trained raters triaged each patient using SALT as if they were in each of the following scenarios: 10, 100, and 1,000 victim MCIs. Cohen’s kappa test was used to evaluate IRR between the raters in each of the scenarios.
A total of 247 patients were available for triage. The kappas were consistently “poor” to “fair”: 0.37 to 0.59 in the 10-victim scenario; 0.13 to 0.36 in the 100-victim scenario; and 0.05 to 0.36 in the 1,000-victim scenario. There was an increasing percentage of subjects triaged Minor as the number of estimated victims increased: 27.8% increase from 10- to 100-victim scenario and 7.0% increase from 100- to 1,000-victim scenario. Expectant triage categorization of patients remained stable as victim numbers increased.
Overall, SALT demonstrated poor IRR in this study of increasing casualty counts while triaging pediatric patients. Increased casualty counts in the scenarios did lead to increased Minor but not Expectant categorizations.
OBJECTIVES/SPECIFIC AIMS: The aim of this study is to compare intestinal phosphorus absorption in healthy adults and moderate-stage chronic kidney disease (CKD) patients in the context of a controlled feeding study. METHODS/STUDY POPULATION: Participants are 30-75 years old and include 10 healthy subjects and 10 moderate-stage CKD patients. Each subject pool will be enrolled in a 9-day study period including 7 days of controlled feeding of a 1500 mg phosphorus diet. Following the controlled feeding, two days of absorption tests will take place (oral and IV tests) utilizing radioisotopic phosphorus to calculate fractional absorption efficiency. RESULTS/ANTICIPATED RESULTS: Current enrollment has produced 7 matched subject pairs (current n = 14/20). Four of the 7 pairs of completed subjects are female and 3 of 7 are black. Preliminary kinetic modeling data from the first enrolled subject show a moderate CKD patient with fractional absorption of 0.375. With forthcoming analyses, we expect that this fractional absorption result will not be statistically different from this subject’s matched pair, nor will each group’s average absorption differ from the other’s. Additionally, we expect absorption to be maintained even with changes in secondary outcome measures in serum (FGF23, 1,25-dihydroxyvitamin D, parathyroid hormone, and total phosphorus) in CKD patients. DISCUSSION/SIGNIFICANCE OF IMPACT: Lack of statistical difference in fractional phosphorus absorption between groups would support that intestinal phosphorus absorption is inappropriately normal in CKD patients compared to healthy adults, despite evidence of abnormal phosphorus homeostatic mechanisms. Future studies will consider the effect of dietary P restriction, the most common nutrition intervention in moderate-stage CKD, on fractional absorption efficiency in CKD.
The second year of life is a period of nutritional vulnerability. We aimed to investigate the dietary patterns and nutrient intakes from 1 to 2 years of age during the 12-month follow-up period of the Growing Up Milk – Lite (GUMLi) trial. The GUMLi trial was a multi-centre, double-blinded, randomised controlled trial of 160 healthy 1-year-old children in Auckland, New Zealand and Brisbane, Australia. Dietary intakes were collected at baseline, 3, 6, 9 and 12 months post-randomisation, using a validated FFQ. Dietary patterns were identified using principal component analysis of the frequency of food item consumption per d. The effect of the intervention on dietary patterns and intake of eleven nutrients over the duration of the trial were investigated using random effects mixed models. A total of three dietary patterns were identified at baseline: ‘junk/snack foods’, ‘healthy/guideline foods’ and ‘breast milk/formula’. A significant group difference was observed in ‘breast milk/formula’ dietary pattern z scores at 12 months post-randomisation, where those in the GUMLi group loaded more positively on this pattern, suggesting more frequent consumption of breast milk. No difference was seen in the other two dietary patterns. Significant intervention effects were seen on nutrient intake between the GUMLi (intervention) and cows’ milk (control) groups, with lower protein and vitamin B12, and higher Fe, vitamin D, vitamin C and Zn intake in the GUMLi (intervention) group. The consumption of GUMLi did not affect dietary patterns, however, GUMLi participants had lower protein intake and higher Fe, vitamins D and C and Zn intake at 2 years of age.
A simple, portable capillary refill time (CRT) simulator is not commercially available. This device would be useful in mass-casualty simulations with multiple volunteers or mannequins depicting a variety of clinical findings and CRTs. The objective of this study was to develop and evaluate a prototype CRT simulator in a disaster simulation context.
A CRT prototype simulator was developed by embedding a pressure-sensitive piezo crystal and a single red light-emitting diode (LED) within a flesh-toned resin. The LED was programmed to turn white proportionate to the pressure applied and to gradually return to red on release. The time to color return was adjustable with an external dial. The prototype was tested for feasibility among two cohorts: emergency medicine physicians in a tabletop exercise and second-year medical students within an actual disaster triage drill. The realism of the simulator was compared to video-based CRT, and participants used a Visual Analog Scale (VAS) ranging from “completely artificial” to “as if on a real patient.” The VAS evaluated both the visual realism and the functional (eg, tactile) realism. Accuracy of CRT was evaluated only by the physician cohort. Data were analyzed using parametric and non-parametric statistics, and mean Cohen’s kappas were used to describe inter-rater reliability.
The CRT simulator was generally well received by the participants. The simulator was perceived to have slightly higher functional realism (P=.06, P=.01) but lower visual realism (P=.002, P=.11) than the video-based CRT. Emergency medicine physicians had higher accuracy on portrayed CRT on the simulator than the videos (92.6% versus 71.1%; P<.001). Inter-rater reliability was higher for the simulator (0.78 versus 0.27; P<.001).
A simple, LED-based CRT simulator was well received in both settings. Prior to widespread use for disaster triage training, validation on participants’ ability to accurately triage disaster victims using CRT simulators and video-based CRT simulations should be performed.
Chang TP, Santillanes G, Claudius I, Pham PK, Koved J, Cheyne J, Gausche-Hill M, Kaji AH, Srinivasan S, Donofrio JJ, Bir C. Use of a Novel, Portable, LED-Based Capillary Refill Time Simulator within a Disaster Triage Context. Prehosp Disaster Med. 2017;32(4):451–456.
Background: Stroke is often preceded by transient symptoms. Although global stroke rates have been shown to be declining, previous studies have reported inconsistent temporal trends of transient ischemic attacks (TIA). The objective of the current study is to report the temporal trends of TIA admissions and outcomes in Canada over the last 11 years. Methods: We conducted a complete population cohort study using a national administrative database to study the temporal trend of age- and sex-adjusted TIA admission rates in Canada from 2003 to 2013. We also determined the rates of TIA and stroke diagnoses in the emergency department in the province of Ontario during the same period. We used multivariable analyses to study discharge location after acute hospitalization as well as 90-day stroke and/or TIA readmission rates. Results: Of 425,799 admissions to an acute care hospital for all stroke and TIA, 71,443 (16.8%) were TIA. The age- and sex-standardized rates of TIA admission decreased significantly during the study period, from 30.0 to 20.6 per 100,000 (p<0.0001). In Ontario, the decrease in TIA admissions was mirrored by decreasing rates of TIA discharged directly from the emergency department (55.1 to 46.8 per 100,000, p = 0.002). The odds of 90-day readmission for stroke or TIA also decreased (adjusted odds ratio, 0.97; 95% confidence interval, 0.96-0.99). Conclusions: We show that TIA admission rates have declined in the past 11 years in Canada, reflecting improved vascular risk reduction and stroke care. Future studies to confirm our findings on improved stroke or TIA recurrence rates are necessary.
Using the pediatric version of the Simple Triage and Rapid Treatment (JumpSTART) algorithm for the triage of pediatric patients in a mass-casualty incident (MCI) requires assessing the results of each step and determining whether to move to the next appropriate action. Inappropriate application can lead to performance of unnecessary actions or failure to perform necessary actions.
To report overall accuracy and time required for triage, and to assess if the performance of unnecessary steps, or failure to perform required steps, in the triage algorithm was associated with inaccuracy of triage designation or increased time to reach a triage decision.
Medical students participated in an MCI drill in which they triaged both live actors portraying patients and computer-based simulated patients to the four triage levels: minor, delayed, immediate, and expectant. Their performance was timed and compared to intended triage designations and a priori determined critical actions.
Thirty-three students completed 363 scenarios. The overall accuracy was 85.7% and overall mean time to assign a triage designation was 70.4 seconds, with decreasing times as triage acuity level decreased. In over one-half of cases, the student omitted at least one action and/or performed at least one action that was not required. Each unnecessary action increased time to triage by a mean of 8.4 seconds and each omitted action increased time to triage by a mean of 5.5 seconds.
Increasing triage level, performance of inappropriate actions, and omission of recommended actions were all associated with increasing time to perform triage.
Claudius I, Kaji AH, Santillanes G, Cicero MX, Donofrio JJ, Gausche-Hill M, Srinivasan S, Chang TP. Accuracy, Efficiency, and Inappropriate Actions Using JumpSTART Triage in MCI Simulations. Prehosp Disaster Med. 2015;30(5):457–460.
Multiple modalities for simulating mass-casualty scenarios exist; however, the ideal modality for education and drilling of mass-casualty incident (MCI) triage is not established.
Medical student triage accuracy and time to triage for computer-based simulated victims and live moulaged actors using the pediatric version of the Simple Triage and Rapid Treatment (JumpSTART) mass-casualty triage tool were compared, anticipating that student performance and experience would be equivalent.
The victim scenarios were created from actual trauma records from pediatric high-mechanism trauma presenting to a participating Level 1 trauma center. The student-reported fidelity of the two modalities was also measured. Comparisons were done using nonparametric statistics and regression analysis using generalized estimating equations.
Thirty-three students triaged four live patients and seven computerized patients representing a spectrum of minor, immediate, delayed, and expectant victims. Of the live simulated patients, 92.4% were given accurate triage designations versus 81.8% for the computerized scenarios (P=.005). The median time to triage of live actors was 57 seconds (IQR=45-66) versus 80 seconds (IQR=58-106) for the computerized patients (P<.0001). The moulaged actors were felt to offer a more realistic encounter by 88% of the participants, with a higher associated stress level.
While potentially easier and more convenient to accomplish, computerized scenarios offered less fidelity than live moulaged actors for the purposes of MCI drilling. Medical students triaged live actors more accurately and more quickly than victims shown in a computerized simulation.
Claudius I, Kaji A, Santillanes G, Cicero M, Donofrio JJ, Gausche-Hill M, Srinivasan S, Chang TP. Comparison of Computerized Patients versus Live Moulaged Actors for a Mass-casualty Drill. Prehosp Disaster Med. 2015;30(5):438–442.