Many triage algorithms exist for use in mass-casualty incidents (MCIs) involving pediatric patients. Most of these algorithms have not been validated for reliability across users.
Investigators sought to compare inter-rater reliability (IRR) and agreement among five MCI algorithms used in the pediatric population.
A dataset of 253 pediatric (<14 years of age) trauma activations from a Level I trauma center was used to obtain prehospital information and demographics. Three raters were trained on five MCI triage algorithms: Simple Triage and Rapid Treatment (START) and JumpSTART, as appropriate for age (combined as J-START); Sort, Assess, Lifesaving Interventions, Treatment/Transport (SALT); Pediatric Triage Tape (PTT); CareFlight (CF); and Sacco Triage Method (STM). Patient outcomes were collected but not available to raters. Each rater triaged the full set of patients into Green, Yellow, Red, or Black categories with each of the five MCI algorithms. The IRR was reported as weighted kappa scores with 95% confidence intervals (CI). Descriptive statistics were used to describe inter-rater and inter-MCI algorithm agreement.
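The weighted kappa statistic used here rewards near-misses (e.g., Yellow vs. Red) over distant disagreements (Green vs. Black). A minimal sketch of a linearly weighted Cohen's kappa between two raters follows; the function name, example labels, and the choice of linear weights are illustrative assumptions, as the abstract does not specify the weighting scheme:

```python
def weighted_kappa(r1, r2, categories):
    """Linearly weighted Cohen's kappa between two raters.

    r1, r2: equal-length sequences of category labels (one per patient).
    categories: ordered severity scale, e.g. ["Green", "Yellow", "Red", "Black"].
    Note: the ordering of `categories` determines the disagreement weights.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint distribution of the two raters' assignments
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal distributions give the chance-agreement (expected) matrix
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weight |i - j|: distant categories penalized more
    num = sum(abs(i - j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(abs(i - j) * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - num / den
```

Perfect agreement yields a kappa of 1.0; systematic disagreement drives it toward negative values. In practice a library implementation (e.g., scikit-learn's `cohen_kappa_score` with `weights="linear"`) would typically be used instead.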
Of the 253 patients, 247 had complete triage assignments across all five algorithms and were included in the study. The IRR was excellent for a majority of the algorithms; J-START and CF had the highest reliability, with weighted kappas of 0.94 or higher (95% CI, 0.9-1.0 for the overall weighted kappa). The greatest variability was in SALT among Green and Yellow patients. Overall, J-START and CF had the highest inter-rater and inter-MCI algorithm agreement.
The IRR was excellent for a majority of the algorithms. The SALT algorithm, which contains subjective components, had the lowest IRR when applied to this dataset of pediatric trauma patients. Both J-START and CF demonstrated the best overall reliability and agreement.
Mass-casualty incident (MCI) algorithms are used to sort large numbers of patients rapidly into four basic categories based on severity. To date, there is no consensus on the best method to test the accuracy of an MCI algorithm in the pediatric population, nor on the agreement between different tools designed for this purpose.
This study compared agreement between the Criteria Outcomes Tool (COT) and previously published outcome tools in assessing the triage category applied to a simulated set of pediatric MCI patients.
An MCI triage category (black, red, yellow, or green) was assigned to each patient in a pre-collected retrospective cohort of pediatric patients under 14 years of age, brought in as trauma activations to a Level I trauma center from July 2010 through November 2013, using each of the following outcome measures: COT, modified Baxt score, modified Baxt combined with mortality and/or length-of-stay (LOS), ambulatory status, mortality alone, and Injury Severity Score (ISS). Descriptive statistics were applied to determine agreement between tools.
A total of 247 patients were included, ranging from 25 days to 13 years of age. The outcome of mortality had 100% agreement with the COT black. The “modified Baxt positive and alive” outcome had the highest agreement with COT red (65%). All yellow outcomes had 47%-53% agreement with COT yellow. “Modified Baxt negative and <24 hours LOS” had the highest agreement with the COT green at 89%.
Assessment of algorithms for triaging pediatric MCI patients is complicated by the lack of a gold standard outcome tool and variability between existing measures.
The Sort, Assess, Lifesaving Interventions, Treatment/Transport (SALT) mass-casualty incident (MCI) algorithm is unique in that it includes two subjective questions during the triage process: “Is the victim likely to survive given the resources?” and “Is the injury minor?”
Given this subjectivity, it was hypothesized that as casualties increase, the inter-rater reliability (IRR) of the tool would decline, due to an increase in the number of patients triaged as Minor and Expectant.
A pre-collected dataset of pediatric trauma patients age <14 years from a single Level I trauma center was used to generate “patients.” Three trained raters triaged each patient using SALT as if they were in each of the following scenarios: 10-, 100-, and 1,000-victim MCIs. Cohen’s kappa test was used to evaluate IRR between the raters in each of the scenarios.
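Unlike the weighted variant, Cohen's (unweighted) kappa counts only exact matches against the agreement expected by chance from each rater's marginal frequencies. A minimal sketch, with the function name and sample labels as illustrative assumptions:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa between two raters' category assignments.

    r1, r2: equal-length sequences of labels (one per patient).
    Assumes the raters do not agree perfectly by chance (p_e < 1).
    """
    n = len(r1)
    # Observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal label frequencies
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[c] * c2[c] for c in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

Values near 0 indicate agreement no better than chance, which is consistent with the “poor” to “fair” range reported below as victim counts grew.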
A total of 247 patients were available for triage. The kappas were consistently “poor” to “fair”: 0.37 to 0.59 in the 10-victim scenario; 0.13 to 0.36 in the 100-victim scenario; and 0.05 to 0.36 in the 1,000-victim scenario. There was an increasing percentage of subjects triaged Minor as the number of estimated victims increased: a 27.8% increase from the 10- to the 100-victim scenario and a 7.0% increase from the 100- to the 1,000-victim scenario. Expectant triage categorization of patients remained stable as victim numbers increased.
Overall, SALT demonstrated poor IRR in this study of increasing casualty counts while triaging pediatric patients. Increased casualty counts in the scenarios did lead to increased Minor but not Expectant categorizations.