
Electronic surveillance criteria for non–ventilator-associated hospital-acquired pneumonia: Assessment of reliability and validity

Published online by Cambridge University Press: 15 March 2023

Sarah E. Stern*
Affiliation:
Division of Pulmonary & Critical Care Medicine, University of Utah, Salt Lake City, Utah
Matthew A. Christensen
Affiliation:
Division of Allergy, Pulmonary, & Critical Care Medicine, Vanderbilt University Medical Center, Nashville, Tennessee
McKenna R. Nevers
Affiliation:
Division of Epidemiology, University of Utah, Salt Lake City, Utah
Jian Ying
Affiliation:
Division of Epidemiology, University of Utah, Salt Lake City, Utah
Caroline McKenna
Affiliation:
Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, Massachusetts
Shannon Munro
Affiliation:
Department of Veterans’ Affairs Medical Center, Salem, Virginia
Chanu Rhee
Affiliation:
Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, Massachusetts; Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
Matthew H. Samore
Affiliation:
Division of Epidemiology, University of Utah, Salt Lake City, Utah; VA Salt Lake City Health Care System, Salt Lake City, Utah
Michael Klompas
Affiliation:
Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, Massachusetts; Brigham and Women’s Hospital, Boston, Massachusetts
Barbara E. Jones
Affiliation:
Division of Pulmonary & Critical Care Medicine, University of Utah, Salt Lake City, Utah; VA Salt Lake City Health Care System, Salt Lake City, Utah
Author for correspondence: Sarah Stern, MD, Division of Pulmonary & Critical Care Medicine, University of Utah, 701 Wintrobe, 26 North 1900 East, Salt Lake City, Utah 84132. E-mail: sarah.stern16@gmail.com

Abstract

Objective:

Surveillance of non–ventilator-associated hospital-acquired pneumonia (NV-HAP) is complicated by subjectivity and variability in diagnosing pneumonia. We compared a fully automatable surveillance definition using routine electronic health record data to manual determinations of NV-HAP according to surveillance criteria and clinical diagnoses.

Methods:

We retrospectively applied an electronic surveillance definition for NV-HAP to all adults admitted to Veterans’ Affairs (VA) hospitals from January 1, 2015, to November 30, 2020. We randomly selected 250 hospitalizations meeting NV-HAP surveillance criteria for independent review by 2 clinicians and calculated the percentage of hospitalizations with (1) clinical deterioration, (2) CDC National Healthcare Safety Network (CDC-NHSN) criteria, (3) NV-HAP according to a reviewer, (4) NV-HAP according to a treating clinician, (5) a pneumonia diagnosis in the discharge summary, and (6) discharge diagnosis codes for HAP. We assessed interrater reliability by calculating simple agreement and the Cohen κ (kappa).

Results:

Among 3.1 million hospitalizations, 14,023 met NV-HAP electronic surveillance criteria. Among reviewed cases, 98% had a confirmed clinical deterioration; 67% met CDC-NHSN criteria; 71% had NV-HAP according to a reviewer; 60% had NV-HAP according to a treating clinician; 49% had a discharge summary diagnosis of pneumonia; and 82% met at least 1 of these definitions according to at least 1 reviewer. Only 8% had diagnosis codes for HAP. Interrater agreement was 75% (κ = 0.50) for CDC-NHSN criteria and 78% (κ = 0.55) for reviewer diagnosis of NV-HAP.

Conclusions:

Electronic NV-HAP surveillance criteria correlated moderately with existing manual surveillance criteria, and reviewer variability was high for all manual assessments. Electronic surveillance using clinical data may therefore allow more consistent and efficient surveillance, with accuracy similar to that of manual assessment or diagnosis codes.

Type
Original Article
Creative Commons
To the extent this is a work of the US Government, it is not subject to copyright protection within the United States. Published by Cambridge University Press on behalf of The Society for Healthcare Epidemiology of America.
Copyright
© Veterans Health Administration and the Author(s), 2023

Hospital-acquired pneumonia (HAP) is the most common hospital-acquired infection worldwide and is associated with high morbidity, mortality, and healthcare costs.1–3 Non–ventilator-associated HAP (NV-HAP) accounts for most HAP,1 but prevention efforts are hindered by the difficulty of measuring and tracking HAP incidence and outcomes using current definitions. Clinical and surveillance definitions for HAP are subjective, complex, and ambiguous on account of the uncertainty inherent in the diagnosis of pneumonia.4–7 Prior surveillance efforts using administrative claims data, chart review, or even histologic definitions have demonstrated poor sensitivity, low reproducibility, and only moderate accuracy.5,8–10 More objective, consistent, and efficient surveillance may be feasible using readily available information in the electronic health record (EHR), including vital signs, oxygenation data, administration of antibiotics, and chest imaging.6,7 However, this approach requires validation. We aimed (1) to implement an electronic surveillance definition for NV-HAP in a large healthcare system using granular clinical data for case detection rather than diagnosis codes or claims data, and (2) to conduct detailed chart reviews on a random subset of patients to assess the reliability, validity, and overlap between electronic surveillance and other existing NV-HAP definitions.

Methods

Setting and participants

We retrospectively applied an electronic NV-HAP surveillance definition to all hospitalizations at acute-care facilities within the Veterans’ Affairs (VA) healthcare system between January 1, 2015, and November 30, 2020, in patients aged ≥18 years. The VA network is the largest integrated healthcare network in the United States and includes 152 VA medical centers in all 50 US states.11 The study was approved by the Veterans’ Health Administration (VHA) and University of Utah Institutional Review Boards.

Electronic NV-HAP surveillance definition

The electronic surveillance definition was designed to identify nonventilated patients with a new respiratory deterioration (≥2 days of decreased oxygen saturation or increased supplemental oxygen after ≥2 days of stable or improving oxygenation) and concurrent fever or leukocytosis, performance of chest imaging, and initiation of new antibiotics continued for at least 3 days (Table 1).6 This definition was previously pilot tested in 4 hospitals, where it generated credible NV-HAP incidence and mortality estimates and detected pneumonia with accuracy similar to the Centers for Disease Control and Prevention’s National Healthcare Safety Network (CDC-NHSN) PNU1 surveillance criteria for NV-HAP.6,7 Complete details of the data extraction and definitions, including SAS code, are available in Supplementary Appendix 1 (Supplementary Table 2, Oxygen Device Hierarchy; Supplementary Table 3, Definition of New Antibiotic; and SAS code, online). Patient demographics, comorbidities, and clinical outcomes were extracted using previously validated methods.12 Electronic health record data were accessed through the Veterans’ Informatics and Computing Infrastructure, a platform that stores VA clinical data for research purposes.13

Table 1. Electronic Non–ventilator-Associated Hospital-Acquired Pneumonia (NV-HAP) Surveillance Definition

Note. WBC, white blood cell count.
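To make the rule concrete, below is a minimal sketch in R, the language used for the study’s analyses. The data frame `days` and all of its column names are hypothetical illustrations, and the lag-based oxygenation logic is simplified; the authoritative implementation is the SAS code in Supplementary Appendix 1.

```r
# Minimal, illustrative sketch of the electronic NV-HAP surveillance rule.
# Assumes a hypothetical data frame `days` with one row per patient-day:
#   id            hospitalization identifier
#   day           hospital day number
#   o2_worse      worsening oxygenation vs. the prior day (logical)
#   fever         temperature criterion met (logical)
#   wbc_abnl      white blood cell count criterion met (logical)
#   chest_imaging chest imaging performed (logical)
#   new_abx_days  consecutive days of newly initiated antibiotics (integer)
library(dplyr)

flag_nvhap <- function(days) {
  days %>%
    arrange(id, day) %>%
    group_by(id) %>%
    mutate(
      # >=2 days of stable/improving oxygenation ...
      stable_prior = lag(!o2_worse, 1) & lag(!o2_worse, 2),
      # ... followed by >=2 days of worsening oxygenation
      deterioration = stable_prior & o2_worse & lead(o2_worse, 1)
    ) %>%
    filter(
      deterioration,
      fever | wbc_abnl,   # concurrent fever or abnormal WBC
      chest_imaging,      # chest imaging performed
      new_abx_days >= 3   # new antibiotics continued for >= 3 days
    ) %>%
    slice_min(day, n = 1) %>%  # count only the first event per hospitalization
    ungroup()
}
```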

Table 2. Interrater Reliability and Prevalence of Other Reviewed Definitions of Pneumonia Among 250 Charts Meeting Electronic Surveillance Definition a

Note. PABAK, prevalence-adjusted, bias-adjusted κ; CI, confidence interval; CDC, Centers for Disease Control and Prevention; NHSN, National Healthcare Safety Network; NV-HAP, non–ventilator-associated hospital-acquired pneumonia; N/A, not available.

a Each chart reviewed by 2 clinicians. Estimates and 95% confidence intervals are shown.

b Of the 215 reviewed cases occurring after October 1, 2015.

Claims-based NV-HAP definition

We assessed the overlap between the electronic NV-HAP definition and a claims-based NV-HAP definition used by a VA quality improvement initiative. The claims-based criteria defined NV-HAP as the presence of a primary or secondary discharge diagnosis code for pneumonia (ICD-10 codes B95.3, B96.0, J13, J15.X, J16.X, J17.X, J18.X, J84.111, J84.116, J84.117, J84.2, J85.1, and J85.2) that was not present on admission.14
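As a rough illustration, the claims-based flag reduces to a code-list match against discharge diagnoses that were not present on admission. In this hedged sketch the data frame `dx` and its columns are hypothetical; the VA initiative’s actual extraction logic is described by Carey et al.14

```r
# Illustrative sketch of the claims-based NV-HAP definition.
# Assumes a hypothetical data frame `dx` with one row per discharge diagnosis:
#   id    hospitalization identifier
#   icd10 ICD-10 diagnosis code (character)
#   poa   present-on-admission flag ("Y" or "N")
pneumonia_codes <- "^(B95\\.3|B96\\.0|J13|J15|J16|J17|J18|J84\\.111|J84\\.116|J84\\.117|J84\\.2|J85\\.1|J85\\.2)"

# Hospitalizations with a qualifying pneumonia code not present on admission
claims_nvhap_ids <- unique(
  dx$id[grepl(pneumonia_codes, dx$icd10) & dx$poa == "N"]
)
```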

Medical record review

We randomly selected 250 hospitalizations meeting the electronic NV-HAP surveillance definition for medical record review. Each case was independently reviewed by 2 of 3 clinician reviewers (S.E.S., M.A.C., and B.E.J.). Reviewers used a guide that specified a structured, standardized review process (Supplementary Appendix online) and underwent an iterative adjudication and training process with 4 batches of 10 charts per reviewer before beginning the formal case reviews. Reviewers first confirmed the presence of worsening oxygenation during a 2-day period surrounding the potential NV-HAP index date (Supplementary Tables 1 and 2 online). They then reviewed all clinical notes and imaging results to assess each of the following: (1) whether the patient experienced a clinical deterioration (a qualitative worsening of clinical status) according to the reviewer; (2) whether CDC-NHSN PNU1 surveillance criteria were met15; (3) whether the treating clinician diagnosed NV-HAP; (4) whether the discharge summary mentioned a diagnosis of pneumonia; and (5) the reviewer’s net clinical impression of whether NV-HAP was suspected, possible, or unlikely based on the totality of available data (the patient’s clinical trajectory, vital signs, imaging, microbiology, response to treatment if provided, and whether there was an alternative diagnosis). CDC-NHSN criteria were modified to provide a more specific definition of oxygen deterioration and infiltrate on chest imaging (Supplementary Appendix 2 online). Reviewers also provided a summary narrative of each case, including their determination of the most likely etiology of any clinical deterioration.

Statistical analysis

Among the entire population, we calculated the incidence of NV-HAP per 100 hospitalizations and per 1,000 hospital days using both the electronic surveillance criteria and the claims-based criteria. For each hospitalization, only the first electronic surveillance event was counted. For the claims-based definition, we calculated the incidence only among hospitalizations occurring after October 1, 2015, the date of conversion from International Classification of Diseases, Ninth Revision (ICD-9), to ICD-10 codes and adoption of present-on-admission codes.
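As a worked example of the incidence arithmetic, here is the calculation in R using the cohort totals reported below in the Results:

```r
# Incidence of electronic-surveillance NV-HAP per 100 hospitalizations
# and per 1,000 hospital days, using the published cohort totals.
events    <- 14023     # hospitalizations meeting the electronic definition
hosps     <- 3.1e6     # total hospitalizations
hosp_days <- 17.9e6    # total hospital days

events / hosps * 100        # ~0.45 per 100 hospitalizations
events / hosp_days * 1000   # ~0.78 per 1,000 hospital days
```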

Among the 250 cases, we assessed interrater reliability between the 2 reviewers for each of the 5 clinical definitions they assessed: clinical deterioration, CDC-NHSN criteria, reviewer assessment of NV-HAP, treating clinician assessment of NV-HAP, and pneumonia diagnosis present in the discharge summary. We calculated simple agreement (the number of cases in which both reviewers agreed divided by the total number of cases), the Cohen κ (kappa) statistic, and the prevalence-adjusted, bias-adjusted κ (PABAK). PABAK estimates the proportion of agreement beyond that expected by chance and provides more stable estimates of interrater reliability when positive findings are very rare or very frequent, situations in which the Cohen κ can yield paradoxical results.16
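Below is a minimal sketch of the three reliability statistics for a pair of binary reviewer calls. The toy vectors are hypothetical; `kappa2()` from the `irr` package computes the Cohen κ, and for 2 raters and a binary outcome PABAK reduces to 2 × (observed agreement) − 1.

```r
# Interrater reliability for paired binary reviewer determinations.
library(irr)  # provides kappa2() for Cohen's kappa

r1 <- c(TRUE, TRUE, FALSE, TRUE, FALSE, TRUE)  # reviewer 1 (toy data)
r2 <- c(TRUE, FALSE, FALSE, TRUE, TRUE, TRUE)  # reviewer 2 (toy data)

p_o   <- mean(r1 == r2)                    # simple (observed) agreement
kap   <- kappa2(data.frame(r1, r2))$value  # Cohen's kappa
pabak <- 2 * p_o - 1                       # prevalence-/bias-adjusted kappa

c(agreement = p_o, kappa = kap, PABAK = pabak)
```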

We calculated the positive predictive value (PPV) of the electronic surveillance definition against each definition as the percentage of cases identified by electronic NV-HAP surveillance criteria that were also positive according to (1) both reviewers and (2) at least 1 reviewer. For reviewer assessment of NV-HAP, both “NV-HAP suspected” and “NV-HAP possible” were treated as a diagnosis of NV-HAP according to a reviewer. We created a matrix plot of intersecting sets using UpSetR to visualize the degree to which the electronic surveillance definition and the 6 existing definitions overlap.17 All statistical analyses were performed using RStudio version 1.4 software (RStudio, PBC, Boston, MA, 2021).18
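Once each definition is represented as a set of case identifiers, the UpSetR call itself is brief. A hedged sketch with toy sets follows; the real inputs were the 250 reviewed cases.

```r
# Visualizing overlap among pneumonia definitions with UpSetR (Conway et al).
# The sets of case IDs below are toy examples, not study data.
library(UpSetR)

definitions <- list(
  deterioration = c(1, 2, 3, 4, 5),
  cdc_nhsn      = c(1, 2, 4),
  reviewer      = c(1, 2, 3, 4),
  clinician     = c(1, 4),
  discharge_dx  = c(1, 3),
  claims        = c(1)
)

# fromList() converts the named sets into a binary membership matrix
upset(fromList(definitions), nsets = 6, order.by = "freq")
```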

Analysis of sources of discordance

Among the cases in which the 2 clinician reviewers (S.E.S. and B.E.J.) disagreed on whether a patient had NV-HAP according to CDC-NHSN criteria, reviewer assessment, or clinician documentation, the 2 reviewers conducted independent secondary reviews to identify sources of disagreement. These secondary reviews, recorded as free-text entries, were then classified jointly by the reviewers into categories to identify and explore the discrepancies posed by human review methods. False-positive cases, those identified by the electronic NV-HAP surveillance definition in which both reviewers felt that NV-HAP was unlikely, were secondarily reviewed in a similar fashion: the 2 clinician reviewers reviewed the cases again for alternative causes of the clinical event identified by the surveillance definition and classified cases into categories defined in the preliminary review.

Results

Implementation of electronic surveillance definition

Among 3.1 million hospitalizations and 17.9 million hospital days, 2.3 million hospitalizations had a length of stay ≥3 days and 14,023 met the electronic surveillance definition for NV-HAP, for an incidence of 0.45 per 100 admissions and 0.78 per 1,000 hospital days. Among the 2.7 million hospitalizations occurring after October 1, 2015, 11,264 cases of NV-HAP were detected using the claims-based definition, for an incidence of 0.42 per 100 admissions and 0.73 per 1,000 hospital days (Fig. 1).

Fig. 1. Incidence and overlap among non–ventilator-associated hospital-acquired pneumonia (NV-HAP) electronic surveillance definition and claims-based definition.

Variability among reviewers

Among the 250 cases selected for medical record review, interrater reliability between the 2 reviewers was moderate for CDC-NHSN criteria, NV-HAP according to reviewer assessment, and NV-HAP according to a treating clinician, with simple agreement ranging from 75% to 82% and PABAK ranging from 0.50 to 0.64 (Table 2). Interrater reliability was highest for presence of pneumonia in the discharge summary and presence of clinical deterioration (simple agreement of 86% and 89%, with PABAKs of 0.72 and 0.78, respectively).

Medical record review

The electronic surveillance definition for NV-HAP had moderate PPV compared to multiple definitions of NV-HAP by medical record review (Fig. 2, left margin, and Table 2). Clinical deterioration was deemed present in nearly all cases of electronic NV-HAP (87% by both reviewers, and 98% by at least 1 reviewer). CDC-NHSN criteria were met in 42% of cases according to both reviewers and in 67% of cases according to at least 1 reviewer. NV-HAP was present by reviewer assessment in 50% according to both reviewers and 71% according to at least 1 reviewer, and NV-HAP according to a treating clinician was present in 42% according to both reviewers and 60% according to at least 1 reviewer. A pneumonia diagnosis was listed in the discharge summary in less than half of all cases (35% according to both reviewers, 49% according to at least 1 reviewer). Among the 215 cases occurring after October 1, 2015, only 7.9% of reviewed patients were also identified by the claims-based definition (Table 2).

Fig. 2. Matrix layout for all intersections of the 6 pneumonia definitions. Overlap of cases meeting any of the 6 pneumonia definitions among cases meeting the electronic surveillance definition for non–ventilator-associated hospital-acquired pneumonia (NV-HAP). Each horizontal bar (left margin) represents the number of cases meeting each of the 6 definitions of pneumonia. Each vertical column represents the number of cases meeting multiple definitions, indicated by black dots in the matrix.

We found substantial but imperfect overlap between the existing definitions of NV-HAP in our medical record review (Fig. 2). Ten cases were positive by all 6 definitions, and 79 cases met all definitions except for the claims-based definition. Collectively, 206 (82%) of 250 cases met at least 1 of the reviewed definitions of NV-HAP (CDC-NHSN criteria, reviewer, clinician, or discharge summary diagnosis). Incorporating the claims-based definition in addition to chart review did not identify additional cases. There was more overlap between clinical criteria and bedside clinician diagnosis than there was with discharge or claims-based diagnosis: 123 cases had clinical deterioration, CDC-NHSN criteria, NV-HAP according to reviewer, and treating clinician diagnosis, versus 99 cases with these clinical criteria and a discharge summary or claims-based diagnosis. Moreover, 24 cases met all clinical criteria of NV-HAP by CDC-NHSN and reviewer diagnosis and clinical deterioration but lacked a diagnosis of pneumonia in the medical record according to treating clinician, discharge summary, or diagnostic coding.

Sources of discordance between reviewers

Among 168 cases in which at least 1 reviewer thought CDC-NHSN criteria were met, reviewers disagreed on the CDC-NHSN criteria in 62 cases (37%). The most common source of discordance was the interpretation of chest imaging reports (60%). Among 178 cases in which at least 1 reviewer believed NV-HAP was present, reviewers disagreed on the NV-HAP diagnosis in 54 cases (30%); the most common source of discordance was again the interpretation of chest imaging reports (56%). Among 151 cases in which at least 1 reviewer believed the treating clinicians diagnosed NV-HAP, reviewers disagreed in 45 cases (30%); this discordance arose from differences between reviewers in interpreting the clinicians’ attribution of the deterioration, including to sepsis, aspiration, and pulmonary edema.

Sources of false-positive NV-HAP determinations

Among the 250 cases flagged by electronic NV-HAP surveillance criteria that underwent medical record review, both reviewers deemed NV-HAP not present on final review in 72 cases (29%). Of these, 26 cases (36%) were attributable to perioperative airway management with increased respiratory support and antibiotics. The other false-positive results were attributable to sepsis or acute respiratory distress syndrome not caused by pneumonia (N = 22, 31%), community-acquired pneumonia or pneumonia present on arrival (N = 6, 8%), heart failure or pulmonary edema (N = 5, 7%), airway protection related to encephalopathy (N = 5, 7%), cardiac arrest (N = 2, 3%), chronic obstructive pulmonary disease or asthma (N = 2, 3%), ventilator-associated pneumonia (VAP) in which mechanical ventilation was not documented (N = 3, 4%), and progression of malignancy (N = 1, 1%).

Discussion

In a detailed chart-review analysis of an electronic surveillance definition of NV-HAP using clinical data in a large healthcare system, we found moderate correlation between the electronic NV-HAP definition and existing manual surveillance criteria. The PPV was as high as 82% using the most permissive definition (NV-HAP according to CDC-NHSN criteria, reviewer assessment, treating clinician diagnosis, or discharge summary diagnosis according to at least 1 reviewer) but as low as 42% using the strictest definition (both reviewers agreed that CDC-NHSN criteria were met). In contrast, a claims-based strategy to identify NV-HAP using diagnosis codes detected <10% of patients flagged by the electronic surveillance definition and correlated poorly with the other definitions. The variable PPV of the electronic surveillance criteria mirrors the high reviewer variability that we found in all strategies for identifying clinical diagnoses of NV-HAP. Agreement between reviewers was moderate regardless of whether reviewers were applying formal CDC-NHSN criteria (κ = 0.50), assessing whether bedside clinicians diagnosed NV-HAP (κ = 0.64), or assessing whether the discharge summary documented pneumonia (κ = 0.72). These findings underscore the complexity and subjectivity of NV-HAP diagnosis and surveillance using manual chart review, even when trained reviewers apply formal criteria.

The moderate accuracy and reviewer variability that we detected for the electronic criteria are similar to those of other definitions used to identify hospital-acquired pneumonia, including facility reporting, diagnosis codes, and medical record review. In a retrospective chart review by See et al,19 CDC medical epidemiologists independently reviewed 250 cases reported to the CDC-NHSN with pneumonia or lower respiratory tract infection and found that 8% of reported adult pneumonia cases did not meet CDC-NHSN criteria for NV-HAP and that 15% lacked clinician diagnoses. Similarly, Wolfensberger et al20 found that the PPV of ICD codes for NV-HAP was 35% and sensitivity was 59% compared to validated surveillance definitions. A systematic review summarizing the accuracy of diagnosis codes for NV-HAP reported similar performance, with sensitivity and specificity of 40% compared to clinical review.5

To address the problems with diagnosis codes and manual evaluation of medical records, others have begun to develop approaches that augment or replace them. Wolfensberger et al21 validated a semi-automated surveillance system for NV-HAP: by using an EHR-based surveillance definition to identify patients at risk for NV-HAP, they were able to rule out NV-HAP in 94% of patients and substantially reduce the workload of manual review while maintaining high sensitivity and negative predictive value (NPV).21 Consistent with our study, Ramirez Batlle et al7 found similar accuracy of electronic surveillance criteria for NV-HAP and CDC-NHSN criteria relative to expert chart review at a single center among 120 cases with oxygen deterioration: the electronic surveillance definition demonstrated sensitivity of 71%, PPV of 48%, and NPV of 90%, whereas the CDC-NHSN definition demonstrated sensitivity of 61%, PPV of 59%, and NPV of 88%.7 These findings raise the possibility that EHR-based surveillance strategies could improve reproducibility and efficiency without dramatically reducing accuracy.

We detected substantial reviewer variability despite a rigorous training process and a formal consensus guide. This finding mirrors previous observations of the low reliability of human assessments of hospital-acquired pneumonia.8,10,22 Klompas8 reported 62% agreement with κ = 0.40 among 3 infection control personnel using CDC criteria for the identification of VAP. Kerlin et al10 reported interreviewer agreement of 66%–83%, with a κ of 0.12 among infection preventionists and a κ of 0.34 among intensivists assessing VAP. In the same vein, humans demonstrate substantial variability in identifying pneumonia by chest imaging, both among reviewers interpreting reports and among radiologists evaluating images.19,23 Human review has historically been considered the gold standard for case detection, but our study adds to the growing evidence that it may not be an ideal form of measurement.24 High levels of disagreement between reviewers, despite a common framework for applying agreed-upon definitions, demonstrate the subjectivity of pneumonia diagnosis and the difficulty human reviewers have in applying complex definitions consistently. This finding supports the development of surveillance approaches that are independent of human review, to increase consistency, reduce burden, and ensure scalability.25

Our study had several limitations. The small sample of reviewed charts may not fully represent the variability of the population. Additionally, by restricting our analysis to cases meeting the electronic NV-HAP surveillance definition, we did not assess its sensitivity, which could have been affected by missing data. To provide a reliable measure amenable to large-scale examination of system-wide quality improvement interventions, electronic surveillance requires high-quality, stable clinical data that are routinely collected and entered without variation across settings or time. The criteria require physical signs of pneumonia (oxygenation, WBC count, temperature) as well as clinical recognition of and responses to those signs (antibiotic use and chest imaging); thus, variation in diagnosis and treatment patterns across settings or time could also influence the surveillance measure. Although we have previously validated several of the data elements used for the surveillance criteria,26 continuous validation and analysis of variation in the quality of detailed clinical data across settings, systems, and time is essential before facility comparisons or intervention tracking can be pursued. Finally, our estimates of the accuracy of surveillance strategies are limited by the challenge of identifying a reference standard for “true” pneumonia. Although it does not entirely circumvent these challenges, electronic surveillance that does not rely on diagnostic labels increases reproducibility and efficiency, with accuracy that appears consistent with other approaches.

Our findings have important implications for clinical care and public health. NV-HAP is one of the most common and morbid hospital-acquired infections.1,2,27 Robust prevention programs are needed, and they depend on robust surveillance to measure and inform progress.28–32 We cannot improve what we cannot measure, and measurement must be timely and consistent as well as accurate. Implementation of systemwide surveillance and prevention efforts is limited by the poor reliability and validity of current approaches, a reflection of the highly variable and often subjective clinical diagnosis of pneumonia. Electronic surveillance has the potential advantage of being more reproducible and more amenable to large-scale examination of systemwide quality-improvement interventions. Our analysis highlights the ongoing challenge of accurately identifying pneumonia but also suggests a potential strategy to increase the scale, efficiency, and reliability of surveillance. No surveillance approach for NV-HAP is perfect. However, applying clinical criteria to data that are routinely entered into the EHR may provide a practical means to characterize the frequency and morbidity of NV-HAP, catalyze prevention programs, and reliably measure impacts on NV-HAP rates and outcomes.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/ice.2022.302

Acknowledgments

Financial support

This work was supported by a grant from the US Centers for Disease Control and Prevention (grant no. 200-2019-05998).

Conflicts of interest

C.R. reports royalties from UpToDate, Inc. (authoring chapters related to procalcitonin use in pneumonia), and consulting fees from Cytovale (sepsis diagnostics) and Pfizer (Lyme disease surveillance). M.K. reports royalties from UpToDate (authoring chapters on pneumonia). B.J. is supported by a VHA HSR&D career development award (no. 150HX001240); and the VA HSR&D Informatics, Decision-Enhancement, and Analytic Sciences (IDEAS) Center of Innovation (grant no. CIN 13-414). All remaining authors report no conflicts of interest relevant to this article.

Footnotes

PREVIOUS PRESENTATION: Components of this study were previously presented as an oral abstract at the Infectious Diseases Society of America (IDSA) annual meeting, IDWeek, held virtually September 29–October 3, 2021. Parts of this study were also presented as a poster at the American Thoracic Society 2021 International Conference on May 14–19, 2021, in San Diego, California.

References

1. Giuliano KK, Baker D, Quinn B. The epidemiology of nonventilator hospital-acquired pneumonia in the United States. Am J Infect Control 2018;46:322–327.
2. Baker D, Quinn B. Hospital-Acquired Pneumonia Prevention Initiative-2: incidence of nonventilator hospital-acquired pneumonia in the United States. Am J Infect Control 2018;46:2–7.
3. Micek ST, Chew B, Hampton N, Kollef MH. A case–control study assessing the impact of nonventilated hospital-acquired pneumonia on patient outcomes. Chest 2016;150:1008–1014.
4. Horan TC, Andrus M, Dudeck MA. CDC/NHSN surveillance definition of healthcare-associated infection and criteria for specific types of infections in the acute-care setting. Am J Infect Control 2008;36:309–332.
5. van Mourik MS, van Duijn PJ, Moons KG, Bonten MJ, Lee GM. Accuracy of administrative data for surveillance of healthcare-associated infections: a systematic review. BMJ Open 2015;5:e008424.
6. Ji W, McKenna C, Ochoa A, et al. Development and assessment of objective surveillance definitions for nonventilator hospital-acquired pneumonia. JAMA Netw Open 2019;2:e1913674.
7. Ramirez Batlle H, Klompas M, CDC Prevention Epicenters Program. Accuracy and reliability of electronic versus CDC surveillance criteria for non-ventilator hospital-acquired pneumonia. Infect Control Hosp Epidemiol 2020;41:219–221.
8. Klompas M. Interobserver variability in ventilator-associated pneumonia surveillance. Am J Infect Control 2010;38:237–239.
9. Tejerina E, Esteban A, Fernandez-Segoviano P, et al. Accuracy of clinical definitions of ventilator-associated pneumonia: comparison with autopsy findings. J Crit Care 2010;25:62–68.
10. Kerlin MP, Trick WE, Anderson DJ, et al. Interrater reliability of surveillance for ventilator-associated events and pneumonia. Infect Control Hosp Epidemiol 2017;38:172–178.
11. VHA facility quality and safety report: fiscal year 2012 data. Veterans’ Health Administration website. https://www.va.gov/HEALTH/docs/VHA_Quality_and_Safety_Report_2013.pdf. Published 2013. Accessed December 18, 2021.
12. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 1987;40:373–383.
13. VA Informatics and Computing Infrastructure (VINCI). US Department of Veterans’ Affairs website. https://www.hsrd.research.va.gov/for_researchers/vinci/. Accessed November 16, 2021.
14. Carey E, Blankenhorn R, Chen P, Munro S. Non–ventilator-associated hospital-acquired pneumonia incidence and health outcomes among US veterans from 2016–2020. Am J Infect Control 2022;50:116–119.
15. National Healthcare Safety Network (NHSN) patient safety component manual. US Centers for Disease Control and Prevention website. https://www.cdc.gov/nhsn/pdfs/pscmanual/pcsmanual_current.pdf. Updated 2023. Accessed February 6, 2023.
16. Byrt T, Bishop J, Carlin JB. Bias, prevalence and kappa. J Clin Epidemiol 1993;46:423–429.
17. Conway JR, Lex A, Gehlenborg N. UpSetR: an R package for the visualization of intersecting sets and their properties. Bioinformatics 2017;33:2938–2940.
18. RStudio: integrated development environment for R. RStudio website. https://www.rstudio.com/categories/integrated-development-environment/. Published 2021. Accessed February 6, 2022.
19. See I, Chang J, Gualandi N, et al. Clinical correlates of surveillance events detected by National Healthcare Safety Network pneumonia and lower respiratory infection definitions—Pennsylvania, 2011–2012. Infect Control Hosp Epidemiol 2016;37:818–824.
20. Wolfensberger A, Meier AH, Kuster SP, Mehra T, Meier MT, Sax H. Should International Classification of Diseases codes be used to survey hospital-acquired pneumonia? J Hosp Infect 2018;99:81–84.
21. Wolfensberger A, Jakob W, Faes Hesse M, et al. Development and validation of a semi-automated surveillance system-lowering the fruit for non–ventilator-associated hospital-acquired pneumonia (nvHAP) prevention. Clin Microbiol Infect 2019;25:1428.
22. Naidech AM, Liebling SM, Duran IM, Moore MJ, Wunderink RG, Zembower TR. Reliability of the validated clinical diagnosis of pneumonia on validated outcomes after intracranial hemorrhage. J Crit Care 2012;27:527.
23. Melbye H, Dale K. Interobserver variability in the radiographic diagnosis of adult outpatient pneumonia. Acta Radiol 1992;33:79–81.
24. Vassar M, Holzmann M. The retrospective chart review: important methodological considerations. J Educ Eval Health Prof 2013;10:12.
25. Schreiber M, Krauss D, Blake B, Boone E, Almonte R. Balancing value and burden: the Centers for Medicare & Medicaid Services electronic Clinical Quality Measure (eCQM) strategy project. J Am Med Inform Assoc 2021;28:2475–2482.
26. Jones BE, Haroldsen C, Madaras-Kelly K, et al. In data we trust? Comparison of electronic versus manual abstraction of antimicrobial prescribing quality metrics for hospitalized veterans with pneumonia. Med Care 2018;56:626–633.
27. Magill SS, O’Leary E, Janelle SJ, et al. Changes in prevalence of healthcare-associated infections in US hospitals. N Engl J Med 2018;379:1732–1744.
28. Kazaure HS, Martin M, Yoon JK, Wren SM. Long-term results of a postoperative pneumonia prevention program for the inpatient surgical ward. JAMA Surg 2014;149:914–918.
29. Munro SC, Baker D, Giuliano KK, et al. Nonventilator hospital-acquired pneumonia: a call to action. Infect Control Hosp Epidemiol 2021;42:991–996.
30. Wren SM, Martin M, Yoon JK, Bech F. Postoperative pneumonia—prevention program for the inpatient surgical ward. J Am Coll Surg 2010;210:491–495.
31. Munro S, Baker D. Reducing missed oral-care opportunities to prevent non–ventilator-associated hospital-acquired pneumonia at the Department of Veterans’ Affairs. Appl Nurs Res 2018;44:48–53.
32. Lacerna CC, Patey D, Block L, et al. A successful program preventing nonventilator hospital-acquired pneumonia in a large hospital system. Infect Control Hosp Epidemiol 2020;41:547–552.