
Recorded Criteria as a “Gold Standard” for Sensitivity and Specificity Estimates of Surveillance of Nosocomial Infection: A Novel Method to Measure Job Performance

Published online by Cambridge University Press: 02 January 2015

N. Joel Ehrenkranz*
Affiliation:
Florida Consortium for Infection Control, South Miami, Florida
James M. Shultz
Affiliation:
Florida Consortium for Infection Control, South Miami, Florida; University of Miami School of Medicine, South Miami, Florida
Emily I. Richter
Affiliation:
Florida Consortium for Infection Control, South Miami, Florida
*Correspondence: 5901 SW 74th St, Suite 300, South Miami, FL 33143

Abstract

Objectives:

To compare the accuracy of infection control practitioners' (ICPs') classifications of operative site infection in Florida Consortium for Infection Control (FCIC) hospitals, in two time periods, 1990 to 1991 and 1991 to 1992, and to estimate the effect of duration of surveillance experience on that accuracy.

Methods:

Medical record reviewers examined records of all patients classified by an ICP as infected, to distinguish false-positives from true infections based on evidence of standard infection criteria and the ICP's contemporaneous clinical observations. Reviewers also examined a random sample of 100 records from patients classified as noninfected for evidence of undetected infections (false-negatives). These observations permitted estimates of the sensitivity and specificity of each ICP's classification of infection status.
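A minimal, hypothetical sketch of how sensitivity and specificity might be estimated from such an audit is shown below; the function and variable names, and the projection of the sampled miss rate onto the full noninfected pool, are illustrative assumptions, not the authors' actual procedure.

def estimate_accuracy(n_flagged, n_confirmed, n_not_flagged,
                      sample_size, missed_in_sample):
    """Estimate an ICP's sensitivity and specificity from audit counts.

    n_flagged        -- records the ICP classified as infected
    n_confirmed      -- of those, infections confirmed by record reviewers
    n_not_flagged    -- records the ICP classified as noninfected
    sample_size      -- noninfected records sampled for review (100 here)
    missed_in_sample -- undetected infections found in that sample
    """
    true_positives = n_confirmed
    false_positives = n_flagged - n_confirmed

    # Project the sampled miss rate onto the whole noninfected pool
    # (one plausible way to estimate false-negatives from the sample).
    miss_rate = missed_in_sample / sample_size
    false_negatives = miss_rate * n_not_flagged
    true_negatives = n_not_flagged - false_negatives

    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)
    return sensitivity, specificity

# Example with made-up counts: 20 records flagged, 18 confirmed,
# 900 not flagged, and 1 missed infection found in a 100-record sample.
print(estimate_accuracy(20, 18, 900, 100, 1))  # roughly (0.67, 0.998)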

Setting:

Fourteen FCIC community hospitals at which the performance of 16 ICPs was monitored.

Results:

There was a strong linear trend relating increasing sensitivity to numbers of years of ICP surveillance experience (P<.001). For ICPs with <4 years of experience, satisfactory sensitivity (≥80%) was reached in only one of 10 ICP-years of observation. For ICPs with ≥4 years' experience, satisfactory sensitivity was achieved for 14 of 18 person-years (P=.001). Estimated specificity was 97% to 100% for all ICP-years observed.
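The abstract does not state which statistical test produced P=.001 for the comparison of 1 of 10 versus 14 of 18 ICP-years; as an illustration only, Fisher's exact test applied to those counts gives a P value of about .001. The 2x2 table layout below is an assumption.

from scipy.stats import fisher_exact

# Rows: <4 years vs >=4 years of surveillance experience.
# Columns: ICP-years with satisfactory (>=80%) vs unsatisfactory sensitivity.
table = [[1, 9],    # <4 years: 1 of 10 ICP-years satisfactory
         [14, 4]]   # >=4 years: 14 of 18 ICP-years satisfactory

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"P = {p_value:.3f}")  # approximately 0.001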

Conclusions:

ICPs with <4 years of surveillance experience in FCIC community hospitals rarely achieved a satisfactory sensitivity estimate, whereas ICPs with ≥4 years' experience generally did. Monitoring ICP surveillance accuracy through retrospective medical record audits offers an objective approach to evaluating ICP performance and to interpreting infection rates at different hospitals.

Type
Original Articles
Copyright
Copyright © The Society for Healthcare Epidemiology of America 1995

