This chapter highlights some important aspects of the design and analysis of clinical trials, and sketches a number of relevant statistical concepts. A controlled clinical trial of a medical intervention should have at least one primary hypothesis that drives its design. Well-designed and well-executed trials include an unambiguous protocol approved by the Institutional Review Boards (IRBs) or Ethics Committees of the participating clinics, laboratories, and data centers. The chapter also describes the basic frequentist statistical testing paradigm used by the typical randomized clinical trial with particular reference to ideas necessary in selecting sample size. Most clinical trials study more than one outcome of interest. Many neurological clinical trials compare therapies with respect to time to occurrence of the primary outcome. In the past, few clinical trials were performed in the Bayesian framework, but Bayesian methods have become more widely used recently.
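The frequentist testing paradigm described above can be illustrated with a minimal sketch of a two-sided test comparing event proportions between two trial arms. This is a generic normal-approximation z-test, not a procedure taken from the chapter, and the counts used are invented for illustration.

```python
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for equality of two event proportions.

    Uses a pooled variance estimate under the null hypothesis of
    equal proportions; purely illustrative counts are used below.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical trial: 30/100 events in arm A vs 45/100 in arm B.
z, p = two_proportion_z_test(30, 100, 45, 100)
```

Under these made-up counts the test rejects at the conventional two-sided 0.05 level, the same decision rule a typical randomized trial's primary analysis would apply.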
To examine organizational factors and occupational characteristics associated with adherence to occupational safety guidelines recommending never recapping needles.
Mail surveys were conducted with healthcare workers (HCWs) and infection control professionals (ICPs).
The surveys were conducted at all non-federal general hospitals in Iowa, except one tertiary-care hospital. Survey data were linked to annual survey data of the American Hospital Association (AHA).
HCWs were sampled from statewide rosters of physicians, nurses, and laboratory workers in Iowa. Eligible HCWs worked in a setting and position in which they were likely to routinely handle needles. ICPs at all hospitals in the state were surveyed.
Ninety-nine ICPs responded (79% response rate). AHA data were available for all variables from 84 (85%) of the hospitals. Analyses were based on 1,454 HCWs who identified one of these hospitals as their primary hospital (70% response rate). Analyses were conducted using multiple logistic regression. Positive predictors of consistent adherence included infection control personnel hours per full-time–equivalent employee (odds ratio [OR], 1.03), frequency of standard precautions education (OR, 1.11), facilities providing personal protective equipment (OR, 1.82), facilities using needleless intravenous systems (OR, 1.42), and management support for safety (OR, 1.05). Negative predictors were use of “blood and body fluid precautions” isolation category (OR, 0.74) and increased job demands (OR, 0.90).
Healthcare organizations can improve staff safety by investing wisely in educational programs regarding approaches to minimize these risks, providing protective equipment, and eliminating the use of blood and body fluid precautions as an isolation policy.
To describe hospital practices and policies relating to bloodborne pathogens and current rates of occupational exposure among healthcare workers.
Participants and Methods:
Hospitals in Iowa and Virginia were surveyed in 1996 and 1997 about Standard Precautions training programs and compliance. The primary outcome measures were rates of percutaneous injuries and mucocutaneous exposures.
153 (64%) of 240 hospitals responded. New employee training was offered no more than twice per year by nearly one third. Most (79%-80%) facilities monitored compliance of nurses, housekeepers, and laboratory technicians; physicians rarely were trained or monitored. Implementation of needlestick prevention devices was the most common action taken to decrease sharps injuries. Over one half of hospitals used needleless intravenous systems; larger hospitals used these significantly more often. Protected devices for phlebotomy or intravenous placement were purchased by only one third. Most (89% of large and 80% of small) hospitals met the recommended infection control personnel-to-bed ratio of 1:250. Eleven percent did not have access to postexposure care during all working hours. Percutaneous injury surveillance relied on incident reports (99% of facilities) and employee health records (61%). The annual reported percutaneous injury incidence rate from 106 hospitals was 5.3 injuries per 100 personnel. Compared to single tertiary-referral institution rates determined more than 5 years previously, current injury rates remain elevated in community hospitals.
Healthcare institutions need to commit sufficient resources to Standard Precautions training and monitoring and to infection control programs to meet the needs of all workers, including physicians. Healthcare workers clearly remain at risk for injury. Further effective interventions are needed for employee training, improving adherence, and providing needlestick prevention devices.
Introduction
In order to study the association between disease status (eg, nosocomial infection) and some exposure variable (eg, number of days on urinary catheter), it is often necessary to take into account other variables that may influence either the disease status or the exposure variable. For example, Freeman et al described a retrospective (in their terminology “case-referent”) study of a neonatal population. In this study, cases of nosocomial infection are selected in addition to corresponding control individuals who did not experience a nosocomial infection. All patients were selected from a neonatal intensive care unit, and the goal was to study the role of umbilical artery catheterization and its association with nosocomial infection. As noted by Freeman et al, this is a complex question because the risk of nosocomial infection might reasonably depend not only on the duration of catheterization, but also on the birth weight of the infant.
Previous papers in this series have discussed the design of epidemiologic studies and analysis for data that can be presented in a 2 × 2 table. The purpose of this paper is to explain how sample sizes are determined for unmatched prospective and retrospective studies. One of the most common questions asked in planning a project is “What sample size do I need for my study?” Determining the correct sample size is an important consideration that is best resolved prior to data collection. In this paper we assume the goal of the study is to examine the association between a dichotomous risk factor and the presence or absence of disease; therefore, the analysis will be that of a 2 × 2 table. We also assume throughout that sample sizes are to be equal for the two comparison groups (ie, equal numbers of exposed and unexposed for a prospective study, and equal numbers of cases and controls for a retrospective study). Sample size curves used to determine sample sizes will be presented. The results here are based on a paper by Schlesselman that contains sample size formulae for these study designs, and to which the reader may refer for an excellent technical discussion of their derivation.
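The per-group sample size for comparing two proportions in a 2 × 2 table can be sketched with the standard normal-approximation formula (of the kind tabulated by Schlesselman); the sketch below assumes equal group sizes and a two-sided test, and omits the continuity correction.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size to detect p1 vs p2 with equal groups.

    Normal-approximation formula for a two-sided test, without
    continuity correction; p1 and p2 are the anticipated event
    proportions in the two comparison groups.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_b = NormalDist().inv_cdf(power)           # power term
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Example: detect 10% vs 20% event proportions at 80% power.
n = sample_size_two_proportions(0.10, 0.20)
```

For 10% versus 20% at 80% power and two-sided alpha of 0.05 this gives 199 subjects per group, matching the usual tabulated value for that configuration.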
Clinical epidemiologic studies often focus on the identification of risk factors associated with hospital-acquired infections and the subsequent effects of preventive measures. Two study designs commonly used are the prospective design and the retrospective design. Both of these designs and the associated data collection issues were defined and discussed in the first article of this series. This article reviews the statistical analysis for a dichotomous risk factor and the presence or absence of a given disease for unmatched data. We focus our attention primarily on the unmatched prospective study, although we present formulae for the retrospective study as well.
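The analysis of a dichotomous risk factor against disease presence in an unmatched study reduces to a 2 × 2 table; a common summary is the odds ratio with a Woolf (logit) confidence interval. The sketch below uses invented counts, not data from any study in this series.

```python
from math import exp, log, sqrt

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf's logit method).

    2x2 table layout (counts below are made up for illustration):
                  disease   no disease
      exposed        a          b
      unexposed      c          d
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_with_ci(30, 70, 15, 85)
```

A confidence interval that excludes 1.0, as in this made-up example, corresponds to a statistically significant association at the 5% level.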
Much of clinical and hospital epidemiology involves the identification and
enumeration of cases or the comparison of case frequencies between two or
more groups of interest. Because both of these activities involve the use of
statistics, it is important to pay careful attention to biostatistical
issues involved in the collection and the analysis of such data.
In this article, the first in a series of biostatistical papers, we discuss
some general issues that are important in the design, analysis, and
interpretation of clinical epidemiologic data. Subsequent papers in this
series will deal with specific methods of analysis, examples of these
methods of analysis, and limitations and interpretations of the methods.
Schizophrenia and affective disorders selected according to the Feighner criteria can be differentiated on the basis of 40-year outcome. Within schizophrenia the presence of disorganized thoughts at the index admission was associated with poor outcome, whereas better outcome was associated with the presence of delusions or hallucinations. Within the affective disorders, bipolar patients with grandiose delusions or ideas showed a poor outcome; a better outcome was found in unipolar patients with complaints of fatiguability or tiredness at the time of the index admission.
Causes of death were studied in a cohort of 200 schizophrenic, 100 manic, and 225 depressive patients who were followed in a historical prospective study. These patients were admitted between 1934 and 1944 and were studied 30 to 40 years later. Five cause of death categories were considered in this analysis: (1) unnatural deaths, (2) neoplasms, (3) diseases of the circulatory system, (4) infective and parasitic diseases, and (5) other causes. For each cause of death, the expected number of deaths was calculated from vital statistics for the State of Iowa for the time period of follow-up. Observed numbers of deaths were contrasted with expected numbers of deaths to assess statistical significance for each diagnostic group. There was a significant excess of unnatural deaths in all diagnostic groups in both sexes, with the exception of female manics. This group, however, did show a significant excess of circulatory system deaths. Both male and female schizophrenics showed a substantial excess of infective disorder deaths.
Mortality data are presented from a four-decade follow-up study of 200 schizophrenic, 100 manic, and 225 depressive patients, and 160 surgical controls (80 appendicectomy; 80 herniorrhaphy). Data for this analysis were available on 648 (95 per cent) members of the study population. Using sex-age standardized mortality ratios (SMR), the mortality experience of the study population was compared with that of the state of Iowa, the geographical area served by the admitting medical facility for the study group. Results are presented for a four-decade period beginning in 1935–44 and ending in 1965–74. All three psychiatric groups had a significant increase in mortality risk. This was most pronounced in the first decade following admission, although schizophrenic patients, especially females, continued to show a significant excess of deaths throughout the entire four decades of the follow-up period. During no decade of the follow-up period did the mortality of the surgical controls differ significantly from that of the Iowa population.
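The observed-versus-expected comparison underlying these analyses can be sketched as a standardized mortality ratio with an approximate significance test. The sketch treats the observed count as Poisson with mean equal to the expected count and uses a square-root (variance-stabilizing) normal approximation; the counts below are invented, not taken from the Iowa cohort.

```python
from math import sqrt
from statistics import NormalDist

def smr_with_test(observed, expected):
    """Standardized mortality ratio with approximate two-sided p-value.

    Assumes the observed death count is Poisson with mean `expected`
    (derived from reference-population vital statistics) and applies
    the square-root normal approximation for the test.
    """
    smr = observed / expected
    z = 2 * (sqrt(observed) - sqrt(expected))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return smr, p

# Hypothetical group: 40 deaths observed where 25 were expected.
smr, p = smr_with_test(observed=40, expected=25)
```

An SMR above 1 with a small p-value, as in this made-up example, is the pattern reported for the psychiatric groups relative to the Iowa population.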