We present observations of 50 deg² of the Mopra carbon monoxide (CO) survey of the Southern Galactic Plane, covering Galactic longitudes l = 300–350° and latitudes |b| ⩽ 0.5°. These data have been taken at 0.6 arcmin spatial resolution and 0.1 km s⁻¹ spectral resolution, providing an unprecedented view of the molecular clouds and gas of the Southern Galactic Plane in the 109–115 GHz J = 1–0 transitions of ¹²CO, ¹³CO, C¹⁸O, and C¹⁷O.
We present a series of velocity-integrated maps, spectra, and position–velocity plots that illustrate Galactic arm structures and trace masses on the order of ~10⁶ M⊙ deg⁻², and include a preliminary catalogue of C¹⁸O clumps located between l = 330–340°. Together with information about the noise statistics of the survey, these data can be retrieved from the Mopra CO website and the PASA data store.
While previous work showed that the Centers for Disease Control and Prevention toolkit for carbapenem-resistant Enterobacteriaceae (CRE) can reduce spread regionally, these interventions are costly, and decision makers want to know whether and when economic benefits occur.
Orange County, California
Using our Regional Healthcare Ecosystem Analyst (RHEA)-generated agent-based model of all inpatient healthcare facilities, we simulated the implementation of the CRE toolkit (active screening of interfacility transfers) in different ways and estimated their economic impacts under various circumstances.
Compared to routine control measures, screening generated cost savings by year 1 when hospitals implemented screening after identifying ≤20 CRE cases (saving $2,000–$9,000) and by year 7 if all hospitals implemented it in a regionally coordinated manner after 1 hospital identified a CRE case (hospital perspective). Cost savings were achieved only if hospitals independently screened after identifying 10 cases (year 1, third-party payer perspective). Cost savings were achieved by year 1 if hospitals independently screened after identifying 1 CRE case and by year 3 if all hospitals coordinated and screened after 1 hospital identified 1 case (societal perspective). After a few years, all strategies cost less and had positive health effects compared to routine control measures; most strategies generated a positive cost–benefit each year.
Active screening of interfacility transfers garnered cost savings in year 1 of implementation when hospitals acted independently and by year 3 if all hospitals collectively implemented the toolkit in a coordinated manner. Despite taking longer to manifest, coordinated regional control resulted in greater savings over time.
We present observations of the first 10° of longitude in the Mopra CO survey of the southern Galactic plane, covering Galactic longitude l = 320–330° and latitude b = ±0.5°, and l = 327–330°, b = +0.5–1.0°. These data have been taken at 35 arcsec spatial resolution and 0.1 km s⁻¹ spectral resolution, providing an unprecedented view of the molecular clouds and gas of the southern Galactic plane in the 109–115 GHz J = 1–0 transitions of ¹²CO, ¹³CO, C¹⁸O, and C¹⁷O. Together with information about the noise statistics from the Mopra telescope, these data can be retrieved from the Mopra CO website and the CSIRO-ATNF data archive.
Medicine is a science of uncertainty and an art of probability.
Sir William Osler
Decision trees and Markov cohort models, as described and illustrated in the previous chapters, are essentially macrosimulation models. Such models simulate cohorts or groups of subjects, and their use has a number of limitations. Markov cohort models, for example, have ‘no memory’, implying that subjects in a particular state are treated as a homogeneous group. Techniques to overcome these limitations, such as expanding the number of states, using tunnel states, or using alternative modeling techniques, were discussed in Chapter 10. These techniques can become very complex when dealing with extensive heterogeneity within a population. Microsimulation using Monte Carlo analysis provides another powerful technique to account for heterogeneity across subjects. Microsimulation with Monte Carlo analysis was introduced in Chapter 10 as an alternative method for evaluating a Markov model. In this chapter it will be discussed at greater length in the context of simulating heterogeneity.
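The idea of first-order Monte Carlo microsimulation can be sketched as follows: instead of tracking a cohort's state distribution analytically, we walk individual subjects through the model one at a time, each following its own random path. All transition probabilities, utilities, and the three-state structure below are illustrative assumptions, not values from the chapter.

```python
import random

# Minimal first-order Monte Carlo microsimulation of a three-state
# Markov model (Well -> Sick -> Dead).  All numbers are illustrative.
P = {
    "Well": {"Well": 0.90, "Sick": 0.08, "Dead": 0.02},
    "Sick": {"Well": 0.10, "Sick": 0.70, "Dead": 0.20},
    "Dead": {"Dead": 1.0},
}
UTILITY = {"Well": 1.0, "Sick": 0.6, "Dead": 0.0}  # QALY weight per cycle


def simulate_subject(cycles=50, rng=random):
    """Walk one subject through the model and return accumulated QALYs."""
    state, qalys = "Well", 0.0
    for _ in range(cycles):
        if state == "Dead":
            break
        qalys += UTILITY[state]
        # Draw the next state from the current state's transition row.
        r, cum = rng.random(), 0.0
        for next_state, p in P[state].items():
            cum += p
            if r < cum:
                state = next_state
                break
    return qalys


random.seed(1)
n = 10_000
mean_qalys = sum(simulate_subject() for _ in range(n)) / n
print(f"Mean QALYs over {n:,} simulated subjects: {mean_qalys:.2f}")
```

Because each subject is simulated individually, heterogeneity is easy to add: transition probabilities or utilities can depend on each subject's own history or characteristics, which a cohort model cannot represent without expanding its state space.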
In the previous chapters we represented uncertainty with probabilities. Implicitly the assumption was that, even though we were unsure of whether an event would take place, we could nevertheless predict or estimate the probability (or relative frequency) that it would occur. In essence we were using deterministic models. In reality, however, we are also uncertain of the degree of uncertainty. In other words, rather than dealing with a fixed probability we are actually dealing with a distribution of possible values of probabilities. Not only are we uncertain about the probabilities we use in our models, but we are also uncertain about the effectiveness outcomes and cost estimates included in the analysis. Thus, every parameter value we enter into our models is better represented as a probabilistic variable rather than a deterministic variable. If there is a single uncertain parameter, e.g., the relative risk reduction of an intervention, then the 95% confidence interval (CI) of this parameter is commonly used to indicate the uncertainty of the effect. Uncertainty in two or more components requires more complex methods, such as Monte Carlo probabilistic sensitivity analysis, which we will also discuss in this chapter.
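The distinction between a deterministic and a probabilistic parameter can be illustrated with a small Monte Carlo probabilistic sensitivity analysis: rather than fixing the relative risk reduction (RRR) at a point estimate, we draw it repeatedly from a distribution and propagate the uncertainty to the model output. The baseline risk and the RRR distribution below are illustrative assumptions.

```python
import random

# Sketch of a Monte Carlo probabilistic sensitivity analysis.
random.seed(42)
baseline_risk = 0.20  # assumed 5-year event risk without treatment


def sampled_rrr(rng=random):
    # Suppose a trial reported RRR = 0.30 with a 95% CI of roughly
    # 0.10-0.50; approximate this with a normal distribution,
    # truncated to the plausible range [0, 1].
    return min(max(rng.gauss(0.30, 0.10), 0.0), 1.0)


# Each draw propagates one sampled RRR through the (trivial) model.
draws = sorted(baseline_risk * (1 - sampled_rrr()) for _ in range(10_000))
mean = sum(draws) / len(draws)
lo, hi = draws[249], draws[9749]  # empirical 95% interval
print(f"Treated risk: mean {mean:.3f}, 95% interval {lo:.3f}-{hi:.3f}")
```

With several uncertain parameters, the same loop simply samples each parameter from its own distribution on every iteration, which is exactly the probabilistic sensitivity analysis discussed later in the chapter.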
Values are what we care about. As such, values should be the driving force for our decision making. They should be the basis for the time and effort we spend thinking about decisions. But this is not the way it is. It is not even close to the way it is.
Value judgments underlie virtually all clinical decisions. Sometimes the decision rests on a comparison of probability alone, such as the probability of surviving an acute episode of illness. In such cases, there is a single outcome measure – the probability of immediate survival – that can be averaged out to arrive at an optimal decision. In most cases, however, decisions between alternative strategies require not only estimates of the probabilities of the associated outcomes, but also value judgments about how to weigh the benefits versus the harms, and how to incorporate other factors like individual preferences for convenience, timing, who makes decisions, who else is affected by the decision, and the like. Consider the following examples.
In previous chapters we have seen several applications of decision trees to solve clinical problems under conditions of uncertainty. Decision trees work well in analyzing chance events with limited recursion and a limited time horizon. The limited number of sequential decisions or chance nodes allows one to capture all the necessary information to maximize expected utility. However, when events can occur repeatedly over an extended time period, the decision-tree framework can become unmanageable. Many decision situations involve events occurring over the lifetime of the patient, thus extending far into the future. Life spans vary, but conventional trees require us to specify a fixed time horizon. The probabilities and utilities of these events may change over time and must be accounted for. This is the case for most chronic conditions. Examples include heart disease, Alzheimer’s disease, various cancers, diabetes, asthma, osteoporosis, human immunodeficiency virus (HIV), inflammatory bowel disease, multiple sclerosis and more. This chapter offers a methodology for dealing with recurring events and extended (variable) time horizons.
Consider a patient with peripheral arterial disease (PAD: obstruction of the arteries to the legs) for whom a decision has to be made for either bypass surgery or percutaneous intervention (PI). We assume that conservative treatment through an exercise regimen has not provided sufficient relief. A very simplified decision tree is presented in Figure 10.1. Following the choice of treatment, the patient may die as a result of the procedure (captured in the ‘mortality’ branches) or survive the procedure. If the patient survives, treatment may fail and the patient returns to the pre-procedure prognosis, or treatment may be successful and the patient is relieved of symptoms. If we consider some fixed time horizon like a year or five years, we can assign utilities to the three possible outcomes (success, failure, death) and calculate expected utilities to choose a preferred treatment. In the current structure, there is no explicit allowance for the time horizon we are considering, nor for the timing of the various events. Even if we consider a fixed time horizon of, say, five years, there surely is a different implication for prognosis if failure occurs in the first year versus the fifth year.
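Averaging out the simplified tree of Figure 10.1 can be written in a few lines. The probabilities and utilities below are illustrative placeholders, not the chapter's numbers: each treatment carries a procedural mortality, and survivors either succeed (symptom relief) or fail (return to the pre-procedure prognosis).

```python
# Expected utility of the simplified PAD decision tree (Figure 10.1).
# All probabilities and utilities are illustrative assumptions.
U_SUCCESS, U_FAILURE, U_DEATH = 1.0, 0.5, 0.0


def expected_utility(p_mortality, p_success_given_survive):
    """Average out the chance nodes of one treatment branch."""
    p_survive = 1 - p_mortality
    return (p_mortality * U_DEATH
            + p_survive * (p_success_given_survive * U_SUCCESS
                           + (1 - p_success_given_survive) * U_FAILURE))


eu_surgery = expected_utility(p_mortality=0.02, p_success_given_survive=0.90)
eu_pi = expected_utility(p_mortality=0.005, p_success_given_survive=0.75)
print(f"EU(surgery) = {eu_surgery:.3f}, EU(PI) = {eu_pi:.3f}")
# The strategy with the higher expected utility is preferred.
```

Note that this calculation shares the tree's limitation described in the text: it attaches a single utility to "failure" regardless of whether failure occurs in the first or the fifth year, which is precisely what motivates the Markov framework of this chapter.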
Some treatment decisions are straightforward. For example, what should be done for an elderly patient with a fractured hip? Inserting a metal pin has dramatically altered the management: instead of lying in bed for weeks or months waiting for the fracture to heal while blood clots and pneumonia threatened, the patient is now ambulatory within days. The risks of morbidity and mortality are both greatly reduced. However, many treatment decisions are complex. They involve uncertainties and trade-offs that need to be carefully weighed before choosing. Tragic outcomes may occur no matter which choice is made, and the best that can be done is to minimize the overall risks. Such decisions can be difficult and uncomfortable to make. For example, consider the following historical dilemma.
Benjamin Franklin and smallpox
Benjamin Franklin argued implicitly in favor of the application to individual patients of probabilities based on previous experience with similar groups of patients. Before Edward Jenner’s discovery in 1796 of cowpox vaccination for smallpox, it was known that immunity from smallpox could be achieved by a live smallpox inoculation, but the procedure entailed a risk of death. When a smallpox epidemic broke out in Boston in 1721, the physician Zabdiel Boylston consented, at the urging of the clergyman Cotton Mather, to inoculate several hundred citizens. Mather and Boylston reported their results (1):
Out of about ten thousand Bostonians, five thousand seven hundred fifty-nine took smallpox the natural way. Of these, eight hundred eighty-five died, or one in seven. Two hundred eighty-six took smallpox by inoculation. Of these, six died, or one in forty-seven.
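Mather and Boylston's figures can be checked directly; their "one in seven" and "one in forty-seven" are roundings of about 1 in 6.5 and 1 in 47.7.

```python
# Checking the reported smallpox case-fatality rates.
natural_deaths, natural_cases = 885, 5759   # natural smallpox
inoc_deaths, inoc_cases = 6, 286            # smallpox by inoculation

natural_rate = natural_deaths / natural_cases   # ~0.154
inoc_rate = inoc_deaths / inoc_cases            # ~0.021
print(f"Natural smallpox: {natural_rate:.3f}, about 1 in {natural_cases / natural_deaths:.1f}")
print(f"Inoculation:      {inoc_rate:.3f}, about 1 in {inoc_cases / inoc_deaths:.1f}")
print(f"Risk ratio: {natural_rate / inoc_rate:.1f}x higher without inoculation")
```

The roughly sevenfold difference in mortality is the group-level evidence Franklin argued should inform the decision for an individual patient.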
Before ordering a test ask: What will you do if the test is positive? What will you do if the test is negative? If the answers are the same, then don’t do the test.
Poster in an Emergency Department
In the previous chapter we looked at how to interpret diagnostic information such as symptoms, signs, and diagnostic tests. Now we need to consider when such information is helpful in decision making. Even if they reduce uncertainty, tests are not always helpful. If used inappropriately to guide a decision, a test may mislead more than it leads. In general, performing a test to gain additional information is worthwhile only if two conditions hold: (1) at least one decision would change given some test result, and (2) the risk to the patient associated with the test is less than the expected benefit that would be gained from the subsequent change in decision. These conditions are most likely to be fulfilled when we are confronted with intermediate probabilities of the target disease, that is, when we are in a diagnostic ‘gray zone.’ Tests are least likely to be helpful either when we are so certain a patient has the target disease that the negative result of an imperfect test would not dissuade us from treating, or, conversely, when we are so certain that the patient does not have the target disease that a positive result of an imperfect test would not persuade us to treat. These concepts are illustrated in Figure 6.1, which divides the probability of a disease into three ranges:
do not treat (for the target disease) and do not test, because even a positive test would not persuade us to treat;
test, because the test will help with treatment decisions or with follow-up; and
treat and do not test, because even a negative test would not dissuade us from treating.
Treat implies patient management as if disease is present and may imply initiating medical therapy, performing a therapeutic procedure, advising a lifestyle or other adjuvant intervention, or a combination of these. Do not treat implies patient management as if disease is absent and usually means risk factor management, lifestyle advice, self-care and/or watchful waiting.
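The three ranges of Figure 6.1 amount to a simple decision rule with two thresholds. The threshold values below are illustrative assumptions; in practice they are derived from the test's characteristics and the benefits and harms of treatment.

```python
# Sketch of the three-zone logic of Figure 6.1.  Threshold values
# are illustrative assumptions.
TESTING_THRESHOLD = 0.10    # below: even a positive test would not lead to treatment
TREATMENT_THRESHOLD = 0.80  # above: even a negative test would not withhold treatment


def management(pretest_probability):
    """Map a pre-test probability of disease to one of the three zones."""
    if pretest_probability < TESTING_THRESHOLD:
        return "do not treat, do not test"
    if pretest_probability > TREATMENT_THRESHOLD:
        return "treat, do not test"
    return "test"  # the diagnostic 'gray zone'


for p in (0.05, 0.40, 0.90):
    print(f"p = {p:.2f} -> {management(p)}")
```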
The interpretation of new information depends on what was already known about the patient.
Diagnostic information and probability revision
Physicians have at their disposal an enormous variety of diagnostic information to guide them in decision making. Diagnostic information comes from talking to the patient (symptoms, such as pain, nausea, and breathlessness), examining the patient (signs, such as abdominal tenderness, fever, and blood pressure), and from diagnostic tests (such as blood tests, X-rays, and electrocardiograms (ECGs)) and screening tests (such as Papanicolaou smears for cervical cancer or cholesterol measurements).
Physicians are not the only ones who have to interpret diagnostic information. Public policy makers in health care are equally concerned with understanding the performance of diagnostic tests. If, for example, a policy maker is considering a screening program for lung cancer, he or she will need to understand the performance of the diagnostic tests that can detect lung cancer in an early phase of the disease. In public policy making, other types of ‘diagnostic tests’ may also be relevant. For example, a survey with a questionnaire in a population sample can be considered analogous to a diagnostic test. And performing a trial to determine the efficacy of a treatment is in fact a ‘test’ with the goal of getting more information about that treatment.
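The probability revision mentioned in the heading above is Bayes' rule applied to a test result: a pre-test probability is combined with the test's sensitivity and specificity to give a post-test probability. The numbers in the example are illustrative.

```python
# Probability revision with Bayes' rule.  Example numbers are illustrative.
def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Return the revised probability of disease after a test result."""
    if positive:
        true_pos = pretest * sensitivity
        false_pos = (1 - pretest) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = pretest * (1 - sensitivity)
    true_neg = (1 - pretest) * specificity
    return false_neg / (false_neg + true_neg)


# A 10% pre-test probability, revised by a fairly accurate test.
p_pos = post_test_probability(pretest=0.10, sensitivity=0.90, specificity=0.95)
p_neg = post_test_probability(pretest=0.10, sensitivity=0.90, specificity=0.95,
                              positive=False)
print(f"After a positive result: {p_pos:.2f}")
print(f"After a negative result: {p_neg:.3f}")
```

The same calculation applies whether the "test" is an ECG, a screening questionnaire, or, as in the policy examples above, a population survey.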
Much of medical training consists of learning to cope with pervasive uncertainty and with the limits of medical knowledge. Making serious clinical decisions on the basis of conflicting, incomplete, and untimely data is routine.
Much of clinical medicine and health care involves uncertainties: some reducible, but some irreducible despite our best efforts and tests. Better decisions will be made if we are open and honest about these uncertainties, and develop skills in estimating, communicating, and working with such uncertainties. What types of uncertainty exist? Consider the following example.
It has been a hard week. It is time to go home when you are called to yet another heroin overdose: a young woman has been found unconscious outside your clinic. After giving intravenous (IV) naloxone (which reverses the effects of heroin), you are accidentally jabbed by the needle. After her recovery, despite your reassurances, the young woman flees for fear of the police. As the mêlée settles, the dread of human immunodeficiency virus (HIV) infection begins to develop. You talk to the senior doctor about what you should do. She is very sympathetic, and begins to tell you about the risks and management. The good news is that, even if the patient was HIV-positive, a needlestick injury rarely leads to HIV infection (about 3 per 1000). And if she was HIV-positive, then a basic two-drug regime of antivirals such as zidovudine (AZT) plus lamivudine is likely to prevent most infections (perhaps 80%).
Unfortunately, the HIV status of the young woman who had overdosed is unknown. Since she was not a patient of your clinic, you are uncertain about whether she is infected, but think that it is possible since she is an IV drug user. The Centers for Disease Control and Prevention (CDC) guidelines (1) suggest: ‘If the exposure source is unknown, use of post-exposure prophylaxis should be decided on a case-by-case basis. Consider the severity of exposure and the epidemiologic likelihood of HIV.’ What do you do?
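A rough expected-risk calculation, using the figures quoted in the scenario, shows how the unknown HIV status enters the decision. The source's probability of being HIV-positive (0.20 here) is an illustrative assumption; the transmission risk and prophylaxis effectiveness are the values mentioned in the text.

```python
# Expected needlestick risk under uncertainty about the source's HIV status.
# P_SOURCE_POSITIVE is an illustrative assumption; the other two figures
# are the estimates quoted in the scenario.
P_SOURCE_POSITIVE = 0.20   # assumed probability the source is HIV-positive
P_TRANSMISSION = 3 / 1000  # per-needlestick transmission risk if positive
PROPHYLAXIS_RRR = 0.80     # relative risk reduction from two-drug prophylaxis

risk_untreated = P_SOURCE_POSITIVE * P_TRANSMISSION
risk_treated = risk_untreated * (1 - PROPHYLAXIS_RRR)
print(f"Risk without prophylaxis: {risk_untreated:.5f}")
print(f"Risk with prophylaxis:    {risk_treated:.5f}")
```

Whether this absolute risk reduction justifies a month of antiviral side effects is exactly the kind of case-by-case trade-off the CDC guideline leaves to the decision maker.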
There is no question that financial and medical effects will both be considered when making health care decisions at all levels of policymaking; the only question is whether they will be considered well.
Elaine J. Power and John M. Eisenberg
Medical care entails benefits, harms, and costs. Until this chapter our approach has involved weighing benefits against harms for individuals and groups of patients and choosing the actions that provide the greatest expected health benefit. Now we extend our analysis to consider expressly the economic costs of health care and resource allocation decisions for populations.
As with all economic goods and services, the provision of health care consumes resources. Hospital beds, medical office facilities, medical equipment, pharmaceuticals, medical devices, and the time of physicians, nurses, other health-care workers, and family members all contribute to health care. The consumption of these resources constitutes the economic costs of health care.
It is surely a great criticism of our profession that we have not organized a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomized controlled trials.
Good decision analyses depend on both the veracity of the decision model and on the validity of the individual data elements. These elements may include probabilities (such as the pre-test probabilities, the sensitivity and specificity of diagnostic tests, the probability of an adverse event, and so on), estimates of effectiveness of interventions (such as the relative risk reduction), and the valuation of outcomes (such as quality of life, utilities, and costs). Often we lack the information needed for a confident assessment of these elements. Decision analysis, by structuring a decision problem, makes these gaps in knowledge apparent. Sensitivity analysis on these ‘soft’ numbers will also give us insight into which of these knowledge gaps is most likely to affect our decisions. These same gaps exist in less systematic decision making as well, but there is no convenient way to determine how our decisions should be affected. In this chapter we shall cover the basic methods for finding the best estimate for each of the different elements that may be included in a formal decision analysis or in less systematic decision making.
Sometimes, but not as often as one would like, the estimates one is looking for can be inferred from a published study or from a series of cases that someone has reported in the literature or recorded in a data bank. This is generally considered the most satisfactory way of assessing a probability, because it involves the use of quantitative evidence. Often we will have a choice of data sources, so it is useful to have some ‘rules’ to guide the choice of possible estimates. One helpful concept is the ‘hierarchy of evidence’ (see www.cebm.net) which explicitly ranks the available evidence; ‘perfect’ data will rarely be available, but we need to know how to choose the best from the available imperfect data. This choice will also need to be tempered by the practicalities and purpose of each decision analysis: what is feasible will differ with a range from the urgent individual patient decision to a national policy decision to fund an expensive new procedure.
Essentially, all models are wrong, but some are useful.
George E. P. Box
As discussed in Chapter 8, ‘good decision analyses depend on both the veracity of the decision model and the validity of the individual data elements.’ The validity of each individual data element relies on the comprehensiveness of the literature search for the best and most appropriate study or studies, criteria for selecting the source studies, the design of the study or studies, and methods for synthesizing the data from multiple sources. Nonetheless, Sir Michael David Rawlins avers that ‘Decision makers have to incorporate judgements, as part of their appraisal of the evidence, in reaching their conclusions. Such judgements relate to the extent to which each of the components of the evidence base is “fit for purpose.” Is it reliable?’(1) Because the integration of a multitude of these ‘best available’ data elements forms the basis for model results, some individuals refer to decision analyses as black boxes, so this last question applies particularly to the overall model predictions. Consequently, assessing model validity becomes paramount. However, prior to assessing model validity, model construction requires attention to parameter estimation and model calibration. This chapter focuses on parameter estimation, calibration, and validation in the context of Markov and, more generally, state-transition models (Chapter 10) in which recurrent events may occur over an extended period of time. The process of parameter estimation, calibration, and validation is iterative: it involves both adjustment of the data to fit the model and adjustment of the model to fit the data.
Survival analysis involves determining the probability that an event such as death or disease progression will occur over time. The events modeled in survival analysis are called ‘failure’ events, because once they occur, they cannot occur again. ‘Survival’ is the absence of the failure event. The failure event may be death, or it may be death combined with a non-fatal outcome such as developing cancer or having a heart attack, in which case the absence of the event is referred to as event-free survival. Commonly used methods for survival analysis include life-table analysis, Kaplan–Meier product limit estimates, and Cox proportional hazards models. A survival curve plots the probability of being alive over time (Figure 11.1).
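The Kaplan–Meier product-limit estimate mentioned above can be computed by hand for a small dataset. The follow-up times below are made up for illustration, and this minimal sketch assumes at most one failure per event time (no ties).

```python
# Minimal Kaplan-Meier product-limit estimate.  Each observation is
# (follow-up time, event), where event=1 is a failure and event=0 is
# censoring.  The data are illustrative; ties are not handled.
data = [(2, 1), (3, 0), (5, 1), (7, 1), (8, 0), (10, 1)]


def kaplan_meier(observations):
    """Return [(time, survival probability)] at each observed failure time."""
    obs = sorted(observations)
    at_risk, surv, curve = len(obs), 1.0, []
    for t, event in obs:
        if event:  # a failure: multiply in the conditional survival at t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # censored subjects also leave the risk set
    return curve


for t, s in kaplan_meier(data):
    print(f"t = {t:>2}: S(t) = {s:.3f}")
```

Censored subjects contribute to the risk set up to their censoring time but never cause the curve to step down, which is why the product-limit estimate differs from simply dividing survivors by the original sample size.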