The Galactic TeV source is currently unidentified. In an attempt to unveil its origin, we present here the most detailed study of the interstellar gas towards it, using data from the Mopra Southern Galactic Plane CO Survey, the 7- and 12-mm wavelength Mopra surveys, and the Southern Galactic Plane Survey of HI. Several components of atomic and molecular gas are found to overlap at various velocities along the line of sight. The CS(1-0) emission clumps confirm the presence of dense gas. Both correlation and anti-correlation between the gas and the TeV γ-ray emission have been identified in various gas tracers, enabling several origin scenarios for the TeV γ-ray emission from the source. For a hadronic scenario, the candidate SNR and the progenitor supernova remnant (SNR) of the pulsar require cosmic-ray (CR) enhancement factors, relative to the solar-neighbourhood CR flux value, to produce the TeV γ-ray emission. Assuming an isotropic diffusion model, CRs from both these SNRs require a slow diffusion coefficient, as found for other TeV SNRs associated with adjacent ISM gas. The morphology of the gas located at 3.8 kpc (the dispersion-measure distance to the pulsar) tends to anti-correlate with features of the TeV emission, making a leptonic scenario possible. Both pure hadronic and pure leptonic scenarios thus remain plausible.
Heart failure (HF) is a complex clinical syndrome that represents a major cause of morbidity and mortality in Western countries. Several nutraceuticals have shown interesting clinical results in HF prevention as well as in the treatment of the early stages of the disease, alone or in combination with pharmacological therapy. The aim of the present expert opinion position paper is to summarise the available clinical evidence on the role of phytochemicals in HF prevention and/or treatment that might be considered in those patients not treated optimally as well as in those with low therapy adherence. The level of evidence and the strength of recommendation of particular HF treatment options were weighed up and graded according to predefined scales. A systematic search strategy was developed to identify trials in PubMed (January 1970 to June 2019). The terms ‘nutraceuticals’, ‘dietary supplements’, ‘herbal drug’ and ‘heart failure’ or ‘left ventricular dysfunction’ were used in the literature search. The experts discussed and agreed on the recommendation levels. Available clinical trials reported that the intake of some nutraceuticals (hawthorn, coenzyme Q10, L-carnitine, D-ribose, carnosine, vitamin D, probiotics, n-3 PUFA and beet nitrates) might be associated with improvements in self-perceived quality of life and/or functional parameters such as left ventricular ejection fraction, stroke volume and cardiac output in HF patients, with minimal or no side effects. Those benefits tended to be greater in earlier HF stages. Available clinical evidence supports the usefulness of supplementation with some nutraceuticals to improve HF management in addition to evidence-based pharmacological therapy.
The Murchison Widefield Array (MWA) is an open-access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere, enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. Altogether, these programs recorded 20 000 h of observations, producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. We group these science opportunities by the four original science themes but also include ideas for directions outside these categories.
Structure and optical properties have been successfully determined for a series of niobium- and tantalum-containing layered alkaline-earth silicate compounds, Ba3(Nb6−xTax)Si4O26 (x = 0.6, 1.8, 3.0, 4.2, 5.4). The structure of this solid solution was found to be hexagonal P-62m (No. 189), with Z = 1. As x increases from 0.6 to 5.4, the lattice parameter a increases from 8.98804(8) to 9.00565(9) Å and c decreases from 7.83721(10) to 7.75212(12) Å. As a result, the cell volume decreases from 548.304(11) to 544.479(14) Å3. The (Nb/Ta)O6 distorted octahedra form continuous chains along the c-axis. These (Nb/Ta)O6 chains are in turn linked with the Si2O7 groups to form distorted pentagonal channels in which the Ba ions are found. These Ba2+ ions have full occupancy and a 13-fold coordination environment with neighboring oxygen sites. Another salient feature of the structure is the linear Si–O–Si chains. As x in Ba3(Nb6−xTax)Si4O26 increases, the bond valence sum (BVS) values of the Ba sites increase slightly (2.09–2.20), indicating that the cage becomes progressively smaller (over-bonding). While the SiO4 cages are also slightly smaller than ideal (BVS range from 4.16 to 4.19), the (Nb/Ta)O6 octahedral cages are slightly larger than ideal (BVS range from 4.87 to 4.90), giving rise to an under-bonding situation. The bandgaps of the solid solution members were measured between 3.39 and 3.59 eV, and the bandgap of the x = 3.0 member was modeled by density functional theory techniques to be 3.07 eV. The bandgaps of these materials indicate that they are potential candidates for ultraviolet photocatalysts.
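The over- and under-bonding argument above rests on the bond valence sum, V = Σᵢ exp((R0 − Rᵢ)/b) with b ≈ 0.37 Å. A minimal sketch of the calculation follows; the R0 value is the tabulated Si4+–O parameter (1.624 Å, Brown–Altermatt), while the four Si–O bond lengths are hypothetical, for illustration only.

```python
import math

def bond_valence_sum(bond_lengths, r0, b=0.37):
    """Sum the individual bond valences exp((r0 - r)/b) over one
    coordination environment (Brown-Altermatt formulation)."""
    return sum(math.exp((r0 - r) / b) for r in bond_lengths)

# Hypothetical SiO4 tetrahedron, bond lengths in angstroms.
si_o_bonds = [1.60, 1.61, 1.62, 1.63]
print(f"BVS(Si) = {bond_valence_sum(si_o_bonds, r0=1.624):.2f}")
```

A BVS slightly above the formal valence of 4 would indicate a cage slightly smaller than ideal (over-bonding), mirroring the trend reported for the Ba and Si sites.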
We present observations of 50 deg² of the Mopra carbon monoxide (CO) survey of the Southern Galactic Plane, covering Galactic longitudes l = 300–350° and latitudes |b| ⩽ 0.5°. These data have been taken at 0.6 arcmin spatial resolution and 0.1 km s⁻¹ spectral resolution, providing an unprecedented view of the molecular clouds and gas of the Southern Galactic Plane in the 109–115 GHz J = 1–0 transitions of ¹²CO, ¹³CO, C¹⁸O, and C¹⁷O.
We present a series of velocity-integrated maps, spectra, and position-velocity plots that illustrate Galactic arm structures and trace masses on the order of ~10⁶ M⊙ deg⁻², and include a preliminary catalogue of C¹⁸O clumps located between l = 330–340°. Together with the information about the noise statistics of the survey, these data can be retrieved from the Mopra CO website and the PASA data store.
While previous work showed that the Centers for Disease Control and Prevention toolkit for carbapenem-resistant Enterobacteriaceae (CRE) can reduce spread regionally, these interventions are costly, and decision makers want to know whether and when economic benefits occur.
Orange County, California
Using our Regional Healthcare Ecosystem Analyst (RHEA)-generated agent-based model of all inpatient healthcare facilities, we simulated the implementation of the CRE toolkit (active screening of interfacility transfers) in different ways and estimated their economic impacts under various circumstances.
Compared to routine control measures, screening generated cost savings by year 1 when hospitals implemented screening after identifying ≤20 CRE cases (saving $2,000–$9,000) and by year 7 if all hospitals implemented it in a regionally coordinated manner after 1 hospital identified a CRE case (hospital perspective). Cost savings were achieved only if hospitals independently screened after identifying 10 cases (year 1, third-party payer perspective). Cost savings were achieved by year 1 if hospitals independently screened after identifying 1 CRE case and by year 3 if all hospitals coordinated and screened after 1 hospital identified 1 case (societal perspective). After a few years, all strategies cost less and had positive health effects compared to routine control measures; most strategies generated a positive cost-benefit each year.
Active screening of interfacility transfers garnered cost savings in year 1 of implementation when hospitals acted independently and by year 3 if all hospitals collectively implemented the toolkit in a coordinated manner. Despite taking longer to manifest, coordinated regional control resulted in greater savings over time.
We present observations of the first 10° of longitude in the Mopra CO survey of the southern Galactic plane, covering Galactic longitude l = 320–330° and latitude b = ±0.5°, and l = 327–330°, b = +0.5–1.0°. These data have been taken at 35-arcsec spatial resolution and 0.1 km s⁻¹ spectral resolution, providing an unprecedented view of the molecular clouds and gas of the southern Galactic plane in the 109–115 GHz J = 1–0 transitions of ¹²CO, ¹³CO, C¹⁸O, and C¹⁷O. Together with information about the noise statistics from the Mopra telescope, these data can be retrieved from the Mopra CO website and the CSIRO-ATNF data archive.
Even though the diagnostic radiologist examines black-and-white images, the information that is derived from the images is hardly ever black-and-white.
M.G. Myriam Hunink
In the previous chapters we focused on dichotomous test results, e.g., fecal occult blood is either present or absent. Test results can conveniently be dichotomized, and thinking in terms of dichotomous test results is generally helpful. Distinguishing patients with and without the target disease is useful for the purpose of subsequent decision making because most medical actions are dichotomous. In reality, however, most test results have more than two possible outcomes. Test results can be categorical, ordinal, or continuous. For example, categories of a diagnostic imaging test may be defined by key findings on the images. These categories may be ordered (intuitively) according to the observer’s confidence in the diagnosis, based on the findings. As an example, abnormalities seen on mammography are commonly reported as definitely malignant, probably malignant, possibly malignant, probably benign, or definitely benign. As we shall see later in this chapter, it makes sense to order the categories (explicitly) according to increasing likelihood ratio (LR). Some test results are inherently ordinal, e.g., the five categories of a Papanicolaou smear (test for cervical cancer) are ordinal. Results of biochemical tests are usually given on a continuous scale, which may be reduced to an ordinal scale by grouping the test results. Thus, a test result on a continuous scale can be considered a result on an ordinal scale with an infinite number of very narrow categories. Scores from prediction models are on an ordinal scale if there are a finite number of possible scores, and on a continuous scale if there are an infinite number of scores. When test results are categorical, ordinal, or continuous, we have to consider many possible test results Ri, where the number of results ranges from two (the case considered in Chapter 5 and Chapter 6: T+ and T−) up to any number of categories.
Interpretation of a test result on an ordinal scale can be considered a generalization of the situation of dichotomous test results.
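The idea of ordering categories by increasing likelihood ratio, and then revising the probability of disease for whichever category is observed, can be sketched as follows. The mammography category frequencies below are made up for illustration, not real data.

```python
# Interpreting an ordinal test result via its likelihood ratio (LR).
# Category frequencies are hypothetical, listed from most benign to most malignant.

categories = ["definitely benign", "probably benign", "possibly malignant",
              "probably malignant", "definitely malignant"]

p_given_disease    = [0.05, 0.10, 0.15, 0.30, 0.40]  # P(result | disease present)
p_given_no_disease = [0.50, 0.25, 0.15, 0.07, 0.03]  # P(result | disease absent)

def likelihood_ratio(i):
    return p_given_disease[i] / p_given_no_disease[i]

def posterior_probability(prior, i):
    """Revise a prior probability of disease given ordinal result i (Bayes, odds form)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio(i)
    return post_odds / (1 + post_odds)

for i, name in enumerate(categories):
    print(f"{name}: LR = {likelihood_ratio(i):.2f}, "
          f"posterior (prior 0.10) = {posterior_probability(0.10, i):.3f}")
```

Because the categories are already ordered by increasing LR, the posterior probability of disease rises monotonically as the reported category moves toward ‘definitely malignant’; a category with LR = 1 leaves the prior unchanged.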
Much of medical training consists of learning to cope with pervasive uncertainty and with the limits of medical knowledge. Making serious clinical decisions on the basis of conflicting, incomplete, and untimely data is routine.
Much of clinical medicine and health care involves uncertainties: some reducible, but some irreducible despite our best efforts and tests. Better decisions will be made if we are open and honest about these uncertainties, and develop skills in estimating, communicating, and working with such uncertainties. What types of uncertainty exist? Consider the following example.
It has been a hard week. It is time to go home when you are called to yet another heroin overdose: a young woman has been found unconscious outside your clinic. After giving intravenous (IV) naloxone (which reverses the effects of heroin), you are accidentally jabbed by the needle. After her recovery, despite your reassurances, the young woman flees for fear of the police. As the mêlée settles, the dread of human immunodeficiency virus (HIV) infection begins to develop. You talk to the senior doctor about what you should do. She is very sympathetic, and begins to tell you about the risks and management. The good news is that, even if the patient was HIV-positive, a needlestick injury rarely leads to HIV infection (about 3 per 1000). And if she was HIV-positive, then a basic two-drug regime of antivirals, such as zidovudine (AZT) plus lamivudine, is likely to prevent most infections (perhaps 80%).
Unfortunately, the HIV status of the young woman who had overdosed is unknown. Since she was not a patient of your clinic, you are uncertain about whether she is infected, but think that it is possible since she is an IV drug user. The Centers for Disease Control and Prevention (CDC) guidelines (1) suggest: ‘If the exposure source is unknown, use of post-exposure prophylaxis should be decided on a case-by-case basis. Consider the severity of exposure and the epidemiologic likelihood of HIV.’ What do you do?
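The figures quoted above can be combined into an expected risk once we attach a probability to the unknown: how likely the source patient is to be HIV-positive. A minimal sketch, in which the 20% prevalence judgement is purely an assumption for illustration:

```python
# Expected risk of HIV infection after the needlestick, combining the
# chapter's figures (transmission ~3 per 1000 if the source is HIV-positive;
# prophylaxis prevents ~80% of infections) with an assumed probability
# that the source patient is HIV-positive.

p_transmission = 3 / 1000   # risk per needlestick from an HIV-positive source
ppx_efficacy = 0.80         # fraction of infections prevented by the two-drug regime

def infection_risk(p_source_positive, prophylaxis=False):
    risk = p_source_positive * p_transmission
    if prophylaxis:
        risk *= (1 - ppx_efficacy)
    return risk

# Hypothetical judgement: a 20% chance the source patient is HIV-positive.
print(infection_risk(0.20))                     # without prophylaxis
print(infection_risk(0.20, prophylaxis=True))   # with prophylaxis
```

Working through such a calculation makes the case-by-case trade-off in the CDC guideline explicit: the absolute risk reduction from prophylaxis scales directly with the judged likelihood that the source is infected.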
Essentially, all models are wrong, but some are useful.
George E. P. Box
As discussed in Chapter 8, ‘good decision analyses depend on both the veracity of the decision model and the validity of the individual data elements.’ The validity of each individual data element relies on the comprehensiveness of the literature search for the best and most appropriate study or studies, criteria for selecting the source studies, the design of the study or studies, and methods for synthesizing the data from multiple sources. Nonetheless, Sir Michael David Rawlins avers that ‘Decision makers have to incorporate judgements, as part of their appraisal of the evidence, in reaching their conclusions. Such judgements relate to the extent to which each of the components of the evidence base is “fit for purpose.” Is it reliable?’(1) Because the integration of a multitude of these ‘best available’ data elements forms the basis for model results, some individuals refer to decision analyses as black boxes, so this last question applies particularly to the overall model predictions. Consequently, assessing model validity becomes paramount. However, prior to assessing model validity, model construction requires attention to parameter estimation and model calibration. This chapter focuses on parameter estimation, calibration, and validation in the context of Markov and, more generally, state-transition models (Chapter 10) in which recurrent events may occur over an extended period of time. The process of parameter estimation, calibration, and validation is iterative: it involves both adjustment of the data to fit the model and adjustment of the model to fit the data.
Survival analysis involves determining the probability that an event such as death or disease progression will occur over time. The events modeled in survival analysis are called ‘failure’ events, because once they occur, they cannot occur again. ‘Survival’ is the absence of the failure event. The failure event may be death, or it may be death combined with a non-fatal outcome such as developing cancer or having a heart attack, in which case the absence of the event is referred to as event-free survival. Commonly used methods for survival analysis include life-table analysis, Kaplan–Meier product limit estimates, and Cox proportional hazards models. A survival curve plots the probability of being alive over time (Figure 11.1).
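To make the Kaplan–Meier product-limit idea concrete, here is a minimal sketch of the estimator; the ten patients' follow-up times and censoring indicators are made up for illustration.

```python
# Minimal Kaplan-Meier product-limit estimator.
# At each observed failure time t, survival is multiplied by
# (1 - deaths at t / number at risk at t); censored subjects leave
# the risk set without contributing a factor.

def kaplan_meier(times, events):
    """times: follow-up time per subject; events: 1 = failure observed, 0 = censored.
    Returns a list of (event_time, survival_probability) pairs."""
    data = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            s *= 1 - deaths / at_risk
            curve.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)  # skip all subjects at this time
    return curve

# Ten hypothetical patients (times in months; 0 marks a censored observation).
times  = [2, 3, 3, 5, 6, 7, 8, 9, 10, 12]
events = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
for t, s in kaplan_meier(times, events):
    print(f"t = {t}: S(t) = {s:.3f}")
```

The resulting step function is exactly the kind of survival curve plotted in Figure 11.1: it only drops at observed failure times, and censored patients thin out the denominator for later steps.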
Some treatment decisions are straightforward. For example, what should be done for an elderly patient with a fractured hip? Inserting a metal pin has dramatically altered the management: instead of lying in bed for weeks or months waiting for the fracture to heal while blood clots and pneumonia threatened, the patient is now ambulatory within days. The risks of morbidity and mortality are both greatly reduced. However, many treatment decisions are complex. They involve uncertainties and trade-offs that need to be carefully weighed before choosing. Tragic outcomes may occur no matter which choice is made, and the best that can be done is to minimize the overall risks. Such decisions can be difficult and uncomfortable to make. For example, consider the following historical dilemma.
Benjamin Franklin and smallpox
Benjamin Franklin argued implicitly in favor of the application to individual patients of probabilities based on previous experience with similar groups of patients. Before Edward Jenner’s discovery in 1796 of cowpox vaccination for smallpox, it was known that immunity from smallpox could be achieved by a live smallpox inoculation, but the procedure entailed a risk of death. When a smallpox epidemic broke out in Boston in 1721, the physician Zabdiel Boylston consented, at the urging of the clergyman Cotton Mather, to inoculate several hundred citizens. Mather and Boylston reported their results (1):
Out of about ten thousand Bostonians, five thousand seven hundred fifty-nine took smallpox the natural way. Of these, eight hundred eighty-five died, or one in seven. Two hundred eighty-six took smallpox by inoculation. Of these, six died, or one in forty-seven.
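The arithmetic behind Mather and Boylston's quoted risks is worth checking; the 'one in seven' is a rounding of roughly one in six and a half. A short verification using only the counts in the report:

```python
# Checking the risks reported by Mather and Boylston (1721 Boston epidemic).
natural_cases, natural_deaths = 5759, 885
inoculated_cases, inoculated_deaths = 286, 6

natural_risk = natural_deaths / natural_cases          # death risk, natural smallpox
inoculated_risk = inoculated_deaths / inoculated_cases # death risk, inoculation

print(f"natural smallpox: 1 death in {natural_cases / natural_deaths:.1f}")
print(f"inoculation:      1 death in {inoculated_cases / inoculated_deaths:.1f}")
print(f"natural risk is {natural_risk / inoculated_risk:.1f} times the inoculation risk")
```

The roughly sevenfold difference in death risk is the group-level evidence that Franklin argued should be applied to decisions about individual patients.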
The interpretation of new information depends on what was already known about the patient.
Diagnostic information and probability revision
Physicians have at their disposal an enormous variety of diagnostic information to guide them in decision making. Diagnostic information comes from talking to the patient (symptoms, such as pain, nausea, and breathlessness), examining the patient (signs, such as abdominal tenderness, fever, and blood pressure), and from diagnostic tests (such as blood tests, X-rays, and electrocardiograms (ECGs)) and screening tests (such as Papanicolaou smears for cervical cancer or cholesterol measurements).
Physicians are not the only ones that have to interpret diagnostic information. Public policy makers in health care are equally concerned with understanding the performance of diagnostic tests. If, for example, a policy maker is considering a screening program for lung cancer, he/she will need to understand the performance of the diagnostic tests that can detect lung cancer in an early phase of the disease. In public policy making, other types of ‘diagnostic tests’ may also be relevant. For example, a survey with a questionnaire in a population sample can be considered analogous to a diagnostic test. And performing a trial to determine the efficacy of a treatment is in fact a ‘test’ with the goal of getting more information about that treatment.
Before ordering a test ask: What will you do if the test is positive? What will you do if the test is negative? If the answers are the same, then don’t do the test.
Poster in an Emergency Department
In the previous chapter we looked at how to interpret diagnostic information such as symptoms, signs, and diagnostic tests. Now we need to consider when such information is helpful in decision making. Even if they reduce uncertainty, tests are not always helpful. If used inappropriately to guide a decision, a test may mislead more than it leads. In general, performing a test to gain additional information is worthwhile only if two conditions hold: (1) at least one decision would change given some test result, and (2) the risk to the patient associated with the test is less than the expected benefit that would be gained from the subsequent change in decision. These conditions are most likely to be fulfilled when we are confronted with intermediate probabilities of the target disease, that is, when we are in a diagnostic ‘gray zone.’ Tests are least likely to be helpful either when we are so certain a patient has the target disease that the negative result of an imperfect test would not dissuade us from treating, or, conversely, when we are so certain that the patient does not have the target disease that a positive result of an imperfect test would not persuade us to treat. These concepts are illustrated in Figure 6.1, which divides the probability of a disease into three ranges:
do not treat (for the target disease) and do not test, because even a positive test would not persuade us to treat;
test, because the test will help with treatment decisions or with follow-up; and
treat and do not test, because even a negative test would not dissuade us from treating.
Treat implies patient management as if disease is present and may imply initiating medical therapy, performing a therapeutic procedure, advising a lifestyle or other adjuvant intervention, or a combination of these. Do not treat implies patient management as if disease is absent and usually means risk factor management, lifestyle advice, self-care and/or watchful waiting.
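The three probability ranges of Figure 6.1 amount to a simple decision rule. In the sketch below the two threshold values are illustrative placeholders; in practice they follow from the test's sensitivity and specificity and from the harms and benefits of treatment.

```python
# The three ranges of Figure 6.1 as a decision rule.
# Threshold values are hypothetical, for illustration only.

TEST_THRESHOLD = 0.10        # below this probability: do not treat and do not test
TEST_TREAT_THRESHOLD = 0.60  # above this probability: treat and do not test

def management(p_disease):
    """Return the recommended strategy for a given probability of the target disease."""
    if p_disease < TEST_THRESHOLD:
        return "do not treat and do not test"
    elif p_disease < TEST_TREAT_THRESHOLD:
        return "test"
    else:
        return "treat and do not test"

for p in (0.02, 0.30, 0.85):
    print(f"P(disease) = {p}: {management(p)}")
```

Only in the middle range, the diagnostic ‘gray zone’, can a test result cross a threshold and therefore change management, which is precisely the condition the poster's two questions are probing.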
And take the case of a man who is ill. I call two physicians: they differ in opinion. I am not to lie down and die between them: I must do something.
How are decisions made in practice, and can we improve the process? Decisions in health care can be particularly awkward, involving a complex web of diagnostic and therapeutic uncertainties, patient preferences and values, and costs. It is not surprising that there is often considerable disagreement about the best course of action. One of the authors of this book tells the following story (1):
Being a cardiovascular radiologist, I regularly attend the vascular rounds at the University Hospital. It’s an interesting conference: the Professor of Vascular Surgery really loves academic discussions and each case gets a lot of attention. The conference goes on for hours. The clinical fellows complain, of course, and it sure keeps me from my regular work. But it’s one of the few conferences that I attend where there is a real discussion of the risks, benefits, and costs of the management options. Even patient preferences are sometimes (albeit rarely) considered.
And yet, I find there is something disturbing about the conference. The discussions always seem to go along the same lines. Doctor R. advocates treatment X because he recently read a paper that reported wonderful results; Doctor S. counters that treatment X has a substantial risk associated with it, as was shown in another paper published last year in the world’s highest-ranking journal in the field; and Doctor T. says that given the current limited health-care budget maybe we should consider a less expensive alternative or no treatment at all. They go around in circles for 10 to 15 minutes, each doctor reiterating his or her opinion. The professor, realizing that his fellows are getting irritated, finally stops the discussion. Practical chores are waiting; there are patients to be cared for. And so the professor concludes: ‘All right. We will offer the patient treatment X.’ About 30% of those involved in the decision-making process nod their heads in agreement; another 30% start bringing up objections which get stifled quickly by the fellows who really do not want an encore, and the remaining 40% are either too tired or too flabbergasted to respond, or are more concerned about another objective, namely their job security.