Photodynamic therapy (PDT) is an alternative to traditional cancer treatments. This approach involves the use of photosensitizer (PS) agents and their interaction with light; as a consequence, cytotoxic reactive oxygen species (ROS) are generated that, in turn, destroy tumors. ZnO, on the other hand, is a biocompatible, nontoxic, and biodegradable material with the capability to generate ROS, specifically singlet oxygen (SO), which makes this material a promising candidate for 2-photon PDT. Doping ZnO with Li species is expected to induce defects in the host oxide structure that favor the formation of trap states, which should affect the electronic transitions related to the generation of SO. The present work reports the effect of the Li-doping level on the ZnO structure and on its capability to generate SO. Li-doped ZnO nanoparticles were synthesized under size-controlled conditions using a modified version of the polyol method. XRD measurements confirmed the development of well-crystallized wurtzite ZnO; the average crystallite size ranged between 13.3 nm and 14.2 nm, increasing with Li content. The corresponding band gap energy values, estimated from UV-vis measurements, decreased from 3.33 eV to 3.25 eV. Photoluminescence (PL) measurements of Li-ZnO revealed emission peaks centered at 363 nm, 390 nm, and 556 nm; these correspond, respectively, to exciton emission, transitions from shallow donor levels near the conduction band (such as interstitial Zn) to the valence band, and oxygen vacancies. The observed increase in the intensity of the 390 nm emission peak, relative to the intensity of the main emission peak at 363 nm, was attributed to the promotion of trap states due to interstitial Zn or Li incorporation into the host oxide lattice. SO measurements evidenced the enhancing effect of the Li concentration on the capability of the doped ZnO to generate this species. This Li-dependence of SO generation can be attributed to an increased concentration of trap states in the host ZnO, as suggested by the PL measurements. Accordingly, Li-ZnO would become cytotoxic to cancer cells via photo-induced ROS generation, enabling this nanomaterial to be considered a potential direct PS agent for the 2-photon PDT route.
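As an illustration of how crystallite sizes such as those quoted above are typically obtained from XRD peak broadening, the following sketch applies the Scherrer equation; the peak position and width are hypothetical values chosen only to land near the reported 13–14 nm range, not the authors' data.

```python
import numpy as np

# Scherrer estimate of crystallite size from XRD peak broadening (illustration only).
# The peak width below is a hypothetical value chosen to fall near the reported
# 13.3-14.2 nm range; it is not taken from the study's diffractograms.
def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), with beta in radians."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * np.cos(theta))

# Hypothetical FWHM for the ZnO (101) reflection near 2-theta = 36.3 deg (Cu K-alpha)
print(f"D ~ {scherrer_size_nm(fwhm_deg=0.62, two_theta_deg=36.3):.1f} nm")
```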
The present work focuses on the polyol-mediated synthesis of pure and Mg-doped ZnO nanoparticles. The synthesized samples were characterized via X-ray diffraction, Fourier transform infrared spectroscopy, ultraviolet-visible spectroscopy, and photoluminescence techniques. The Standard Plate Count method was used to assess the bactericidal properties of the nanoparticles against E. coli at concentrations of 1000 ppm and 1500 ppm. The capacity of the Zn-Mg oxides to generate singlet oxygen (SO) species was also evaluated. X-ray diffraction data evidenced the formation of wurtzite ZnO; no diffraction peaks corresponding to isolated Mg phases were detected. The average crystallite size of the Zn-Mg oxide nanocrystals was estimated to be between 6 nm and 7 nm. Infrared spectroscopy measurements confirmed the formation of the oxide, with a metal-oxygen band centered at 536 cm⁻¹; other bands associated with the functional groups of polyol by-products were also observed. The exciton peak in the UV-vis spectra suggests that particle size remained similar upon dopant addition. The effect of particle composition (i.e., doping level) on the corresponding generation of SO and bactericidal capacity is presented and discussed.
Even though the diagnostic radiologist examines black-and-white images, the information that is derived from the images is hardly ever black-and-white.
M.G. Myriam Hunink
In the previous chapters we focused on dichotomous test results, e.g., fecal occult blood is either present or absent. Test results can conveniently be dichotomized, and thinking in terms of dichotomous test results is generally helpful. Distinguishing patients with and without the target disease is useful for the purpose of subsequent decision making because most medical actions are dichotomous. In reality, however, most test results have more than two possible outcomes. Test results can be categorical, ordinal, or continuous. For example, categories of a diagnostic imaging test may be defined by key findings on the images. These categories may be ordered (intuitively) according to the observer’s confidence in the diagnosis, based on the findings. As an example, abnormalities seen on mammography are commonly reported as definitely malignant, probably malignant, possibly malignant, probably benign, or definitely benign. As we shall see later in this chapter, it makes sense to order the categories (explicitly) according to increasing likelihood ratio (LR). Some test results are inherently ordinal, e.g., the five categories of a Papanicolaou smear (test for cervical cancer) are ordinal. Results of biochemical tests are usually given on a continuous scale, which may be reduced to an ordinal scale by grouping the test results. Thus, a test result on a continuous scale can be considered a result on an ordinal scale with an infinite number of very narrow categories. Scores from prediction models are on an ordinal scale if there are a finite number of possible scores, and on a continuous scale if there are an infinite number of scores. When test results are categorical, ordinal, or continuous, we have to consider many test results Ri, where i can be any value from two (the case we have considered in Chapter 5 and Chapter 6, T+ and T−) up to any number of categories. Interpretation of a test result on an ordinal scale can be considered a generalization of the situation of dichotomous test results.
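To make the idea of ordering categories by likelihood ratio concrete, the following sketch computes LR_i = P(R_i | disease) / P(R_i | no disease) for the five mammography categories mentioned above; all counts are hypothetical and serve only to show that, once computed, the categories can be ranked by increasing LR.

```python
# Illustrative only: likelihood ratios for an ordered, multi-category test result,
# LR_i = P(R_i | disease) / P(R_i | no disease). All counts are hypothetical.
categories = ["definitely benign", "probably benign", "possibly malignant",
              "probably malignant", "definitely malignant"]
with_disease = [5, 10, 25, 60, 100]         # hypothetical counts among diseased patients
without_disease = [400, 300, 150, 100, 50]  # hypothetical counts among non-diseased patients

n_d, n_nd = sum(with_disease), sum(without_disease)
for cat, d, nd in zip(categories, with_disease, without_disease):
    lr = (d / n_d) / (nd / n_nd)
    print(f"{cat:>21}: LR = {lr:5.2f}")
# With these counts the LRs come out in increasing order, which is the ordering
# the chapter recommends for the categories.
```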
Much of medical training consists of learning to cope with pervasive uncertainty and with the limits of medical knowledge. Making serious clinical decisions on the basis of conflicting, incomplete, and untimely data is routine.
Much of clinical medicine and health care involves uncertainties: some reducible, but some irreducible despite our best efforts and tests. Better decisions will be made if we are open and honest about these uncertainties, and develop skills in estimating, communicating, and working with such uncertainties. What types of uncertainty exist? Consider the following example.
It has been a hard week. It is time to go home when you are called to yet another heroin overdose: a young woman has been found unconscious outside your clinic. After giving intravenous (IV) naloxone (which reverses the effects of heroin), you are accidentally jabbed by the needle. After her recovery, despite your reassurances, the young woman flees for fear of the police. As the mêlée settles, the dread of human immunodeficiency virus (HIV) infection begins to develop. You talk to the senior doctor about what you should do. She is very sympathetic, and begins to tell you about the risks and management. The good news is that, even if the patient was HIV-positive, a needlestick injury rarely leads to HIV infection (about 3 per 1000). And if she was HIV-positive, then a basic two-drug regimen of antivirals, such as zidovudine (AZT) plus lamivudine, is likely to prevent most infections (perhaps 80%).
Unfortunately, the HIV status of the young woman who had overdosed is unknown. Since she was not a patient of your clinic, you are uncertain about whether she is infected, but think that it is possible since she is an IV drug user. The Centers for Disease Control and Prevention (CDC) guidelines (1) suggest: ‘If the exposure source is unknown, use of post-exposure prophylaxis should be decided on a case-by-case basis. Consider the severity of exposure and the epidemiologic likelihood of HIV.’ What do you do?
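Before deciding, it may help to combine the figures quoted above into an expected risk. The sketch below uses only the numbers given (transmission of about 3 per 1000 needlesticks and prophylaxis preventing perhaps 80% of infections); the prevalence values for the source patient are assumptions for illustration, since her HIV status is unknown.

```python
# Back-of-the-envelope expected risk using only the figures quoted above:
# transmission of roughly 3 per 1000 needlesticks from an HIV-positive source, and
# prophylaxis preventing perhaps 80% of infections. The prevalence values are
# assumptions for illustration, since the source patient's HIV status is unknown.
p_transmission = 3 / 1000
prophylaxis_efficacy = 0.80

for p_hiv in (0.05, 0.20, 0.50):  # assumed possible prevalences among IV drug users
    risk_no_ppx = p_hiv * p_transmission
    risk_with_ppx = risk_no_ppx * (1 - prophylaxis_efficacy)
    print(f"P(source HIV+) = {p_hiv:.0%}: risk {risk_no_ppx:.5f} without prophylaxis, "
          f"{risk_with_ppx:.5f} with prophylaxis")
```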
Essentially, all models are wrong, but some are useful.
George E. P. Box
As discussed in Chapter 8, ‘good decision analyses depend on both the veracity of the decision model and the validity of the individual data elements.’ The validity of each individual data element relies on the comprehensiveness of the literature search for the best and most appropriate study or studies, criteria for selecting the source studies, the design of the study or studies, and methods for synthesizing the data from multiple sources. Nonetheless, Sir Michael David Rawlins avers that ‘Decision makers have to incorporate judgements, as part of their appraisal of the evidence, in reaching their conclusions. Such judgements relate to the extent to which each of the components of the evidence base is “fit for purpose.” Is it reliable?’(1) Because the integration of a multitude of these ‘best available’ data elements forms the basis for model results, some individuals refer to decision analyses as black boxes, so this last question applies particularly to the overall model predictions. Consequently, assessing model validity becomes paramount. However, prior to assessing model validity, model construction requires attention to parameter estimation and model calibration. This chapter focuses on parameter estimation, calibration, and validation in the context of Markov and, more generally, state-transition models (Chapter 10) in which recurrent events may occur over an extended period of time. The process of parameter estimation, calibration, and validation is iterative: it involves both adjustment of the data to fit the model and adjustment of the model to fit the data.
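As a concrete, if deliberately simplified, illustration of calibration (a minimal sketch, not the book's own procedure), the code below adjusts a single input, a constant annual probability of death, until a toy state-transition model reproduces a hypothetical calibration target of 60% five-year survival.

```python
# Adjust one input (a constant annual probability of death) until a toy
# state-transition model reproduces a hypothetical calibration target.
def five_year_survival(p_death_annual, years=5):
    alive = 1.0
    for _ in range(years):
        alive *= 1 - p_death_annual
    return alive

target = 0.60               # hypothetical calibration target: 60% alive at 5 years
lo, hi = 0.0, 1.0
for _ in range(50):         # bisection: modeled survival falls as p_death rises
    mid = (lo + hi) / 2
    if five_year_survival(mid) > target:
        lo = mid            # survival still too high -> mortality must be larger
    else:
        hi = mid
print(f"Calibrated annual mortality: {mid:.4f}")  # analytically 1 - 0.60**(1/5) ~ 0.097
```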
Survival analysis involves determining the probability that an event such as death or disease progression will occur over time. The events modeled in survival analysis are called ‘failure’ events, because once they occur, they cannot occur again. ‘Survival’ is the absence of the failure event. The failure event may be death, or it may be death combined with a non-fatal outcome such as developing cancer or having a heart attack, in which case the absence of the event is referred to as event-free survival. Commonly used methods for survival analysis include life-table analysis, Kaplan–Meier product limit estimates, and Cox proportional hazards models. A survival curve plots the probability of being alive over time (Figure 11.1).
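A minimal sketch of the Kaplan–Meier product-limit estimate may help here; the event times below are made up purely to show how the survival curve steps down at observed failures while censored subjects simply leave the risk set.

```python
from collections import Counter

# Kaplan-Meier product-limit estimate on made-up data: (time in months, event),
# where event = 1 is the failure event and event = 0 is right-censoring.
data = [(2, 1), (3, 0), (5, 1), (5, 1), (8, 0), (11, 1), (12, 0)]

failures = Counter(t for t, e in data if e == 1)   # failures per distinct time
removals = Counter(t for t, _ in data)             # everyone leaves the risk set at t
at_risk, survival = len(data), 1.0
print("time  S(t)")
for t in sorted(removals):
    d = failures.get(t, 0)
    if d:                                          # the curve steps down only at failures
        survival *= (at_risk - d) / at_risk
        print(f"{t:4d}  {survival:.3f}")
    at_risk -= removals[t]                         # failures and censored subjects both leave
```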
Some treatment decisions are straightforward. For example, what should be done for an elderly patient with a fractured hip? Inserting a metal pin has dramatically altered the management: instead of lying in bed for weeks or months waiting for the fracture to heal while blood clots and pneumonia threatened, the patient is now ambulatory within days. The risks of morbidity and mortality are both greatly reduced. However, many treatment decisions are complex. They involve uncertainties and trade-offs that need to be carefully weighed before choosing. Tragic outcomes may occur no matter which choice is made, and the best that can be done is to minimize the overall risks. Such decisions can be difficult and uncomfortable to make. For example, consider the following historical dilemma.
Benjamin Franklin and smallpox
Benjamin Franklin argued implicitly in favor of the application to individual patients of probabilities based on previous experience with similar groups of patients. Before Edward Jenner’s discovery in 1796 of cowpox vaccination for smallpox, it was known that immunity from smallpox could be achieved by a live smallpox inoculation, but the procedure entailed a risk of death. When a smallpox epidemic broke out in Boston in 1721, the physician Zabdiel Boylston consented, at the urging of the clergyman Cotton Mather, to inoculate several hundred citizens. Mather and Boylston reported their results (1):
Out of about ten thousand Bostonians, five thousand seven hundred fifty-nine took smallpox the natural way. Of these, eight hundred eighty-five died, or one in seven. Two hundred eighty-six took smallpox by inoculation. Of these, six died, or one in forty-seven.
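The arithmetic in the quoted report can be reproduced directly; the sketch below uses the figures exactly as quoted (the "one in seven" and "one in forty-seven" are rounded versions of the resulting ratios).

```python
# Reproducing the arithmetic in the quoted Mather-Boylston report.
natural_cases, natural_deaths = 5759, 885
inoculated_cases, inoculated_deaths = 286, 6

risk_natural = natural_deaths / natural_cases            # deaths per natural case
risk_inoculated = inoculated_deaths / inoculated_cases   # deaths per inoculated case
print(f"Natural smallpox: {risk_natural:.3f} (1 in {1 / risk_natural:.1f})")
print(f"Inoculation:      {risk_inoculated:.3f} (1 in {1 / risk_inoculated:.1f})")
print(f"Case fatality was about {risk_natural / risk_inoculated:.1f} times higher "
      "with natural infection than with inoculation.")
```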
The interpretation of new information depends on what was already known about the patient.
Diagnostic information and probability revision
Physicians have at their disposal an enormous variety of diagnostic information to guide them in decision making. Diagnostic information comes from talking to the patient (symptoms, such as pain, nausea, and breathlessness), examining the patient (signs, such as abdominal tenderness, fever, and blood pressure), and from diagnostic tests (such as blood tests, X-rays, and electrocardiograms (ECGs)) and screening tests (such as Papanicolaou smears for cervical cancer or cholesterol measurements).
Physicians are not the only ones that have to interpret diagnostic information. Public policy makers in health care are equally concerned with understanding the performance of diagnostic tests. If, for example, a policy maker is considering a screening program for lung cancer, he/she will need to understand the performance of the diagnostic tests that can detect lung cancer in an early phase of the disease. In public policy making, other types of ‘diagnostic tests’ may also be relevant. For example, a survey with a questionnaire in a population sample can be considered analogous to a diagnostic test. And performing a trial to determine the efficacy of a treatment is in fact a ‘test’ with the goal of getting more information about that treatment.
Before ordering a test ask: What will you do if the test is positive? What will you do if the test is negative? If the answers are the same, then don’t do the test.
Poster in an Emergency Department
In the previous chapter we looked at how to interpret diagnostic information such as symptoms, signs, and diagnostic tests. Now we need to consider when such information is helpful in decision making. Even if they reduce uncertainty, tests are not always helpful. If used inappropriately to guide a decision, a test may mislead more than it leads. In general, performing a test to gain additional information is worthwhile only if two conditions hold: (1) at least one decision would change given some test result, and (2) the risk to the patient associated with the test is less than the expected benefit that would be gained from the subsequent change in decision. These conditions are most likely to be fulfilled when we are confronted with intermediate probabilities of the target disease, that is, when we are in a diagnostic ‘gray zone.’ Tests are least likely to be helpful either when we are so certain a patient has the target disease that the negative result of an imperfect test would not dissuade us from treating, or, conversely, when we are so certain that the patient does not have the target disease that a positive result of an imperfect test would not persuade us to treat. These concepts are illustrated in Figure 6.1, which divides the probability of a disease into three ranges:
do not treat (for the target disease) and do not test, because even a positive test would not persuade us to treat;
test, because the test will help with treatment decisions or with follow-up; and
treat and do not test, because even a negative test would not dissuade us from treating.
Treat implies patient management as if disease is present and may imply initiating medical therapy, performing a therapeutic procedure, advising a lifestyle or other adjuvant intervention, or a combination of these. Do not treat implies patient management as if disease is absent and usually means risk factor management, lifestyle advice, self-care and/or watchful waiting.
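A hedged sketch of how the two boundaries of the testing range described above can be computed follows, using the classic threshold formulas; all numerical inputs are hypothetical. B is the net benefit of treating a diseased patient, C the net harm of treating a non-diseased patient, and R the harm of the test itself, all in the same utility units.

```python
# Threshold sketch (hypothetical numbers): below p_test, do not treat and do not test;
# between p_test and p_treat, test; above p_treat, treat without testing.
def thresholds(sens, spec, B, C, R=0.0):
    p_test = ((1 - spec) * C + R) / (sens * B + (1 - spec) * C)   # no-treat / test boundary
    p_treat = (spec * C - R) / ((1 - sens) * B + spec * C)        # test / treat boundary
    return p_test, p_treat

p_test, p_treat = thresholds(sens=0.90, spec=0.95, B=10, C=2, R=0.05)
print(f"Do not treat, do not test below p = {p_test:.3f}")
print(f"Test between p = {p_test:.3f} and p = {p_treat:.3f}")
print(f"Treat without testing above p = {p_treat:.3f}")
```

With these illustrative inputs, a more harmful or less accurate test narrows the testing range from both ends, which is the behavior Figure 6.1 describes.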
And take the case of a man who is ill. I call two physicians: they differ in opinion. I am not to lie down and die between them: I must do something.
How are decisions made in practice, and can we improve the process? Decisions in health care can be particularly awkward, involving a complex web of diagnostic and therapeutic uncertainties, patient preferences and values, and costs. It is not surprising that there is often considerable disagreement about the best course of action. One of the authors of this book tells the following story (1):
Being a cardiovascular radiologist, I regularly attend the vascular rounds at the University Hospital. It’s an interesting conference: the Professor of Vascular Surgery really loves academic discussions and each case gets a lot of attention. The conference goes on for hours. The clinical fellows complain, of course, and it sure keeps me from my regular work. But it’s one of the few conferences that I attend where there is a real discussion of the risks, benefits, and costs of the management options. Even patient preferences are sometimes (albeit rarely) considered.
And yet, I find there is something disturbing about the conference. The discussions always seem to go along the same lines. Doctor R. advocates treatment X because he recently read a paper that reported wonderful results; Doctor S. counters that treatment X has a substantial risk associated with it, as was shown in another paper published last year in the world’s highest-ranking journal in the field; and Doctor T. says that given the current limited health-care budget maybe we should consider a less expensive alternative or no treatment at all. They go around in circles for 10 to 15 minutes, each doctor reiterating his or her opinion. The professor, realizing that his fellows are getting irritated, finally stops the discussion. Practical chores are waiting; there are patients to be cared for. And so the professor concludes: ‘All right. We will offer the patient treatment X.’ About 30% of those involved in the decision-making process nod their heads in agreement; another 30% start bringing up objections which get stifled quickly by the fellows who really do not want an encore, and the remaining 40% are either too tired or too flabbergasted to respond, or are more concerned about another objective, namely their job security.
It is surely a great criticism of our profession that we have not organized a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomized controlled trials.
Good decision analyses depend on both the veracity of the decision model and the validity of the individual data elements. These elements may include probabilities (such as the pre-test probabilities, the sensitivity and specificity of diagnostic tests, the probability of an adverse event, and so on), estimates of effectiveness of interventions (such as the relative risk reduction), and the valuation of outcomes (such as quality of life, utilities, and costs). Often we lack the information needed for a confident assessment of these elements. Decision analysis, by structuring a decision problem, makes these gaps in knowledge apparent. Sensitivity analysis on these ‘soft’ numbers will also give us insight into which of these knowledge gaps is most likely to affect our decisions. These same gaps exist in less systematic decision making as well, but there is no convenient way to determine how our decisions should be affected. In this chapter we shall cover the basic methods for finding the best estimate for each of the different elements that may be included in a formal decision analysis or in less systematic decision making.
Sometimes, but not as often as one would like, the estimates one is looking for can be inferred from a published study or from a series of cases that someone has reported in the literature or recorded in a data bank. This is generally considered the most satisfactory way of assessing a probability, because it involves the use of quantitative evidence. Often we will have a choice of data sources, so it is useful to have some ‘rules’ to guide the choice of possible estimates. One helpful concept is the ‘hierarchy of evidence’ (see www.cebm.net) which explicitly ranks the available evidence; ‘perfect’ data will rarely be available, but we need to know how to choose the best from the available imperfect data. This choice will also need to be tempered by the practicalities and purpose of each decision analysis: what is feasible will differ across a range, from the urgent individual patient decision to a national policy decision to fund an expensive new procedure.
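As one small example of turning quantitative evidence into a usable estimate, the sketch below converts a hypothetical published count ("x events among n patients") into a point estimate with a 95% confidence interval using the Wilson interval; the counts are illustrative and not drawn from any particular study.

```python
from math import sqrt

# Turn "x events among n patients" into an estimate with a 95% Wilson interval.
# The counts below are hypothetical placeholders for a chosen source study.
def wilson_ci(x, n, z=1.96):
    p_hat = x / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return p_hat, centre - half, centre + half

p, lo, hi = wilson_ci(x=12, n=340)   # e.g. 12 adverse events among 340 patients
print(f"Point estimate {p:.3f}, 95% CI {lo:.3f} to {hi:.3f}")
```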
Values are what we care about. As such, values should be the driving force for our decision making. They should be the basis for the time and effort we spend thinking about decisions. But this is not the way it is. It is not even close to the way it is.
Value judgments underlie virtually all clinical decisions. Sometimes the decision rests on a comparison of probability alone, such as the probability of surviving an acute episode of illness. In such cases, there is a single outcome measure – the probability of immediate survival – that can be averaged out to arrive at an optimal decision. In most cases, however, decisions between alternative strategies require not only estimates of the probabilities of the associated outcomes, but also value judgments about how to weigh the benefits versus the harms, and how to incorporate other factors like individual preferences for convenience, timing, who makes decisions, who else is affected by the decision, and the like. Consider the following examples.
Medicine is a science of uncertainty and an art of probability.
Sir William Osler
Decision trees and Markov cohort models, as described and illustrated in the previous chapters, are essentially macrosimulation models. Such models simulate cohorts or groups of subjects. A number of limitations exist to the use of these models. Markov cohort models, for example, have ‘no memory’, implying that subjects in a particular state are a homogeneous group. Techniques to overcome these limitations, such as expanding the number of states, using tunnel states, or using alternative modeling techniques, were discussed in Chapter 10. These techniques can get very complex when dealing with extensive heterogeneity within a population. Microsimulation using Monte Carlo analysis provides another powerful technique to account for heterogeneity across subjects. Microsimulation with Monte Carlo analysis was introduced in Chapter 10 as an alternative method for evaluating a Markov model. In this chapter it will be discussed at greater length in the context of simulating heterogeneity.
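A minimal first-order Monte Carlo microsimulation sketch (illustrative, not a model from the book) follows: subjects are simulated one at a time, and each carries individual 'memory' (time spent sick) that modifies their subsequent risk, something a memoryless Markov cohort model cannot represent without adding states.

```python
import random

# First-order Monte Carlo microsimulation of a toy well -> sick -> dead model.
# Each subject's risk of death while sick grows with time spent sick ('memory').
# All probabilities are hypothetical.
random.seed(0)

def simulate_subject(max_cycles=50):
    state, years_sick, life_years = "well", 0, 0
    for _ in range(max_cycles):
        if state == "well":
            life_years += 1
            if random.random() < 0.05:                    # hypothetical annual risk of illness
                state = "sick"
        elif state == "sick":
            life_years += 1
            years_sick += 1
            p_death = min(0.10 + 0.02 * years_sick, 1.0)  # risk grows with time sick
            if random.random() < p_death:
                state = "dead"
        else:
            break
    return life_years

n = 10_000
mean_ly = sum(simulate_subject() for _ in range(n)) / n
print(f"Mean life expectancy over {n} simulated subjects: {mean_ly:.2f} years")
```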
In the previous chapters we represented uncertainty with probabilities. Implicitly the assumption was that, even though we were unsure of whether an event would take place, we could nevertheless predict or estimate the probability (or relative frequency) that it would occur. In essence we were using deterministic models. In reality, however, we are also uncertain of the degree of uncertainty. In other words, rather than dealing with a fixed probability we are actually dealing with a distribution of possible values of probabilities. Not only are we uncertain about the probabilities we use in our models, but we are also uncertain about the effectiveness outcomes and cost estimates included in the analysis. Thus, every parameter value we enter into our models is better represented as a probabilistic variable rather than a deterministic variable. If there is a single uncertain parameter, e.g., the relative risk reduction of an intervention, then the 95% confidence interval (CI) of this parameter is commonly used to indicate the uncertainty of the effect. Uncertainty in two or more components requires more complex methods, such as Monte Carlo probabilistic sensitivity analysis, which we will also discuss in this chapter.
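The following sketch illustrates probabilistic sensitivity analysis on a deliberately simple decision model: each uncertain input is drawn from an assumed distribution rather than entered as a fixed value, and the model is re-evaluated for every draw to produce a distribution of outputs. All distributions and costs are hypothetical.

```python
import random

# Monte Carlo probabilistic sensitivity analysis on a toy one-period decision:
# treat (fixed cost, lower event risk) versus do not treat. All inputs hypothetical.
random.seed(1)

def net_saving(p_event, rel_risk_reduction, cost_treat, cost_event):
    cost_no_treat = p_event * cost_event
    cost_treat_arm = cost_treat + p_event * (1 - rel_risk_reduction) * cost_event
    return cost_no_treat - cost_treat_arm        # positive = treatment saves money

draws = []
for _ in range(5000):
    p_event = random.betavariate(20, 80)         # uncertain baseline risk (~0.20)
    rrr = random.betavariate(40, 60)             # uncertain relative risk reduction (~0.40)
    cost_event = random.lognormvariate(9.2, 0.3) # uncertain event cost (~10 000)
    draws.append(net_saving(p_event, rrr, cost_treat=500, cost_event=cost_event))

draws.sort()
mean = sum(draws) / len(draws)
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"Mean net saving {mean:.0f}; 95% uncertainty interval ({lo:.0f}, {hi:.0f})")
print(f"Probability that treatment is cost-saving: {sum(d > 0 for d in draws) / len(draws):.2f}")
```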