The ‘precautionary principle’ has become a mantra of medical decision-making. At first glance it seems morally unassailable. Who would want to object to a course of action grounded in the basic Hippocratic injunction of primum non nocere, embedded at the core of medical training?
But is the notion of the precautionary principle as currently used always quite the same as this Hippocratic ideal? Could some of its consequences be unexpected? There may be value in exploring aspects of the principle as currently used in medicine and psychiatry, particularly in interpreting the evidence base, forming national guidance and managing clinical uncertainty.
What is the precautionary principle? Medical ethicists John Harris and Søren Holm have traced its origins to environmental planning legislation and its subsequent extension in the 1980s and 1990s to cover wider aspects of health and public policy (Harris & Holm, 2002). A classic definition has been provided by Ashford et al (1998):
‘When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof’.
Harris & Holm discuss the logic and ethics of such a principle as applied to science policy, arguing that, despite its obvious face validity, there may be contexts in which it subtly fails to provide either a moral compass or the best ultimate risk management. Such a paradox results from the fact that the precautionary principle privileges an often indeterminate future risk over possibly less apparent current benefits; and, furthermore, disregards the possibility that future advances may neutralise current or future risks. Since it is impossible to prove the complete absence of risk, we may in this way end up increasing it.
Contemporary pressures on the precautionary principle
A number of relatively new developments affect this risk/benefit equation and may make Harris & Holm’s caveats increasingly salient. These developments include a litigious medical culture, apparently greater general social anxiety about risk and a convergence of the ideals of evidence-based medicine with increasingly centralised National Health Service management. Why should these developments have made a difference?
Litigation and risk aversion
The influence of the litigious culture is obvious. The precautionary principle is an approach to risk management, and the possibility of litigation can greatly increase the weight given to possible future risk, thus contributing to a culture of anxiously defensive medicine. In her influential Reith Lecture series, Onora O’Neill (2002) illuminates ways in which such an effect is mirrored in an increasingly risk-averse culture. The precautionary principle applied absolutely and literally to the details of everyday life would lead us, she argues, to a version of psychological paralysis – we would never leave the house.
O’Neill describes the common institutional response as an ever-expanding cycle of top-down obsessive micromanagement. She argues that, beyond reasonable limits, such efforts to control can be endless and finally counterproductive. What she emphasises less is a reciprocal tendency in individuals: because such management undermines self-regulation and autonomy, people within institutions can themselves become entrained into managing everyday anxieties more and more by deference to external guidance.
Committee v. clinician decision-making
Contemporary medicine is increasingly characterised by centrally managed clinical decisions through protocols allied to the evidence base. There are decisively positive aspects to this and I am unequivocally in favour of evidence-based practice. But one unexpected consequence might be an amplification of some of the negative aspects of the precautionary principle. Policy and protocol-forming committees are given increasing salience in this climate: they are exposed to wide scrutiny and act on consensus. Can this lead to less nuanced and more risk-averse thinking?
Concern about publication bias in favour of positive results of (especially pharmaceutical) trials may reinforce caution in evaluating the policy implications of the existing evidence base (Green, 2004a,b). Even if the internal validity of the available evidence is good, its external validity may be weaker, and compromises are inevitable in making generalisations in complex areas such as psychiatry. Despite all this, the caution of the committee may well be completely appropriate, as long as its guidance is not always taken as a prescription from which it is irresponsible (or constitutes malpractice) to deviate. And even if the committee itself offers its advice only as ‘guidance’, this may nevertheless be widely acted on by organisations, the media or individuals (in another spiral of precaution) as diktat (Green, 2004a).
How is this different from decision-making by individual clinicians? They are certainly exposed to the same cultural and ethical imperatives. Yet when confronted with the individual patient, the doctor is governed not by a precautionary principle in the abstract but by the imperative of the medical role: to fulfil responsibilities within their competency for the promotion of health. With each individual patient this imperative, operating in a specific context, will influence how much risk can and should be taken for potential beneficial outcomes (Eddy, 1991). Doctors are well used to weighing cost against benefit for the individual in this way. The Hippocratic oath suggests ‘fundamentally do no harm’, but in practice this translates into ‘fundamentally do no overall resultant harm for this particular patient’. Decision-making will take into account not only the cost–benefit balance of the particular treatment advocated but also the risk of non-treatment and – crucially – the risk that the particular patient is prepared and able to take. The latter is generally not considered by the committee.
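Purely as an illustrative sketch (the notation is mine, not drawn from any of the sources cited), this weighing can be caricatured in expected-value terms: treatment is worth offering when

\[ p_B B - p_H H \; > \; -R, \]

where $p_B$ and $B$ are the probability and value of benefit from treatment, $p_H$ and $H$ the probability and severity of harm, and $R$ the expected harm of the untreated condition. A committee can at best estimate population averages for these quantities; only the clinical encounter can estimate their values for this patient, including how large an $H$ this particular patient is prepared to risk.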
Ideally, the two perspectives should be complementary. Randomised clinical trials with good external validity can model the clinical context and provide robust guidance. But the sophisticated clinical use of evidence-based medicine (Sackett et al, 2002) emphasises that the individual clinician is still placed in the central role of:
1. understanding their patient’s predicament;
2. critically evaluating the available evidence as it applies to the context of their particular patient;
3. integrating their patient assessment with the relevant pooled evidence to form a series of risk/benefit options;
4. making the clinical decision, usually in the context of a dialogue with the patient.
The increasing salience of protocol-driven decision-making – whatever its benefits in standardising variations in practice – risks undermining, if followed rigidly, the moderating role of individual clinical decisions made in front of the individual patient, and with it the balancing of the risks of non-treatment. This is not how evidence-based medicine is supposed to work.
What of professionalism?
Does this argument just boil down to a self-interested advocacy in favour of clinician autonomy? To say this misses an equally central positive value of the nature of professionalism: that it involves the confidence and authority to make individually mediated risk decisions in the context of a personal relationship with patients. To act in this way, doctors must feel they have support and legitimacy for this quality of autonomous practice. Will doctors who hand over all responsibility (and risk management) to committees and protocols be able to retain this professional vitality (Williams, 2004)?
This distinction between the ethics and process of the protocol and those of the individual clinical encounter has perhaps been too rarely discussed explicitly in the UK (Eddy, 1991). Clearly, many situations of clear-cut risk demand absolutely restrictive and authoritative guidance. But most clinical situations in psychiatry – and for that matter in medicine generally – are probably less clear-cut.
Proposals
The alternative is not in any sense a reckless disregard of patients’ interests or safety. Nor is it a retreat from the positive ideals and discipline of evidence-based practice (indeed, the contrary). Still less is it the abandonment of protocols. But attention to new aspects of the evidence base and protocol development may be helpful. The following are some examples.
Quantification of risks and benefits
In producing the evidence base, trial designs and measurement should be adapted to quantify risks and benefits equivalently. This is the core of Harris & Holm’s argument against the precautionary principle: that hidden current benefits may be sacrificed for indeterminate risks, and that the possibility that future advances might obviate these risks is discounted. We use the notion of clinically significant effect sizes or numbers needed to treat to contextualise statistical results on effectiveness. We should be equally scrupulous in using methods of measuring clinically meaningful risks (‘numbers needed to harm’) in order to avoid overinterpreting signals of uncertain risk through the application of a precautionary principle.
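As a purely arithmetical illustration (the figures here are hypothetical, not drawn from any trial cited in this article): if 60% of treated patients respond against 40% on placebo, while a given adverse event occurs in 6% of treated patients against 2% on placebo, then

\[ \text{NNT} = \frac{1}{0.60 - 0.40} = 5, \qquad \text{NNH} = \frac{1}{0.06 - 0.02} = 25. \]

Stating the two side by side – one additional responder for every 5 patients treated, one additional patient harmed for every 25 – expresses benefit and risk in the same clinically meaningful currency, rather than leaving a bare signal of harm to be amplified by precaution.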
For instance, reviews of the evidence underlying the policy of the UK Medicines and Healthcare products Regulatory Agency (MHRA) on prescription of selective serotonin reuptake inhibitors to children converged on the finding that the (limited) extant trials were not designed in such a way as to identify the clinical meaning of the adverse effects reported (Eddy, 1991; Cummins & Laughren, 2004; Green, 2004a; Whittington et al, 2004). Moreover, the possibility that good-quality medical management and surveillance could obviate any putative risk was not included in the balance. The relevant committees clearly felt they had no choice but to caution against the use of almost all such medications in the under-18s, a view also taken in the recent NICE guidance on child and adolescent depression (National Institute for Health and Clinical Excellence, 2005).
I have discussed more fully elsewhere how such guidance relates to the actual quality of the evidence currently available in child and adolescent mental healthcare (Green, 2004a); the NICE report concludes that the evidence is ‘generally moderate to low’ for individual outcomes, and that interpretation of harm-related outcomes was ‘often difficult’ because the trials were not designed to measure these. In spite of this, the overall precautionary guidance reached regarding medication is couched in strong terms and, unless further evidence becomes available, is certain to dominate local specialist protocol development. To what extent will this reflect society’s particular caution with the under-18s compared with adults, and will the clinical care of some children be thus disadvantaged?
The purpose and structure of protocols
Another requirement is greater critical reflection on the purpose and structure of protocols and how we use them. Because of managed care, these issues have probably been addressed earlier in the USA. In a useful series of articles in JAMA during the 1990s, Eddy discussed many of these issues and outlined critical appraisal criteria for policy guidelines. These criteria include (Eddy, 1990b):
• a transparent statement of committee membership and evaluation methods;
• incorporation into guidance of societal and ethical values;
• regular, timely updates.
Eddy also recommended an explicit statement of the degree of proscription or force implied by a protocol. In one of the articles (Eddy, 1990a), he suggests a distinction between:
• ‘standards’, for rigid application, from which it would probably be malpractice to deviate;
• ‘guidelines’, where there is more flexibility;
• ‘options’, when the evidence is more equivocal.
However, even given this gradation (which is now often incorporated in some form into guidance), there is still the question of how central guidance may be interpreted and used by local managers and clinicians. Eddy (1990a) makes the point that we are often ambivalent about freedom – life may be easier and less anxiety-provoking when we fall back on external precautionary restriction (especially, as suggested above, in a climate of anxiety about risk). But doing this may at times restrict legitimate clinical options for patients.
Precaution and health
We live in an era of increasing protocol management. We can support the development of evidence-based practice but at the same time be concerned about the risk of relying on unnecessarily restrictive protocols. The danger is that such protocols may at times conflate overly precautionary inferences from the evidence with the needs of centralised health planning, institutional risk management and individuals’ own management of clinical anxiety. Of course, committees of all kinds have a critical role in protecting the public and maintaining confidence in medicine. But overreliance on public caution may mean that patients are deprived of treatments that could help them. The potential risks of a particular treatment must be balanced equally against the risks of no treatment (or of no medical advance) – both for the individual patient in the clinic and for public policy in general.
Declaration of interest
None.