
Clinical effectiveness

Published online by Cambridge University Press:  02 January 2018

Anna Higgitt
Affiliation:
Mental Health Policy Branch, Department of Health, Wellington House, 133–155 Waterloo Road, London SE1 8UG, UK
Peter Fonagy
Affiliation:
Sub-Department of Clinical Health Psychology, University College London, UK

Copyright © 2002 The Royal College of Psychiatrists 

A number of themes have run through health policy initiatives of the two Labour Governments of the past 5 years: modernisation; stigma; inequalities and social exclusion; partnerships; involvement of users and carers. But perhaps the most important from the point of view of mental health professionals is the initiative to alter the culture within which health care is offered from one based on expert knowledge and authority to one founded on the principle of evidence-based practice.

Clinical effectiveness is one of those self-evidently ‘good’ things. No one would want openly to advocate clinically ineffective treatment approaches (despite the fact that in psychiatry relevant data are sparse) any more than they would wish to argue against the appropriateness of maternity (and paternity) leave. Yet our clinical decisions are often based more on habit than on knowledge of what actually works.

Clinical governance in the National Health Service (NHS) has charged all organisations with a statutory duty to seek quality improvements in the health care that they deliver (Department of Health, 1998b). There is high-quality research evidence to support such an initiative in many areas of mental health care. There are excellent collections of reviews, guidelines and critical appraisals of this evidence, but no single comprehensive index covers all the relevant information. A relatively recent compilation produced by the School of Health and Related Research (ScHARR) (http://www.nettingtheevidence.org.uk) includes a library of relevant articles and a listing of databases and organisations concerned with evidence-based practice. Turning Research into Practice (TRIP) is a resource hosted by the Centre for Research Support in Wales that covers a wide range of UK and US clinical effectiveness resources and evidence-based guidelines (http://www.tripdatabase.com); it boasts over 8000 links to resources in 28 different centres. A particularly useful source is Evidence-Based Medicine Reviews (EBMR), which is available via Ovid online (http://www.ovid.com). This is a single, fully searchable database that integrates the Cochrane database of systematic reviews (see below) with Best Evidence (which provides summaries of articles from major medical journals along with expert commentaries) and DARE (the Database of Abstracts of Reviews of Effectiveness, a database of high-quality systematic research reviews produced by the NHS Centre for Reviews and Dissemination at the University of York; see below). Clinical guidelines are summarised on another database, at the Centre for Evidence Based Mental Health (http://www.cebmh.com). The very richness of these sources might discourage some from extended searches. This paper aims to provide an impressionistic review of some of what there is to read about evidence.

THE ART OF DISSEMINATION

One of the greatest obstacles facing the clinical effectiveness initiative is how evidence may be translated into clinician behaviour. Psychology has taught us that attitudes are easier to alter than behaviour. So, how can the behaviour of clinicians be changed? Some clues may be gleaned from a learned and clearly presented book by Palmer & Fenner (1999). The work is aimed at those, most likely in management, who will be responsible for ensuring that clinical practice reflects available and relevant research findings. Written by members of the Royal College of Psychiatrists' College Research Unit, it is a digest of what research evidence there is concerning the dissemination of research findings within the NHS. It draws on 11 reviews which have met the quality criteria of the NHS Centre for Reviews and Dissemination or the Cochrane Collaboration, and provides an overview of theoretical approaches relevant to dissemination as well as a chapter on ‘putting it into practice’.

A good start is made with a useful differentiation between ‘diffusion’, ‘dissemination’ and ‘implementation’, which are seen as interrelated and increasingly active phases of a process. Publication in a journal article (diffusion) is seen as a passive form of communication: haphazard, untargeted and uncontrolled, and seemingly insufficient to achieve much in the way of change in clinical practice. The development of practice guidelines, overviews and so on (dissemination) is more active and targeted at an intended audience. Implementation is yet more active, with sanctions and incentives, monitoring and adjustment to local needs.

The authors summarise the available evidence on the effectiveness of various methods of dissemination. The methods considered range from written materials, through educational efforts, product champions, financial incentives and patient-mediated interventions, to reminder systems. There are a number of areas in which there is no conclusive research evidence and several in which effects are positive but small. Individual educational initiatives seem more effective than group ones, although the latter can be improved when the influence of peers is included. There is no conclusive research evidence on product champions, but academic detailing/educational outreach has a potentially important role. Reminder systems may also be of value. Overall, indications are that the most encouraging results will be obtained using a combination of methods, but it is acknowledged that ‘the research base currently has many gaps and leaves many questions unanswered’.

This shortage of hard data leaves the way open for the authors to consider theoretical approaches to dissemination. This is something of an exploration of relevant management theories. A table with a summary of ‘lessons to be learned from health communication campaigns’ makes interesting reading.

In the book's summary, the need to adapt information to local circumstances, to be educative, to promote the credibility of the source of information and (sometimes) to use a regulatory framework all stood out. The book ends with a checklist, aimed at supporting the development of a strategy for the dissemination of information to clinicians.

In the same year as Palmer & Fenner's book appeared, the NHS Centre for Reviews and Dissemination published an issue of Effective Health Care devoted to reviewing the research evidence on dissemination and implementation interventions (NHS Centre for Reviews and Dissemination, 1999b). In contrast to the College Research Unit's review, which complains of the scarcity of data, this boasts 44 systematic reviews, which together cover over 1000 investigations. The conclusions that emerge are (perhaps unsurprisingly) far from profound. A ‘diagnostic analysis’ is recommended that identifies all the groups involved, assesses the characteristics of the proposed change and the preparedness of the professions involved to change, and identifies both barriers and enabling factors. The evidence indicates that dissemination alone is unlikely to lead to changes in behaviour. There is agreement that broad-based, multi-faceted strategies are more likely to be effective, but they will cost more. Educational outreach, the sending of reminders and patient-mediated interventions are all effective under certain circumstances. For example, prescribing behaviour is rarely changed by posting educational materials without using a trained person to meet the health care professional in a practice setting and provide ongoing feedback that includes recommendations. Most interventions are effective under some circumstances, but none is effective under all circumstances. For example, audit and feedback and the use of local opinion leaders have mixed and moderate effects outside of attempts to change prescribing behaviour and referrals for diagnostic tests.

Taking these reviews together, it might seem that an article such as the present one, focusing on reading about effectiveness, can hope for only very limited impact. An inherent limitation of this kind of evidence-based approach is illustrated by the above observation. Clinicians are most unlikely to change their behaviour if they see no direct benefit for their patients; equally, they are unlikely to resist changing it if they are convinced of the appropriateness of the advice to do so. It is most likely the lack of conviction carried by the evidence-based message that leads to difficulties in achieving full implementation of evidence-based protocols. Although not part of the dissemination advice, in our view encouraging clinicians to undertake their own pragmatic trials in situations of true therapeutic equipoise may be one of the most compelling ways of modifying practice.

REVIEWS

It is certainly appropriate that the Department of Health puts funding into a number of projects aimed at increasing the availability of information concerning effective treatments, given the emphasis that the reforms place upon such treatments. The Cochrane Collaboration developed internationally in response to the call by a British epidemiologist (Archie Cochrane) for regularly updated critical summaries of relevant randomised controlled trials for the various medical sub-specialities (further details available at http://www.cochrane.org). The UK Cochrane Centre was established in 1992. The main output of the Cochrane Collaboration is the Cochrane Library, which is updated every 3 months and includes the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effectiveness, the Controlled Trials Register and the Review Methodology Database. Of particular relevance within mental health are the outputs from the Dementia and Cognitive Improvement Group, the Depression, Anxiety and Neurosis Group, the Developmental, Psychosocial and Learning Problems Group, the Drug and Alcohol Group and the Schizophrenia Group. The membership of these groups consists of researchers, health care professionals, consumers and others. There is, indeed, consumer participation throughout most of the organisation.

As of the fourth quarter of 2001, the Schizophrenia Group had produced 61 reviews, on topics including cognitive-behavioural therapy (CBT), electroconvulsive therapy and family intervention for schizophrenia. Review protocols (indicating forthcoming reviews) have been suggested for social skills programmes for schizophrenia, ‘as-required’ psychotropic medication for psychotic disorders in hospital patients and even educational games for mental health professionals. The Depression, Anxiety and Neurosis Group has produced fewer reviews and has a larger number of protocols outstanding. Old age psychiatry is represented by the Dementia and Cognitive Improvement Group, with 27 reviews listed. As with any collection relying on the contribution of a large number of individuals, the reviews vary in quality, although the rigour of the review methodology minimises the impact of such variation. The collaboration cannot be blamed for the paucity of evidence in certain areas. It is heartening that an increasingly large proportion of the reviews emerge with definitive recommendations concerning the value of particular treatment approaches. A recently updated review of antidepressants used in the treatment of bulimia nervosa is typical of what these reviews yield (Bacaltchuk & Hay, 2001). The authors found 16 trials, no differences between different classes of antidepressant and a relatively high rate of drop-outs across the active drug conditions. The reviews in general give a broad direction to clinical practice, but are only rarely helpful in difficult clinical choices (such as which antidepressant to choose for a particular patient with bulimia). Reviewers in a collaboration are not immune to bias, nor are those funding or performing the studies that they review. It is to be regretted that declarations of interest are not routinely stated. Notwithstanding the quality control imposed by the editors, the reader is still required to come to her or his own conclusions. Potential users of the Cochrane Library need to have access to the online technology, but should rapidly feel at home with searching the databases.

The modernisation agenda so prominent within the NHS and social care provision (Department of Health, 1997, 1998a) is aimed at driving up quality and reducing unacceptable variations in service delivery, with services becoming responsive to individual needs, regardless of age, gender, race, culture, religion, disability or sexual orientation. The centrality of quality of care provision to these developments is underlined by the publication of two White Papers, one for health and one for social care (Department of Health, 1998b and 1999a, respectively). High-quality services must surely be as effective as is reasonably possible. This pressure for modernisation and quality improvement has created a need for information that might guide the direction of development of local service providers. The Government has provided for this need by supporting the following initiatives.

The first is the NHS Centre for Reviews and Dissemination (CRD), based at the University of York, which has produced some very helpful and readable papers relevant to the mental health field. The Centre warns that such publications are likely to have a shelf-life of about a year, after which readers should check that there are no significant new findings available. There is more homogeneity in the quality of CRD reviews than in that of Cochrane reviews, which is understandable because fewer people are involved in their preparation. There is, however, wide variation in the usefulness of the individual reviews, owing to the availability of the evidence and the background clinical knowledge of the reviewers.

Reviews from the CRD provide detailed, in-depth results of various commissioned systematic reviews. They are available from the publications department of the University of York at relatively low cost. The reports on therapeutic communities and the effectiveness of mental health services (NHS Centre for Reviews and Dissemination, 1999c and 2001b, respectively) are of interest to psychiatrists. The latter considered the systematic review evidence in relation to the various standards of the National Service Framework for Mental Health (Department of Health, 1999b). Areas in which little or no such evidence was available were highlighted. Of note is the paucity of evidence in relation to interventions within hospital settings, interventions for carers of those with mental health problems and assessment of risk of imminent violence. Cost-effectiveness data are particularly sparse.

The second is Effective Health Care, a bimonthly bulletin based on systematic reviews and synthesis of research on clinical effectiveness. The publication is subject to extensive and rigorous peer review. Effective Health Care is distributed free within the NHS and is also available online from volume 2 (1996). It has published useful summaries on mental health promotion, deliberate self-harm, drug treatments for schizophrenia and psychosocial interventions in schizophrenia (NHS Centre for Reviews and Dissemination, 1997, 1998, 1999a and 2000, respectively). The last of these was extracted from Cochrane reviews. The bulletin covers supportive educational interventions (psychoeducational and family interventions), skills training (life skills, social skills, vocational skills), problem- or symptom-focused therapies (CBT, cognitive rehabilitation, token economies) and service provision (including assertive community treatment and community mental health teams). The articles are valuable because they summarise a large number of reviews and end with a set of general pointers. They are, however, not written with entertainment in mind.

The third is Effectiveness Matters, a publication complementary to Effective Health Care. It is written in a more journalistic style and summarises the longer reports in Effective Health Care. There are as yet few articles of relevance to psychiatrists in this series, but those that have appeared are, as advertised, highly accessible yet definitive. An interesting recent review on counselling in primary care considered six randomised controlled trials (RCTs) (n=772), in which modest benefits, in terms of psychological symptoms, were reported at 6 months (NHS Centre for Reviews and Dissemination, 2001a). Two trials comparing counselling with other treatments (CBT and antidepressants prescribed by the general practitioner) found no differences between approaches.

Such documents are immensely valuable compilations of evidence drawn up to explicitly specified criteria (both search and evaluation). Their advantage, which is at the same time their limitation (beyond the inevitable limitations of the evidence itself), is that they are carried out by review experts rather than by practising clinicians; consequently, the clinical recommendations drawn from the reviews may at times be crucially flawed. The criteria used by the reviewers principally concern the methodological details of a study, often evaluated relatively superficially. More subtle aspects of trials, for example adverse reactions to psychosocial interventions, might go unnoticed. Sometimes rarely practised treatments emerge from such reviews as interventions of choice, because such data as there are support them, but a general lack of interest in their effectiveness makes for a limited range of investigations. In general, the more expert input there is to reviews, the more appropriate the conclusions, but also the greater the potential for bias. The clinician can be fully confident of a reviewer's opinion only by reading the original source.

REPORTS OF INDIVIDUAL STUDIES

There are good databases of randomised controlled trials from which to seek information. For example, the Cochrane Controlled Trials Register provides good pointers and the US National Institutes of Health has a website (http://www.clinicaltrials.gov) that provides patients, family members and other members of the public with up-to-date information about clinical studies.

Many sensible and conscientious practitioners feel unable to read the large number of original papers published annually. Not only is there just too much to read, but a critical mind is also required to assess the quality of the work. Luckily, an increasing number of digests are available.

Evidence-Based Mental Health is a relatively new journal. Its aim is ‘to alert clinicians working in the field of mental health to important and clinically relevant advances in treatment (including specific interventions and systems of care), diagnosis, aetiology, prognosis/outcome research, quality improvement, continuing education and economic evaluation’. With the logos of the Royal College of Psychiatrists, the British Psychological Society and the BMJ Publishing Group on its cover and acknowledgement of the support of both McMaster University and the Department of Health, its pedigree could hardly be better.

It is living up to its promise and provides easily readable abstracts and commentaries (which aim to place the study in its clinical health care context) of about 25 papers in each issue. Papers must meet specified criteria to be considered for inclusion. The criteria of good control and high level of internal validity are rigorously applied. The descriptions of studies are full enough to enable the interested reader to come to a conclusion about their clinical implications. The commentators are experts of international distinction. Because the commentaries are never anonymous, there is something of a culture of the positive.

A limitation of this approach to dissemination is the inevitably selective coverage of the evidence base. Given the rate at which new studies emerge, no abstracting journal could possibly do justice to every sub-speciality in mental health. In the USA there are many specialised abstracting journals (covering, for example, child and adolescent psychopharmacology and even outcome methodology publications) that charge very high subscriptions and have commentaries of indifferent quality. Soon we may need an abstracting journal for abstracting journals. More seriously, such abstracting journals cannot consider the research context of any study adequately. Even with intelligent and informed commentaries they can only highlight the most recent, the newest and the most up to date. What is not in the news disappears from the radar screen of the practitioner whose sole source of information is the abstracting journal. Yet old evidence should not be confused with bad evidence. Commercial factors, such as pharmaceutical company funding for new products or research councils' interest in being associated with the development of new treatments, may better account for the prominence of novel methods than do the limitations of older techniques.

THE LACK OF EVIDENCE FOR THE EVIDENCE-BASED APPROACH

While the positivist epistemology of an evidence-based approach is both logically and ethically hard to dispute, there is surprisingly little evidence on the benefits to clinical work of an evidence-based perspective. Are clinicians who have more up-to-date knowledge more effective than their less ‘current’ colleagues? And if so, what is the effect size of this intellectual effort? How much of the variability in outcomes can it account for relative to harder-to-regulate variables such as clinician warmth and empathy, or simply the amount of time a clinician spends with a patient? There is a genuine worry that an overconcern with evidence opens the door to 15-minute consultations and 5-minute ‘meds-checks’. It is unlikely that a better awareness of the psychopharmacology literature will compensate for the absence of human contact and a full understanding of the patient's life circumstances and social as well as clinical needs. Roth (1999) cautions us that clinicians may be alienated from the evidence-based practice endeavour if they see it as a justification for favouring cheap, short-term interventions (where the research is easier to conduct) over longer-term therapies.

Research, with its focus on selected patient populations, cannot, of course, tell clinicians what to do with specific individuals. Clinicians have to ask the research database specific questions with an individual client in mind; they have to learn how to pose such questions to this massive accumulation of data and, even more challengingly, how to obtain meaningful answers. These are far more complex skills than that of generating a systematic review. Many hope that clinical guidelines can and will perform the role of translating research into practice increasingly well. The controversy that surrounds this issue is beyond the scope of this paper. However, it is perhaps sufficient for us to say that we cannot see guidelines, however sophisticated, ever substituting for clinical skill and experience, any more than knowledge of the Highway Code can substitute for skilled driving. Future research should perhaps look also at the skill with which clinicians implement particular treatments and the relationship of that skill to patient outcome.

A further concern among policy-makers is that there is an unacceptably long delay involved in publishing research findings. It takes time to set up a large, multi-site RCT, to recruit its sample, to achieve adequate follow-up and to analyse its data. Journals may then take further months, if not years, to consider, review and finally publish papers. Should the information not have its impact sooner? A BMJ editorial (Delamothe et al, 1999) indicated that certain journals might be willing to publish papers from authors who had put the (non-peer-reviewed) findings on the internet. This may seem a very positive move, but it skirts the problem of misinformation. Peer review performs an essential function of quality control for which no alternative mechanism has so far been identified. Perhaps a formal quality assurance rating should be attached to findings, depending on their level of scientific acceptability.

To many people evidence-based medicine implies a rigid adherence to the evidence. The evidence, in mental health at least, is not yet such that an artificial intelligence could replace the clinician. There is still room, most would argue, for clinical judgement. The evidence can inform, and thus refine, this judgement, reducing the variability.

All of this begs the question of the nature of the evidence. We cannot address in this short review the long-standing controversy concerning the balance between internal and external validity in RCTs. There can be no doubt that, in the imposition of methodological rigour to attain high levels of internal validity, trials can lose generalisability because of biased samples and a highly coherent, tightly controlled treatment protocol. The inherent tension between modernist evidence-based medicine and post-modernist user involvement was the basis of an interesting article by Laugharne (1999). In the USA, where these controversies are far more dramatically played out than in the UK, there has been a major move away from so-called efficacy studies towards studies which, at least in principle, are more generalisable because they provide a better analogue to everyday practice. It is not evident that the simple expedient of removing rigorous procedural constraints will itself generate data with clearer implications. In our view, considerable further intellectual work remains to be done before treatment choice in psychiatry will truly be made on the basis of empirical data alone. Because none of us is (or should be) able to avoid the increased pressure for accountability in our practice, it is crucial that all of us should be familiar with the empirical evidence, not primarily to guide decisions but to be clear about the limitations of existing scientific knowledge. We should be aware of the as yet unknown problems that remain in drawing conclusions from the research literature.

There is no viable alternative to evidence-based practice. Yet it is possible that the pendulum between research and practice has swung too far and the balance will have to be redressed by moving towards practice as a source of evidence. The proposal from the current National Director for Mental Health, Professor Louis Appleby, to introduce routine outcomes measurement has the potential to take us to this next phase of data-gathering. We think that it is important to be aware that many received truths of evidence-based practice have relatively shaky foundations.

Finally, a word about science and scientism. We all have a need for certainty, experience discomfort with not knowing, and risk an anxious retreat from ignorance into pseudo-knowledge (so characteristic of the early years of medicine). A scientific approach has obviously been extremely helpful and has saved many millions of lives. To argue against it is clearly unethical and destructive. But to argue for a mechanical reading of evidence equally ignores the risk of doing harm. Evidence needs to be carefully weighed. Multiple channels for evaluation are needed, and they need to be kept open and actively maintained. No self-respecting clinician will change long-standing practice overnight, nor would it be wise to do so. Evidence has to be read and evaluated, and placed in the context of what is possible, desirable and in keeping with existing opportunities. It should be remembered that in mental health at least, but probably also in most areas of clinical treatment, treatment method accounts for a relatively small proportion of the variance in outcome relative to the nature of the patient's problem, which may well interact with the skills of the attending clinician. This latter form of variance is to be cherished, not only because that is where the art of medicine lies, but also because it is in the study of that variability that future major advances in health care may be made (provided that we can submit it to empirical scrutiny).

References

Bacaltchuk, J. & Hay, P. (2001) Antidepressants versus placebo for people with bulimia nervosa. Cochrane Library, issue 4. Oxford: Update Software.
Delamothe, T., Smith, R., Keller, M. A., et al (1999) Netprints: the next phase in the evolution of biomedical publishing. BMJ, 319, 1515–1516.
Department of Health (1997) The New NHS: Modern, Dependable. London: HMSO.
Department of Health (1998a) Modernising Social Services: Promoting Independence, Improving Protection, Raising Standards. London: HMSO.
Department of Health (1998b) A First Class Service: Quality in the New NHS. London: HMSO.
Department of Health (1999a) A New Approach to Social Services Performance: A Consultation Document. London: Stationery Office.
Department of Health (1999b) National Service Framework for Mental Health. London: Stationery Office.
Laugharne, R. (1999) Evidence-based medicine, user involvement and the post-modern paradigm. Psychiatric Bulletin, 23, 641–643.
NHS Centre for Reviews and Dissemination (1997) Mental health promotion in high risk groups. Effective Health Care, 3, June.
NHS Centre for Reviews and Dissemination (1998) Deliberate self harm. Effective Health Care, 4, December.
NHS Centre for Reviews and Dissemination (1999a) Drug treatments for schizophrenia. Effective Health Care, 5, December.
NHS Centre for Reviews and Dissemination (1999b) Getting evidence into practice. Effective Health Care, 5, January.
NHS Centre for Reviews and Dissemination (1999c) Therapeutic Community Effectiveness: A Systematic International Review of Therapeutic Community Treatment for People with Personality Disorders and Mentally Disordered Offenders. CRD Report 17. York: NHS CRD.
NHS Centre for Reviews and Dissemination (2000) Psychosocial interventions for schizophrenia. Effective Health Care, 6, August.
NHS Centre for Reviews and Dissemination (2001a) Counselling in primary care. Effectiveness Matters, 5, August.
NHS Centre for Reviews and Dissemination (2001b) Scoping Review of the Effectiveness of Mental Health Services. CRD Report 21. York: NHS CRD.
Palmer, C. & Fenner, J. (1999) Getting the Message Across: Review of Research and Theory about Disseminating Information within the NHS. London: Gaskell.
Roth, A. (1999) Evidence based practice: is there a link between research and practice? Clinical Psychology Forum, 134, 37–40.