
Part II – The Cogwheels of Change

Published online by Cambridge University Press: 17 June 2021

Edited by George Ikkos, Royal National Orthopaedic Hospital, and Nick Bouras, King's College London


Mind, State and Society: Social History of Psychiatry and Mental Health in Britain 1960–2010, pp. 69–200. Cambridge University Press, 2021.

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/cclicenses/).

Chapter 8 Mental Health Law: ‘Legalism’ and ‘Medicalism’ – ‘Old’ and ‘New’

George Szmukler and Lawrence O. Gostin
Introduction

The 1959 Mental Health Act represented, by any standard, a ‘paradigm shift’ in the way in which mental illness was construed, not just in Britain but anywhere.

Its predecessor was the Lunacy Act of 1890. Kathleen Jones, in her influential History of the Mental Health Services, characterised that Act thus:

The Act itself is an extremely long and intricate document, which expresses few general principles and provides detail for almost every known contingency. Nothing was left to chance, and very little to future development.

From the legal point of view it was nearly perfect … From the medical and social viewpoint, it was to hamper the progress of the mental-health movement for nearly 70 years.1

Laws governing detention and treatment in the nineteenth century were developed in the setting of the expanding asylum system. The early enthusiasm for ‘moral treatment’ failed to live up to its promise. The numbers of those detained in the asylums grew far beyond what was originally envisaged.

Under the Lunacy Act 1890 admission to an asylum or licensed house depended on whether the case was private (involving a justice of the peace and two medical certificates) or pauper (involving a Poor Law receiving officer or the police, a medical certificate and a justice of the peace).2

Admission by inquisition, whose origins dated back to the fourteenth century, applied to so-called Chancery lunatics – expensive and affordable only to those with large estates and great wealth. The alleged lunatic could request a trial of their sanity by jury.

There were detailed regimes of visitation by Lunacy Commissioners – unannounced, at any hour, day or night. A report book for instances of mechanical restraint was kept; a medical certificate was necessary for each instance.

Discharge arrangements were complex and could differ for private versus pauper patients. They might involve the person signing the petition for the reception, the authority responsible for the maintenance of the pauper patient, two Lunacy Commissioners – one legal and one medical – or three members of the visiting Local Authority committee.

The Mental Treatment Act 1930 followed a Royal Commission on Lunacy and Mental Disorder 1924–6.3 The Commission proposed that mental illness should be viewed like any other illness, and its recommendation that treatment should not necessarily be contingent upon certification was accepted. The Lunacy Act was amended but earlier legislation was not replaced. The Act introduced ‘voluntary admission’ by written application to the person in charge of the hospital. Non-objecting but non-volitional patients, called ‘temporary’, could be admitted under a non-judicial certificate. An essential condition in the application for reception of a ‘temporary’ patient was that the person ‘is for the time being incapable of expressing (him)(her)self as willing or unwilling to receive such treatment’.4 For many clinicians, the meaning of this provision lacked clarity, accounting for a huge variation in its use – from 34 per cent to 0 per cent.5

Magistrates continued to be involved in overseeing compulsory hospital admissions. The Act authorised local authorities to set up psychiatric outpatient clinics in general and mental hospitals, but the hospital remained the focal point for psychiatric provision.

The Mental Health Act 1959

The Mental Health Act 1959 followed the key recommendation of the Percy Royal Commission, established in 1954, ‘that the law should be altered so that whenever possible suitable care may be provided for mentally disordered patients with no more restriction of liberty or legal formality than is applied to people who need care because of other types of illness’.6

The Act repealed all previous legislation.7 Informal admission was now the usual method of admission. For the first time since 1774, compulsory admission required no judicial authorisation. Patients could be admitted to any hospital or mental nursing home without any formalities. This replaced the ‘voluntary admission’ set down in the Mental Treatment Act of 1930, where the patient signed admission papers. Non-volitional patients could be admitted informally provided that they did not positively object to treatment.

Mental disorder was defined as ‘mental illness; or, arrested or incomplete development of mind (i.e. subnormality or severe subnormality); or, psychopathic disorder; or any other disorder or disability of mind’.

Psychopathic disorder was defined as a persistent disorder resulting in abnormally aggressive or seriously irresponsible conduct and susceptible to medical treatment. Persons were not to be regarded as suffering from a form of mental disorder by reason only of promiscuity or immoral conduct.

There were three kinds of compulsory admission:

  • Observation order: up to twenty-eight days’ duration, made on the written recommendations of two medical practitioners stating that the patient either (1) is suffering from a mental disorder of a nature or degree which warrants his (sic) detention under observation for a limited period or (2) ought to be detained in the interests of his own health and safety, or with a view to the protection of other persons.

  • Treatment order: for up to a year, to be signed by two medical practitioners. The grounds were:

    (a) The patient must be suffering from mental illness or severe subnormality; or from subnormality or psychopathic disorder if he is under the age of twenty-one.

    (b) He must suffer from this disorder to an extent which, in the minds of the recommending doctors, warrants detention in hospital for medical treatment; and his detention must be necessary in the interests of his health and safety, or for the protection of other persons.

  • Emergency order: following an application made by the mental welfare officer or a relative of the patient and backed by one medical recommendation. The patient had to be discharged after three days unless a further medical recommendation had been given satisfying the conditions of a treatment order.

A Mental Health Review Tribunal (MHRT) alone took over the previous watchdog functions of the Lunacy Commission (which had later become the Board of Control). Detention could be reviewed at the request of patients or relatives or at the request of the minister of health. The tribunal consisted of an unspecified number of persons: legal members appointed by the Lord Chancellor; medical members appointed by the Lord Chancellor in consultation with the minister; lay members having such experience or knowledge considered suitable by the Lord Chancellor in consultation with the minister.

A patient was discharged by the Responsible Medical Officer (RMO), by the managers of the hospital, by an MHRT, by the patient’s nearest relative – though with a possible RMO veto – or, in the case of subnormality or psychopathy, if the person had reached the age of twenty-five.

Guardianship, which had its origins in mental deficiency legislation and promised a degree of control in the community, could now be applied to those with a mental disorder.

Space does not allow us to discuss provisions for mentally disordered offenders in detail. In brief, courts could order admission to a specified hospital or guardianship for patients with a mental disorder of any kind, founded on two medical recommendations. Courts of Assize and Quarter Sessions could place special restrictions on the discharge of such patients. Power to grant leave of absence, to transfer the patient to another hospital or to cancel any restrictions placed on their discharge was reserved to the Home Secretary. Limitations were placed on the appeal of such patients to an MHRT. Those found ‘Not Guilty by Reason of Insanity’ were detained during ‘Her Majesty’s pleasure’ by warrant from the Home Secretary. The Mental Health Act 1959 provisions by and large resemble those of today, with a significant change concerning the power of the MHRT in 1982, discussed later. The medical profession was united in its enthusiasm for the new provisions, and the status of psychiatrists in the medical sphere was enhanced.

The Context of This Radical Change in Mental Health Law

A new optimism had emerged concerning the effectiveness of psychiatric treatment, with a new expectation that patients would return to their communities following a short admission. Jones talked in terms of ‘three revolutions’: pharmacological, administrative and legal.8

The ‘Pharmacological Revolution’

The standing of psychiatry as a medical speciality, based on scientific principles, was boosted with the introduction in the early 1950s of the antipsychotic drug chlorpromazine. Admissions had become shorter and much more likely to be voluntary. The antidepressants imipramine and iproniazid were introduced later in the decade. New psychosocial interventions, such as the ‘therapeutic community’ and ‘milieu therapy’, looked promising. There was a sense of a ‘therapeutic revolution’.

The ‘Administrative Revolution’

A 1953 World Health Organization (WHO) report (Third Report: The Community Mental Hospital) described new models for mental health services. Combinations of a variety of services were proposed, including ‘open door’ inpatient units, outpatients, day care, domiciliary care and hostels. Earlier treatment, it claimed, meant fewer admissions; chronic patients could be satisfactorily cared for at home or boarded out. The report significantly influenced the Royal Commission’s determinations.

There were other administrative considerations. In 1948, the new National Health Service (NHS) found itself responsible for the management of 100 asylums, each with its own regulations and practices. Their average population was around 1,500 patients. Patients with a ‘mental illness or mental deficiency’ occupied around 40 per cent of all hospital beds.9

The ‘Legal Revolution’

Some legal matters were complicated by amendments introduced by the National Health Service Act 1946. There was also a welfare state–influenced reimagining of law of this kind, now to be seen as an ‘enabling’ instrument as opposed to a coercive or constraining one.

Tackling the stigma of mental illness was another theme. There was agreement that stigma was heightened by what was called the ‘heaviness of procedure’ manifest in the magistrate’s order, linking in the public’s mind the deprivation of liberty for the purposes of treatment with that for the purposes of punishment.

Unsworth summarised the significance of the 1959 Act as a negation of the assumptions underlying the Lunacy Act:

The [1959] act injected into mental health law a contrary set of assumptions drawing upon the logic of the view of insanity as analogous to physical disease and upon reorientation from the Victorian institution-centred system to ‘community care’. … Expert discretion … was allowed much freer rein at the expense of formal mechanisms incorporating legal and lay control of decision-making procedures.10

A ‘pendulum’ had thus swung through almost its full trajectory, from what Fennell and others have termed ‘legalism’ to ‘medicalism’,11 a form of paternalism.

A warning was sounded in parliament, however, by Baroness Wootton:

Perhaps there is a tendency to endow the medical man with some of the attributes that are elsewhere supposed to inhere in the medicine man. The temptation to exalt the medical profession is entirely intelligible … but I think it does sometimes place doctors in an invidious position, and sometimes possibly lays them open to the exercise of powers which the public would regard as arbitrary in other connections.12

Mental Health Act 1983

Twenty-four years later, a new Mental Health Act was passed.13 While the general outline of the 1959 Act was preserved, there was a significant swing of the ‘pendulum’ towards a new form of ‘legalism’.

Among the changes introduced in the 1983 Act, the following were notable:

  1. For the first time, the idea of consent to treatment, even if the patient was detained, made its appearance in mental health law. A requirement for consent was introduced for certain hazardous or irreversible treatments – psychosurgery and surgically implanted hormones. These now required the patient’s consent and approval by a panel of three people, including a psychiatrist, appointed by the Mental Health Act Commission (see item 4). Further, consultation with two persons professionally involved with the treatment (other than the patient’s consultant) was needed. Electroconvulsive therapy (ECT) and the administration of medications for the mental disorder beyond three months required consent or a second opinion if the person could not or did not consent.

  2. An expanded role and enhanced training were introduced for ‘approved social workers’ in respect of a social assessment.

  3. Access to review of detention by the MHRT was expanded and now included patients under a 28-day ‘assessment’ order and automatic review with renewal of a treatment order. Patients became entitled to publicly funded legal representation.

  4. An oversight body concerning detained patients, the Mental Health Act Commission, was established. It will be recalled that there was no such body under the 1959 Mental Health Act.

  5. Patients suffering from ‘psychopathic disorder’ or ‘mental impairment’ could only be detained if their behaviour was ‘abnormally aggressive or seriously irresponsible’ and if treatment was likely to alleviate or prevent a deterioration of their condition (i.e. a ‘treatability’ criterion).

  6. A duty was placed on the District Health Authorities and local social services authorities to provide aftercare services for patients admitted on a treatment order or on some forensic orders.

In these domains, the rights of persons with a mental disorder were thus enhanced.

What Was the Context of These Changes?

In an extended history of mental health services, Jones began her analysis of the post-1959 period thus:

After the passing of the 1959 Act, it would have been reasonable to expect a period of consolidation and cautious experimentation; but within two years, the whole scene changed. In 1961, a new Minister of Health, Enoch Powell, announced a policy of abolishing mental hospitals, and cutting psychiatric beds by half … Opposition to this draconian policy was muted by three new theoretical analyses … opposed to mental hospitals for very different reasons.14

She was referring to Szasz, Goffman and Foucault. We shall come to them later in this section (see also Chapter 20).

The Ministry of Health’s A Hospital Plan for England and Wales followed Powell’s ‘Water Tower speech’. It proposed the restriction of hospital care mainly to the acute sector. Under a ‘parity of esteem’, this applied to psychiatry just as it did to the rest of medicine. Thus commenced a huge reduction in the number of hospital beds. At the same time, the Department of Health faced increasing fiscal pressures arising from the need to refurbish and maintain decaying public hospitals. The forces leading to the policy of deinstitutionalisation here overlapped with those acting to reduce admissions, including recourse to involuntary hospitalisation. Some noted an ‘unnatural alliance’ between civil rights advocates on the left, who distrusted the state and psychiatric expertise, and monetarist conservatives, who were concerned with the high institutional costs of mental health care (see also Chapter 31).

Highly publicised scandals involving mental hospitals continued – there were some twenty serious inquiries into maltreatment between 1959 and 1983. Faith in the effectiveness of medication – and in pharmaceutical companies, especially following the thalidomide inquiry – was faltering.

Proposals from some academic authorities that outcomes would improve if treatment were focused in the community rather than in hospitals were welcome to government. Evidence was offered that ‘total institutions’ like mental hospitals, in which almost every aspect of the resident’s life is subservient to the institution’s rules, far from being therapeutic, in fact contribute to a dehumanising erosion of personal identity, and to dependency and disability. Here Goffman’s 1961 Asylums and Barton’s 1959 Institutional Neurosis were influential.15

Joined to these criticisms was another set of voices denying the legitimacy of the psychiatric enterprise itself. Key figures in this loosely termed ‘anti-psychiatry’ movement included three psychiatrists – Thomas Szasz, R. D. Laing and David Cooper (see also Chapter 20). Szasz held that ‘mental illness’ was a ‘myth’ and had no kinship with ‘real’ illness; so-called mental illnesses were ‘problems of living’, not brain diseases.16 From a rather different perspective, Laing and Cooper argued that insanity was an understandable reaction of some to impossible family pressures or, indeed, a society gone insane.17 The experience of psychosis, they claimed, handled correctly – as opposed to conventional treatment – could be transformative.

In significant ways congruent with the ‘anti-psychiatry’ movement were the ideas of Michel Foucault. His Histoire de la folie, published in 1961, appeared in a much abridged form in English in 1965 (as Madness and Civilization) featuring a polemical introduction by David Cooper.18 Madness and Civilization examined how the notion of ‘mental illness’ assumed the status of ‘positive knowledge’, or objectivity, and how madness became irreconcilable with society’s growing valorisation of ‘productive’ citizenship. Foucault argued that psychiatrists’ expertise lay in asylum-based governance and non-medical practices, such as techniques for the normalisation of certain sorts of socially transgressive behaviours.

Thus, while these figures differed significantly in their theories, they had in common a critique of psychiatry’s basic tenets, its social role and the institutions in which these were realised. Their ideas found a place within a broader counterculture movement prominent in the 1960s and 1970s, which helped to bring them to the attention of a wider public.

A further significant influence was the civil rights movement in the United States, increasingly effective in the 1960s and 1970s. Civil rights were progressively asserted for groups subject to discrimination – African Americans, prisoners, women, persons with mental illness and persons with disabilities. An essential instrument was the law; a number of key legal decisions led to changes in institutional practices.

Increasingly publicised abuses of psychiatry in the Soviet Union during the 1970s and early 1980s suggested that, unless involuntary hospitalisation was subject to special scrutiny, arbitrary detention could follow.

The key player in fostering reform of mental health legislation in the 1970s was the National Association for Mental Health (now Mind). Founded in 1946, it started as a traditional voluntary organisation, a partnership between professionals, relatives and volunteers, aimed at improving services and public understanding. Its character, described by Jones as ‘duchesses and twin-set’, changed in the 1970s (see also Chapter 14).

The organisation was shaken by a serious, though failed, takeover attempt by Scientology. A ‘consumer’ orientation and a focus on human rights followed, marked by the appointment of Tony Smythe as director in 1974. He was previously secretary-general of the National Council for Civil Liberties (NCCL, later renamed Liberty). Mind soon established a legal and welfare rights service.

Larry Gostin, co-author of this chapter, an American lawyer and at that time recently a Fulbright Fellow at Oxford, was appointed Mind’s first legal officer in 1975.19 Both Gostin and Smythe had worked in the domain of civil liberties in the United States. While legal director for Mind, Gostin wrote A Human Condition, essentially Mind’s proposals for reforming the Mental Health Act 1959.20

He stated:

The [1959] Act is largely founded upon the judgment of doctors; legal examination has ceased at the barrier of medical expertise, and the liberty of prospective patients is left exclusively under the control of medical judgments which have often been shown in the literature to lack reliability and validity.21

Gostin challenged the assumption that compulsory detention automatically allowed for compulsory treatment. He proposed that all treatment to be given to an inpatient who cannot, or does not, give consent should be reviewed by an independent body. He argued for the concept of the ‘least restrictive alternative’ (which in turn required the provision of a range of alternative services). He also proposed an extended advocacy system.

Gostin took cases to the courts, ranging from the right to vote and consent to treatment to freedom of communication. A particularly successful example was the 1981 case, X v The United Kingdom, before the European Court of Human Rights, resulting in a new power for MHRTs to discharge restricted forensic patients. While at Mind, he formed a volunteer lawyers panel to represent patients at MHRT hearings.

Gostin subsequently received the Rosemary Delbridge Memorial Award from the National Consumer Council for the person ‘who has most influenced Parliament and government to act for the welfare of society’.

The 1983 Mental Health Act thus marked a swing of the ‘pendulum’, not especially dramatic, towards ‘legalism’ (or, as Gostin called it, ‘new legalism’). It differed from Lunacy Act legalism in its accent on the rights of detained patients and their entitlements to mental health care, rather than on ensuring that the sane were not mistakenly incarcerated as insane or on detecting grossly irregular practices.

The newly established Mental Health Act Commission faced a daunting task. In addition to producing a Code of Practice, its oversight function involved up to 728 hospitals and units for mental illness and intellectual disabilities in England and Wales, together with 60 nursing homes which could come under its purview if they housed detained patients.

Mental Health Act 2007: An Amended Mental Health Act 1983

The next Mental Health Act followed twenty-four years later, in 2007.

The reduction in the number of mental health beds continued apace – England saw an 80 per cent reduction between 1959 and 2006. An argument grew in the 1980s that, as the locus of psychiatric treatment was increasingly in the community, so should be the option of involuntary treatment. Early moves in this direction were the ‘long leash’ – the creative use of ‘extended leave’ (ruled unlawful in 1985), the introduction of non-statutory Supervision Registers in 1994 and then the passing of the Mental Health (Patients in the Community) Act in 1995. This introduced Supervised Discharge, also known as ‘aftercare under supervision’. This could require a patient to reside at a specified place and to attend places for medical treatment or training. Administration of treatment could not be forced in the community but the patient could be conveyed to hospital, by force if necessary, for persuasion or admission.

The 1990s saw a new turn – a growing public anxiety that mental health services were failing to control patients, now in the community and no longer apparently safely detained in hospitals, who presented a risk, especially to others (see also Chapter 28). The 1983 Act was labelled obsolete – as, for example, in a highly publicised publication, the Falling Shadow report, following the investigation of a homicide by a mental patient.22

A ‘root and branch’ review of the Mental Health Act 1983 was initiated by the government in 1998. Its purpose, as announced by the then Secretary of State for Health, Frank Dobson, was ‘to ensure that patients who might otherwise be a danger to themselves and others are no longer allowed to refuse to comply with the treatment they need. We will also be changing the law to permit the detention of a small group of people who have not committed a crime but whose untreatable psychiatric disorder makes them dangerous.’

This led to what Rowena Daw, chair of the Mental Health Alliance, a coalition of more than seventy professional organisations and interest groups, called a seven-year ‘tortured history’ of ‘ideological warfare’ between the government and virtually all stakeholder groups.23 The Mental Health Alliance was a unique development. Created in 1999, it incorporated key organisations representing psychiatrists, service users, social workers, nurses, psychologists, lawyers, voluntary associations, charities, religious organisations, research bodies and carers (see also Chapter 28).

Initially, a government-appointed Expert Committee chaired by Professor Genevra Richardson produced generally well-received recommendations founded on the principles of non-discrimination towards people with a mental illness, respect for their autonomy and their right to care and treatment. An impaired ‘decision-making capacity’ criterion was proposed, to be overridden only where there was a ‘substantial risk of serious harm to the health or safety of the patient or other persons’ and where there were ‘positive clinical measures which are likely to prevent a deterioration or to secure an improvement in the patient’s mental condition’.

However, as Daw notes:

Government, on the other hand, had different priorities. It was driven by its wish to give flexibility in delivery of mental health services through compulsory treatment in the community; and its fear of ‘loopholes’ through which otherwise treatable patients might slip. In its general approach, the government followed a populist agenda fuelled by homicide inquiries into the deaths caused by mental health patients. Public concern and media frenzy went hand in hand to demand better public protection. … The then Health Minister Rosie Winterton MP stated that ‘every barrier that is put in the way of getting treatment to people with serious mental health problems puts both patients and public at risk’.24

A ‘torrid passage’ (Daw’s words) of Bills through parliament involved two rejections, in 2002 and 2004, and finally, in 2007, an amending Act to the 1983 Mental Health Act was passed.25

Fanning has detailed the role of the containment of ‘risk’ in the generation of the 2007 Act.26 He notes a swing back from ‘legalism’ to a new form of ‘medicalism’ – or ‘new medicalism’. He explains:

The 1959 Act’s medicalism … trusted mental health practitioners to take decisions for and on behalf of their patients according to clinical need. By contrast, the 2007 Act’s ‘New Medicalism’ expands practitioners’ discretion in order to enhance the mental health service’s responsiveness to risk. This subtle shift in focus introduces a covert political dimension to mental health decision-making … the 2007 Act’s brand of medicalism follows an inverted set of priorities to those pursued in the 1959 Act.27

Fanning examines a link to the characterisation of contemporary society, for example, by Beck and Giddens, as a ‘risk society’ – one preoccupied with anticipating and avoiding potentially catastrophic hazards that are a by-product of technological, scientific and cultural advances (see also Chapters 10 and 17). ‘Risk’ replaces ‘need’ as a core principle of social policy. It also leads to a culture of blame if adverse events should occur. Foucault’s notion of ‘governmentality’ also enters Fanning’s account – risk here offering an acceptable warrant for governmental disciplinary measures.

Another factor was the claim – disputed by a number of authorities – that risk assessment instruments had now achieved an acceptable degree of scientific precision as valid predictors of serious violent acts by persons with a mental disorder. The evidence is that risk assessment instruments for low-frequency events, such as homicide, result in a large preponderance of ‘false positives’ (see also Chapter 10).28
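
The arithmetic behind the ‘false positive’ problem bears brief illustration; the figures that follow are hypothetical, chosen only to make the calculation transparent. Suppose an instrument has 90 per cent sensitivity (it flags 90 per cent of those who will commit a serious violent act) and 90 per cent specificity (it correctly clears 90 per cent of those who will not), and suppose the true rate of such acts in the assessed population is 1 in 1,000. Of 100,000 people assessed, 100 will commit such an act, of whom 90 are flagged; but of the 99,900 who will not, 9,990 are also flagged. Of the 10,080 people flagged, then, fewer than 1 in 100 are ‘true positives’ – the preponderance of false positives follows directly from the low base rate, however accurate the instrument.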

However, there is a problem with ‘risk’, Fanning argues. It is unclear what it really means. He claims:

Within reason, anything practitioners recast as evidence of a threat to the patient’s health or safety or to others is enough to justify the deployment of the compulsory powers … Consequently, it undermines legal certainty and impairs the law’s ability to defend patients’ interests.29

The 2007 Act increased professional discretion concerning risk by:

  • simplifying and arguably broadening the definition of ‘mental disorder’;

  • abolishing the ‘treatability test’ for psychopathy, requiring only that treatment be ‘appropriate’ – previously treatment had to be ‘likely’ to be effective; now effectiveness need only be its ‘purpose’;

  • broadening the range of professionals able to engage the compulsory powers, by replacing the role of the ‘approved social worker’ with an ‘approved mental health professional’, who could be a psychologist, psychiatric nurse or occupational therapist. This reduced the separation of powers that existed between those with clinical and social perspectives; and

  • introducing Supervised Community Treatment (or Community Treatment Orders, CTOs), effectively strengthening supervision after discharge by the imposition of a broad range of conditions. A failure to comply with the treatment plan may result in recall to hospital; and if treatment cannot be reinstituted successfully within seventy-two hours, the CTO may be revoked with reinstatement of the inpatient compulsory order. The patient may appeal against the CTO but not the conditions.

The reforms represented a substantial shift away from a focus on individual rights and towards public protection. Exceptions were a strengthening of the requirement of consent for ECT in a patient with decision-making capacity (except where treatment is immediately necessary either to save the person’s life or to prevent a serious deterioration of their condition) and a right to advocacy (by an ‘Independent Mental Health Advocate’) for detained patients and those on a CTO.

Fanning goes on to claim that the ‘new medicalism’ maintains a ‘residual legalism’ in the amended Mental Health Act, which:

arguably has a sanitising effect by conferring a veneer of legitimacy on ‘sectioning’ processes which may now be less certain, less predictable and primarily motivated by concern for public safety … Far from being a minor statute which changes very little, the 2007 Act represents an entirely new moment in English mental health law and policy.30

At the same time as deliberations were in progress over reform to the Mental Health Act, parliament was passing the Mental Capacity Act 2005 in which the involuntary treatment of patients in general medicine and surgery was to be based on an entirely different set of principles – ‘decision-making capacity’ and ‘best interests’.

Post-2007 Developments

Two drivers of reform garnered significant support during the first decade of the twenty-first century. The first was the proposal for capacity-based law or a more radical version, known as a ‘fusion law’; the second was the adoption by the United Nations (UN) in 2006 of the Convention on the Rights of Persons with Disabilities (CRPD). Both aim at the elimination of unfair discrimination against people with a mental disorder.

A ‘fusion law’ refers to a single, generic law applicable to all persons who have an impairment in the ability to make treatment decisions, whether the cause be a ‘mental disorder’ or a ‘physical disorder’.31 It combines the strengths of the Mental Capacity Act 2005 – that is, a respect for autonomy, self-determination and the right to refuse treatment, almost entirely absent in the Mental Health Act – with the detailed regulation of involuntary detention and treatment – its authorisation, by whom, where, for how long, review and appeal mechanisms, all well specified in conventional mental health legislation but absent from the Mental Capacity Act. Involuntary treatment is restricted to those who lack ‘decision-making capacity’ and where it is in the person’s ‘best interests’. Northern Ireland passed such an Act in 2016 following the path-breaking Bamford Report of 2007.

The UN CRPD presents a huge challenge to conventional psychiatric practice. A number of authorities, including the UN CRPD Committee established by the UN to oversee the convention, hold that any ‘substitute decision-making’ (except perhaps by a proxy appointed by the person with a disability who will respect the person’s ‘will and preferences’) is a violation of the CRPD. Thus, treatment against the objection of a patient is prohibited. It remains to be seen how the consequent debate with the many critics of this interpretation will play out.32

Scotland and Northern Ireland

Space permits only a brief account of the salient features of Scotland’s legislation. Until 2003, Scottish mental health law was by and large similar to that of England and Wales (though it did retain in its 1960 Mental Health Act an oversight body – the Mental Welfare Commission – and a role in compulsory admissions for a Sheriff). However, the Mental Health (Care and Treatment) (Scotland) Act 2003 marked a substantial departure:

  • the principle of autonomy is prominent;

  • it stipulates ten Guiding Principles (with no reference to risk or public safety);

  • patients must be treatable if compulsion is to be used;

  • while a criterion of risk to self or others is retained, an additional criterion must be met – a ‘clinically significant impairment of treatment decision-making ability’;

  • compulsory treatment orders, inpatient or in the community, must be authorised by a Mental Health Tribunal;

  • there is a right to independent advocacy;

  • there is a special recognition of advance statements – a failure to respect the person’s wishes needs written justification, which must be reported to the patient, a person named by the patient, a welfare attorney (if there is one) and the Mental Welfare Commission; and

  • there is a choice of a named person rather than the nearest relative.

Few would deny that this law is far more rights-based than that in England and Wales. As mentioned in the section ‘Post-2007 Developments’, Northern Ireland has taken reform even further, having passed a ‘fusion law’.

European Convention on Human Rights (Human Rights Act 1998)

It is beyond the scope of this chapter to give more than a brief reference to the influence on UK mental health law of the European Convention on Human Rights – later the Human Rights Act (1998). In 2007, Baroness Hale, then a member of the Appellate Committee of the House of Lords, summarised the impact as modest.33 An exception was the ‘Bournewood’ case concerning a man (HL) with autism who was admitted to a hospital as an informal patient; although he did not, or could not, object, it was apparent that he would not be allowed to leave if he were to wish to do so. His carers, denied access to him, initiated a legal action claiming that he was being detained unlawfully. This progressed with appeals through the English court system up to the House of Lords, which in 1998 decided HL’s admission was lawful. His carers then took the case to the European Court of Human Rights, which, in 2004, ruled it was unlawful. This resulted in the Mental Health Act 2007 appending Schedules to the Mental Capacity Act establishing ‘Deprivation of Liberty Safeguards’ covering non-objecting hospital inpatients or care home residents who lacked decision-making capacity and, in their best interests, were not allowed to leave.34

Service User Movement

Similarly, limitations in scope only allow a brief consideration of the influence of service user organisations on changes in mental health law (see also Chapters 13 and 14). Service users had little direct involvement in the development of the 1983 Mental Health Act. While patient groups did form in the 1970s, the service user voice was not significantly heard until the mid-1980s.35 It was reasonably prominent in the debate leading to the 2007 Act, and the major service user organisations joined the Mental Health Alliance. Within or outwith the Alliance, however, their voice was largely ignored by government.

Conclusion

We have traced the course of mental health legislation from 1959 to 2010. The broad sweep of the changes can be summarised schematically, allowing for a degree of simplification. We have adapted the idea from Fanning of locating each law within a space created by two orthogonal dimensions (Figure 8.1).36 While Fanning finally did not support the schema, we propose that, by reconceptualising the dimensions, it proves useful. The first dimension has at its poles ‘legalism’ versus ‘clinical discretion’ (or ‘medicalism’). The second has at its poles ‘respect for autonomy’ (or an emphasis on ‘decision-making capacity’ and ‘consent to treatment’) versus ‘protection from harm’ (especially to others). The movements in this ‘legal–clinical–social’ space from the 1890 Act to the 1959 Act, then to the 1983 Act and the 2007 Act can be traced. The Richardson Committee’s 1999 recommendations and the 2003 Scotland Act are also shown, as are the major directions taken in the first decade of the twenty-first century – the Northern Ireland ‘fusion law’ as well as the UN CRPD Committee’s interpretation of the Convention (the claim that ‘substitute decision-making’ is to be abolished means that it is located at the top-right extreme or perhaps falls outside the space altogether).

Figure 8.1 Schematic presentation of mental health legislation changes, 1890–2016

It is pertinent to ask what happened to the number of involuntary admissions over the period covered. In England, they declined steadily between 1966 and 1984, rose quite sharply and steadily until 2000 and then remained flat until 2010. The numbers then rose very sharply again (Figure 8.2). The contribution of changes in mental health legislation is difficult to determine. There were increases in involuntary admissions after both the 1983 and the 2007 Acts but other socio-political and administrative changes also occurred. Perhaps the most interesting observation is the stable rate between 2000 and 2008. This period was characterised by a substantial investment in community mental health services, suggesting that resources are a major determinant of the rate of involuntary admissions. Consistent with a resources contribution is the steep rise from 2009, a period of austerity.

Figure 8.2 Involuntary admissions for England, 1964–2014

Gostin observed that there is perhaps no other body of law which has undergone as many fundamental changes in approach and philosophy as mental health law.37 We have seen that such law reflects shifts – in Jones’s words, ‘good, bad or merely muddled’ – in social responses and values attaching to an enduring and troubling set of human problems.38 We agree with Fennell that, while such laws often may not obviously greatly affect substantive outcomes, they are important, if for no other reason than that they require professionals – and we would add the state and civil society – to reflect on, explain and justify what is being done.39

Key Summary Points
  • The 1959 Mental Health Act represented, by any standard, a ‘paradigm shift’ in the way in which mental illness was construed, not just in Britain but anywhere. The ‘legalism’ of the 1890 Lunacy Act was replaced by ‘medicalism’.

  • While the general outline of the 1959 Act was preserved in the 1983 Mental Health Act, there was a significant swing of the ‘pendulum’ towards a new ‘legalism’.

  • Significant influences were the successes of the civil rights movement in the United States in the 1960s and 1970s that progressively asserted rights for groups subject to discrimination and the establishment of a legal and welfare rights service by the National Association for Mental Health (Mind).

  • An argument grew in the 1980s that, as the locus of psychiatric treatment was increasingly in the community, so should be the option of involuntary treatment. The Mental Health Act 2007 represented a substantial shift away from a focus on individual rights and towards public protection.

  • Two drivers of reform garnered significant support during the first decade of the twenty-first century. The first was the proposal for capacity-based law or a more radical version, known as a ‘fusion law’; the second was the adoption by the UN in 2006 of the Convention on the Rights of Persons with Disabilities (CRPD). Both aim at the elimination of unfair discrimination against people with a mental disorder.

  • Changes in mental health law are traced within a space formed by two key dimensions: ‘clinical discretion’ versus ‘legalism’ and ‘autonomy’ versus ‘protection from harm’.

Chapter 9 Ken Clarke in Conversation with Peter Tyrer: My Role in Justice and Health

Peter Tyrer
Introduction

This interview was carried out with Ken Clarke on 24 September 2019 at the House of Commons. It was transcribed and subsequently edited by Ken, and his text is placed in italics. The remainder of the text is written by Peter Tyrer.

Early Background

Ken Clarke and I were at the same Cambridge College, Gonville and Caius College, between 1959 and 1962. In the first year, we were in adjacent blocks in Tree Court, a square that is now deprived of trees but still has marvellous wisterias on the walls. Our paths hardly ever crossed. Ken soon became very involved in university politics and I was too preoccupied in organising a botanical expedition to Central Africa. Ken was attracted to Labour at first – his grandfather was a Communist – but then quickly changed to another party whose nature all now know. He was offered his place at Caius before being offered an Exhibition at Oxford University, which he turned down as he felt they were too pompous. Some jokers have suggested this was the reason Ken never became prime minister, as eleven of the last fourteen occupants of the position went to Oxford. The last to be educated at Cambridge was Stanley Baldwin.

The only other point of note that struck me when Ken was at Cambridge was his accent. He came from Langley Mill, a village that is in the middle of D. H. Lawrence country. Although he won a scholarship to Nottingham High School, very eminent in the Midlands, he must have had a fair dose of the typical and not unpleasant local accent and would have understood that ‘silin’ dahn in Stabbo’ meant it was raining very hard in Stapleford (also near the Derbyshire border). He got on very well with the local miners, including those who had known D. H. Lawrence’s father. Ken’s father’s associates did not think much of D. H. Lawrence, who they felt had got above himself by going away and ‘writing mucky stories’, so this gives you an idea of Ken’s social milieu.

By the time he went to Cambridge, however, he was determined to become an MP and his accent had to change (remember this was 1959). He developed a Bertie Wooster-ish la-di-dah accent with exaggerated rounded diphthongs, but this quickly changed into the voice everyone now knows. Ken is the only politician to be described as ‘blokeish’, a term impossible to define, but I think it really describes his voice. He doesn’t talk down to people in any way and, even when disagreeing strongly, he retains an ‘I’m just an ordinary chap trying to get my point over’ manner that never sounds offensive. Mrs Thatcher had to have elocution lessons, but from his early time in politics Ken never needed them.

The main point of describing this background is to show that Ken is not your ordinary politician. He achieved almost all the top positions of state – failing to be elected leader of the Conservative Party on three occasions – and could be regarded overall as highly successful, by dint of great intelligence and very hard work. He was constantly amazed at how lazy and ill-informed so many of his colleagues were.

Consistency

The other thing that is important to emphasise in understanding Ken Clarke is his consistency. It is often said that politicians have to bend in the wind or be toppled, but Ken was an exception. Once he had come to a view, he held on to it unless there was compelling evidence otherwise. The main reason I suspect why he was not elected leader of the Conservative Party was his belief that the UK should be an intrinsic part of the European Community. Unlike many other politicians who changed their views greatly over the years, Ken stuck to his opinion and, if anything, has stressed it even more strongly in these Brexit years.

In his personal life, he has been equally consistent. He was married to Gillian, whom he met as an undergraduate when at Caius, for more than fifty years. She died in 2015 from cancer, and one of her last acts was to vote for her husband in the 2015 General Election. She was an absolute rock to him, especially in his early political years when he commuted between Nottingham and London almost daily.

He is even consistent in his support of football teams. He supports both Notts County and Nottingham Forest and tries to see one of them every Saturday (they play at home alternately); he has kept loyal to both. Even though Notts County has drifted down the leagues and is in danger of going into liquidation, while Nottingham Forest soared to great heights in the 1970s and won the European Cup twice, he has stuck with both teams.

Good Humour and Unflappability

In his memoir, Kind of Blue,1 beautifully reflecting both his love of jazz and his semi-detached connection with the Conservative Party, Ken several times refers to the very true perception that ‘I was so laid back I was almost horizontal’. He is not wounded by criticism – his wife was, and tried to defend him – but he regards it as useful ammunition for his rejoinders, which are often very witty. His negotiations with the British Medical Association (BMA) over a new contract and for an internal market involved its chairman, Anthony Grabham,2 and Ken used to point out his name repeatedly to suggest that doctors were only interested in their wallets. Yet he also agreed that the BMA, in the end, had won the public relations battle over the new contract, and in my interview with him he conceded that the BMA was more formidable than any other trade union because it could use the public as its main ally (see also Chapter 28).

Relevance of These Characteristics to Negotiations in Mental Health

I do not share the political views of Ken Clarke, and it is equally possible to describe him as a highly intelligent but blinkered politician who has always seen the world through a kind of blue lens. Yet, in the interview I conducted with him, he was absolutely straight. ‘Go ahead, start your recording, ask me anything you want.’ I have to praise that in a politician. The changes he made to the transcript were very small, essentially typographical errors and improvements to clumsy expressions; and never once did he say ‘I’ve changed my mind over that now’ – consistency or stubbornness, you decide.

Ken was also highly regarded as a constituency MP. He represented the parliamentary constituency of Rushcliffe to the east of Nottingham for forty-nine years and improved the lot of his constituents greatly over that time. (I know, as I look over with envy from where I live in the adjacent constituency of Newark, which has been much less successful in gaining funding and new initiatives.) This constituency includes Saxondale Hospital, the former county mental hospital of Nottinghamshire, and Ken was aware of its assets and its failings. What he found most disturbing when Saxondale Hospital was due to close was what he regarded as sound evidence that some of the county dignitaries had managed to place their difficult and embarrassing relatives in the hospital for no good reason and that they had languished there for years.

My Interview

I was far from clear that I would be allowed to tape the interview with Ken when I saw him and only had a few notes jotted down. I was also limited by time. I saw him in Portcullis House, the new base for politicians opposite Big Ben, and he had to return to the House of Commons for a vote on the same evening. I apologise for any important omissions.

Q. Why did you introduce NHS Trusts?

One of the things I was concerned about when I became Secretary of State was to try and make the service much more accountable to its patients, and to stop it being so borne down by bureaucracy and dominated by industrial relations problems. The whole point of the purchaser/provider divide was to make clear what money was being spent on locally and to spend that money on the best outcomes for patients, the service and others in that locality. The service was for the locality, a strikingly novel idea that caused a lot of controversy. I had to reform the awful way in which the service was managed, with responsibility diffusing out from the centre, not very effectively, saying how the service should be run in the rest of the country. We needed a better way of doing things. The idea of Trusts was to give more autonomy to the local users of services so they could answer for their performances to their local public.

I intended to pass responsibility downwards from oversized headquarters in London and secure accountability upwards so the general public could see where decisions were made. This led to some interesting battles over the next three years but it generally went well.

Q. Was there ever a real risk that the John Moore proposal to have an insurance-based replacement for the NHS was likely to happen?3

There definitely was. If John Moore had not become unwell, the proposal would have gone ahead, as Margaret Thatcher and John were quite agreed that this was precisely what they were going to do. She was convinced the American system was much superior to our own, with a system of personal insurance and the state paying the insurance premiums for those who could not afford it. When I became Secretary of State I quickly became aware that I was expected to take over the implementation of this policy.

But I quickly came round to the view that this was quite unacceptable. Nigel Lawson helped me by resisting Margaret Thatcher’s urgings4 that those who paid these premiums should get tax relief on their private health insurance contributions. It took me many long meetings to persuade her (Margaret did listen to argument) to abandon her scheme, returning to mine, which at least had a degree of market-related responsibility, with a purchaser/provider split making sure that there was some fiscal discipline.

Before the purchaser/provider discussions nobody really knew what the NHS was spending its money on, and the idea that the money given might be linked to outcomes was never contemplated. One trouble was that patients did not belong to any trade unions, so their views about what was needed were never heard.

I never pretended I was introducing the final model for the National Health Service. I wanted to change direction by putting in a framework that could be developed in the future. To some extent this has been achieved, but now Simon Stevens of NHS England has developed the idea of integrating hospital and community care, diluting the purchaser/provider approach.5

Q. You mention in your memoirs that the BMA was the most difficult trade union you ever had to negotiate with. You were not very complimentary about the medical profession generally. Apart from Donald Acheson (then the Chief Medical Officer of Health), who you admired but who was not really part of the system, was there anyone in the BMA who you looked to for guidance and help?

Not really. Well, the BMA was a trade union, even though at times they pretended not to be, with all sorts of high-minded statements for the general public, but basically, like all trade unions, they were always concerned about pay and conditions for their members. When I first arrived I was always advised I had to make concessions to the BMA. When I met Tony Grabham (muffled laughter about the significance of his surname) he tried to frighten me, telling me that all the previous Secretaries of State he had dealt with had folded under the strain and that in the case of Barbara Castle he had reduced her to tears. But I wasn’t going to buckle under this, even when they got nasty and went public with advertisements pillorying me.6

Q. Do you think doctors were treated too generously in the pay decisions of 2004?

Yes, it was generous. And the government at the time, like almost all governments, was trying to buy popularity. Of course, as you know, doctors have always been regarded as a special case. When Aneurin Bevan said you had to stuff their mouths with gold he set the scene for the future. When the new contracts reduced hours for much of their work while increasing pay, most were rather surprised: they were going to be paid more for doing less.

Q. Was the Conservative government involved in any way with the introduction of the Dangerous and Severe Personality Disorder Programme (DSPD) introduced by the Labour Government in 1999?

I have no recollection that at any time we were considering such a programme. I think you have to give the responsibility to the Labour government. The trouble with policies like this is that you now have this semi-presidential system where everyone listens to advisers, and so when public opinion gets sufficiently animated new policies are introduced without ever having been thought through.7

Q. What are your views about the Private Finance Initiatives (PFIs)?

We introduced the idea of Private Finance Initiatives in the 1992–7 government but very few took them on, as the Labour Party said they would abolish them if they came to power. When of course they were elected they immediately started to introduce them. They have unfairly been given a very bad name. This is mainly because of the appalling incompetence of the people who negotiated these contracts, with payments that extended far beyond completion of the project. In the end they turned out to be more expensive than public financing. At the time there was an obsession with keeping to targets for the public finances. Gordon Brown kept saying, in his first two to three years, how prudent he was going to be as Chancellor.8 But the government was not bothered about anything that was not on the books, so the PFIs were not down anywhere amongst the figures and the government’s reputation for fiscal prudence could not be damaged in any way. My only explanation for why the health authorities were allowed to go mad on PFI contracts is that the responsibility for the payments would pass on to a Minister appointed many years into the future.

Q. Can I give you the example of one Trust close to me, Sherwood Forest Hospitals Foundation Trust, where the final bill for the new hospital is going to be more than thirty times the value of the initial cost. How could this happen?

The problem was that the Trust people had no experience of any kind to negotiate the cost of such complex deals, so the people from these private organisations who negotiated these deals could not believe their luck, and took them to the cleaners. So the argument was quite simple. ‘If you want your new hospital you can have it now. The payment for it will be made by someone else down the road, so don’t worry.’ So a sensible policy, well thought out and prudently applied at first, was quite discredited. Even in the early days when I was trying to get PFI going my approach was to say to each private financier that they could only get a return on their investment if they accepted a proportion of the risk.

The trouble was that the structure of these negotiations was quite unsuitable for the right contracts. People were appointed in the NHS with no experience, often on short-term contracts so they knew they would not have to pick up the flak down the line. So, in the Department of Health headquarters, we had some bright spark, hoping to make his way up the ladder, negotiating with British Oxygen’s finest, knowing absolutely nothing about the oxygen market. So, as you can imagine, the results were very satisfactory for British Oxygen.

Q. Do you think there is a political solution to the imbalance in the funding of the NHS where a large proportion of the funding is going to elderly people like you and me, just to give them a few extra years of life, not always ideal ones, when it should be going to younger people with their lives ahead of them, including a large proportion with mental health problems?

The reason why we have not been able to reverse this trend is that the people in the mental health system do not have a voice. I recognised this when I became Secretary of State and had to close down these Victorian institutions like Saxondale Hospital in my constituency, where people had been kept for years with no voice and no influence. These old ‘asylums’ were absolutely shocking places so it was perfectly sensible to introduce this policy of care in the community, provided it was integrated with hospital care by psychiatrists and others in a coherent way.

The trouble is that care in the community was extremely unpopular. People noticed that suddenly there were strange people out in the street loitering by traffic lights and felt that they ought to be locked away somewhere like Mapperley or Saxondale (if they were in Nottingham),9 and not allowed to roam the streets.

The public believe that whenever there is extra money in the health service it ought to be spent on cancer patients or children, not on mental health, and populist Secretaries of State in populist governments accede to these requests, especially on cancer, a subject that terrifies the public so they feel if we spend more it might go away.

A more careful and balanced approach is possible but it does require a well-argued political defence. I separate that from the other problem you are touching on, which is the demographic one, the changing proportion of old and young people in society. The reason why there is inexorably rising demand is the increasing age of the population, which is creating a crisis in our health care system. We are going to have to find some way of meeting this demand. The burden of taxation is going to fall increasingly on the care of elderly people and this is going to become unbalanced.

The big change is that so many people in their older years are going to need extra care. It is an extremely tricky political problem that has not been properly faced.

Q. But there has been an inter-Party group discussing this over some years. Is there any way in which this issue might be taken out of politics?

No, it can never be taken out of politics. The idea that it can be is quite wrong. Every time a bed-pan is dropped in South Wales there is a problem, and it falls to the Secretary of State to deal with the consequences.

It will remain political while we have a totally free tax-paid health care system, even when it is linked to a social care system that is not comprehensively paid for at all by government.

Q. Should the NHS take over social care?

Well, we can’t afford it. It’s as simple as that. The debate about social care needs to be updated. It is perfectly obvious that if you provided the figures that would be necessary to pay for free care out of taxation it would be rejected out of hand. We need to introduce a more rational and fairer system. We need to come round to the idea that social care cannot be totally free and that we cannot avoid some financial burden. It may be possible to organise a national insurance system, but I’m not completely sure about that. We need to have something that takes account of the individual needs of patients and the responsibilities of society. The idea that somebody in a £2 million house should not be expected to pay for the costs of their care is ridiculous. Yet if this person did pay something, it should not mean that someone on a low income who has worked and struggled to pay for their needs over a working lifetime should also be required to pay. This would be clearly farcical.

The reason why we do not have a policy on social care is that no government has had the courage to produce one. Although there have been many attempts to resolve this all of them have proved to be deeply unpopular.

The trouble with our current populist system of government is that all decisions seem to be made by opinion polls. But opinion polls change, so they are no substitute for a properly organised policy. Anyone who suggests that social care should be paid for completely out of taxation would not be able to defend this at a time when there is rapidly increasing demand. So you then have to work out what means testing you are going to apply. A fairer, but not instantly unpopular, solution for the twenty-first century is to have some private insurance arrangements set up for social care.

We have to recognise where we are now. Both our education and health care systems are immeasurably better than when I was Secretary of State. Now we want someone with a well-thought-out plan for longer-term reform, someone who can put their head down and not be worried about being unpopular. One of the big things about Margaret Thatcher was that she was not terribly interested in the reactions of the general public; she never looked at popular opinion, as she was a conviction politician.

There are two rules that need to be understood about reform. First, all change is resisted at first; and second, anything that might cost more money is equally resisted. So you have to be aware of that from the beginning when you are making changes.

Q. Lastly, I want to turn to your time at the Ministry of Justice, when you were trying very hard to bring down the prison population. Why has this been so difficult when in other countries this aim has been more successful?

It is very disappointing. Even in America, in places where they have hard-line Republicans, they are beginning to reduce incarceration rates. We should be able to do better.

Q. In current forensic practice there are strong moves to improve the environments of people with significant mental health problems and to obtain early release, but there are many obstructions in the way. What can we do to help here?

There are good people in correctional institutions who recognise that getting people out of prison into good environments is the key to progress. There are far too many people in our prisons who are mentally ill and who require proper treatment for their conditions. Of course this sometimes requires secure accommodation. But we have to acknowledge that our current prisons do not allow adequate intervention for any of the mental health issues that prisoners face.

At present we are having another of these populist drives to be tougher on law and order so we can bring more people into overcrowded institutions where it is almost impossible to do any worthwhile therapeutic work with them. There isn’t the space, there aren’t the personnel and almost all the effort is wasted. Currently we have Priti Patel who is waving this banner to be ‘tough on crime’.10 But it started long before her. Michael Howard, David Blunkett and others did quite a lot to raise the prison population dramatically.11 Priti is going to do her best to make her policy at the next general election a repeat of the old ‘hang ’em and flog ’em’ mantra. But there have been reactions against this. Michael Gove was on our side;12 he had sensible solutions.

But, as for me, I have to say I failed to get a change in policy. I discussed it with David Cameron very frequently.13 He listened, but he was too nervous about the Daily Telegraph to do anything.

When we had the Thatcher government it was different. When we had good policies that we believed were right we implemented them. But we had to get the timing right. We knew they would be unpopular at first but over time they would be accepted, so we had to bring them in early. Nowadays parliaments seem to be much shorter. It is also different as Prime Ministers now employ ranks of public relations specialists who seem to make all the decisions.

Q. Is there a place for conviction politics nowadays?

Of course. The time will come again. I regard myself as a conviction politician but at present we are in a small minority.

Conclusion

The interview finished and Ken popped across to the House of Commons for one of his final debates. In the chaotic last weeks of the 2017–19 government, there was doubt as to who was running the country and one proposition put forward was that he, as Father of the House, might be prime minister for at least a week or so. It did not come to pass, but it would have been a fitting end to a career which, despite the gloomy words that Enoch Powell maintained applied to all politicians, certainly did not end in failure.

Chapter 10 UK Mental Health Policy and Practice

Jon Glasby , Jerry Tew and Sarah-Jane Fenton
Introduction

Over five decades, we have seen major changes in mental health policy for adults and for young people, often influenced by shifts in the broader social, political and economic environment. This chapter summarises some of the main changes, drivers and issues, including the introduction of care in the community and the emergence of new discourses around recovery, marketisation and risk during the period 1960–2010.

From Asylum to Community Care

The Mental Health Act 1959 was a step change from previous legislation in foregrounding the provision of treatment, rather than mere confinement, as the core purpose of mental health services. This reflected wider changes in services, with informal treatment becoming available not only for inpatients but also for those outside hospital (with 144,000 outpatient clinic attendances in 1959 compared to virtually none in 1930). However, mental health was still very much a ‘Cinderella service’, with Mental Health and Mental Deficiency Hospitals containing 40 per cent of NHS inpatient beds but receiving only 20 per cent of the hospital budget.1

With a populist’s ability to identify issues which chimed with the mood of the age, the Conservative health minister, Enoch Powell, saw the old Victorian asylums as being out of step with emerging expectations of a modern Britain. As well as being overcrowded and offering poor standards of care, their very architecture resonated as an uncomfortable symbol of a bygone age of Poor Law and Workhouse. In 1961, Powell captured this in his famous ‘Water Tower speech’ (for more details, see Chapters 1, 31).2 He also recognised the attitudes, customs and practices (both social and professional) which were embodied in these buildings – the ‘sheer inertia of mind and matter’ – that would need to be overcome if services were to be transformed.

This landmark speech was followed by A Hospital Plan for England and Wales, which proposed the development of small-scale psychiatric units in District General Hospitals, with local authorities providing a full range of community services.3 Much of this chimed with the aspirations of the more progressive elements within the mental health professions, who were keen to move out from the isolation (and perceived inferiority) of the old asylums and become part of mainstream health and social services provision. It suited both those with a more biological persuasion, with its emphasis on treatment rather than containment, and the emerging movement of social psychiatry with its emphasis on the social aspects of rehabilitation. However, despite the recognition of what was needed, and cross-party support for this agenda, financial pressures and institutional resistances continued to undermine any substantial implementation of community care. Although inpatient numbers were falling (from 160,000 in 1954 to 100,000 in 1974), there was inadequate investment in new community-based alternatives and concerns were starting to be expressed about the gap between rhetoric and reality.4

Recognising this, Barbara Castle, the Labour health minister, introduced the 1975 White Paper Better Services for the Mentally Ill.5 This made explicit the level of community-based NHS and local authority provision that should be provided per 100,000 population, assuming a roughly equal commitment by the NHS and local authorities, with the latter taking on the main responsibility for those requiring longer-term support and reintegration into mainstream community living. It stated that ‘joint planning of health and local authority services is essential’ and that ‘the policy can only be achieved if there is substantial capital investment in new facilities and if there is a significant shift in the balance of services between health and the local authority’.6 What was less explicit were the mechanisms whereby this joint planning would be achieved; how ‘bridge funding’ could be provided for investment in new facilities before old hospitals could be closed and savings made; and how resources could be transferred from the NHS to local authorities to provide social care. These concerns were amplified by the unfortunate timing of the White Paper, coinciding with economic adversity following the oil crisis of 1973.7

Nevertheless, government funding was made available to pilot the proposed model of service provision in Worcestershire in an experiment known as the Worcester Development Project.8 This allowed for comprehensive services to be established in the community without having to wait for any capital to be released and revenue saved from the closure of the old hospital. On the ground, progress was patchy, with teams in one part of the county moving quickly to relocate all their residents from the former asylum, while others were less committed to giving up previous ways of working – leading to a considerable delay in bringing about its final closure. Although GPs generally saw the new services as better for their patients, they also expressed concerns that they themselves were not properly trained for taking on a greater role in mental health.9

Although the intention was for this blueprint for a community service to be properly evaluated, this was not followed through. As a result, lessons were not learned as to what was actually needed, how much it would cost and how quickly the old hospitals could actually close – impeding further roll-out of the new service model. Whereas the Worcester Development Project had the benefit of bridging finance, this was not available elsewhere. Consequently, many people were discharged into lodgings or unsuitable accommodation with minimal support, arousing increasing public concern. During the hospital closure phase, more attention tended to be given to establishing psychiatric teams in new facilities in District General Hospitals than to integrating people back into mainstream community life. Crucially, there was no mechanism to transfer over funds to local authorities to create an appropriate infrastructure of community-based support.

A somewhat different story characterised developments in children’s services. Here, there had been an established model of Child Guidance Clinics, located within local authority education services and having a strong psychosocial ethos. However, separate NHS hospital-based psychiatric services for young people were also now being developed alongside new adult provision. Early debates in the 1960s were about how to better integrate these service arms – but with little success.10 Things came to a head (largely spurred on by all too familiar debates about a lack of adolescent inpatient beds and who should pay for what) in the 1986 report Bridges Over Troubled Waters.11 This resulted in the advent of an integrated Child and Adolescent Mental Health Service (CAMHS) that was no longer split between the NHS and local authorities. However, there remained a lack of clarity as to how this should operate in practice, with the first national guidelines not arriving until the mid-1990s – and CAMHS remained hampered by lack of substantive financial investment.

Rights and Recovery

Although the 1959 Mental Health Act had been welcomed as a great advance, by the late 1970s the government and other stakeholders were suggesting that a review would be timely. Led by their legal director, Lawrence O. Gostin, Mind ‘argued that many aspects of the treatment of those diagnosed as mentally ill were an abuse or denial of their rights’.12 Although the 1983 Mental Health Act retained much of the overall structure of the 1959 Act, a series of stronger safeguards were built in to enshrine the principle of the ‘least restrictive alternative’, including greater independence (and training) for Approved Social Workers; stronger (and quicker) rights of appeal for detained patients; and greater use of second medical opinions in relation to more controversial treatments such as psychosurgery and electroconvulsive therapy. Notably absent from the debates leading up to the new Act was any public or political concern as to the inherent dangerousness of people with mental health difficulties and hence any paramount necessity to protect the public against such people.

A little later in the decade, a new discourse emerged around the rights of young people to protection – which was reflected in the United Nations Convention on the Rights of the Child and the 1989 Children Act. This increased awareness of the need for more specific services to support children and young people with their mental health and well-being.13 However, while this had more tangible impacts on local authority children’s services (as in the provision of guardians ad litem to represent children’s interests in court), it was less influential in relation to mental health where, for example, young people could still be sectioned and sent to adult psychiatric wards without any specific safeguards being put in place.

Linking in with wider movements around disability activism, people with lived experience of mental distress (often describing themselves as ‘survivors’ of the mental health system) started to assert their own voice through campaigning organisations such as Survivors Speak Out and the UK Advocacy Network and, to an increasing extent, voluntary organisations such as Mind. Particularly influential was the movement in the 1990s to claim and redefine the term ‘recovery’.14 Activists such as Pat Deegan in the United States and Ron Coleman in the UK promoted the idea of recovery as reclaiming a life worth living – where it would be for the person (and not professionals) to define what that life would look like. It offered a paradigm shift towards a more co-productive approach to practice – one that did not always sit easily with some of the established attitudes and practices of mental health professionals in its emphasis on areas such as empowerment, peer support and social inclusion.15

This user voice and the idea of recovery were influential in the development of the National Service Framework – although perhaps not as influential as many would have liked. Instead, it was articulated in documents that were less central to policy implementation: The Journey to Recovery: The Government’s Vision for Mental Health Care and A Common Purpose: Recovery in Future Mental Health Services (the latter in collaboration with the Royal College of Psychiatrists).16 Rather than transforming the mainstream of service provision, its influence tended to be in more circumscribed developments, such as the emergence of Recovery Colleges. Concerns started to be expressed that the idea of ‘recovery’ had lost its radical edge and had been appropriated by professional interests to support their agendas – for example, as a pretext for withdrawing services.17 This marginalisation of user-defined recovery reflected a deep ambivalence within the system as to how (and whether) to move beyond rhetoric and situate people not as patients to be cured but as collaborators in their own recovery journeys.

Marketisation

This focus on rights was soon to be overtaken by a newly emerging discourse about management and efficiency in the delivery of public services – which came to dominate the policy agenda during Margaret Thatcher’s premiership. Driven by the ideologies of neoliberalism and New Public Management that were taking hold in the United States, the priority was to make public services more efficient and ‘business-like’ using market mechanisms. A key proposal, based on the ideas of an American economist, Alain Enthoven, was that responsibility for purchasing care and providing services should be separated (the purchaser/provider split). NHS services would be bought from self-governing NHS Trusts which, in theory, would compete with one another, thereby encouraging greater responsiveness and cost-efficiency. A parallel (but different) marketisation of social care was introduced in the NHS and Community Care Act 1990, with local authorities as lead purchasers and the bulk of provision contracted out to the voluntary/private sectors (see also Chapter 3).

For mental health services, this fragmentation within and between different parts of the health and social care system simply exacerbated existing difficulties in ensuring strategic and operational collaboration. Partnership working was, in effect, part of government rhetoric rather than a practical possibility.18 With no mechanism in place for enabling (or ring-fencing) a shift of funding from hospital beds to community care, many local authorities saw an opportunity, at a time of financial pressure, to cut back or abdicate many of their responsibilities in relation to mental health – apart from the statutory duty of providing Approved Social Workers to assess people under the Mental Health Act.

By contrast, relatively unaffected by marketisation, a more coherent approach was being taken forward in CAMHS. In Together We Stand,19 a tiered model was proposed in which different levels of support and expertise were available in response to different levels of need. This was well received and described as a policy that ‘captured the imagination of all and triggered a clear commitment to improve services’.20 However, an unintended consequence was to compound existing problems around transitions (as most areas continued to only see children up to the age of sixteen, with adult services starting from the age of eighteen) – with no provision at all in some areas for sixteen-to-eighteen-year-olds who were either too old or too young for services.21

Risk and Public Safety

The primacy of economic efficiency as a policy driver came to be displaced by new discourses around risk and dangerousness that had become a key feature of ‘late modernity’ in the latter part of the twentieth century.22 There emerged a widely held perception, aided and abetted by both politicians and professional interests, that risk and unpredictability could be eradicated across society by the appropriate application of management tools and technologies. While this had some positive impacts, for example in improving health and safety practices within industry, its impact on mental health services was less benign (see Chapter 23). By its very nature, mental distress challenges deeply embedded notions of rationality and predictability that underpin the organisation of civil society,23 so it is perhaps not surprising that efforts to manage this perceived threat took on almost totemic significance for government. Despite the evidence that very few people with mental health problems commit homicides – and that the proportion of overall homicides committed by people with serious mental health problems has actually tended to decline during the transition to community care – certain incidents (in particular the death of Jonathan Zito on 17 December 1992) provided the focus for a widespread ‘moral panic’ fanned by the media (see also Chapters 23, 27, 28).24

While analysis of findings from homicide inquiries suggests that an investment in improving overall service quality and accessibility, rather than in devoting professional time to formal risk management procedures, is more likely to prevent potentially avoidable deaths,25 this has not been reflected in policy or practice. Despite popular (and sometimes professional) misconceptions, research was demonstrating that, using the best available tools, practitioners working in the community cannot predict risk with an accuracy that is of any practical use.26 This led to the unequivocal conclusion that:

The stark reality is that however good our tools for risk assessment become … professionals will not be able to make a significant impact on public safety.27

Nevertheless, practices of risk assessment and management came to dominate both policy and practice in the 1990s and 2000s, often to the detriment of more progressive recovery-oriented practice. However, more recently, there have been some shifts towards more collaborative approaches to ‘positive risk taking’,28 recognising that some degree of informed risk is part of normal life and that people cannot move towards recovery if they are overprotected (and potentially over-medicated).

One consistent finding from homicide inquiries was that people were often ‘slipping through the net’ because professionals and agencies were not working collaboratively or communicating well with one another. Unfortunately, this tendency was only exacerbated by the Thatcher government’s market-led reforms. In the early 1990s, while one part of the Department of Health was drafting the NHS and Community Care Act and associated guidance, another part was introducing the Care Programme Approach (CPA) to promote better inter-agency working in managing the risks which were seen to be posed by people with mental illness.29 While the former focused on assessment in relation to a concept of need, the latter was concerned with the assessment of risk. The former proposed that the key professional role was the care manager who had a limited role in terms of assessing need and purchasing services to meet that need. The latter prescribed a much more ‘hands-on’ role for the key worker (later renamed care co-ordinator) who would have an ongoing relationship with the service user, working with them to make sure that they were properly supported and services co-ordinated. In practice, the lack of integration between the two methods ‘resulted in duplication of effort, excessive bureaucracy and construction of a barrier to effective joint working’.30 This only started to be acknowledged by government in revised guidance, Building Bridges,31 and, when this manifestly failed to resolve the splits and confusions, in a subsequent report entitled (with perhaps unconscious irony) Still Building Bridges.32

Modernisation of Mental Health Services

New Labour’s approach to mental health policy from 1997 reflected somewhat contradictory drivers. On the one hand, there was a mounting concern in relation to the supposed dangerousness of people with mental health problems – as exemplified by the health secretary’s assertion that ‘care in the community has failed’.33 On the other, there was a genuine concern to improve the effectiveness of services and take seriously issues such as stigma and discrimination.

Modernising Mental Health Services provided the first comprehensive government statement about the future direction of mental health policy since Better Services for the Mentally Ill in 1975. The following year, the National Service Framework (NSF) for Mental Health in England set out a ten-year plan for the development and delivery of mental health services for adults of working age,34 with similar frameworks being produced by the devolved governments in Scotland and Wales. For the first time, there was a focus on mental health promotion – although mental health only came to be formally part of the public health agenda in England much later. For people with serious mental ill-health, the NSF encouraged implementation of functionalised mental health teams (Assertive Outreach and Crisis Resolution), putting greater organisational emphasis on services that could keep people out of hospital – but inadvertently taking the focus away from improving the effectiveness of hospital care itself (see also Chapters 11, 30). Probably the most influential innovation was the mainstreaming of Early Intervention in Psychosis teams, introducing an integrated psychosocial approach that was developed out of research in Australia and the UK.35 Unusually, these services spanned the divide between provision for adolescents and young adults – but only for young people with psychosis.

Following on from the NSF, there was a new stress on promoting social inclusion for people with mental illness and in ensuring that services benefited all sections of the population. A flurry of new policy documents emerged, including Mainstreaming Gender and Women’s Mental Health, Delivering Race Equality: A Framework for Action and Personality Disorder: No Longer a Diagnosis of Exclusion.36 Beyond this, there was a recognition that taking this agenda forward would require concerted action across government – work that was led by the Social Exclusion Unit within the Office of the Deputy Prime Minister.37

Set against the mainly progressive thrust of much of this policy agenda was a countervailing tendency driven by an overriding concern about managing risk. In framing his introduction to Modernising Mental Health Services, Frank Dobson, then secretary of state for health, promised that ‘we are going to ensure that patients who might otherwise be a danger to themselves and others are no longer able to refuse to comply with the treatment they need’. This promise became translated into a political push, against concerted opposition from user and professional organisations (including the Royal College of Psychiatrists), to replace the 1983 Mental Health Act with more restrictive legislation. A first step was the appointment of an expert advisory committee chaired by Professor Genevra Richardson in 1998. Unfortunately for the government’s agenda, the committee decided to take a more balanced approach and recommended that the new legislation should foreground the principles of non-discrimination, consensual care and capacity – and that there should be a ‘bargain’ in which the state’s right to take away people’s liberty was to be balanced by a statutory duty to provide appropriate services (which, in many instances, might obviate the need to employ compulsion). In a somewhat cavalier way, the government chose to ignore the committee’s recommendations and went ahead and set out its agenda in the subsequent White Paper, Reforming the Mental Health Act.38

The most contentious aspect of the 2007 Mental Health Act was the introduction of Community Treatment Orders (CTOs). Under this provision, patients discharged from hospital could be required to accept medical treatment outside of hospital or face the sanction of a swift recall to hospital. Perhaps for fear of appearing ‘soft’ on public safety, CTOs came to be used much more widely than originally envisaged – despite the evidence from a randomised trial which showed that CTOs did not improve the effectiveness of community care as people on CTOs were just as likely to require readmission and did not experience any significant improvement in clinical or social functioning.39

From Illness to Well-being

The early 2000s saw an emerging political interest in the well-being of the general population alongside the need to provide better for those with more serious mental health difficulties. In 2006, Lord Layard, a health economist at the London School of Economics, published an influential report on the costs of failing to treat anxiety and depression.40 The report stated that around 2.75 million people in England visited GP surgeries each year with mental health problems but were rarely offered effective psychological treatments. The central tenet of this argument was economic, based on the number of people unable to work due to mental health problems. Layard argued that ‘someone on Incapacity Benefit costs £750 a month in extra benefit and lost taxes. If the person works just a month more as a result of the treatment (which costs £750), the treatment pays for itself.’41 In response, the government announced funding for a new Improving Access to Psychological Therapies (IAPT) programme, with a commitment to train 3,600 new therapists to offer a limited number of sessions of psychological treatment to more than 500,000 people. Whether or not this initiative delivered on its intended economic outcomes has not been evaluated, and the only comparative study to be conducted found that, while patients’ well-being and mental health had improved over four- and eight-month intervals, outcomes were not significantly better than in comparator sites.42

Beyond the relatively narrow focus of the IAPT programme, the prioritising of mental well-being outcomes within wider social and economic policy initiatives came to achieve greater traction, particularly in Scotland. In England, a broader cross-governmental focus on mental well-being was taken forward in subsequent articulations of policy, New Horizons and No Health without Mental Health.43 However, there was little ownership of these strategies within government (nationally or locally) and they were not accompanied by any funding or delivery mechanisms by which to translate such high-level visions into reality. They did not link to any concerted investment in measures that might have ameliorated those adverse personal, social and economic circumstances that increase the likelihood of developing mental health problems – and, in particular, those adverse experiences affecting young people.44

Conclusion

As is usually the case with reviews of policy development, the picture that emerges is not one of consistent direction or continuous improvement. It is instead characterised by the influence of major competing discourses and pressures, some emerging from within the immediate field of mental health but more usually coming to bear on it from outside (often influenced by broader economic, social and political changes). Overall, it is probably fair to judge that mental health services in 2010 were both substantially more effective and significantly more humane than those prevailing in 1960. However, were we to start with a blank sheet of paper and to design the most effective mental health service within the resources available, it might still bear relatively little resemblance to what has emerged over time. Of course, no generation starts with a blank sheet of paper, and there remains the challenge of how to think ‘big’ enough and engage co-productively with communities and those with experience of mental health difficulties, alongside professionals and other stakeholders, in envisioning and implementing a properly ‘joined-up’ strategy for delivering better mental health.

Key Summary Points
  • The Mental Health Act 1959 and A Hospital Plan for England and Wales in 1962 set a direction for mental health services away from inpatient and towards outpatient and community care which enjoyed support across the political spectrum.

  • There has been a shift of focus over time from rights and recovery to marketisation, risk and safety, modernisation and, finally, to well-being.

  • There has been greater coherence in policy and consensus among staff in child and adolescent mental health than in its adult counterpart, but service developments were hampered by chronic underfunding.

  • Though, overall, it is probably fair to judge that mental health services in 2010 were both substantially more effective and significantly more humane than those prevailing in 1960, they have not fulfilled the aspirations held widely at the beginning of the period.

Chapter 11 Mental Health Policy and Economics in Britain

Paul McCrone
Introduction

Economics and health care are fundamentally linked. Financial arguments have been influential in the development of mental health services over the ages, from the establishment of asylums through to their demise and replacement with other forms of care. This chapter presents some of the economic arguments that have been used around the process of moving from a predominantly hospital-based form of care in 1960 to one in which community services were developed and expanded by 2010.

UK Economy and Health Spending, 1960–2010

The fifty-year period from 1960 to 2010 witnessed huge political and economic change in the UK. The post-war consensus whereby governments (Conservative and Labour) followed largely Keynesian economic policies (which were relatively interventionist) continued until the emergence of the Margaret Thatcher administration in 1979. ‘Thatcherism’ was characterised by a desire (whether achieved or not is debatable) to reduce the role of government and to promote the private sector. This was clearly relaxed to some extent during the Labour governments of 1997–2010, and the financial crisis towards the end of that period resulted in government once again accepting a heavily interventionist role in the economy.

How much a society devotes to health care (whether through public or private spending) is fundamentally a collective and individual choice. According to the Organisation for Economic Co-operation and Development (OECD),1 in 1960 gross domestic product (GDP) in the UK was £26.1 billion and by 2010 it was £1.6 trillion. Adjusting for inflation gives an increase of some 336 per cent over this period. In 1960/1, the share of GDP accounted for by health spending was 3.1 per cent and this had increased to 7.5 per cent by 2009/10.2 This still leaves more than 90 per cent of GDP going on non-health activities, so there is room for increases, albeit with proportional reductions elsewhere. Can this be achieved? Given that in other areas productivity gains can be achieved through technological advancements, products in those areas are prone to becoming cheaper in real terms. More labour-intensive sectors (health but also education) do not experience such productivity gains and so, as an economy develops, we should expect and even welcome a greater proportion of spending going on those areas. This is an argument put forward strongly by the influential American economist William Baumol.3 However, governments around the world have appeared concerned about the rising costs of health care and cost-containment measures have been endorsed.
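
The real-terms comparison above is straightforward arithmetic once nominal GDP is deflated by the change in the price level. The Python sketch below is purely illustrative: the two nominal GDP figures come from this chapter, but the price multiplier is an assumption chosen to reproduce the quoted 336 per cent, since the underlying OECD deflator is not given here.

```python
# Illustrative real-terms GDP arithmetic (the price multiplier is an
# assumption, chosen so the output matches the chapter's ~336 per cent).
gdp_1960 = 26.1e9         # nominal UK GDP, 1960 (pounds)
gdp_2010 = 1.6e12         # nominal UK GDP, 2010 (pounds)
price_multiplier = 14.05  # assumed rise in the price level, 1960-2010

nominal_ratio = gdp_2010 / gdp_1960            # ~61x in cash terms
real_ratio = nominal_ratio / price_multiplier  # ~4.4x in real terms
real_increase = (real_ratio - 1) * 100

print(f"Nominal growth: {nominal_ratio:.0f}x")
print(f"Real increase: {real_increase:.0f} per cent")
```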

Unfortunately, it is not feasible to estimate how much funding has been allocated specifically to mental health care over the years. Different government agencies have had responsibility for this area and data have not been recorded consistently. In recent years, we do have such information for planned NHS spending. In 2007, for example, planned mental health expenditure amounted to £8.4 billion, which was 12.4 per cent of all NHS spending (implying total planned NHS spending of some £68 billion).4

Economic Arguments around Deinstitutionalisation

From the emergence of the large asylums in the Victorian period, there was a steady increase in the number of people detained in psychiatric hospitals, with stays frequently being long-term. The peak number of psychiatric beds in England was in 1955 (around 150,000 beds),5 and this was followed by a gradual decline facilitated in part by the emergence of new antipsychotic medication such as chlorpromazine. However, the asylums remained and came in for substantial criticism. In 1961, Enoch Powell, who was a health minister in the ruling Conservative government, gave his famous ‘Water Tower speech’ which many consider to have paved the way for the ultimate demise of the asylums. In current times, the development of community services while still maintaining a huge stock of hospital beds might seem unfeasible (see also Chapters 1 and 2). However, it is clear from Powell’s speech that the stated intention was that finance would not act as a barrier to developing alternative forms of care and indeed that around half of capital expenditure at the time was already taking place in the community rather than in hospitals. Importantly, whether there was professional acceptance of the need for this move is unclear.

Subsequent policy documents developed further the move to care outside of the long-stay institutions, and economic factors clearly influenced these arguments. The report entitled Hospital Services for the Mentally Ill,6 published in 1971, proposed integrating physical and mental health care within district general hospitals (interestingly, very different from what we have today). That report recognised the need for extra resources resulting from a desired high staff-to-patient ratio for care to be provided adequately. The realisation that community provision would require proper resourcing was shown in the 1975 report Better Services for the Mentally Ill (see also Chapter 10).7 As well as emphasising that the running costs of community services would be high, the report also emphasised the imperative to upgrade the existing hospital services. Moving forward to 1989, the government published an official response to a seminal report led by Roy Griffiths (see also Chapter 12).8 The Caring for People report set down the idea of a purchaser/provider split for health care, which has been revisited on a number of occasions up to the present day. It did not, though, ring-fence resources for community services (as had been called for) unless these were developed jointly between health and social care agencies.9

Financing Community Services

It is notable that, while many would have viewed the closure of asylums and the development of community services as a way of saving money, many in government and the policy world clearly were of the view that community services would not come cheaply. In her book After the Asylums, Elaine Murphy highlighted a number of financial issues relating to community care in the UK.10 She noted the problem, which still applies today, that different agencies have different responsibilities in terms of provision and commissioning and this can lead to clashing priorities. When the asylums were established, this was accompanied by a transfer of financial responsibility from local parishes to the county councils which managed the asylums. With the emergence of the welfare state later in the twentieth century, housing benefit became available and this led to funds being transferred to the Department of Social Security. Jones pointed out that, while Joint Finance to encourage the development of community services became available from 1976, this represented just 1 per cent of NHS funds and discouraged the transfer of other funds from health authorities to local authorities. This, coupled with the immense pressure that local authorities came under in the early 1980s to contain spending (through rate capping, for example), meant that local services were poorly developed. The closure of long-stay hospitals should have released funds, but unless whole wards or hospitals could be closed this would not materialise. In addition, the downsizing of hospitals meant that the costs per patient per day became very high. Jones was clear that to counter some of these problems there would need to be an effective system of bridging finance put in place (such as a dowry mechanism).

Rationale for Economic Evaluations

Any development in the way in which mental health care is provided has potential impacts on resources. New services usually involve start-up costs, and their running costs can be quite different from those of the services they replace. Clearly, there is a limit on the funds available for health and social care. Economics is concerned with this issue of scarcity (hence it being dubbed the ‘dismal science’ by Thomas Carlyle in the nineteenth century) and, whenever scarce resources are used in one particular way, there is an opportunity cost in that other potential uses for them are forgone.

The idea that resources are inherently scarce is a strong one – although it is important to bear in mind the arguments put forward by William Baumol described in the section ‘UK Economy and Health Spending, 1960–2010’. This, coupled with the high demand for health care due in part to an ageing population but also to demand for new ways of working, means that decisions are always being made about how best to use the resources available to us. This can be at a very specific level (e.g. what is the best form of care for someone with schizophrenia?) or at ‘higher’ levels (e.g. should we spend more on care for people with mental health problems or for those with cancer? should we spend more on health, education or defence?). Such decisions have always been made and always will be. The methods of economic evaluation have been developed and operationalised to try to make this process better informed.

In establishing whether new forms of mental health provision represent value for money, we need to combine information on the costs of care with evidence on outcomes. Simply focusing on costs is rarely sufficient – unless of course we are simply intent on identifying the least-cost option. Costs are, though, key to an economic evaluation and it is imperative that they are measured appropriately. In doing so, we must recognise that they can be borne by different agencies: the health service, social care departments, housing agencies, criminal justice services, education providers, social security systems, business, families and friends, and users of services themselves. Altering the way in which care is provided might have widespread impacts and capturing these is important. If one were to focus only on direct intervention costs, then key information might be missed. For example, some medications can be particularly expensive, but these costs might well be offset by savings elsewhere if they prevent lengthy inpatient hospital stays or return patients more quickly to productive work.
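
A toy calculation makes the cost-offset point concrete. The sketch below is hypothetical throughout – the extra drug cost, bed-day cost and effect size are assumptions for illustration, not figures from this chapter or its sources.

```python
# Hypothetical cost-offset arithmetic: a dearer drug can still reduce
# total costs if it averts inpatient bed days (all figures assumed).
extra_drug_cost = 2_500  # annual cost of new drug over old drug (pounds)
bed_day_cost = 300       # assumed cost of one inpatient bed day (pounds)
bed_days_averted = 12    # assumed annual effect of the new drug

offset = bed_days_averted * bed_day_cost  # 3,600 pounds saved elsewhere
net_cost = extra_drug_cost - offset       # negative means a net saving

print(f"Savings from averted bed days: £{offset:,}")
print(f"Net annual cost of switching: £{net_cost:,}")
```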

Costs ultimately have to be either combined with or viewed alongside outcomes for those affected by changes in service delivery. In mental health care, this is of course primarily the user themselves, but families and wider society can also be affected. What outcomes should be included? Given that there is unlikely to be just one impact that is of interest, it may be that a range of outcomes are relevant. Policymakers in England (not the whole of the UK) generally favour the use of quality-adjusted life years (QALYs), which are a generic measure enabling, in theory, comparisons to be made across diverse health areas. QALYs have been used to evaluate mental health services but many would consider them to be too reductionist.
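
For readers unfamiliar with the mechanics, a QALY is simply time lived weighted by a health-related quality-of-life score, and evaluations typically compare interventions using an incremental cost-effectiveness ratio (ICER): the extra cost per extra QALY. The sketch below shows the standard arithmetic with invented numbers; none of them relate to the studies discussed in this chapter.

```python
# Standard QALY and ICER arithmetic (all numbers invented).

def qalys(trajectory):
    """Sum of (years in state x utility weight), where 1.0 is full
    health and 0.0 is a state valued as equivalent to death."""
    return sum(years * utility for years, utility in trajectory)

# Hypothetical five-year trajectories with and without a new service:
usual_care = qalys([(5.0, 0.60)])                # 3.0 QALYs
new_service = qalys([(1.0, 0.60), (4.0, 0.75)])  # 3.6 QALYs

cost_usual, cost_new = 40_000, 46_000  # hypothetical total costs (pounds)

# ICER = incremental cost / incremental effect
icer = (cost_new - cost_usual) / (new_service - usual_care)
print(f"ICER: £{icer:,.0f} per QALY gained")  # £10,000 per QALY
```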

One outcome that has frequently been used in mental health evaluations has been the use of inpatient care. This has its merits. It is easily recorded and avoiding admission may be desirable. However, it has important limitations. It comes with an underlying assumption that inpatient care is ‘bad’ but for some it may be entirely necessary and good-quality care may be optimised in an inpatient setting. Furthermore, it is not a true outcome measure but rather a process measure. As such, it is more properly considered to be a cost rather than an outcome. Finally, use of inpatient care is directly associated with the provision of inpatient beds in the area. Reducing provision inevitably leads to reduced use.

Economics of Deinstitutionalisation

In the UK, the largest and most influential evaluation of a hospital closure programme was the Team for the Assessment of Psychiatric Services (TAPS) study led by Julian Leff (see also Chapter 30).11 This focused on the closure of the Friern and Claybury hospitals in North London in the 1980s and the health economic component was conducted by the Personal Social Services Research Unit (PSSRU) at the University of Kent, led by Martin Knapp and Jennifer Beecham.12 The TAPS study followed cohorts of patients as they were discharged from the hospitals over several years, usually to long-stay residential facilities. Not surprisingly, the patients who were considered easier to place in the community were discharged first and they had noticeably lower average weekly costs of community services than those discharged later. These costs included the residential facility plus other services used while in the community and any subsequent hospital care. The authors of the TAPS study also looked at the comparative costs (in 1993/4 £s) of hospital- and community-based care.13 This showed that the average weekly cost when still in the Friern Hospital and Claybury Hospital was £578 and £551 respectively. This compares to £539 during the first year following discharge for cohorts 1–7 and £562 in the fifth year following discharge for cohorts 1–3. What is clear from this is that the costs of community care were similar to those of hospital care, and community care was therefore not saving funds, even if that was desired by some.

Inpatient and Residential Care since Deinstitutionalisation

What can we say about inpatient care since the hospital closure programme has been largely completed? While the number of days spent in hospital for mental health reasons fell from the 1955 peak, when there were around 150,000 inpatients at any one time, in recent times the number of days has been fairly stable. Data from the Department of Health’s Hospital Episode Statistics show that, in 1998/9, there were 7,029,445 days spent in hospital by people with a psychiatric diagnosis and, by 2009/10, the figure was 7,482,979.14 This was also a time of comparatively high investment in community services and so probably indicates that this number of bed days (equivalent to around 20,000 beds occupied on an average day, since 7,482,979 ÷ 365 ≈ 20,500) may be close to the minimum required.

Some patients do, though, remain as long-stay patients in hospital, and it is interesting to see what one year as an inpatient is equivalent to in terms of other forms of care. Box 11.1, where the numbers are based on unit costs produced by the University of Kent, shows that the amount of resources that could be provided is substantial (a sketch of how such equivalents are derived follows the box).15 Of course, it may be unfeasible for some to be discharged from hospital, but these figures do illustrate the alternatives that are available. Even though residential care is expensive, it is far less so than inpatient care. It is also worth pointing out that one day in prison has a cost of around £102, which is substantially lower than hospital costs.16 It is well known that psychiatric morbidity in prison is high, but the differences in costs between prisons and hospitals, especially secure units, can act as a disincentive to change location.

Box 11.1 One Year in Hospital Is Equivalent to …

3,058 day care sessions
1,105 counselling sessions
1,095 staffed residential care days
577 outpatient appointments
969 psychologist contacts
2,803 community mental health nurse contacts
2,452 GP contacts
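
The equivalents in Box 11.1 follow from a single piece of arithmetic: the annual cost of an inpatient bed divided by the unit cost of each alternative service. The sketch below reproduces that logic with assumed unit costs, chosen only to land near the box’s figures; the actual values come from the University of Kent compendium cited above and are not reproduced here.

```python
# How Box 11.1-style equivalents are derived. The annual bed cost and
# unit costs below are illustrative assumptions, not the Kent figures.
inpatient_year = 110_000  # assumed annual cost of one bed (pounds)

unit_costs = {            # assumed cost per contact/session/day (pounds)
    "GP contacts": 45,
    "community mental health nurse contacts": 39,
    "staffed residential care days": 100,
    "outpatient appointments": 190,
}

for service, cost in unit_costs.items():
    print(f"1 year in hospital ≈ {inpatient_year // cost:,} {service}")
```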

Inpatient stays remain a fundamental part of the care process within a mental health system and are also a highly expensive form of care. It is interesting therefore that there are relatively few studies that have investigated what actually takes place on inpatient wards and, in particular, there have been few economic studies focusing on this. One major exception was the Patient Involvement in Improving the Evidence Base on Inpatient Care (PERCEIVE) study led by Til Wykes from King’s College London and conducted in south London. This investigated care provided on seventeen inpatient wards and evaluated means of improving activities for patients. The economic analyses (described in depth by Sabes-Figuera and colleagues)17 were a novel departure from the usual studies of inpatient costs in that the focus was on activities and staff contacts that patients considered meaningful to them. Study participants were asked what meaningful care they had received in the previous week and, from this information, costs were calculated. Further analyses attempted to identify which patient characteristics were predictive of variations in costs across the sample.

The patients reflected those found on many inpatient wards in inner-city areas in the UK. Schizophrenia or bipolar disorder was the diagnosis of 65 per cent, compulsory detention had also been experienced by 65 per cent, and 53 per cent were from a minority ethnic group. Therapeutic activities, including ward meetings, had been experienced by 78 per cent of the sample, which was relatively encouraging. Meaningful staff contacts were reported by 90 per cent of patients (including 59 per cent with nurses, 74 per cent with psychiatrists and between 20 and 33 per cent with other professionals). While this might seem reasonable, it does mean that some patients were reporting no meaningful contacts at all. When it comes to costs of meaningful care, the average for the week was £227. Psychiatrist contacts made up 48 per cent of this, nurse contacts were 13 per cent and other therapeutic activities were also 13 per cent. This figure of £227 per week is substantially below even the cost of a single day on a ward if the conventional approach to estimating unit costs is used. The implication is that, although staff may have different views, patients perceive much of the care they receive to be neither meaningful nor therapeutic.

The abovementioned TAPS study investigated discharge from long-stay hospitals, usually to supported accommodation in the community. The Quality and Effectiveness of Supported Tenancies (QuEST) study, conducted more than twenty years later and led by Helen Killaspy from University College London, evaluated care delivered in different forms of supported accommodation.18 A total of 619 residents were recruited for the study from residential care (which involved 24-hour staffing), supported housing (self-contained apartments with staffing up to 24 hours per day) and floating outreach arrangements (self-contained tenancies with staff support). Various clinical measures were taken as well as quality of life, and service use was recorded and costs calculated.

The QuEST study revealed that quality of life was similar in residential care and supported housing but was lower in floating outreach arrangements. Satisfaction with care was similar under all three models. Service use in the previous three months showed relatively high levels of care coordinator, psychiatrist and other doctor contacts. Inpatient care was received by relatively few participants. Total care costs were similar between residential care and supported housing, while care costs for those in floating outreach arrangements were around one-third lower. This was not surprising given that those in these forms of care would presumably have lower levels of need.

Economics of Specialist Services

Shortly after the election of the Labour government in 1997, the Department of Health published the National Service Framework for Mental Health.19 This put forward the case for further developing specialist mental health services across the country. Three particular new models of teams and services were outlined: home treatment teams for people facing acute mental health crises; early intervention services for those with a first episode of psychosis; and assertive community treatment to improve contact with those who may be hard to engage with services. Though they had not been available across the whole country, such services had existed for many years in different settings. There was reasonably powerful evidence on the clinical benefits of the models but up-to-date evidence on their economic aspects was limited. A number of studies using similar methods were forthcoming though.

A randomised trial led by Sonia Johnson from University College London was conducted to compare a crisis intervention service with usual care in north London.20 The main outcome in this study was inpatient use over a six-month follow-up period and this was shown to be substantially less for those receiving the crisis service. The costs of the two forms of care were calculated and, given the main outcome, it was not surprising that crisis intervention resulted in large cost savings (around 30 per cent) compared to usual care.21

The Lambeth Early Onset study led by Tom Craig and Philippa Garety at King’s College London was another trial, this time comparing early intervention for first-episode psychosis with usual care.22 The early intervention service resulted in reduced time in hospital. Costs for many services were higher for those receiving the service but total costs were lower due to the impact of reduced hospital time.23

The REACT study, led by Helen Killaspy, again at University College London, evaluated assertive community treatment in north London.24 The randomised trial took inpatient days over an eighteen-month period as the primary outcome measure and found no significant difference between assertive community treatment and usual care. The economic evaluation measured use of a range of services and calculated their costs. The average cost over the eighteen-month period was £34,572 for assertive community treatment and £30,541 for usual care. Most of this cost was accounted for by inpatient care.25
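
Setting the two totals side by side (our own arithmetic, using only the figures reported above) shows how modest the gap was:

\[
£34{,}572 - £30{,}541 = £4{,}031 \text{ over eighteen months} \approx £52 \text{ per week}
\]

that is, assertive community treatment cost roughly 13 per cent more, a difference dominated by inpatient care in both arms.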

These three studies demonstrated that crisis and early intervention services certainly appear to represent value for money. The case for assertive community treatment is less clear. One potential reason for the lack of a significant effect for that service model was that, by the time of the study, usual care was well developed – certainly compared to the comparator groups in some of the early studies of assertive community treatment.

Conclusion

Economic considerations were a key influence in the development of mental health care in the period between 1960 and 2010. While some initiatives might have been seen as cost-saving endeavours, it has been clear all along that good-quality mental health provision is not inexpensive. This, though, should not necessarily be seen as a problem – cost is only a monetary proxy for care provided and we would want care to be sufficient for those in need of it.

Key Summary Points
  • Financial arguments have been influential in the development of mental health services over the ages, from the establishment of asylums through to their demise and replacement with other forms of care.

  • In 1960, gross domestic product (GDP) in the UK was £26.1 billion and, by 2010, was £1.6 trillion. Adjusting for inflation gives an increase of some 336 per cent over this time period. In 1960/1, the amount of GDP accounted for by health spending was 3.1 per cent and this had increased to 7.5 per cent by 2009/10.

  • Given that in other areas productivity gains can be achieved through technological advancements, products in these areas are prone to becoming cheaper in real terms. More labour-intensive sectors (health but also education) do not experience such productivity gains and so, as an economy develops, we should expect and even welcome a greater proportion of spending going on those areas.

  • The TAPS study in north London in the 1980s showed that the average weekly cost for patients still in Friern Hospital and Claybury Hospital was £578 and £551 respectively. This compares to £539 during the first year following discharge for cohorts 1–7 and £562 in the fifth year following discharge for cohorts 1–3. What is clear from this is that the costs of community care were similar to those of hospital care, so community care was not saving funds, even if some had hoped it would. However, prison costs are lower than hospital costs and this may discourage admission of mentally ill patients to appropriate clinical facilities.

  • Health economic studies carried out in London and published between 1999 and 2009 suggest that, while home treatment teams for people in acute mental health crises and early intervention teams may save money, assertive outreach teams for difficult-to-engage patients may not. However, cost calculations do not address issues of quality of care and desired outcomes.

Chapter 12 True Confessions of a New Managerialist

Elaine Murphy
Introduction

I became one of the new breed of medically qualified health service managers in the early 1980s and watched with fascination and some amazement the upsurge of triumphs and disasters that accompanied the management revolution over the course of the following twenty years. In this chapter, I explore why the revolution happened, what went right and what went appallingly wrong for users of mental health services and for the professionals working in them. I end on a positive note: some aspects of mental health care improved over the forty years between 1970 and 2010, and quite a lot of that improvement can be attributed to ‘management’.

Three separate threads of causality in the changes in mental health services came together in the late 1970s: first, the oil crisis and its impact on global funding of health care; second, the drive to improve health care quality; and third, the global commitment to the deinstitutionalisation of people with serious mental disorders and profound learning/intellectual disabilities. Each one of these issues posed serious challenges; trying to cope with all three at once was bound to cause ‘collateral damage’ to the lives and careers of the cared-for and carers.

Health Care Funding

The 1973–4 oil crisis saw prices rise sharply after the Arab members of the Organization of the Petroleum Exporting Countries (OPEC) cut off supplies to several Western countries in retaliation for their support for Israel in its war with Egypt and Syria. Britain’s ambassador to Saudi Arabia commented that the oil price rise represented ‘perhaps the most rapid shift in economic power that the world has ever seen’.1 The crisis underlined the importance of oil to the world economy in no uncertain terms. At that time, oil provided more than half of the world’s energy needs – a situation that was not expected to change for the foreseeable future. Five of America’s twelve leading firms were oil companies, as were Britain’s top two: BP and Shell. ‘The disappearance of cheap oil has transformed the world in which British foreign policy has to operate’, noted the Foreign Office, with industrialised nations seeing their trading surpluses transformed into deficits almost overnight.

In the UK, this led to a stagnation in NHS funding through the years of the Labour administration, followed by an attempt during the Thatcher years of 1979–90 to cut costs and make the delivery of health care more efficient. The government then, as now, seemed not to understand that it is largely the demographic change of an ever-ageing population that shifts demand once the parameters of provision have been established. Spending in cash terms went up substantially, from £9.2 billion in 1978/9 to £37.4 billion in 1991/2; even adjusting for general inflation, this rise is more than 50 per cent. However, the hyperinflationary effects of unionised NHS staff salary demands and an increase in pharmaceutical prices meant that the effective real increase was only about 1.5 per cent. This, combined with real restrictions on acute hospital funding, contributed to the general perception of parsimonious funding for the NHS and the subsequent raiding of conveniently ‘underspent’ mental health budgets by health authorities trying to balance their acute hospital budgets. Mental health underspends were largely caused by the poor recruitment of staff.2 A quarter of all London general hospital beds closed during these years. Mental health suffered as a neglected poor relation.
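
A rough check on the headline figures (our own illustration; the general price deflator of about 2.6 for 1978/9 to 1991/2 is an assumption broadly in line with the retail price index, not a figure taken from the sources cited):

\[
\frac{£37.4\text{bn}}{£9.2\text{bn}} \approx 4.07, \qquad \frac{4.07}{2.6} \approx 1.56
\]

that is, a real-terms rise of just over 50 per cent, consistent with the figure quoted, before NHS-specific cost inflation is taken into account.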

The Drive to Improve the Quality of Health Care

These years also witnessed the advent of the systematic examination of health care quality, which perhaps began seriously with the work of Avedis Donabedian, who, in 1966, proposed a framework for quality of care assessment describing quality along the dimensions of structure, process and outcomes of care. This galvanised the examination of quality in the US health system and prompted similar investigations in the UK.3 Quality of care was defined as ‘the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge’. Health systems should seek to improve performance on six dimensions of quality of care: safety, effectiveness, patient-centredness, timeliness, efficiency and equity. It was clear that the care systems of the day could not do the job. Trying harder would not work. Changing systems of care, it was perhaps somewhat naively believed, would. It followed that what was true of acute hospital systems must also be true of mental health care systems. The desire to squeeze mental health services into a management framework designed for a different model of care, omitting the key role of social services, housing, employment and service users’ own shifting perceptions of what they needed, was bound to lead to difficulties and the corrosion of trust among those who worked at the front line.

Received wisdom among health care pundits declared that health systems must be judged primarily on their impacts, including on better health and its equitable distribution; the confidence of people in their health system; and their economic benefit. Outcomes would depend on processes of competent care and positive user experience. The foundations of high-quality health systems were judged to include the population and their health needs and expectations; governance of the health sector and partnerships across sectors; platforms for care delivery; workforce numbers and skills; and tools and resources, from medicines to data. In addition to strong foundations, health systems would need to develop the capacity to measure and use data to learn. High-quality health systems should be informed by four values: they are for people and they are equitable, resilient and efficient.

The message was both seductive and impressive.4 Mental health services, however, had no well-developed way of demonstrating outcomes for their patients; they were immediately at a disadvantage. Only process measures could be assessed; and what of the half of patients with serious psychotic illness who did not believe themselves to be ill and did not want to engage with services? Where did the model get us there? Developing an outcomes framework in mental health was a challenge that would take years and has still not effected the funding changes needed to better target effective services.

The Global Commitment to Deinstitutionalisation

The third major impact on services was the agenda driving deinstitutionalisation, one of public and moral necessity. This was based on a growing emphasis on human rights as well as advances in social science and philosophy attacking psychiatry and the boundaries of what constituted mental illness, a critique that reached its height in the 1950s and 1960s. A series of scandals in the 1970s around the ill-treatment of mental health patients, together with a strong, vocal service user movement, provided harrowing stories of people’s experiences of care, which contributed to the opprobrium heaped on services. The timing, however, created the impression that somehow delivering mental health care ‘in the community’, wherever that was, would be cheaper than delivering care to long-stay patients in hospital.

This moral agenda, however, was supported by other developments that facilitated the possibility for transformation. Pharmaceutical advances demonstrated that people with severe mental illness could be treated and it became clear that institutionalisation itself was harmful. Politically, there was consensus among parties about the vision for mental health services. Gradually there emerged an economic impetus for deinstitutionalisation, which accelerated as large institutions became financially unsustainable and, in many cases, were occupying prime development land that finance directors perceived as capital asset money-spinners. Ostensibly the programme was geared towards achieving greater integration of health and social care provision with the development of alternative community services, to be delivered by local authorities, which, however, were never consulted and felt unreasonably put-upon.

New organisations were set up to manage the process of deinstitutionalisation and subsequently deliver services. Many of these were charities, including housing associations. With involvement from each stakeholder group (including the district health authority, local authority and voluntary sector), a key function of these new organisations was to broker relationships to ensure that no single organisation had sole ownership and to manage the power dynamics. They provided opportunities for people to connect around a new organisational form with its own identity and purpose, underpinned by a board and trustees who were accountable for the process and outcomes of transformation. These new organisations led on many aspects of the transformation, bringing new ideas. Subsequently, they received most of the funding, led on developing new services and created systems and structures to manage the transition, including workforce management and training. Yet the people who had previously managed the whole patient experience in hospitals were often left out in the cold, with doctors pigeonholed into ‘drug prescribers’ and senior nurses rejected as ‘more institutionalised than the patients’, an insulting and inaccurate phrase I heard often. The failure to engage with the traditional professional groups caused a serious waste of human resources in mental health services that is still not rectified.

There were in fact many excellent examples of community provision in existence before the closures. Psychiatric rehabilitation specialists had been quietly moving people out into group homes and community organisations since the late 1970s, demonstrating that this could be done well and could improve patients’ lives if well supported by clinicians. David Abrahamson had a successful programme running from Goodmayes Hospital in the London Boroughs of Newham and Redbridge from the mid-1970s that made a huge impact on my own philosophy of community practice.5

The deinstitutionalisation process involved a significant focus on managing the workforce. Where it resulted in the closure of individual wards, staff were absorbed into the wider organisation. Yet many mental health professionals who had been confident of their role as psychiatrists and nurses in an institution suddenly found themselves expected to take on different working patterns, leave the comfort of the professional silos that had hitherto dominated mental health services and take on completely alien tasks. Many resented it. An impatient senior management cadre, unable to perceive quite what the problem was, steamrollered through the closures, leaving resentful and uncomprehending consultant psychiatrists and senior nurses marooned in different locations from their patients and the staff teams they had previously worked alongside. Psychologists and occupational therapists, however, suddenly found their skills valued more highly and seized the space left vacant. In the 1980s, I visited a mental health service in Devon where the new community teams located in small dispersed market towns were struggling to provide a service without support from psychiatrists, since all the psychiatrists had remained recalcitrantly fixed to their offices in the old but much-loved Victorian mental hospital in Exeter. It took a new generation of appointments to solve a problem that should have been thought through at the outset.

Terrible mistakes were made as a result of an ideology of community care taking hold in the face of obvious shortcomings for the most seriously ill and especially for those with profound learning/intellectual disabilities. Everyone had to be squeezed into the same model. Some saintly staff fought to bring common sense to a process that became the end rather than the means. In a devastating critique of the process he lived through in my own health district, Nick Bouras, one of the editors of this book, gave a first-hand account of the obstacles and challenges he faced in developing, researching and implementing services for people with intellectual disabilities.6 In a very personal memoir of these years, he recorded the successes and frustrations of working in a system that does not always have a shared vision, and the tenacity and enthusiasm necessary to reform an NHS service from the inside, in the teeth of every possible obstacle.

Deinstitutionalisation, done properly, was to prove much more expensive than originally forecast, and the most profoundly disabled were hugely expensive to care for satisfactorily in the community. I was a close observer of the closure of Darenth Park Hospital in Kent, a vast old mental handicap hospital in the South East Thames Regional Health Authority area, containing patients from all over Kent, East Sussex and inner and outer south-east London. Glennerster’s economic analysis demonstrated that the final costs of care to the NHS, local authorities, the Department of Health and Social Security (DHSS) and Housing Corporation budgets were over a third higher than those of the old hospital. With modern buildings, a higher staff ratio and more personalised care plans, this is scarcely surprising. The extra costs fell on other public sector organisations that had hitherto borne few costs of care.7 It took many years for the skills lost when the hospital closed to be relearnt in the community, before a model of largely social care replaced the old nursing model. It is still not clear whether the very high cost of caring for the most disabled patients in this way is a good use of scarce resources. It is as possible to be institutionalised in a flat for two as it is in a ward for thirty.

Then Came Management

The creation of a large managerial stratum within the NHS in the 1980s and 1990s was one of the most striking characteristics of reforms intended to develop a more efficient and ‘business-like’ service that would address the problems of quality and efficiency. The majority of new managers had previously been employed under clinical titles, as senior nurses or members of other professional groups. As a result of the new job titles, the public’s misperception was that an entirely new group of employees was draining resources from the clinical front line. The growth in managers was accompanied by a political rhetoric of decentralisation that cast local managerial autonomy as the means to gauge and respond more easily to the needs and preferences expressed by local communities. In fact, the role of local populations in influencing decisions and determining priorities was considerably less than proclaimed by the sustained political rhetoric in favour of local voices. That has remained the case. The NHS is done to people; it does not invite them to participate. Again, the new purchaser/provider split, the creation of NHS Trusts and a general management structure within the NHS were never designed to deal with mental health services, but their implementation in mental health was inevitable as systems of financial and professional accountability were necessarily aligned with acute services.

One unforeseen problem was the sudden creation of a tier of managers, formerly labelled ‘administrators’, with new power and authority beyond their wildest dreams. Trevor Robbins has suggested that one possible cost of this newfound authority is that its operation may be degraded under conditions of stress (e.g. resulting from exposure to a profusion of problems requiring difficult decisions). Speculatively, this may be manifest in part as the ‘hubris syndrome’, which he perceives as an acquired personality disorder that often afflicts politicians and others in leadership positions, with serious consequences for society.8 The same phenomenon was witnessed in the NHS generally, but I think especially in mental health services, where passionately committed new-style managers felt that they could at last wage war on consultant psychiatrists who had had far too much power under the old regime to block developments and impose their own view of the world.9

In 1983, I was in a relatively recent appointment as Professor of Old Age Psychiatry at Guy’s Hospital, my chair held at the United Medical and Dental Schools, part of the University of London. Crucially, both chair and department were paid for by the NHS, so it was clear my job was going to be to develop a much-needed service for the locality as well as to engage in research, and I was enthusiastic about the new community-based approach and about anyone who would support my ideas. It was a time when Guy’s Hospital, part of Lewisham and North Southwark Health Authority, was at the forefront of encouraging doctors into front-line management. One of my first tasks as a new professor, a request from the Department of Health, was to spend a day showing local services to Sir Roy Griffiths, a supermarket executive who had been commissioned to review the management of hospitals (see Chapter 3). I complained to him (this was a rare opportunity to get a hearing with a VIP) that the resources and beds were all in the wrong place; that the local authorities were much more important to my work than anyone recognised; and that I had a long list of frustrations I could do nothing about. At the end of the day driving round Southwark and Lewisham, he said, ‘So why aren’t you and doctors who feel like you becoming managers?’ It had never occurred to me. In his subsequent report, he concluded that the traditional NHS management had led to ‘institutionalised stagnation’. The report’s recommendations, including that hospitals and community services should be managed by general managers, were accepted, and he promoted the idea that the clinicians who spend the money in services must be responsible for managing it too. The changes introduced as a consequence of the Griffiths Report brought a large increase in general managers in the NHS, from 1,000 in 1986 to 26,000 in 1995, with spending on administration rising dramatically over the same period.10

I enthusiastically accepted an invitation to join the District Management Board, without any notion whatever of what ‘management’ might entail. There was no process of appointment and no one was consulted, but I vaguely thought it was something one could do on a Friday afternoon after everything important was wound up for the weekend. It did not occur to me then, but rapidly became apparent, that not only did other consultant and academic colleagues resent some female ‘whipper-snapper’ being appointed but several important men had been patiently waiting their turn to be elected by their peers as consultant spokesman for the department. Furthermore, the district health authority was permeated by Labour Party political representatives from our inner-city boroughs who had a vehement revulsion against Thatcherite policies of any kind, whether it was competitive tendering of support services, performance management, devolved financial management or any kind of change in the configuration of even unsafe services. I developed an abiding respect for the NHS professional administrators who had over the years learnt to bargain, negotiate, wheedle and cajole what they needed out of a heavy-handed NHS bureaucracy above them and a dismissive set of antipathetic local politicians around them. It was clear that my formerly successful strategy of steamrollering changes through sheer cussedness laced with charm would only go so far. I will admit to developing some hubristic self-confidence that was bound to fail as a long-term management strategy.

The tribal cultures in the NHS have never really been adequately tackled, then or now. The problem is both educational and ideological. It is culture that separates medium-status managers and politicians, who stay in a post for a relatively short time, from high-status clinicians, who consider themselves intellectually superior to the managers and who remain in post far longer. Senior clinicians have professional support systems in the Royal Colleges and other institutions that monitor them. Loyalty to a professional specialism is far greater than to an individual employer. This is changing, but only very slowly. Nurses felt as alienated from managers as doctors did. They felt that management was too theoretical and out of touch with the daily realities of providing care in a busy and harried environment. Managers were perceived by nurses as being too ready to redecorate their own offices when wards had crumbling paint. Managers, on the other hand, saw nurses as hopelessly traditional, having a very narrow perspective and largely concerned with making life tolerable for themselves rather than improving the patient’s journey. These tribal stereotypes were wrong, and they fuelled clinicians’ resentment at what they saw as a misrepresentation of their commitment to their patients.

Conclusion

By 2010, many early managerial innovations had been accepted as normal. There was a far greater tolerance of the notion, hardly controversial, of operating within a budget, for example. However, a distinctive feature of the early reforms was also a drive to co-opt professionals themselves into the management of mental health services, and it seems that psychiatrists have been peculiarly unwilling to commit themselves away from patient and clinical work. Doctors and nurses rarely now become full-time managers, or even part-time ‘hybrid’ professional managers, although some are willing to take on temporary clinical director roles and participate in taking responsibility for budgets. One major mistake in the beginning was to emphasise management and downplay the role of personal leadership in inspiring and guiding clinical service change. Professional bodies are now actively supporting and even driving these changes. Clinical leadership has at last moved from ‘the dark side’ to centre stage. There is even a Royal College of Psychiatrists’ textbook on how to be a psychiatrist manager.11 That would have been unthinkable in 1970.

Key Summary Points
  • Three key drivers that introduced the new managerialism into mental health services were funding constraints, the drive to measure health care quality and the move to deinstitutionalisation.

  • A new cadre of managers, some of whom were clinicians but many of whom were not, often rode roughshod over traditional clinical administration, and many psychiatrists and nurses felt ignored and undervalued.

  • One major mistake in the beginning was to emphasise management and downplay the role of personal leadership in inspiring and guiding clinical service change.

  • Managerialism brought a new understanding of budgets, human resources and objectives into mental health services that was largely positive, but services are still fashioned around systems that were established for the acute hospital sector and not readily adapted to mental health provision.

Chapter 13 Subjectivity, Citizenship and Mental Health: UK Service User Perspectives

Peter Beresford and Liz Brosnan

In a viral 2009 TED Talk entitled ‘The danger of a single story’, Nigerian feminist author Chimamanda Ngozi Adichie said, ‘How [stories] are told, who tells them, when they’re told, how many stories are told, are really dependent on power … Power is the ability not just to tell the story of another person but to make it the definitive story of that person.’1

Introduction

The focus of this chapter is the impact that UK mental health service users/survivors have made on mental health policy and practice in the period covered by this book through their movement and survivor-led organisations. We write as part of this movement, which we believe probably represents the most significant development in this field and therefore one that demands careful and serious examination, particularly in its broader social, political, policy, cultural and economic contexts. It is our aim to develop that discussion and reflect on the ideas of subjectivity and citizenship as pertaining to this social history more broadly.

Training, Mental Health Services and Diversity

How can one chapter tell a story as diverse and multifaceted as our history? It is a history, moreover, that has been invisible to most educators of past generations of psychiatrists, as we know from teaching students over the past decades. While painfully aware of our privilege as highly educated white (Jewish in the case of Peter) survivors of encounters with psychiatry, we have been participants in and observers of the matters of which we write and seek to offer an overview of the history of the survivor movement in the UK up to 2010. This signposts other writers who can fill in the gaps for future scholarship and research.2

It is imperative that this narrative of our movement’s struggles for subjectivity and citizenship is brought to the attention of future psychiatry students and trainees. They can get a foretaste of further encounters with the vibrant, dynamic and diverse layers of activism, resistance and collaboration by those activists and survivor/Mad scholars who demanded recognition for our rights, dignity and citizenship. We in our movements have variously been labelled mental patients and service users and claim our own designations as survivors and Mad scholars. Our narrative is necessarily partial, in that space will never permit a complete and definitive account of the diverse experiences involved, which are often glossed over in a homogenising simplification that omits the underlying tensions, complexities and compromises evident over a period of rapid flourishing of activism and scholarship.

Another reason the opportunity to contribute this chapter is very welcome is that it allows us to point out an implementation gap in mental health services. As stated, we know generations of psychiatrists have not been educated about the activism and achievements within the user/survivor movements, which has left many practitioners ignorant of the autonomy and agency achieved over the past fifty years. The implementation gap arising from delays in research-based evidence filtering down to practice, plus the systemic hurdles involved in changing academic course content, means that it may take years for new knowledge developed within user/survivor activism and research to be accepted as a legitimate perspective. This is exemplified by experience of delivering seminars to medical students during their psychiatric training, where my (LB) input on survivor research consisted of two-hour sessions on an elective module. When asked, the students could not say what the survivor-developed recovery concepts meant, despite being two years into their medical studies. These are the clinical leaders of the future. Hopefully, inclusion of a brief account of this activity in a significant social history of psychiatry will help future generations of psychiatrists be more aware of what survivor epistemology and tacit knowledge have contributed to our understandings of mental distress and madness.

We aim to introduce readers to the diversity of both activism and experiences within our heterogeneous communities, including the silenced voices of BAME and LGBT people and indeed women’s specific experiences, particularly around motherhood. Very legitimate criticisms of both external and internal historical accounts highlight a homogenising narrative, which has presented accounts from a white, straight and able-bodied perspective. External academic accounts of the user movement have been challenged by the Survivor History Group as distorting and misrepresenting the agency of the service user movement.3 Internally, the exclusion of BAME communities’ perspectives has been challenged recently by several writers, including Faulkner and Kalathil and Kalathil and Jones.4 Carr has highlighted the heteronormativity of the user movement.5 These are the internal reflections of a mature social movement which has developed across the decades of struggle outlined in the section that follows. We aim also through this work to begin to introduce how the writers and activists in our movement have understood and addressed issues of subjectivity and citizenship.

Social History of the User/Survivor Movement

There is often talk in the UK about ensuring parity between physical and mental health services. This often relates to the funding of mental health services, which has increasingly fallen behind that for physical health.6 Perhaps more revealing are the differences in progress made by these two branches of medicine within the National Health Service (NHS) over this period. Thus, in physical medicine we have seen enormous innovation: the development of heart and other transplants, operations on foetuses, keyhole surgery, joint replacements, massively extended survival rates for many cancers, robotics used in surgery, greatly improved pain control, new diabetes treatments, drug drivers and so on – an almost endless list.

It looks like a very different story in mental health, where, worryingly, patients fifty years apart could expect little change in treatment. This includes a continuing emphasis on compulsion and restraint;7 use of electroconvulsive therapy (ECT), despite its evidenced failings;8 and the ongoing use of drugs like Largactil (chlorpromazine), with well-documented damaging effects like tardive dyskinesia.9 The psychiatric system is still over-reliant on drug treatments. Psychiatric innovations like ‘second-generation antipsychotics’ have brought their own problems, including serious ‘side’ effects and their widespread and problematic use ‘off-label’ for groups they were not intended for, notably older people with dementia.10 It has been estimated that a quarter of a million people are dependent on benzodiazepines and related minor tranquillisers, although it has long been known these should only be prescribed for very short periods of time.11 The ‘talking treatments’ service users have long called for have been institutionalised into six sessions of cognitive behavioural therapy (CBT) through the IAPT (Improving Access to Psychological Therapies) programme; and such interventions have increasingly been directed at getting mental health service users into paid work, regardless of the nature and quality of such employment or of how helpful it is likely to be for their mental well-being.

Admittedly after massive delays, the grim Victorian ‘lunatic asylums’ are now largely gone, although some of their intimidating premises still serve as sites for ‘treatment’. As other contributors to this book have pointed out, in 1961 Enoch Powell as health minister gave his famous ‘Water Tower speech’ promising to get rid of them. It was not until the NHS and Community Care Act 1990 and the switch to ‘community care’ that this really happened and then, because the new policy was implemented so poorly, mental health service users, left without adequate help or support, were again stigmatised as ‘dangerous’ and a threat to ‘the public’ (see also Chapters 27 and 28).12

The lack of progression in the modern psychiatric system and its association with control, abuse and institutionalisation in the 1960s gave impetus to the development of a mental health service user/survivor movement in the UK.13 While related ‘mad person’ protests and activism have been identified from the seventeenth century, Peter Campbell, a founding survivor activist, dated the modern UK survivor movement, which has grown on an unprecedented scale, to the mid-1980s, tracing its origins to earlier mental patient groups from the 1970s and acknowledging the help it received from progressive mental health professionals.14

The UK mental health service users/survivor movement can be seen as one of the ‘new social movements’ (NSMs) emerging globally in the second half of the twentieth century, largely based on shared identity and common experiences of oppression – thus the black civil rights, women’s, LGBTQ and grey power movements. Certainly, welfare state user movements like those of survivors and disabled people highlighted their links and overlaps with these NSMs.15 The UK disabled people’s movement was in some ways a separatist one, arguing for different kinds of support to that which had been provided and developing its own underpinning model or theory – the social model of disability and the related philosophy for change of ‘independent living’.16 The same separatist drive and radically different philosophy do not seem to have characterised the mental health/survivors’ movement. The many groups and user organisations that emerged often operated within the psychiatric system, its services and related voluntary organisations and were sometimes directly linked with and funded by the services. While the movement did not have the same kind of distinct philosophical basis or perhaps independence as the disabled people’s movement, it has nonetheless highlighted a number of common principles that have endured:

  • The lives of mental health service users are of equal value to those of others.

  • Mental health service users have a right to speak for themselves.

  • There is a need to provide non-medicalised services and support.

  • Service users’ first-hand experience should be valued.

  • Discrimination against people with experience of using mental health services must end.17

The emergence of the survivor movement, like other service user movements, was also facilitated by the political shift to the right from the late 1970s, which was associated with a renewed emphasis on the market, a devaluing of the state and a growing government rhetoric of consumer rights in public services. While this did not necessarily chime with service users’ calls for more say and empowerment, it opened doors to them and heralded a new stage in the broader interest in democratisation and public/user participation. Key stages in this history vary from country to country but include the following:

  • Working for universal suffrage in representative democracy and the achievement of social rights, like the right to decent housing, education and health, from the late nineteenth to mid-twentieth century.

  • Provisions for participatory democracy and community development, associated with the 1960s and 1970s.

  • Specific provisions for participation in health and social care, from the 1980s through to the first decade of the twenty-first century.

  • State reaction and service user–led renewal as conflicts and competing agendas become more explicit, from 2010 onwards.18

While mental health service users/survivors were organising and campaigning before the 1980s, from then onwards their activities mushroomed in scale, visibility, impact and effectiveness.19 Local and national survivor-led organisations were established. International links were developed. There were organisations that focused on particular issues, like the Hearing Voices Network, as well as some that linked with and included other groups of service users beyond mental health service users/survivors. These included, for example, the Wiltshire and Swindon Users Network as well as Shaping Our Lives, organisations which engaged with a broad range of disabled people and service users, including people with learning difficulties and long-term conditions. There was an emphasis on organising and offering mutual support to mental health service users/survivors who faced particular barriers – for example, those who had difficulty being in public spaces or whose distress might be particularly difficult to deal with at particular times – as well as on working together for change.20

Much was achieved in many different areas, not least a major challenge to conventional assumptions that service users could not contribute and be effectively involved.21 Some local groups made arrangements with local hospitals and service providers, enabling members to be on wards to offer information, advice and advocacy. Schemes for collective as well as self-advocacy developed. Service users began to establish user-run services, providing crisis, out-of-hours, advocacy, advice, support and telephone services based on shared experience and first-hand knowledge. Some service users gained skills as survivor/user trainers and took part in academic and in-service training for professional and other mental health workers, offering insights from their lived experience. In social work, this was extended with the new social work degree introduced in 2001, leading to service users and carers being required to be part of all aspects and stages of qualifying training, with a budget from central government to facilitate this.22

Survivors and their organisations became involved in processes of service monitoring, quality control, audit, evaluation and review. Perhaps most significantly, the mental health service user movement has developed its own survivor research and research initiatives. Not only have these offered fresh insights on mental health policy and practice, as well as on distress from the perspectives and lived experience of survivors, producing a growing canon of both qualitative and quantitative research, but they have also resulted in the establishment of a major Service User Research Enterprise (SURE) unit at the internationally feted Institute of Psychiatry, Psychology and Neuroscience in London and led to a growing number of survivors gaining doctorates and other research qualifications, sourcing research funding, publishing in peer-reviewed journals and securing mainstream research posts.23 There were some early examples of user-researchers controlling their own research projects, most notably the work in the Sainsbury Centre for Mental Health and the Strategies for Living project in the 1990s;24 but most of the efforts of user-researchers have occurred within academic spaces that constrained what was possible when working within mainstream and services-led research projects. Nevertheless, there has been a flourishing of writing by user-researchers since the initial publication by Beresford and Wallcraft.25

However, while survivors and their organisations made significant progress from the 1980s onwards, it often felt from within like two steps forward and one step back. They were unable to achieve any level of funding parity in relation to traditional charitable organisations, and their significant reliance on funding from within the psychiatric system limited their independence.26 Despite their innovative thinking about new kinds of support, few user-led services were supported or sustained in practice. Increasingly their ideas, from peer support and self-advocacy to recovery and self-management, were taken over and subverted by traditional power holders and service providers. The psychiatric system showed an enormous capacity to resist change while incorporating it at a rhetorical level.27

Two convincing arguments have been offered to explain mental health service users’ frequent reluctance to distance themselves from conventional psychiatry even though their movement offers a clear philosophical challenge to its medical model, confirmed by research.28 The first seems to have been the fear that, if they challenged the underpinning medical model, they would be dismissed as in denial about their own pathology and lack of rationality.29 The second seems to be a more generalised reluctance to sign up to any monolithic theories about themselves for fear that these again might dominate and damage them in the same way that they feel psychiatric thinking long has done.30 However, this has changed with the emergence of Mad studies.31 While its flowering in the UK and internationally takes us beyond the period covered by this book, its origins and emergence can be traced to that time and therefore it has clear relevance to this discussion.

Subjectivity and Research

The recounting of the user/survivor movement in the UK would be incomplete without considering the turn towards academic participation, which flourished over two decades and brought a new positioning of user-researchers within academic spaces. This generation of user-researchers took us into the struggles for legitimacy as knowers of our own experiences, holders of our own subjectivity.

Often derided by clinician researchers as of lesser credibility than its binary opposite, objectivity, subjectivity designates the experience under investigation as a valid source of knowledge. Within the social sciences, decades of healthy debate and controversy surround the standing of knowledge embodied by marginalised peoples excluded from the academy and the elite spaces where knowledge about their communities has been generated without their participation. This epistemological bias has been called out by scholars from the marginalised communities, leading to critical new scholarship in, for example, feminist and women’s studies, working-class scholarship and Marxist studies, critical race studies and decolonial and disability studies, all of which informed the early mental health user-researchers and, latterly, the emergence of Mad studies. Disability studies, for example, fostered a reaction to able-bodied researchers describing the position of disabled people without any benefits returning to the people studied in terms of material changes to their living situation in congregated institutionalised settings. The work of Oliver, who pioneered the idea of emancipatory disability research, inspired the early user-researchers who railed against their exclusion from the generation of knowledge about them by detached and objective researchers.32

A core element of these critical intellectual and activist endeavours is that they give value and priority to the situated knowledge of those who live with their mental health ‘conditions’ and under oppressive societal structures, for example physically disabled or racialised people. In the mental health field, user/survivor researchers have equally put forward the arguments that those closest to the experiences under investigation have greater tacit knowledge and insights into the phenomena being studied. This privileging of subjectivity has led to greater insights into, for instance, the experiences of hearing voices, those who self-harm, survivors of suicide attempts and those who undergo ECT.33

Later scholarship has illuminated the accumulated experiences of structural oppressions which have greater impact on people with other marginalised identities: racialised people, queer people and other minorities in society. The concept of intersectionality – developed by Kimberlé Crenshaw – describes how black women’s experiences cannot be understood by solely examining patriarchal oppressions, as their racialised experiences were often ignored or silenced by white feminists and their experiences as women not understood within anti-racist movements.34 Likewise, Kalathil has pointed out how racialised mental health service users experience intersectional oppressions due to the white majorities in user movement spaces and sanism (prejudice and discrimination against people deemed mentally ill) within anti-racist movements.35

There is now increasing recognition of the significance of subjective experience and this has led to demands that survivors be heard and listened to as individuals and not just treated as a statistic or diagnosis. Survivors’ claims for validating our subjective knowledge are core to the demand to have our stories listened to. Recent scholarship has dealt with these struggles for justice as knowers of our own experience, deploying theoretical concepts such as epistemic justice.36 Additionally, narrative therapy has led to innovations in how to recognise and address the many oppressions which induce trauma.37 The growing evidence on the prevalence of earlier adverse experiences in people who later present to mental health services validates the movement’s historical demand that those who use services be listened to.38 This has resulted in a growing demand for trauma-informed mental health services which give people space to tell their stories before arriving at any treatment decisions.

A narrative justice framework has emerged from narrative therapy and trauma work, which highlights the ‘storytelling rights’ of survivors of injustice and oppression. Narrative justice approaches defend people’s rights to ‘name their own experiences; to define their own problems, and to honour how their skills, abilities, relationships, history and culture can contribute to reclaiming their lives from the effects of trauma’; and the framework centres on an ethical question: ‘When meeting with people whose problems are the result of human rights abuses and injustices, how can we ensure we do not separate healing from justice?’ The Dulwich Centre, an Australian-based narrative therapy organisation, has created a Charter of Story-Telling Rights,39 which includes the right of survivors ‘to define their experiences and problems in their own words and terms’ and ‘to be free from having problems caused by trauma and injustice located inside them, internally, as if there is some deficit in them. The person is not the problem, the problem is the problem.’ These narrative justice aims are consistent with many of the rights claimed by service users over the decades.

Citizenship

We are not isolated individuals but live in families and in societies. How people treat us once it becomes known that we have experienced distress or acquired a psychiatric diagnosis leads to the final part of our considerations, that is, struggles for citizenship. Citizenship is a concept embedded within political theory and participatory democracy, which asserts the right of everyone to participate in a society even though not all have equal access to citizenship privileges.40 Citizenship is linked to the notion of belonging to a society, of having rights and associated duties. In human rights legal scholarship and disability rights, disabled people are rights holders for whom the state is the duty bearer; that is, the state has obligations towards its citizens. Of course, many reject the notion of citizenship as an inclusive concept because many people around the world are denied citizenship and it is applied unequally based on difference.41 Nevertheless, when understood in the context of second-class citizenship, it can be a useful way to examine the experiences of people with mental health diagnoses.

There is pervasive stigma and discrimination against people using mental health services (for a fuller discussion, see also Chapter 27). However, we draw attention to a specific aspect of discrimination against psychiatrised people and the way it denies them full citizenship and, most essentially, epistemic justice as knowers of their own realities. Sanism, a term coined by Perlin, is expanded on in greater depth by Mad scholars.42 Sanism, they argue, operates to deny us credibility and citizenship, positioning us as lesser citizens. Indeed, sanism is used to justify separate laws to treat people against their will, as mental health legislation is drawn up by governments to address this anomaly in citizenship and human rights.43

The legal basis for state violence, as identified by early advocates campaigning against forced removal to psychiatric establishments and treatment imposed by medical experts against one’s will,44 has been described as akin to kidnapping. Lindow has argued that any other people undergoing forced removal and interventions experienced as traumatic would receive post-traumatic counselling and support.45 It was these practices, along with the institutionalisation of many people, preventing full participation in society, that were the primary concerns of the many psychiatric survivors who participated in the negotiation of the UN Convention on the Rights of Persons with Disabilities (CRPD) (see Chapter 8). Detailed discussion of the UN CRPD is beyond the scope of this chapter, but there is a wide and growing body of literature and activism considering the rights of people to live lives where they are fully encouraged to be active in their communities, and it warrants serious attention from all areas of psychiatry and all mental health professionals.46

Conclusion

Our account concludes at the point where new knowledges have blossomed thanks to international collaborations enabled by developments in internet access and the arrival of survivor researchers and Mad scholars in academic spaces. It is necessarily short and incomplete, as a full narrative would itself fill volumes. It is offered in an attempt to introduce readers to work that is usually ignored, undervalued and struggling for the adequate funding that would allow it to develop further and demonstrate its potential to contribute to practice both inside and independently of mental health services.

Key Summary Points
  • The user/survivor movement represents a most significant development in mental health and therefore demands careful and serious examination, particularly in its broader social, political, policy, cultural and economic contexts.

  • Generations of psychiatrists have not been educated about the activism and achievements within the user/survivor movements, which left many practitioners ignorant of the autonomy and agency achieved over the past fifty years.

  • The UK mental health service users/survivor movement is one of the ‘new social movements’ (NSMs), including black civil rights, women’s, LGBTQ and grey power, emerging globally in the second half of the twentieth century, largely based on shared identity and common experiences of oppression.

  • The survivor movement, like other service user movements, was facilitated by the political shift to the right from the late 1970s which was associated with a renewed emphasis on the market, devaluing of the state and growing government rhetoric for consumer rights in public services. While this did not necessarily chime with service users’ calls for more say and empowerment, it opened doors to them and heralded a new stage in the broader interest in democratisation and public/user participation.

  • In the mental health field, user/survivor researchers have put forward the arguments that those closest to the experiences under investigation have greater tacit knowledge and insights into the phenomena being studied.

Chapter 14 How the Voice of People with Mental Health Problems, Families and the Voluntary Sector Changed the Landscape

Paul Farmer and Emily Blackshaw
Introduction

The years 1960–2010 mark a period of radical transformation for mental health in Britain. Like all social change, there were many actors in enabling the transformation to take place. This chapter focuses on the role of people with ‘lived experience’, their families and voluntary organisations in acting as catalysts, enablers and, in some cases, architects for change. The move from institutionalised care to care in the community was partly caused by, and in turn further strengthened, the voices of people with mental health problems. People with mental health problems and the friends and family who supported them, alongside other stakeholders and practitioners, formed, influenced and supported voluntary mental health organisations. The voluntary sector has since been a prominent and vocal force in mental health, supporting the rights of those with mental health problems and filling gaps in service provision, where community care has sometimes fallen short. Charities are in a unique position, sitting outside of statutory care and clinic-based spaces, allowing them to build reciprocal and trust-based relationships with the communities that they serve. The voluntary sector must continue to push for the voice of people with mental health problems to be front and centre of mental health service delivery and policymaking.

Deinstitutionalisation: From Asylums to Care in the Community

Going back as far as the Middle Ages, asylums were the main route of care for those with mental health problems.1 Asylums existed in an unregulated and inconsistent form across England in the eighteenth century, and it was not until the 1808 County Asylums Act that ‘Lunatic Asylums’ were officially established for the poor and ‘criminally insane’. These were further regulated with the 1845 Lunacy Act, which importantly changed the status of those it served from ‘people’ to ‘patients’.2 Up until the establishment of the National Health Service (NHS), mental health services were subject to the 1890 Lunacy Act and the 1930 Mental Treatment Act, which set the terms for compulsory detention and for treatment without certification.3 As one patient described their experience of being committed to Ticehurst Asylum in 1875, ‘my liberty, and my very existence as an individual being, had been signed away behind my back’.4 People had no voice; they were ‘out of sight and out of mind’, locked away by a society which simply could not cope with mental ill health and consequently stigmatised those who had mental health problems.

The First World War threw mental health problems into sharp relief. Approximately 80,000 British soldiers were treated for a range of war neuroses, generally known as ‘shell shock’, presenting with symptoms including tics, obsessive thoughts, fatigue and paralysis.5 Military medical professionals acknowledged the need to act on mental health in order to preserve the morale and manpower of their troops, but this was far from the era of ‘psychological modernity’ that some have argued enlightened post-war Britain.6 Psychotherapy remained the exception to the rule in terms of treatments for shell shock, which included medicinal remedies of iron, arsenic and Ovaltine as well as electrotherapy.7 Some doctors, such as W. H. R. Rivers (who famously treated the poet Siegfried Sassoon at Craiglockhart War Hospital), explored the psychological causes of shell shock, prescribing the ‘talking cure’ alongside creative activities.8 The collective trauma experienced by the British public during the First and Second World Wars may also have softened attitudes towards mental health and its treatment.

After the Second World War, society in Britain became increasingly concerned with social fairness, reflecting the wider political consensus for universal human rights (UN/European declarations on human rights, later extended to the rights of disabled people),9 as well as marked changes in social order and structures, including the provision of universal health care. Two key changes started to emerge: the formation and gathering of voluntary organisations and the increasing opportunity for people with mental health problems to have a voice. These were not always easy encounters, with deep-seated stigma and resistance to the idea that ‘the mad’ could possibly have a view. Equally, family voices were often marginalised, caught between the clinician and the service user. In Untold Stories, Alan Bennett refers to his mother’s history of depression and his grandfather’s suicide and bemoans the lack of support available for families whose troubles lacked unusual symptoms: ‘mistake your wife for a hat and the doctor will never be away from your bedside’.10

A Series of Scandals: Increased Service User Voice and Pressure from the Voluntary Sector

In the late 1960s and 1970s, several mental hospital scandals broke, supported by accounts from people who were treated there, including those at Farleigh, South Ockendon and Normansfield (see also Chapter 7). The investigation into long-stay wards at Ely Hospital in Cardiff took place between 1967 and 1968 and is often cited as the start of ‘an avalanche of scandal in mental health,’11 although an earlier exposé, Sans Everything: A Case to Answer, foreshadowed many of the revelations.12 The Ely Committee reported evidence of rough and cruel treatment, inhumane and threatening behaviour towards patients and the pilfering of patients’ belongings and food.13 Despite the best attempts of the government (the Ministry of Health sought to keep the Ely inquiry private, limiting a public appeal for witnesses and keeping the scope of the inquiry narrow),14 the allegations were supported and the story broke, shocking a concerned public and leading to increased pressure on government. Other prominent scandals were featured heavily in the media, including the Shelton Hospital fire and overcrowding at Warwick Central Hospital, and voluntary organisations demanded action. Mind (then known as the National Association for Mental Health, NAMH) condemned the handling of the inquiries in the Observer and worked in solidarity with service user groups such as the Patients’ Association, calling for more independent inquiries and for support for the Post-Ely Working Party.15 Calls for changes to the 1959 Mental Health Act were also supported by the combined voices of people with mental health problems and pressure from groups such as NAMH.16

The voluntary sector remained at the forefront of the ongoing call for deinstitutionalisation. The then minister of health, Enoch Powell, made his remarkable ‘Water Tower’ speech at NAMH’s annual conference in March 1961, where he called for the closure of large psychiatric hospitals, menacingly describing them as ‘isolated, majestic, imperious, brooded over by the gigantic water-tower and chimney combined, rising unmistakeable and daunting out of the countryside’.17 This was promptly followed by supportive policy, including the 1962 Hospital Plan for England and Wales, which projected a halving of mental hospital beds by 1975, and the 1971 White Paper Hospital Services for the Mentally Ill, proposing the eventual abolition of the separate mental hospital system (see also Chapter 1).

The movement towards deinstitutionalisation in the 1960s and 1970s was considered a ‘public and moral necessity’.18 This shift in approach from institutionalised care to care in the community has been attributed to many causes, including the rise of psychopharmacology, a desire to cut public expenditure, an increased emphasis on human rights and advances in social science and philosophy (see also Chapter 31).19 Bolstering the call for the closure of asylums was a strong service user voice, providing harrowing accounts of experiences of care and calling for patient emancipation. The Mental Patients’ Union, established in 1973, was one such group, demanding civil and economic rights for patients, including the abolition of compulsory treatment, better communication of treatment options and risks, and the abolition of isolation as a treatment.20 NAMH was also centring the voices of people with mental health problems in its work, with a 1969 article in Hospital World emphasising that the charity had ‘developed from a polite, reassuring body, uttering words of comfort to all those involved with mental health, to an organisation which is now firmly on the side of the patient and not at all scared of speaking its mind when the need arises’.21

The journey from institutionalised to community care dominated NAMH annual conference agendas over this period – from ‘Rehabilitation and Resettlement of Mentally Disordered People’ in 1977 to ‘Breakthrough: Making Community Care Work’ in 1994. These events were often characterised by outpourings of frustration and anger about the poor state of services, the coercive nature of poor-quality treatment and the lack of support or recognition for people with their own experiences of mental health problems. They also acted as a convening point for many people who had struggled to find a voice.

Were Communities Ready for Care in the Community?

Deinstitutionalisation in mental health care involved three core strategies: discharging patients from hospital wards; decreasing or halting hospital admissions; and implementing alternative community-based interventions (see also Chapter 30).22 Discharging patients and reducing admissions were largely successful, with a sharp reduction from 143,700 mental health inpatients in England in 1950 to 49,000 in 1990; the average length of stay also fell dramatically, from 863 days in 1950 to just 83 days in 1990.23 The success of implementing community-based interventions, however, was less certain. As part of the reconfiguration of mental health services under the 1962 Hospital Plan for England and Wales, acute psychiatric inpatient services were developed at general hospitals, outpatient capacity was increased and local authorities developed community mental health teams.
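In proportional terms (a simple calculation from the figures above, not one given in the chapter’s sources), these represent falls of roughly two-thirds in inpatient numbers and nine-tenths in average length of stay:

$$\frac{143{,}700 - 49{,}000}{143{,}700} \approx 0.66, \qquad \frac{863 - 83}{863} \approx 0.90$$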

The voluntary sector played a crucial role alongside local and district health authorities in managing and delivering community-based services.24 Throughout the gradual closure of psychiatric hospitals, it became increasingly clear that community statutory services could not meet all the needs of people who were experiencing mental distress or had learning/intellectual disabilities. This placed a greater responsibility on the families of those with mental health problems in terms of providing care.25 It also coincided with an increase in voluntary sector organisations supporting people with mental health problems. Groups of psychiatrists, people experiencing mental health problems and other stakeholders organised themselves into voluntary sector organisations. There was an established legacy here. Samaritans had been founded by the vicar Chad Varah in 1953. The Mental Health Foundation originated as the Mental Health Research Fund, established in 1949. The oldest mental health charity in the UK, Together for Mental Wellbeing, was originally established in 1879 and had worked to find housing for female patients discharged from asylums.

Mind had been founded as the National Association for Mental Health in 1946. In 1971, NAMH launched its twenty-fifth anniversary Mind campaign, which was so successful that, in 1972, the organisation adopted Mind as its new name. The campaign sought to clarify the organisation’s aims, raise funds, increase public awareness of mental health and improve hospital and community services. Under the influence of the organisation’s new director, Tony Smythe, it took on a social model of care and sought to support those receiving care in the community as well as in hospital. As part of the 1974 strategy, Mind worked to establish more regional offices in order to support local Mind associations across England and Wales. It also strengthened its campaign for improvements to services at a local level, as it became increasingly evident that statutory services could not meet the needs of all people experiencing mental health problems. From the 1970s through to the 1990s, concern around community care dominated the charity’s public education and fundraising campaigns: ‘Home from Hospital’ in 1976 emphasised the housing problems faced by people discharged from hospital wards; ‘A Better Life’ in 1986 called for increased resources in order to develop an effective local network of care; and ‘Breakthrough! Community Care’ in 1994 attempted to advance a service user–centred and holistic approach to mental health care in the community.26

Local associations for mental health were newly established throughout this period in response to the needs of the community. Often formed by family members or local community leaders following the closure of an asylum, local Minds and other voluntary organisations provided hope and help in communities where people with mental health problems were often feared and stigmatised. From this point, local Minds grew in number, exceeding 200 by the start of the next century, and in scope. Today, local Minds offer a variety of services to the public, including counselling and psychological therapies, housing schemes, social clubs and day centres. Arguably, voluntary and community organisations can go beyond what is offered by statutory services delivered in clinical settings; they become rooted in the communities in which they are situated, work alongside people with mental health problems to provide user-led interventions and become trusted by those they serve.27

While support for people was an important role for the voluntary sector, it was not sufficient. As seen in the section ‘A Series of Scandals: Increased Service User Voice and Pressure from the Voluntary Sector’, there was a wider need for people to be regarded as equal citizens, receiving respect in society. This required mobilisation on a societal level, with campaigning, legal work and policy influence forming part of this new agenda for change. The 1970s marked a decided change in Mind’s positioning in the voluntary sector. In 1975, the first legal officer was appointed and Mind began promoting itself as a lobbying group. In a 1981 issue of Mind Out,28 Mind’s then medical advisor, Anthony Clare, noted that ‘Mind has developed a lusty appetite for legal reform and the issue of patients’ civil rights’. Prominent campaigners who worked at Mind included civil rights lawyer Larry Gostin, Labour MP Tessa Jowell and disability and mental health campaigner Liz Sayce. In 1979, the legal and welfare rights service became a fully fledged legal department (see also Chapter 8). Mind’s campaigns of this era again demonstrated the preoccupation with deinstitutionalisation, from ‘Home from Hospital’ (1976) to ‘A Better Life’ (1986). Gostin’s 1975 publication with Mind, A Human Condition, is said to have largely influenced the 1982 Mental Health (Amendment) Act, including increased opportunities for tribunal review as well as detailed regulations for consent and treatment.29 The ‘Breakthrough! Community Care’ campaign in 1994 demanded ‘proper care in the community’. In January 1995, Tessa Jowell MP, alongside Mind, presented the Community Care Bill to the House of Commons; the Bill highlighted the government’s failure to implement the Community Care Act properly, meaning that care in the community was ‘far from an effective reality’.

The Role of Family, Friends and Carers

Mental health care in the community meant shared care of people with mental health problems, by both mental health professionals and family members. The Schizophrenia Fellowship (now Rethink Mental Illness) was founded in 1972, after a father, John Pringle, shared his son’s experience of schizophrenia in the Times. In 1973, Rethink launched its first support groups for carers and three years later received its first government funding to expand this work.30 Then, in 1995, Rethink published its ground-breaking Silent Partners report, commissioned by the Department of Health, which was the largest survey at that time of those caring for people with mental health problems. Many of the issues highlighted in that survey still resonate today. The C4C (Caring for Carers) survey reported on seventy-one UK respondents who were caring for a family member with a severe mental illness in 2015.31 It found that family caregivers had typically cared for their relative for sixteen years and spent approximately twenty-nine hours per week caregiving. The report also highlighted that support for these carers was lacking and that they often felt unheard. Worryingly, the UK ranks highest internationally in terms of perceived stigma felt by family caregivers in seeking professional help.

More recently, the efforts of carers, in the form of family and friends, have been better recognised in mental health service provision and research. As highlighted in a 2017 paper on carer involvement in mental health inpatient units,32 engaging carers in mental health treatment can help to improve patient symptoms and quality of life as well as reduce inpatient admissions.33 Consequently, more and more mental health policies recommend the involvement of carers in the treatment of people with mental health problems. For instance, the 2010 UK publication Recognised, Valued and Supported: Next Steps for the Carers Strategy focused on how best to support both carers and those they care for.34 The 2014 Care Act built on these intentions by enabling carers to complete a carer’s assessment to obtain support from their local authority.

Despite these positive steps, many barriers still exist that prevent carers from becoming fully involved in their family member’s or friend’s mental health care, with the perspectives of people with mental health problems and carers still sorely lacking in services and the research literature.35 Carers of people receiving mental health care in the UK have experienced difficulties in navigating these services, with communication gaps and discontinuities in treatment. In a qualitative exploration of these experiences, a mother caring for her son with schizophrenia expressed her frustration with repeated staff changes in services: ‘as soon as I get to know one, then they’ve gone. It used to be very upsetting, very disruptive, because every time there was a new doctor or a new key worker or a new social worker, or whoever, you’ve got to start right from scratch.’ Another woman, caring for her husband with schizophrenia, expressed the difficulties that arise when carers have to work with mental health professionals with whom they have no established contact: ‘it’s an absolutely vital lifeline to have somebody that knows you, that listens to you, that responds to you, at the other end of the phone.’ Voluntary organisations such as Rethink and Carers Trust are still fighting to support these individuals in carer roles and to ensure their voices are at the centre of conversations around mental health support in the UK.

Tackling Stigma

Stigmatising attitudes towards mental health problems have existed in various forms since before the establishment of asylums in Britain. For a long time, stigma meant that people experiencing mental health problems were kept out of sight. As segregated care moved to care in the community, the conversation shifted from concern for the welfare of those patients discharged into the community to fears of dangerousness and a desire to protect the public. The murder of Jonathan Zito in 1992 ignited public fear, and popular ‘mad axeman’ and ‘psychopathic murderer’ myths dominated debate around community care.36 People were keen to protect themselves and the communities in which they lived; a survey carried out in 1994 by the Department of Health showed that 22 per cent of people in the UK felt that locating mental health facilities in a residential area would downgrade the area.37 Fears of dangerousness still exist today and are perpetuated in the media. For instance, a 2019 study of tweets about mental health generated by the UK national press revealed that 24 per cent presented mental illness in ‘bad news’ stories, as opposed to ‘understanding’ stories.38 The global director of Time to Change, Sue Baker OBE, stressed that ‘we are still picking up the pieces from terrible headlines of “mad psycho killers” mid to late 90s’.39

The voluntary sector and government increased their efforts to tackle these stigmatising attitudes towards mental health. In 1997, the Health Education Authority launched its Making Headlines initiative, pushing for less sensationalised portrayals of mental illness in the national press. A study at the time showed that nearly half of news media articles referencing mental health presented it negatively and associated mental illness with criminality and violence.40 The Shift anti-stigma campaign was launched to promote a disability-inclusion model of mental health and involved cross-departmental governmental input. In more recent times, Rethink and Mind jointly launched the Time to Change campaign in 2007 to tackle stigma; it has reported a 9.6 per cent positive change in attitudes towards mental health between 2008 and 2016. While stigmatising attitudes towards mental health still exist today, they are slowly starting to change for the better.

These campaigns were centred around the voices of people with mental health problems. Time to Change established a community of thousands of champions, people with lived experience, who could tell their own story.41 These champions challenge stigma around mental health by sharing their experiences, talking about their mental health, forming campaigning groups and influencing the work of the Time to Change campaign. One champion, Sophie, feels that ‘involving yourself in open and non-judgemental conversations regarding mental health is an absolutely crucial first step towards fighting stigma and ending discrimination’.42 This work has helped to put the voices of people with mental health problems at the forefront of mental health campaigning. As one champion, Sian, puts it, ‘the way society views mental health is changing. We are talking more and more, showing people that it is OK not to be OK, that it can happen to any of us.’43 As the period 1960–2010 drew to a close, the same voices were back on the streets campaigning for better mental health legislation.

Mind’s approach to tackling stigma has long been focused on creating space for the voices of those experiencing mental health problems. For instance, the October 1974 edition of Mind Out was devoted entirely to the experiences and views of people with experience of mental health problems, and in 1987 Mind launched the service user advisory group Mindlink.44 Mind continues to ensure that lived experience is at the centre of its work, involving people with mental health problems in strategy planning, service development and governance, with a commitment that at least half of Mind’s board have lived experience of mental health problems.

Conclusion

The voluntary sector has been an adaptive but constant source of support for individuals with mental health problems throughout this period of dramatic transformation in mental health services between 1960 and 2010. Charities sought to assist in providing local-level support when the vast majority of care moved from hospital wards to care in the community. The voluntary sector also fought to tackle the stigmatising attitudes towards mental health that negatively impacted communities as mental health patients were discharged. Ensuring that people with lived experience of mental health problems were heard and respected has been at the core of this work. Charities, such as Mind, continue to fight against stigmatising attitudes and to develop and deliver user-led community-based interventions to support all those people in need of help.

Key Summary Points
  • The move from institutionalised care to care in the community was partly caused by, and in turn further strengthened, the voices of people with mental health problems.

  • Charities are in a unique position, sitting outside of statutory care and clinic-based spaces, allowing them to build reciprocal and trust-based relationships with the communities that they serve.

  • Mind (then known as the National Association for Mental Health, NAMH) condemned the handling of the 1970s inquiries into psychiatric hospitals in the Observer and worked in solidarity with service user groups such as the Patients’ Association calling for more independent inquiries. Mind’s approach to tackling stigma has long been focused on creating space for the voices of those experiencing mental health problems.

  • Worryingly, the UK ranks highest internationally in terms of perceived stigma felt by family caregivers in seeking professional help. The Schizophrenia Fellowship (now Rethink Mental Illness), founded in 1972 after a father, John Pringle, shared his son’s experience of schizophrenia in the Times, launched its first support groups for carers and three years later received its first government funding to expand this work.

  • Time to Change established a community of thousands of champions, people with lived experience, who could tell their own story. These champions challenge stigma around mental health by sharing their experiences, talking about their mental health, forming campaigning groups and influencing the work of the Time to Change campaign.

Chapter 15 Women in UK Psychiatry and Mental Health

Gianetta Rands
Introduction

Throughout this chapter, it is accepted that women’s mental and emotional health are affected by their roles in society, their relationships with other people, their own health and welfare and their financial independence. Some relevant laws enacted between 1960 and 2010 are described. Some experiences of women as mental health professionals and as patients during this time period are considered.

Women’s Lives in 1960

One source of insight into the lives of women at the end of the 1950s and during the 1960s is Jennifer Worth’s book Call the Midwife (2002),1 later turned into a BBC series portraying people’s lives in London’s docklands. Many women stopped working when they married. Sex before marriage was frowned upon. Women often had ten or more children. Maternal death rates were high, usually due to postpartum haemorrhage or sepsis. Contraception was the rhythm method or condoms. Illegal terminations could be procured as ‘back street abortions’ that too often resulted in fatal sepsis or painful pelvic scarring and infertility. Large extended families lived in small tenements with outdoor toilets and no hot water. Domestic violence was brushed off by police and others as ‘just a domestic’.

In 1960, most psychiatric treatment took place in large mental hospitals, the old lunatic asylums, that were among the first employers to provide equal pay to men and women, thanks to Henry Fawcett.2 These were also some of the first medical institutions to employ women doctors, some of whom progressed to careers as psychiatrists, such as Eleanora Fleury and Helen Boyle.

By the early 1960s, conditions in the asylums were miserable for patients and staff, as described by Bradley.3 Barbara Robb, a psychoanalyst working in north London, was horrified by conditions in her local asylum. Her book Sans Everything shook the establishment and the long process of closing asylums began (see also Chapters 1 and 7).4

Mother’s Little Helper

‘The problem that has no name’ slowly emerged in the consciousness of Betty Friedan as she analysed replies to a questionnaire sent to her cohort of 1942 graduates from Smith College, a women-only university in Massachusetts, United States. Women in the late 1950s and early 1960s were not happy. When they consulted their doctors, there was ‘nothing wrong’ with them. Their problem had no name. Friedan’s analysis evolved into her book The Feminine Mystique, described by the New York Times as ‘one of the most influential non-fiction books of the twentieth century’.

One of many republications of this book is a Penguin Modern Classic edition of 2010, with an introduction by Lionel Shriver.5 Shriver refers to the award-winning television series Mad Men, in particular the character Mrs Betty Draper, as the embodiment of the feminine mystique. She has everything a woman could want – a handsome high-earning husband, a beautiful suburban house filled with modern domestic gadgets, two healthy children, time for self-pampering and money for pretty clothes. She married young and rejoiced that she wasn’t a frumpy ‘career-girl’. Why was she unhappy?

Table 15.1 Timeline of laws and events significant for women and mental health, 1960–2010

1841: The Association of Medical Officers of Asylums and Hospitals for the Insane founded in the UK; became the Medico-Psychological Association (MPA) in 1865
1894: MPA admitted its first woman member, Eleanora Fleury (1867–1960)
1926: MPA became the Royal Medico-Psychological Association (RMPA)
1939: Helen Boyle (1869–1957) became the first woman president of the RMPA
1960s: Medical school intake 30% women
1961: Suicide Act
1963: The oral contraceptive pill available to women in the UK
1965/9: Murder (Abolition of Death Penalty) Act
1967: The Abortion Act
1967: Sexual Offences Act
1968: The Ford sewing machinists’ strike; women employees demanded equal pay with men in equivalent jobs (Barbara Castle, first secretary of state and secretary of state for employment, 1968–70)
1970: Equal Pay Act (Barbara Castle)
1971: RMPA became the Royal College of Psychiatrists
1971: First refuge for victims of domestic abuse opened in London (Erin Pizzey); renamed Refuge in 1993, offering advice and refuge to victims of domestic abuse. Jack Ashley, Labour MP, coined the term ‘domestic violence’ in a House of Commons address (1973)
1975: Sex Discrimination Act
1976: Women’s Therapy Centre, London, opened (Susie Orbach, Luise Eichenbaum)
1983: The Mental Health Act
1984: First clinic for benzodiazepine withdrawal (Heather Ashton)
1985: The Gillick Decision, House of Lords
1985: Prohibition of Female Circumcision Act
1990: Human Fertilisation and Embryology Act; reduced the limit for legal abortion from 28 to 24 weeks, except where there is extreme risk to the mother
1993 onwards: Women presidents elected by the Royal College of Psychiatrists: Fiona Caldicott (1993–6), Sheila Hollins (2005–8), Susan Bailey (2011–14), Wendy Burn (2017–20)
1996: Women in Psychiatry Special Interest Group (WIPSIG) created (Jane Mounty, Anne Cremona, Rosalind Ramsay)
1998: The Human Rights Act
2003: Sexual Offences Act; marital or spousal rape now explicitly illegal
2003: In psychiatry, 36% of consultants and 22% of fellows were women; 1 in 5 higher trainees were in flexible or part-time posts
2004: Domestic Violence, Crime and Victims Act
2005: The Mental Capacity Act (commenced 2007)
2006: Safeguarding Vulnerable Groups Act
2010: The Equality Act

Although Friedan’s research has been criticised for focusing on a few hundred middle-class, highly educated, white women in the United States, her findings offered insights into a massive, international problem. The Feminine Mystique has been credited with sparking the second wave of feminism, in which equality was the main issue.

Other Smith College alumnae are Nancy Reagan, Barbara Bush, Gloria Steinem and Sylvia Plath (Class of 1955). Plath, in her writings and her tragically short life, embodied something of the problem that has no name. A talented poet and writer, she found that the restrictions and drudgery of being a housewife overwhelmed her mental health. She was diagnosed as clinically depressed, received electroconvulsive therapy (ECT) and killed herself in 1963.

Sylvia Plath died in London. A few miles away, Mick Jagger and Keith Richards were creating their legendary brand of rock and roll. In 1966, they released this song:

Mother’s Little Helper
Kids are different today, I hear every mother say
Mother needs something today to calm her down
And though she’s not really ill, there’s a little yellow pill
She goes running for the shelter of a mother’s little helper
And it helps her on her way, gets her through her busy day

It goes on:

And if you take more of those, you will get an overdose
No more running for the shelter of a mother’s little helper
They just helped you on your way, to your busy dying day

These lyrics were seen as a response to public criticism of the younger generation’s use of recreational drugs at a time when married women were being prescribed increasing amounts of calming medications such as meprobamate and diazepam (Mother’s Little Helper). Both are prescription sedatives now known to be addictive. Meprobamate (Miltown or Equanil) can be lethal in overdose. Valium (diazepam), one of the first of the benzodiazepine group of drugs, was launched in 1963, and in 1978 more than 2 billion tablets were prescribed in the United States alone.

In 1984, in her practice in Newcastle, Heather Ashton noticed the difficulties people had in withdrawing from benzodiazepines and set up the first clinic to help with this. Her manual Benzodiazepines: How They Work and How to Withdraw has guided millions around the world in this difficult task.6

Sedation was an easy way to manage women presenting with ‘the problem that had no name’. As in the case of Betty Draper, alcohol was another sedative increasingly resorted to by unhappy housewives.

As the 1960s progressed towards the ‘Summer of Love’ (1967), addiction to sedatives, alcohol and other recreational drugs all spiralled (see also Chapter 25). Another group of medications that had a massive effect on the mental and physical health of women was the pill: oral contraception for women.

The Pill: Women Control Family Planning

The oral contraceptive (OC) pill is a tablet taken every day, or for twenty-one days of each twenty-eight-day cycle, by women wanting to avoid pregnancy. Interest in combining a progesterone-type drug with an oestrogen-type drug for this purpose started in the 1930s, but it wasn’t until synthetic versions of these hormones became available that they could be tested in women. In the UK, the first OCs were made available via Family Planning Clinics (FPCs) from 1963.

By 2010, thirty-three varieties of combined OC, five varieties of progesterone-only OC and other hormonal contraceptives such as subcutaneous implants, intrauterine devices and vaginal rings were available in the UK. The pill could be prescribed to girls under sixteen provided they were ‘Gillick competent’, in which case ‘parental rights’ did not apply.7

The impact on women’s lives and health was huge. Not only could they control their fertility but these drugs were used for other inconvenient and incapacitating conditions such as menorrhagia (heavy bleeding), dysmenorrhoea (painful periods), endometriosis, premenstrual tension and acne.

Women’s health improved because they were not constantly having babies. The risks associated with pregnancy, such as death, septicaemia, anaemia, urinary tract infections, incontinence and deep-vein thrombosis, reduced as the number of pregnancies reduced. The age that women married increased, the number of children they had decreased and more women succeeded in higher education and professional careers. There were concerns that decoupling sex and pregnancy caused an increase in promiscuity and pressure on young women to have sex before they were fully consenting.

With effective contraception available, women with serious illnesses, be they mental, physical or genetic predispositions, could choose whether or not to have babies. For instance, a woman with bipolar affective disorder needing lithium to stay well could choose to avoid pregnancy as lithium is toxic to the foetus (teratogenic).

Mental Illnesses Associated with the Menstrual Cycle and Pregnancy

The three main psychiatric diagnoses associated with the menstrual cycle and pregnancy are premenstrual dysphoric disorder, postnatal depression and postpartum psychosis.

Premenstrual tension refers to a collection of mood and somatic symptoms experienced by many women in the luteal phase of their cycles, that is, after ovulation and before menstruation. Some 3–8 per cent of women experience severe symptoms that constitute a diagnosis of premenstrual dysphoric disorder. These women have a higher risk of postnatal depression and mood disorders during their menopause.

Postnatal depression affects 10–15 per cent of new mothers within the first two months of giving birth and sometimes starts in the last few months of pregnancy. Postpartum psychosis affects about 1 in 1,000 women who give birth and usually comes on very quickly in the first few weeks after having a baby. It has a high risk of suicide and infanticide and needs urgent treatment, usually in hospital and preferably in a mother and baby unit.

These conditions are treatable once they are diagnosed. Research and collaborations between psychiatrists, obstetricians, gynaecologists, general practitioners and scientists led to criteria for diagnoses and improvements in services and training. By the end of the 2010s, these mental illnesses were taken seriously in the UK.8

Laws Affecting Women in UK Psychiatry and Mental Health, 1960–2010

In 1961, the Suicide Act was passed. Before this, anyone found trying to kill themselves could be prosecuted and imprisoned. This law applied to England and Wales. In Scotland, suicide was never an offence. In Northern Ireland, text from the Suicide Act was incorporated into their Criminal Justice Act 1966.

The Abortion Act, passed in October 1967 and effective from April 1968, made medical termination of pregnancy (abortion) legal up to twenty-eight weeks in England, Wales and Scotland. A pregnant woman could obtain termination of her pregnancy if two medical practitioners agreed that continuing with that pregnancy would risk her life or her mental or physical health, or put any of her existing children at risk. Northern Ireland decriminalised abortion in October 2019, effective from 31 March 2020. To quote Bradley: ‘unwanted pregnancy, whether due to contraceptive failure, rape or incest was often a precursor of severe depression or suicide’.9

The Sexual Offences Act 1967 legalised homosexuality between consenting men aged twenty-one years and over, reduced to eighteen years and over in 1994. Before then, women in sham marriages, knowingly or unknowingly, willingly or unwillingly, experienced collateral damage, suffering confusion, deceit, emotional distress and mental illness.

Barbara Castle, in her roles as first secretary of state and secretary of state for employment (1968–70), intervened in the Ford machinists’ strike of 1968, in which women employees demanded pay equal to that of their male colleagues in equivalent jobs. The awareness of widespread pay inequality precipitated the Equal Pay Act 1970, which gained royal assent in May 1970 but was, curiously, not commenced until 1975. It made discrimination between men and women in their terms and conditions of employment illegal. It applied to the whole of the UK except Northern Ireland. Castle, in her earlier role as minister of transport (1965–8), had also introduced breathalysers, seat belts and speed limits, all of which have benefited the lives of women.

The Sex Discrimination Act 1975 gained royal assent in November 1975. It specifically covered discrimination and harassment on the grounds of sex or marital status in employment, training and education. It was one of a clutch of equality and discrimination laws that were repealed and incorporated into the Equality Act in 2010.

The current version of the Equality Act 2010 lists protected characteristics as age, disability, gender reassignment, marriage and civil partnership, race, religion or belief, sex and sexual orientation.10 Discrimination, harassment and victimisation are prohibited in employment and private and public services. The Act applies in England, Wales and Scotland but has limited application in Northern Ireland.

Other legislation passed in the UK during this period and relevant to the mental health of women includes the Prohibition of Female Circumcision Act 1985 and the Domestic Violence, Crime and Victims Act 2004, which gave legal protection to victims of crime, particularly domestic violence. The first refuge for women victims of domestic violence opened in London in 1971. The Sexual Offences Act 2003 made explicit in statute that marital, or spousal, rape is illegal (the courts had abolished the common law marital rape exemption in 1991). Before then, a husband could force ‘conjugal rights’ on his wife, claiming ongoing consent through their marriage contract. The 2003 Act reinforced the importance of consent: if someone is unable to consent, or their consent is obtained by force or intimidation, the sex act is not consensual.

The Human Rights Act was passed in the UK in 1998. It legislated that public organisations, including government, police and local councils, must treat every person resident in the UK equally, with fairness, dignity and respect. It is based on articles of the European Convention on Human Rights (ECHR). Another important law incorporating ECHR articles as principles is the Mental Capacity Act 2005, commenced in 2007. This Act enshrines a person’s right to make their own decisions based on informed consent and defines a process for making decisions when a person lacks mental capacity.

The Feminist Movement

Women’s movements have a long history, stretching back at least 500 years, and an international reach. Friedan describes progress made as ‘two steps forward, one step back’.11 Since then, feminism has been documented as ‘waves’, which inevitably implies troughs.

Feminism as an ideology is based on equality of men and women in all aspects of society, education, employment, politics, economics and human rights. It is about equality of value, opportunity and reward. The principles of feminism have been incorporated into many areas, such as feminist philosophy, psychology and sociology, and into specific groups, such as black and intersectional feminism.

First-wave feminism covers women’s rights movements’ demand for, and gain of, suffrage. Many also demanded equality of educational opportunities. This aim continued into the second wave of feminism, in the 1960s and 1970s, which primarily focused on equality and non-discrimination. As these ideals were enshrined in laws, there was the reasonable assumption that they would soon be achieved. It slowly became evident that this was not so, igniting the third wave of feminism in the early 1990s. This was more multiracial than previous waves, and the concept that social conditioning was the cause of gender inequalities and discrimination was key. The phrase ‘a matrix of domination’ incorporates the idea that gender inequality interacts with homophobia, classism, colonisation and capitalism across the globe, in a way that holds back progress in all those areas.12 The use of social media around 2010 sparked new interest in feminism, which started ripples of the fourth wave.

Feminists of the 1970s reinstated the title ‘Ms’ as a formal way to address women without specifying their marital status. It had previously been used in the seventeenth and eighteenth centuries when, like ‘Miss’ and ‘Mrs’, it was a derivation of ‘Mistress’. Women doctors of the 1920s rejected the title ‘Doctress’, and many women doctors use ‘Dr’ or ‘Professor’ not only because these titles indicate training and academic achievement but also because they are non-gendered.

Aspects of Mental Health Services, 1960–2010

With mental health services in the 1960s under-resourced and neglected, the anti-psychiatry movement was an attractive alternative (see also Chapter 20). In the 1970s, there was a flurry of reports about sexual and violent crimes on psychiatry wards and inappropriate behaviour of professionals. The Kerr/Haslam Inquiry (2005) investigated two psychiatrists, William Kerr and Michael Haslam, working in York in the 1970s and 1980s, both of whom had been found guilty of indecent assaults against women psychiatric patients.13 The report found that numerous complaints had not been taken seriously; professionals raising concerns (whistle-blowing) were not heard; whistle-blowing was detrimental to careers; and there was a culture of loyalty to colleagues, tolerance of sexualised behaviours and a predominantly male hierarchy of doctors and female nurses that reinforced gender power dynamics.

Several good recommendations were made. All Mental Health Trusts were to display information leaflets about assessments and treatments. Complaints procedures needed to be clear, and Independent Mental Health Advocates (IMHAs) and Patient Advice and Liaison Services (PALS) were to be provided. The report’s recommendations about protecting vulnerable adults were incorporated into the Safeguarding Vulnerable Groups Act 2006.14

In the 1980s, the ‘psychiatric paradox’ was described: women go to psychiatric services for help but instead get blamed for their own illnesses and those of others. Penfold and Walker write that

‘Blame the victim’ models lead to the scapegoating of mothers, blaming the rape victim and battered wife, dismissing the prostitute as primitive or deviant, accusing the alcoholic’s wife of causing her husband’s downfall, pointing the finger at the little girl who is assumed to have seduced her innocent father, and attributing women’s addiction to tranquillisers to neuroticism and inadequacy.15

They describe protection of perpetrators with reference to psychiatry’s function as a social regulator.

Perusal of standard psychiatry textbooks from the 1980s provides evidence for some of these allegations. Their indexes contain few references to women, abuse or perinatal illnesses. A short paragraph about the mental effects of the menopause concludes ‘psychiatric symptoms at this time of life could equally well reflect changes in the woman’s role as her children leave home, her relationship with her husband alters, and her own parents become ill or die’. The recommendations of the Kerr/Haslam report were needed and eventually implemented throughout the UK.

The Women’s Therapy Centre, London, opened in 1976 to offer individual and group psychotherapy to women who, for many reasons, were not able to access other mental health services. It was a social enterprise started by psychotherapists Susie Orbach and Luise Eichenbaum, based on their principles of social feminism and their skills in psychotherapy and psychoanalysis. Among its successes have been increasing the understanding of what it means to grow up as a girl in patriarchal societies, expanding developmental theory and feminist relational practice, and producing books and lectures that are used internationally.

Developments in Mental Health Services since 1960 that have specifically benefited women include Perinatal Psychiatry, Child and Adolescent Mental Health Services, Intellectual Disability and Old Age Psychiatry, the latter because women live longer than men. There have been extensive debates about women-only wards, which usually conclude that what matters is good, well-resourced, multi-professional teams providing assessment and treatment in a co-operative, collegiate way. With these in place, the demands for single-sex wards diminish.

Private psychoanalysis and psychotherapy thrived in many areas of the UK from 1960 to 2010. A list of women psychoanalysts in Great Britain in the twentieth century includes Enid Balint and Clare Winnicott.16 Enid Balint (1903–94) was a social worker and psychoanalyst. She married Michael Balint in 1953 and introduced him to the casework techniques she used to train social workers. These formed the basis for Balint groups, in which transference and countertransference are used to analyse clinical cases and the doctor–patient relationship. Balint methodology is usually attributed to Michael, but the evidence indicates that Enid deserves attribution too.

Clare Winnicott (1906–84) also trained as a social worker and psychoanalyst and taught at the London School of Economics (LSE) throughout her career. She was interested in the psychic life of children who had suffered loss and separation, how to communicate with them and the role of social workers as ‘transitional participants’. She described ideas such as ‘transitional objects’ before she married Donald Winnicott in 1951. Whose idea was ‘the good-enough mother’?

Attribution for the ideas and work of women therapists, doctors and scientists has been skewed for years, their contributions being invisible and/or claimed by male colleagues. Famous examples include Rosalind Franklin’s work on DNA crystallography and June Almeida’s discovery in 1964 of coronaviruses.

In addition to statutory and private mental health service developments, since 1960 there has been an expansion in voluntary organisations and self-help information for people in emotional distress and managing mental illnesses (see also Chapter 14).17

Women Psychiatrists

For those training as medical students and psychiatrists in the 1970s and 1980s, it was not unusual to hear a lecturer announce, ‘I will refer to the doctor as He and the patient as She, whatever their sex, just for clarity’.

In 1894, after a year spent debating whether ‘man/men’ could include woman/women, the Medico-Psychological Association (MPA, predecessor of the Royal College of Psychiatrists) rewrote its rules and admitted its first woman psychiatrist, Eleanora Fleury (1867–1960). Helen Boyle (1869–1957) became a member of the MPA in 1898 and, in 1939, became its first woman president.18

A 1960s analysis of medical staffing predicted that, in 1964, 1,730 medical students would qualify as doctors: 1,330 men and 400 women. It assumed two-thirds ‘wastage’ of women to marriage, leaving 1,464 ‘working doctors’.19 In 1967, 30 per cent of the UK medical school intake were women. Two surveys of qualified women doctors, by the Medical Practitioners Union and the Medical Women’s Federation, found that 80 per cent of respondents actively worked as doctors and nearly half were in full-time work. The researchers concluded that ‘the overall wastage of women doctors is not as alarming as is suggested’.
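The arithmetic behind that prediction can be reconstructed as follows (the rounding up of the retained third of women to 134 is an inference from the published total, not stated in the analysis itself):

$$1{,}330 \;+\; \left\lceil \tfrac{400}{3} \right\rceil \;=\; 1{,}330 + 134 \;=\; 1{,}464$$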

In 1974, the then health secretary Barbara Castle expressed her intention to improve opportunities for women working part-time in the NHS. Part-time and flexible jobs and training posts now offered options for different work–life balance choices for women and men professionals working in mental health services.

Psychiatrists Jane Mounty, Anne Cremona and Rosalind Ramsay describe the Women in Psychiatry Special Interest Group (WIPSIG) within the Royal College of Psychiatrists. Its initial objectives were to improve both the working lives of women psychiatrists and the provision of care to women using mental health services. The need for part-time jobs and job-shares had been acknowledged, but employers preferred full-time doctors who, at that time, worked up to eighty hours per week.20

They note that ‘by November 2003, 47% of core trainees, 53% of higher trainees, 55% of staff grade and associate specialists, and 36% of consultants in psychiatry, were women. Twenty-two per cent of College Fellows were women, and there were 21 women professors of psychiatry. One in five higher trainees was training flexibly.’

The second aim of this group was to highlight the service provisions for women patients. Professor Dora Kohen is quoted thus:

Half the patients in mental health services are women, although in Old Age services women outnumber men by 2:1 (women live longer). Anxiety, depression and eating disorders are all more common in women. Socio-economic and psychological factors associated with poverty, unemployment and social isolation play a considerable part in female mental illness. Other disorders such as puerperal psychosis, postnatal depression and premenstrual dysphoric disorder are specific to women.21

Gender inequality continues in clinical and academic medicine. In a recent review of the evidence for inequalities in pay, career progression, citations and authorships of academic papers, clinical awards and senior leadership roles, the authors conclude that ‘equality is not just about having a level playing field, it is about unleashing talent’. They challenge science journals to improve the gender balance of their editorial boards.22

Conclusion

In 1988, Punch magazine published the Miss Triggs cartoon, now guaranteed immortality by its reproduction in Mary Beard’s book Women and Power: A Manifesto.23 It is a line drawing of an unspecified board meeting of five men and one woman, with the caption ‘That’s an excellent suggestion, Miss Triggs. Perhaps one of the men here would like to make it.’

Most women immediately recognise this situation. It is a common experience of everyday sexism, which includes silencing, the misattribution of skills, talents and contributions, and gender inequality.

Add to this the social and institutional structures that perpetuate inequality and discrimination, and the toll on women’s mental health seems obvious. In 2010, social media had not yet become ubiquitous and its effect on young people’s mental health, particularly that of young women, was not known. However, it was known that one in four women experienced domestic abuse, that only 6 per cent of rape cases resulted in prosecution and that the prevalence of mental illnesses in young women and teenage girls was increasing. Unforeseen events such as economic collapse and new disease outbreaks seem to affect women disproportionately, given their ‘double shift’ responsibilities (career and domestic) and a world that remains, predominantly, designed for men.24

Wish You Were Here by Sophie McKay Knight (2016)

Wish You Were Here is part of the Chrysalis project at the University of St Andrews, in which conversations with women at all stages of their careers in science research were interpreted by the artist Sophie McKay Knight. The images she created were displayed in the Byre Gallery in St Andrews as part of the Women in Science Festival 2016. Sophie McKay Knight has said of this artwork,

Throughout it all I was thinking about what people had told me about being apart from loved ones in order to pursue their careers & the instability of contracts and not really having any permanence. The single figure in ‘Wish you were here’ represents that dual positive/negative sense of being alone and yet deeply connected to ‘work’ – which both does and does not make up for any associated loss.25

Key Summary Points
  • Women’s lives changed profoundly between 1960 and 2010. The main contribution to this was hormonal contraception and its impact on women’s mental and physical health.

  • Mental health services changed from asylum-based inpatient facilities to community-based services and from male doctor–dominated organisations towards multidisciplinary collegiate teamwork.

  • By 2010, discrimination and inequality persisted despite forty years of laws making these illegal.

  • While many organisations now monitor gender equality data, research is needed to discover why inequality and discrimination are so resistant to change and what factors perpetuate this status quo.

  • The effects of social stresses, economic or pandemic, seem to disproportionately burden women who work ‘double shifts’ to balance work and home commitments with predictably adverse effects on their mental and physical health.

Chapter 16 Biological Psychiatry in the UK and Beyond

Stephen Lawrie
Introduction

What is ‘biological psychiatry’? With biology being the scientific study of life, if one took the word literally, one could legitimately question whether there is any other kind of psychiatry.1 By this definition, psychology is part of biology. As taught in schools and universities, however, biology is the more constrained study of living organisms and includes anatomy, physiology and behaviour; human biology includes all those aspects as well as genetics, anthropology and nutrition and so on. That is still quite broad.

What biological psychiatry is usually taken to mean is the search for the neurobiological underpinnings of mental illness and the application of drug and other physical treatments for it. This, of course, assumes that brain and mind are sufficiently interlinked to justify that approach – something that is taken as read by most doctors and should be self-evident to anyone who has ever consumed any psychoactive drug, including caffeine and alcohol. What biological psychiatry does not (overtly) include are those key elements of human understanding that are the essential tools in the trade of the effective clinician: the application of insights from experience, perhaps informed by the arts and humanities, to the clinical encounter. There are, of course, those members of our broad church of psychiatry who prioritise psychosocial approaches to understanding and psychotherapy as treatment. To some of them, and many outside psychiatry, biological psychiatry is, or at least can be, reductionistic – reducing the mind to little or nothing more than the brain, or ignoring it altogether. Anything so ‘mindless’ would be just as bad as dualism or mentalistic ‘brainlessness’.2 One would, however, be hard-pressed to find any so-called or self-declared biological psychiatrist who does not pay heed to the importance of our mental lives.

The Rise of Psychopharmacology

In the 1940s, the therapeutic armamentarium available to psychiatrists included barbiturates and not much else. During the 1950s, cutting-edge neuroscience demonstrated the existence of neurotransmitters in the brain. Coincidentally, several new drugs were discovered, including tricyclic antidepressants, monoamine oxidase inhibitors (MAOIs), antipsychotics and lithium. One of the eminent pharmacologists of the age, John Henry Gaddum, was interested in LSD and proposed a role for serotonin (5‐hydroxytryptamine, 5HT) in mood regulation. Gaddum was Professor of Pharmacology at the University of Edinburgh from 1942 to 1958 and in Cambridge from 1958 to 1965.

Two young psychiatrists working in Gaddum’s departments, George Ashcroft and Donald Eccleston, proposed the monoamine theory of depression. The theory received initial support from a study showing that patients with depression had lower levels of the main 5HT metabolite, 5-hydroxyindoleacetic acid (5HIAA), in cerebrospinal fluid (CSF) than neurology patient ‘controls’ undergoing lumbar air encephalography.3 Strong support came from a study conducted in what was by then the Medical Research Council (MRC) Brain Metabolism Research Unit, in which CSF was sampled under standardised conditions – 5HIAA not only correlated with the severity of depression but normalised on remission.4 Less appealingly, levels were also low in those with schizophrenia (but not in mania). This apparent early success was further reinforced when Alec Coppen and colleagues at the MRC Neuropsychiatric Research Unit in Epsom, Surrey, showed that adding tryptophan (TRP, a 5HT precursor) to the antidepressant tranylcypromine got patients dramatically better, almost as effectively as electroconvulsive therapy (ECT).5

Findings of decreased free and/or total TRP levels in the plasma and CSF of depressed patients were replicated in several labs,6 but Coppen was always concerned that this could all be a change secondary to depression, and subsequent work by Ashcroft led him to the same conclusion.7 The weight loss and elevated cortisol of depression were just two of many possible confounders.8 On the other hand, several studies showed that rapid TRP and 5HT depletion – through, for example, ingesting a TRP-free amino acid drink – reduces mood in healthy volunteers and in those who are depressed or have recovered. This realisation led to extensive work on neuroendocrine disruptions in depression – particularly in Oxford – including demonstrations that hormonal responses were blunted in depression and normalised by some antidepressants and lithium, as well as by ECT in patients and electroconvulsive stimulation in animals.

There is, of course, an analogous story to be told about the role of the adrenergic system in depression, but the UK contribution to this was less central. Although there is no question that many treatments for depression act on the serotonergic and other monoamine systems, it has not been established that there is an abnormality of serotonin metabolism or that treatments correct it. It is more complicated than that. The 5HT system is probably modulating other processes critical to the development and maintenance of depression, such as adaptive responses to aversive events.9 Low 5HIAA levels in CSF may mark severity and are associated with, and may even be predictive of, impulsive, violent suicidal behaviour. This also seems to be true, however, of schizophrenia.10

Nevertheless, subsequent work employing functional neuroimaging as a window on the brain has shown that single and repeated doses of various antidepressants increase the recognition of happy facial expressions, and amygdala responses to them, while decreasing amygdala response to negative affect faces, in healthy people and in those with depression.11 These effects are also seen after seven days’ administration in healthy participants and are maintained during longer-term treatment. Further, long-term administration of selective serotonin reuptake inhibitor (SSRI) or norepinephrine reuptake inhibitor antidepressants can enhance synaptic plasticity and block the synaptic and dendritic deficits caused by stress.12

Landmark Clinical Trials

The advent of rigorous randomised controlled trials (RCTs) also coincided with the availability of many new drug treatments for depression, bipolar disorder and schizophrenia. Even if a simple monoamine theory of depression was not to survive, a series of landmark clinical trials carried out by psychopharmacologists and psychiatrists of various persuasions in the 1960s and 1970s established that antidepressants and other biological approaches to major psychiatric disorders worked.

The Clinical Psychiatry Committee of the MRC, which included epidemiologists like Archie Cochrane and Austin Bradford Hill, published its clinical trial of the treatment of depressive illness in the British Medical Journal in 1965.13 No fewer than 250 patients in London, Leeds and Newcastle aged 40–69 years with an untreated primary depressive illness (characterised by persistent low mood, with at least one of the following: morbid or delusional guilt, insomnia, hypochondriasis and psychomotor retardation or agitation) were randomised to ECT (4–8 treatments), 150 mg imipramine, 45 mg phenelzine or placebo over four weeks. About one-third of those on placebo improved notably, but this proportion was almost doubled in those on imipramine and more than doubled in those given ECT – and these differences were maintained at six months. Moreover, in those who had responded to imipramine, continuation with 75–150 mg over a further six-month period meant that only 22 per cent relapsed, as compared to 50 per cent of those randomised to placebo.14

During the 1960s, Baastrup and Schou, working independently and then together in Denmark, conducted studies suggesting that lithium was effective in acute mania and had prophylactic properties. However, to Aubrey Lewis and Michael Shepherd in the MRC Social Psychiatry Unit at the Institute of Psychiatry (IOP) in London, lithium was ‘dangerous nonsense’ and ‘a therapeutic myth’, which, in their opinion, was based on ‘serious methodological shortcomings’ and ‘spurious claims’ (see also Chapters 2 and 17).15 Schou and Baastrup undertook a double-blind discontinuation trial in which patients with ‘manic-depressive illness’ who had been successfully treated with lithium were randomly allocated to continue on lithium or placebo. Lithium was superior in preventing relapse – but only in typical cases.16 Coppen and colleagues randomised sixty-five patients with recurrent affective disorders to lithium or identical-looking placebo in four centres for up to two years – 86 per cent of those on lithium (0.73–1.23 meq per litre) were judged by independent psychiatrists and psychiatric social workers to have had no further episodes over that time, as compared to 8 per cent of the placebo group.17 What is more, lithium seemed to be equally effective in unipolar and bipolar patients.

Following observations that the turnover of 5HT was greatly increased by the administration of the amino acid TRP, Donald Eccleston (having moved to Newcastle) led the introduction of a new ‘5HT cocktail’ or ‘Newcastle cocktail’ therapy for severe depression. L-tryptophan alone, or combined with other drugs such as phenelzine (or clomipramine) and lithium, frequently produced dramatic improvement in otherwise chronically treatment-resistant depressed patients.18

The landmark study of the use of antipsychotic drugs in acute schizophrenia was carried out with more than 400 patients admitted to 9 centres around the United States, about half of whom were in their first episode.19 By the end of the trial, 75 per cent of the patients receiving an antipsychotic showed moderate or marked improvement, whereas only about 23 per cent did on placebo. It was left, however, to British social and biological psychiatrists to demonstrate robustly that these drugs also reduced the risk of relapse in the longer term – whether given as oral medication or as depot long-acting intramuscular injection.20

Innovative British Neuroimaging

The independent realisation that the attenuation of X-rays by the brain could be accurately measured and reconstructed into a brain image earned Allan Cormack (Tufts University) and Godfrey Hounsfield (Electric & Musical Industries (EMI), Middlesex) the Nobel Prize in Physiology or Medicine in 1979 for the development of computer-assisted tomography. This was all the more remarkable as Hounsfield had gone to work for EMI straight from school, making him one of the very few Nobel laureates never to have attended university. Computerised tomography (CT), as it came to be known, became available for clinical use in 1971. Demonstration systems for CT of the head were installed in Glasgow, London and Manchester, and the first body scanner (the CT5000) was installed for research at the Northwick Park Hospital (NPH) MRC Clinical Research Centre (CRC) on the outskirts of London in 1975.

Tim Crow was Head of the CRC Division of Psychiatry and was intrigued that many patients with schizophrenia had cognitive impairment. He gave the young Glaswegian émigré Eve Johnstone the task of using CT to see if this might have an organic basis. Mid-axial brain slice photographs were traced three times each to calculate an average lateral ventricle-to-brain ratio (VBR), which was markedly increased in the patients.21 One can imagine how the finding that schizophrenia – a ‘functional’ psychosis – might have an organic basis was greeted by social psychiatrists, psychotherapists and neurologists at the time. Indeed, the copy of the paper in the Lancet in the IOP library reputedly had ‘Rubbish’ scrawled across it. Regardless, the finding was widely replicated, as was the association with cognitive impairment.22
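
The ratio itself is simple arithmetic once the ventricular and whole-brain outlines have been traced. A minimal sketch in Python of the kind of calculation involved – the function, variable names and numbers here are illustrative assumptions, not taken from the original study:

    # Hypothetical sketch of a ventricle-to-brain ratio (VBR) calculation.
    # Each slice is traced several times; areas are in arbitrary planimeter units.
    def vbr(ventricle_areas, brain_areas):
        """Average ventricle area as a percentage of brain area across tracings."""
        if len(ventricle_areas) != len(brain_areas):
            raise ValueError("need one brain tracing per ventricle tracing")
        ratios = [100.0 * v / b for v, b in zip(ventricle_areas, brain_areas)]
        return sum(ratios) / len(ratios)

    # Three tracings of the same mid-axial slice (invented numbers):
    print(vbr([9.8, 10.1, 10.0], [150.2, 149.7, 150.5]))  # about 6.6 per cent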

Important work contributing to the development of magnetic resonance imaging (MRI) as a non-invasive means of imaging brain and body in greater detail was done in Aberdeen by John Mallard and in Nottingham by Peter Mansfield (for which he was to share the 2003 Nobel Prize with Paul Lauterbur of Illinois, United States). US business and researchers capitalised upon this, and the landmark MRI studies in schizophrenia were done there.23 It was clear by the turn of the millennium that people with schizophrenia had reduced whole-brain volumes and additional decrements in parts of the prefrontal and temporal lobes.24 Further, the prefrontal and temporal reductions are consistently associated with negative and positive symptoms respectively.

This work stimulated a resurgence of interest in the neuropathology of mental illness, especially schizophrenia, which provides independent confirmation of the findings and suggests they derive from the reduced density of neurons and glia, and lesser dendritic arborisation.25 These could, of course, partly derive from antipsychotic medication, as well as the other effects of long-term illness and alcohol excess. However, the demonstration of similar but lesser changes in relatives and first episode cases,26 and in those at elevated clinical risk,27 with further reductions as some develop schizophrenia, has opened the way to potentially using neuroimaging to predict schizophrenia – which remains a very active global research effort.

Functional Neuroimaging

The first robust evidence that patients with schizophrenia had ‘hypofrontality’ – underactive prefrontal lobes – came from Ingvar and Franzén at the Karolinska Institute in Sweden.28 The British contribution was rather to undermine confidence in this finding. Researchers in Edinburgh and London demonstrated that hypofrontality was more anatomically constrained, that it could also be found in depression and even that ‘hyperfrontality’ could be found in unmedicated first-episode patients.29 Consequently, a Lancet editorial could pronounce ‘Hypofrontality RIP’ in 1995.

Following Chris Frith’s lead, several important positron emission tomography (PET) studies at the MRC Cyclotron Unit at the Hammersmith Hospital in London built a complex but compelling picture of the neurofunctional correlates of symptoms, especially of auditory–verbal hallucinations. These findings suggested that such hallucinations are associated with under-activation of the language areas of the brain concerned with the monitoring of inner speech.30 These insights depended upon a technique that could analyse whole-brain tracer data. Karl Friston not only invented the Statistical Parametric Mapping procedure to do this but would also, apparently, stay up all night adding functions when one was needed for a particular analysis.

The Functional Imaging Laboratory (FIL) was founded in 1994, within the Institute of Neurology, following a major grant award from the Wellcome Trust. It pioneered new neuroimaging techniques, such as functional MRI (fMRI), for understanding human cognition. Generously and wisely, Friston made his ‘SPM’ programme for analysing these data freely available, supported from the FIL, and it remains the industry standard worldwide. The combination of fMRI and SPM facilitated more sophisticated studies to map auditory hallucinations, to relate them to dysconnectivity between language regions in the brain and to integrate these findings with dopamine signalling.
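
The mass-univariate logic at the heart of SPM can be caricatured in a few lines: fit the same simple statistical test independently at every voxel and map where the statistic is large. The toy sketch below is a hypothetical illustration only – real SPM analyses fit general linear models with haemodynamic convolution and apply random field theory corrections for the mass of tests:

    # Minimal mass-univariate sketch: a two-sample t-test at every voxel.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Toy data: 12 scans per condition over a tiny 4 x 4 x 4 'brain'.
    rest = rng.normal(100, 5, size=(12, 4, 4, 4))
    task = rng.normal(100, 5, size=(12, 4, 4, 4))
    task[:, 1, 2, 3] += 8  # plant an 'activation' at one voxel

    t_map, p_map = stats.ttest_ind(task, rest, axis=0)  # voxelwise statistics
    print(np.unravel_index(np.argmax(t_map), t_map.shape))  # typically (1, 2, 3)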

Dopamine

Two independent North American groups demonstrated in the early 1970s that the clinical potencies of antipsychotic drugs correlated very strongly with their ability to inhibit tritiated (3H) dopamine binding to postsynaptic receptors in mammalian brain samples. Clinical trials at NPH reinforced a dopaminergic theory of schizophrenia but also showed that it applied to other psychotic disorders.31 Some post-mortem studies also showed that the binding of 3H-labelled spiroperidol was increased in parts of the basal ganglia and amygdala, but other studies suggested this was secondary to drug treatment. An early PET study with spiroperidol brominated with 77Br to emit gamma rays found an increase in activity in drug-free patients, but this too was disputed – with sometimes heated exchanges between the labs leading this work at Johns Hopkins and the Karolinska.

It was only with the development of another tracer – fluorodopa, which is incorporated into dopamine and therefore measures dopamine synthesis and turnover – that the dopamine story in schizophrenia has been clarified. Using fluorodopa PET, researchers at Imperial and King’s Colleges in London have shown that young people at high clinical risk of psychosis have elevated dopamine turnover in the striatum, which correlates with psychotic (but not other) symptoms, is highest in those most likely to become ill and increases as they develop a psychotic disorder.32 (It should be noted, however, that there is a similarly strong strand of evidence that glutamatergic neurotransmission is also disrupted in schizophrenia.)

A Note on Dementia Imaging

Psychiatrists of many persuasions were among the vanguard of researchers using early neuroimaging techniques to study morphological and perfusion pattern changes in the brain in Alzheimer’s disease and to distinguish them from those in multi-infarct dementia and from normal controls. Indeed, a generation of Old Age psychiatrists – inspired by Martin Roth in Newcastle and then Cambridge – did much to develop wider scientific and clinical interest in these conditions. As neurologists became more interested, they established that brain atrophy can be visualised by CT or MRI and that serial imaging and quantifying the degree of atrophy could aid diagnosis. Indeed, CT or MRI is now routinely recommended in many clinical guidelines for the evaluation of possible dementia and is included in some diagnostic criteria. Further, Ian McKeith and John O’Brien have led the application of dopamine transporter imaging – an accurate and reliable measure of reduced dopamine transporter activity in the brain – in distinguishing Lewy body dementia from other dementias.33

The Cochrane Collaboration and Evidence-Based Medicine

The Cochrane Collaboration was founded in 1993 in response to Archie Cochrane’s earlier call for up-to-date, systematic reviews of all relevant RCTs across health care (see also Chapter 17). Many academic and clinical psychiatrists from different specialties were early and enthusiastic contributors, and dedicated groups for schizophrenia and dementia were among the first to be established and to publish reviews.

It quickly became evident that the RCT literature in psychiatry was about as good or bad as it was in most of medicine – with the notable exceptions of neurology and cardiology – in that there were far too many small, short and poorly reported trials. Nonetheless, systematic reviews and meta-analyses of the best available evidence showed that antidepressants and antipsychotics successfully treated acute depression or schizophrenia and that continuing effective treatment for a year, compared with treatment discontinuation, reduced relapse rates from around 41 per cent to 18 per cent for depression (31 RCTs, 4,410 participants)34 and from 64 per cent to 27 per cent for schizophrenia (65 RCTs, 6,493 patients).35 These differences of 23 and 37 percentage points mean that, on average, about one in three or four patients will benefit from taking these drugs over a year – and these are some of the largest treatment effects in the whole of medicine. There was even RCT evidence (32 RCTs, 3,458 patients) that lithium reduced suicide and overall mortality, although these findings were based on (fortunately) small numbers of deaths.36 The UK ECT review group included service users and established that real ECT was significantly and substantially more effective than simulated or sham ECT (6 RCTs, 256 patients, all done in the UK in the 1970s) and more effective than pharmacotherapy (18 trials, 1,144 participants).37
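
The ‘one in three or four’ figure is simply the reciprocal of the absolute risk reduction – the number needed to treat (NNT). A quick check of the arithmetic in Python (the relapse rates are those from the reviews cited above; the NNT framing of them is illustrative):

    # Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
    def nnt(relapse_placebo, relapse_drug):
        arr = relapse_placebo - relapse_drug
        return 1.0 / arr

    print(round(nnt(0.41, 0.18), 1))  # depression: ARR 0.23 -> NNT ~ 4.3
    print(round(nnt(0.64, 0.27), 1))  # schizophrenia: ARR 0.37 -> NNT ~ 2.7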

Cochrane, as it has become known, and the wider rise of what might be called the evidence-based medicine movement came at roughly the same time as the development and aggressive marketing of the new ‘atypical’ antipsychotics (and valproate and various antidepressants) as more effective and/or better tolerated than the old drugs. Varying definitions of atypicality, study populations, outcomes and the reporting of these meant that well-conducted RCTs could show that ‘olanzapine beats risperidone, risperidone beats quetiapine, and quetiapine beats olanzapine’ – and that all were, of course, better than the standard comparator drug haloperidol (as was required for FDA approval). Even if some or all of the apparent benefits were down to the new antipsychotics being used at lower doses than psychiatrists had got into the bad habit of using when prescribing the older drugs,38 Big Pharma had realised as much. There was a clear need for independently funded and conducted RCTs.

This realisation led to the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study, which is probably the largest and most expensive clinical trial ever done in schizophrenia. It cost the US taxpayer the best part of $100 million. Lieberman and colleagues randomised 1,493 patients at 57 US sites to one of five treatments.39 The primary outcome measure – staying on the allocated medication – was achieved by only 26 per cent of participants at eighteen months, but the figure was about ten percentage points higher in patients allocated olanzapine – even if they also tended to put on weight and suffer metabolic derangements. In the UK, the CUtLASS trial showed similarly slight, if any, advantages of the newer antipsychotics,40 while the BALANCE trial showed that lithium was superior to valproate in preventing relapse in bipolar disorder.41 Systematic reviews showed that the new antidepressant, mood-stabilising and antipsychotic drugs did not have simple class effects; each drug had subtle differences in terms of reducing certain symptoms and causing various adverse effects.

Laying the Groundwork and Going Global for Genetic Advances

It has long been known that major psychiatric disorders aggregate in families. This was conclusively demonstrated for schizophrenia by Gottesman and Shields in 1966,42 while working at the MRC Psychiatric Genetics Unit at the IOP with the assistance of Eliot Slater, who had kept records of twins of whom at least one had a diagnosis of schizophrenia. Taken together with data from eleven earlier major twin studies, their findings showed that an identical twin of someone with schizophrenia was at least forty times more likely to have the condition than a person from the general population, and a fraternal twin of the same sex around nine to ten times as likely. These data pointed to a strong genetic basis for schizophrenia, and adoption studies outside the UK confirmed it.

Ongoing twin studies at the IOP established beyond doubt that the heritability of schizophrenia, schizoaffective disorder and mania was substantial and similar (82–85 per cent). What was controversial was the mode of inheritance – whether it was due to a small number of rare but highly penetrant mutations or more attributable to polygenic liability in a diathesis–stress model. The identification of a t(1;11) chromosomal translocation in a large Scottish family in 1990 led to the identification of the ‘DISC1’ (Disrupted in Schizophrenia 1) gene.43 The association was, however, strongest when the mental disorders in the phenotype included recurrent major depression and adolescent conduct and emotional disorders. Even though this family may be unique and common variants in DISC1 have not (at least as yet) been identified as risk factors for any specific disorder, this discovery kept the field going during the long lean years of non-replicated linkage and association studies.

What was to transform psychiatric genetics was the Human Genome Project (HGP). This started in 1990, funded by the US Department of Energy and the US National Institutes of Health and supported by the Wellcome Trust through the Sanger Centre in Cambridge. The first draft of the complete sequence of nucleotides in the human genome was published in 2001 and launched modern human genetics. The identification of rare, penetrant genetic variants causing monogenic diseases boomed in the following years and paved the way for the systematic screening of disease genes in diagnostic services – including for those with severe learning disability. The HGP also brought about advances in technology – the genotyping arrays that made genome-wide association studies (GWAS) possible and, later, ‘next-generation sequencing’.

The early psychiatric GWAS did not yield significant findings, which led some to lose faith in the approach. Others persisted, and in one of the first and best examples of collaborative science the Wellcome Trust Case Control Consortium published (in 2007) what was then the largest GWAS and set the scene for the spate of gene discovery that was to follow. They examined approximately 2,000 individuals for each of seven major diseases against a shared set of approximately 3,000 controls and identified 24 independent association signals, including one in bipolar disorder (and 1–9 each in coronary artery disease, rheumatoid arthritis, type 1 and type 2 diabetes and Crohn’s disease).44

The Psychiatric Genomics Consortium was also formed in 2007, which allowed thousands of samples from all over the world to be shared. This collaboration quickly delivered the first significant findings from GWAS for schizophrenia, as well as evidence that major psychiatric disorder was very highly polygenic.45 Nevertheless, some rare mutations of large effect were clearly implicated in neurodevelopmental disorders such as autism, attention deficit hyperactivity disorder (ADHD) and schizophrenia.46

It has become increasingly clear in the past decade that GWAS is a numbers game. Pooling some 100,000 schizophrenia cases and controls revealed no fewer than 108 schizophrenia-associated genetic loci.47 Adding another 35,000 people identified a further 37 ‘hits’, and more are on the way. Indeed, the success of GWAS in schizophrenia has led to it being called the poster child of the GWAS generation. Bipolar disorder and depression are now yielding their genetic secrets too.

It is, however, equally clear that the genes identified are pleiotropic – that is, they have multiple effects and so do not map neatly on to specific disorders. Just as the rare mutations increase the risk for a variety of conditions, the risk variants for common psychiatric disorders overlap to a large extent. Nonetheless, there are likely to be some specific genes and biological pathways as well as others cutting across disorders. Although such insights have yet to lead to innovations in the clinical management of patients, they certainly have promise for diagnostics and therapeutics.

The Decade of the Brain and the Next Ten Years

While the psychiatric geneticists have been trailblazing, the neuroimaging research community have organised themselves into large global consortia employing common and increasingly innovative methods. Most notably, the Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) consortium have combined data from thousands of scans, which have confirmed and strengthened the results of previous studies and meta-analyses and delivered novel insights into the genetics of neuroimaging measures. The application of mathematical graph theory tools to neuroimaging data provides a way of studying the efficiency of neural systems at a whole-brain (connectome) level, as sketched below.
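
To give a flavour of such measures, global efficiency – the average inverse shortest-path length across all pairs of regions – can be computed in a few lines with the networkx library. The random ‘time series’, threshold and node count below are placeholders standing in for real scan-derived data:

    # Toy connectome analysis: threshold a correlation matrix into a graph
    # and compute global efficiency (mean inverse shortest-path length).
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    n_regions = 20
    series = rng.normal(size=(n_regions, 100))  # fake regional time series
    corr = np.corrcoef(series)

    adj = (np.abs(corr) > 0.2) & ~np.eye(n_regions, dtype=bool)  # drop self-loops
    G = nx.from_numpy_array(adj.astype(int))
    print(nx.global_efficiency(G))  # higher = more integrated network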

Most excitingly, contemporary neuroscience and philosophy see the brain–mind as Bayesian (after the Reverend Thomas Bayes): testing hypotheses about the world, derived from previous experience, against ongoing experience and updating its inner model of the world as required.48 In essence, structural and functional disturbances of fronto-temporal brain systems could reduce their reliable co-ordinated input, disrupt reality testing and impair the use of memories to guide perception and action. Most of the research thus far has been done on schizophrenia – with some replicated findings if not yet a true consensus – but this and other forms of ‘computational psychiatry’ offer objective measures of otherwise subjective impressions that promise to be revealing across psychiatry, and indeed neuroscience as a whole.49
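
A single step of Bayesian updating illustrates the idea: a prior belief about a cause in the world is weighted by how well that cause explains the current sensory input, yielding an updated (posterior) belief. The hypotheses and numbers below are invented purely to show the mechanics of the update rule:

    # Bayes' rule: posterior is proportional to likelihood x prior.
    prior = {"voice_present": 0.2, "voice_absent": 0.8}       # belief before evidence
    likelihood = {"voice_present": 0.7, "voice_absent": 0.1}  # P(ambiguous sound | cause)

    unnorm = {h: likelihood[h] * prior[h] for h in prior}
    evidence = sum(unnorm.values())
    posterior = {h: p / evidence for h, p in unnorm.items()}
    print(posterior)  # voice_present ~ 0.64, voice_absent ~ 0.36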

It has also become clear in the last ten years that the structural and functional neuroimaging findings in various disorders overlap to a large degree. To some extent, this is hardly surprising given the overlap in genetic and environmental risk factors and the comorbidities of mental disorders. The increasing incorporation of psychosocial risk markers – such as the role of personality, childhood adversity and stress and their biological correlates – into multivariate risk models of mental illness alongside polygenic risk scores and machine learning approaches to data analysis will advance progress towards clinical applications. Despite these complexities, there has been notable progress in developing neuroimaging biomarkers of depression and schizophrenia in the 2010s.50
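
Polygenic risk scores themselves are conceptually simple: a weighted sum of an individual’s risk-allele counts across many variants, with the weights taken from GWAS effect sizes. A schematic sketch, in which the loci, weights and genotypes are all invented:

    # Schematic polygenic risk score: allele counts weighted by GWAS effect sizes.
    import numpy as np

    gwas_weights = np.array([0.05, -0.02, 0.08, 0.01])  # log-odds per allele (invented)
    genotypes = np.array([
        [0, 1, 2, 1],  # person 1: risk-allele counts at four loci
        [2, 0, 1, 0],  # person 2
    ])

    prs = genotypes @ gwas_weights  # one score per person
    print(prs)  # [0.15 0.18]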

Conclusion

This has, of necessity, been a relatively brief and focused review of fifty years and more of research endeavour. It has also been positive in stressing replicated advances and ignoring less profitable research streams, such as the red herring of the ‘pink spot’ in schizophrenia. Equally, British psychiatrists have made major contributions to understanding and treating many conditions – including autism, ADHD, anxiety and alcohol and drug dependence – to which we have not had the space to do justice.

Overall, it is difficult to avoid the conclusion that ‘biological psychiatry’ has been a success. Indeed, the historian Edward Shorter said as much as far back as 1999.51 However, it makes little sense to talk of a biological psychiatry pursued by biological psychiatrists. It is simply medicine done by doctors who specialise in the diagnosis and management of mental illness.

Disquietingly, far too many psychiatrists seem unaware that drug treatments in psychiatry are about as good as in the rest of medicine.52 As for the research, we should redouble our efforts to find biomarkers of diagnosis and in particular of treatment response. This is within our grasp if the field receives the research funding that reflects the societal costs of the conditions we deal with. This is also what the patients with these conditions and their carers want, as the James Lind Alliance (JLA) has demonstrated. The JLA, whose infrastructure is funded by the UK National Institute for Health Research (NIHR), brings patients, carers and clinicians together, in Priority Setting Partnerships, to identify and prioritise unanswered questions. There is a remarkable convergence of interests in, for example, determining causes, better diagnosis, early interventions, personalised approaches and better treatments with fewer adverse effects.

Key Summary Points
  • Many scientists, academic and clinical psychiatrists have contributed to the search for the biological basis of mental illness, leading to many notable discoveries and particular advances in understanding schizophrenia.

  • RCTs have established beyond reasonable doubt the efficacy of antidepressants, ECT, antipsychotics and mood stabilisers.

  • The most striking diagnostic advances have been made in identifying the genetics of learning disability and in developing neuroimaging and blood-based biomarkers of dementia.

  • Polygenic risk scores and machine learning of neuroimaging and other data have real potential to impact upon clinical practice and improve patient care.

  • Psychiatrists should join those affected by mental illness in calling for increased funding to identify biomarkers, develop new treatments and improve services.

Chapter 17 The Pharmaceutical Industry and the Standardisation of Psychiatric Practice

David Healy
Introduction

The 1950s saw the largely serendipitous discovery in clinical settings of a series of psychotropic drugs, produced primarily in the pharmaceutical divisions of European chemical companies. This accompanied the discovery of antibiotics and medicines for other clinical conditions. While there were large chemical companies and a proprietary medicines industry, there was then no pharmaceutical industry as we know it. The nascent companies producing psychotropics, however, were quick to set up international meetings that brought basic scientists and clinical delegates together from all continents.

1960–80

In the wake of the Second World War, German psychiatry and medicine lost ground, opening a door for English to become the lingua franca of the medical world. This and the detour American psychiatry took into psychoanalysis fostered the reputations of British psychiatrists.

As of 1960, British academic psychiatry was ‘social’. Social meant epidemiological rather than committed to the idea that mental illness was social rather than biological in origin. Social psychiatrists began thinking in terms of the operational criteria and other procedures that would enable research on the incidence and prevalence of nervous problems. Among the leading figures were Michael Shepherd, John Wing and others from the Institute of Psychiatry, who worked to establish methods which laid the basis for, on the one hand, an international pilot study of schizophrenia and, on the other, studies of the incidence of nervous disorders in primary care, not then part of psychiatry. These latter studies provided a template for other studies undertaken since then that, perhaps even more in mental health than in any other branch of medicine, created markets for pharmaceuticals.1

This group of psychiatrists played a central role in incorporating mental disorders into the International Classification of Diseases, which in turn influenced the third edition of the American Diagnostic and Statistical Manual of Mental Disorders (DSM); published in 1980, this laid a basis, along with controlled trials, for an industrialisation of psychiatry.

Randomised controlled trials (RCTs) originated in Britain. The first RCT, of streptomycin in tuberculosis, had been done by Austin Bradford (‘Tony’) Hill and reported in 1948. Michael Shepherd at the Institute of Psychiatry became a coordinator of the Medical Research Council (MRC) clinical trials committee soon after and, working with Hill, fostered the development of RCTs within mental health. Shepherd also ran the first placebo-controlled parallel-group RCT, comparing reserpine with placebo in anxious depression, which reported in 1955.

RCTs have since invaded medicine. This was not because they are a good way to evaluate a drug but because of the thalidomide crisis. The birth defects thalidomide caused led, in 1962, to a set of amendments to the US Food, Drug, and Cosmetic Act that made RCTs the primary method to evaluate treatment. It was not clear then that the focus on a single primary end point that RCTs require made them, almost by definition, a poor method to evaluate a treatment. Food and Drug Administration (FDA) regulations made RCTs the standard through which industry made gold, and trials done across medicine ever since have been industry-related.2

Before 1945, there were few academic psychiatry posts outside Germany. After the war, university posts were created in the United States, the UK and elsewhere. Through the 1960s, however, there were comparatively few academic psychiatrists globally, and this scarcity – along with the involvement of British academics in early epidemiological research and the development of protocols for RCTs, and their facility in English – gave them a magisterial status at international meetings.

Among the notable figures were Martin Roth, whose concepts of endogenous and neurotic depression were influential. Linford Rees was a prolific clinical trialist. Michael Shepherd brought a scepticism of the enthusiasm for new drugs, most clearly demonstrated in high-profile arguments about the role of lithium (see also Chapters 2 and 16). Max Hamilton is perhaps the best-known figure now, by virtue of having his name on the standard scale for assessing the efficacy of antidepressant drugs – the Hamilton Rating Scale for Depression.

Hamilton’s 1974 view about this scale seems prescient:

It may be that we are witnessing a change as revolutionary as was the introduction of standardization and mass production in manufacture. Both have their positive and negative sides.3

Rating scales, along with operational criteria and RCTs, have made for a standardisation of clinical practice and a development of managed services that have latterly left US and UK psychiatry, and health services more generally, increasingly similar – where, in 1960, they could not have been more different.

In the years between 1960 and 1980, clinicians like George Ashcroft, Alec Coppen, Michael Pare, Donald Eccleston and others played a part in formulating monoamine,4 especially serotonergic, hypotheses for mood disorders and in bringing the notion of a receptor into psychopharmacology, along with researchers in pharmacology like Merton Sandler, Gerald Curzon, Geoff Watkins, John Hughes and others.5 These hypotheses were discarded by the 1970s, but in the 1990s, in the hands of pharmaceutical marketing departments, they provided a basis for a bio-babble that has profoundly shaped public culture.6

In 1974, the British Association for Psychopharmacology (BAP) formed. The BAP became a forum for lively interdisciplinary exchanges for twenty years, after which, as in other forums, the divide between clinicians and neuroscientists became increasingly hard to bridge.7 The BAP provided a template for a European College of Neuropsychopharmacology, and at the same time a European Psychiatric Association was forming, which, along with a World Psychiatric Association, was largely underpinned by industry funding.

In addition to new tricyclic antidepressants and neuroleptics for traditional illnesses, LSD, benzodiazepines and contraceptives were at the centre of vigorous debates in the 1960s, playing a key role in stimulating revolutionary ferment in 1968. Did these drugs enhance or diminish us? In the case of the deinstitutionalisation that followed the introduction of the psychotropic drugs, who was being deinstitutionalised – patients or mental health staff?8

As a symbol of the role of medicine, and psychiatry in particular, at this time, in 1968 students in revolt occupied and ransacked the Paris office of Jean Delay, who with Pierre Deniker had introduced chlorpromazine, and occupied the Tokyo Department of Psychiatry for ten years, protesting against the biological experiments being undertaken. The biochemical psychopharmacology being undertaken by clinicians like Coppen in England and Van Praag in Holland appeared dehumanising.9

While psychotropic drugs were a focus for concern, it was the prospect of Big Medicine and medical arrogance rather than Big Pharma that was alarming at the time. Physicians working in industry such as Alan Broadhurst at Geigy did a great deal to ‘market’ depression, along with the tricyclic antidepressants and the Hamilton Rating Scale. This was considered respectable medical education then rather than disease mongering as it might be seen now.10 George Beaumont, also at Geigy, took a lead in the promotion of clomipramine as a treatment for obsessive-compulsive disorder (OCD).11

1980–2000

While anti-psychiatry seemed largely contained in the 1970s, from 1980 mainstream figures in Britain like Peter Tyrer, Malcolm Lader and Heather Ashton raised concerns about the risks of dependence on benzodiazepines that seemed continuous with anti-psychiatric concerns about psychotropic drugs in general.12 It was at this point that the pharmaceutical industry slipped into the line of fire,13 symbolised by a set-piece engagement when Ian Oswald accused Upjohn of fraud in clinical trials of their hypnotic Halcion.14

Benzodiazepine dependence fed into a growing public debate about both mental health and health issues since, from 1980 onwards, more people were encountering psychiatrists and physicians than ever before, as both psychiatry and medicine deinstitutionalised from acute inpatient care into chronic disease management in outpatient care (e.g. hypertension, osteoporosis, Type 2 diabetes).

Prior to 1980, other than for major events such as heart transplants, health rarely featured in the media. In the 1980s, however, routine stories about benzodiazepine dependence in TV programmes like That’s Life marked a change, and health stories now figure in every issue of almost every newspaper and regularly in the headlines of news bulletins.

The media focus then was on breakthroughs and risks. As of 1970, the word risk had featured in the headlines and abstracts of medical articles on 200 occasions.15 By 1990, prior to any mention of Risk Societies, the figure had risen to more than 20,000. Within health, a new numbers-based operationalism pitched medicines as a way to manage risks.

The benzodiazepine controversies opened a door for the Royal College of Psychiatrists to promote a Defeat Depression Campaign in 1990, whose message was that, rather than treat the superficial symptoms of anxiety, it would be better to diagnose and treat underlying depressions with non-dependence-inducing antidepressants.

This campaign coincided with the development and launch of selective serotonin reuptake inhibiting drugs (SSRIs). Eli Lilly and other companies ensured the Defeat Depression message was heard. The campaign helped make Prozac and later SSRIs into blockbuster drugs – drugs which earned a billion or more dollars per year – something unheard of before 1990, even though within three years of its launch there were more reports to regulators of dependence on another SSRI, paroxetine, than on all benzodiazepines over a twenty-year period.

The triumph of Prozac, with its message of becoming Better than Well in books like Listening to Prozac, accompanied by the race to complete the Human Genome Project, seemed around 1990 to be ushering in a new biomedical era that left historians from Roy Porter to Roger Cooter wondering if it was possible any longer to write the history of medicine. The designation Biological Psychiatry became current at this time.16

The years around 1990 also saw the emergence of evidence-based medicine (EBM). EBM pitched RCTs as offering gold standard knowledge of what medical drugs did and argued that this kind of knowledge should replace the knowledge born from clinical experience. The supposed validity of the RCT process meant that even trials funded by pharmaceutical companies would offer valid knowledge, although physicians needed to remain alert to tricks companies might get up to on the margins of trials.

As with RCTs, EBM largely began and took shape in Britain, symbolised by the establishment of the Cochrane Collaboration in 1993 (see also Chapter 16). Cochrane’s mission was to review trials systematically, whittle out duplicate publications and take a critical view of efficacy. Around 1990, the pharmaceutical industry seemed increasingly powerful, leading to the establishment of organisations like No Free Lunch that encouraged physicians to beware of Pharma bearing gifts. For many, Cochrane and EBM seemed the best tool with which to rein in the pharmaceutical industry, given their focus on scientific procedures rather than morality.

Until 1980, clinical trials had been run in single universities or hospitals by academics who knew their patients. By 1990, they were multicentred and run by clinical research companies who collected the trial data in a central repository to which no academics or physicians had access. The reporting of trial results was contracted out to medical writing agencies so that the articles were mostly ghostwritten with academic names chosen for the authorship lines primarily for their value to marketing rather than their knowledge of the issues. British psychiatrists were no longer magisterial figures, who could make or break a drug; they had become ciphers in an industrial process, taking second place to Americans who had now discovered biological psychiatry. The appearances of scientific process remained the same, so few physicians or psychiatrists and no one outside the profession had any sense of the changes.

The first medical guidelines appeared in the mid-1980s, aimed at stopping clearly unhelpful practices like stripping varicose veins. As of the early 1990s, a series of bodies like the BAP began to develop guidelines based on RCTs that made recommendations about what to do rather than what not to do. Industry also began to support guidelines but stopped when companies realised that their control of publications meant they controlled the guidelines others created.

Industry control of the evidence became almost complete with the establishment by a Labour government of the National Institute for Clinical Excellence (NICE; now the National Institute for Health and Care Excellence) guideline apparatus in 1999. We appeared to have an independent body sifting the evidence, without anyone realising that the evidence being sifted had been mostly ghostwritten and that there was no access to the underlying data.

Events following a 1990 paper in the American Journal of Psychiatry brought home the change. This paper carried accounts of six cases of patients becoming suicidal on Prozac that offered compelling evidence of causality as traditionally established in medicine.17 Eli Lilly, the makers of Prozac, claimed their trials did not show that Prozac caused suicidality and that, while individual cases might be harrowing, the plural of anecdote was not data.18 Lilly’s defence ran in the British Medical Journal (BMJ), whose editor, Richard Smith, was a proponent of EBM. The defence hinged on a meta-analysis and seemed to show industry playing by EBM rules.

Prozac survived. The BMJ missed the fact that the small print of the meta-analysis showed a significant excess of suicidal acts on Prozac compared to placebo. The tie-up between the BMJ and Lilly fuelled support for EBM and transformed medical journals and clinical practice. Until then, clinicians had received regular drugs bulletins outlining the hazards of treatments, but these were replaced by guidelines which mentioned only benefits. Journals preferentially published RCTs and meta-analyses, which companies paid for, and it became close to impossible to publish case reports or anything on the hazards of treatment.19 Few clinicians noted that the party most consistently exhorting them to practise EBM was the pharmaceutical industry. Industry profits, meanwhile, grew twentyfold in the thirty years from 1980 to 2010. EBM did not rein in industry.

Through to the 1990s, many significant problems with treatment, such as acute sexual dysfunction on antidepressants or tardive dyskinesia on antipsychotics, were recognised within a year or two of a drug’s launch. After 1990, significant treatment hazards – such as impulsivity disorders on dopamine agonists, enduring sexual dysfunction following finasteride, isotretinoin and antidepressants, and the mental state changes linked to asthma drugs like montelukast – might wait twenty to thirty years, the expiration of a patent, or company efforts to market new drugs to come to light.20 If treatment hazards cannot be formally recognised, they are unlikely to be registered in clinical practice. As a result, an increasing part of patients’ experience no longer registered on the eyes or ears of clinicians.

2000–2010 and Beyond

In 2002, a Labour government made NICE guidelines central to a new National Health Service (NHS) plan, which aimed at levelling up health provision, supposedly in accordance with best practice. Guidelines would also enable managers to ensure clinicians delivered services rather than exercised discretion, and would allow nurses and other staff to replace doctors in carrying out defined tasks. Health services began to replace health care, and in the new services the exercise of medical discretion was a problem rather than something to be celebrated.

The transition from care to services became clear in 2004, when NICE began drawing up guidelines for the treatment of childhood depression, just as a crisis developed about the efficacy and safety of antidepressants given to children. Investigative journalists, rather than scientists, academics or clinicians, scrutinised what was happening and found that the clinical trial literature was entirely ghostwritten or company-written and that publications claiming treatments were effective and safe were at odds with what the RCT data showed. A Lancet article and an editorial, ‘Depressing Research’, suggested no guidelines should be written unless there was access to the data.21

The crisis was raised in a House of Commons Health Select Committee meeting later that year, but in response both the editor of the Lancet and a founder of the Cochrane Collaboration assured the committee that the ghostwriting of clinical trials and the lack of access to trial data were not a significant problem.22

There was a brief pause in the increasing rate of antidepressant prescriptions to children in Britain just after this, but antidepressants are now the second most commonly prescribed drugs to teenage girls, after contraceptives, in the face of thirty RCTs of antidepressants given to depressed minors – all negative.23

In 1960, RCTs were expected to temper the enthusiasm for new treatments generated by open studies claiming astonishing benefits. A negative RCT would stop therapeutic bandwagons – as with the demise of the monoamine oxidase inhibitor (MAOI) antidepressants following a negative MRC trial in 1965. Now psychiatry leads the world in having the greatest concentration of negative trials ever done for any indication in any age group, but this has had no effect, other than a paradoxical one, on rates of treatment utilisation.

When concerns first arose around 2004 about the use of antidepressants in children, the problem could be seen as one of a rotten apple in the barrel that could be put right by professional and media attention. There was some professional attention to the problem of children and antidepressants around 2004, with the then president of the Royal College, Mike Shooter, instituting a review of conflict-of-interest policies. Industry warned the College to back off.24

We now appear to have a rotten barrel, with politicians, health bureaucrats, academics and the media unable to grapple with a problem that extends to both the efficacy and the safety of all drugs across medicine. There are limp discussions about the need to rein in conflicts of interest – transparency – predicated on the idea that we are still dealing with rotten apples. At a time when it would be helpful to have some magisterial clinicians, it is difficult to see any psychiatrist the industry might be worried about.

Almost all industries have an interest in standardising methods and processes. This standardisation and operationalism is at the heart of what is called neoliberalism but has arguably been more apparent in medicine (neo-medicalism) than in any other domain of life since 1980.25 Just as in 1976, according to the then prevalent dogmas of the Chicago School of Economics, the money supply in Chile became a thermostat function that dictated how the Chilean economy would operate, so in medicine numbers, such as those for blood pressure, peak flow rates, bone densities, rating scale scores or the five of nine criteria needed to make a diagnosis of depression in DSM, now dictate what happens.

The room for discretion vanished with the development of guidelines which were embraced by governments of both right and left and in particular the Labour government in the UK, who saw a means to level up care. Instead, guidelines provided a vehicle to expand the role of management in clinical practice, transforming what had been health care into health services and making health part of the wider service sector.

Qualitative assessments of a patient, which had been judicial in nature – best exemplified in the effort to establish whether a treatment was causing an adverse effect or not – were replaced by quantitative processes against which clinical practice would be evaluated.26

In 2016, the pharmaceutical industry declared it was pulling out of mental health because it could make more money elsewhere. There was no apparent fiduciary duty to physicians or patients; companies’ primary fiduciary duty, to their shareholders, required a maximising of revenues.

Industry’s intention was to turn to anti-inflammatory drugs, among others. This turn did not mean that mental health would be neglected completely but that anti-inflammatory drugs would be developed which would come at a high cost and could then be sold for a variety of indications such as mental health disorders. It is no surprise that in the last decade we have heard a lot more about a possible inflammatory basis to mood disorders – an inflammo-babble. It is unlikely this move will lead to cures of nervous problems in that, as Goldman Sachs recently noted, curing patients is not a good business model.27

Conclusion

In the 1960s, after an astonishing flood of new drugs, a nascent pharmaceutical industry, previously run by chemists and clinicians, brought in management consultants to ensure the breakthroughs continued. The consultants installed professional managers and recommended process changes involving the outsourcing first of clinical trials and medical writing and then, as drug pipelines dried up, of drug discovery itself. Latterly, public relations have been outsourced, so that pharmaceutical industry personnel rarely defend industry in public. Debate about the role of drugs, or the hazards linked to them, has been silenced as media organisations adopt policies to avoid False Balances – a strategy introduced by industry think tanks and the mirror image of Doubt Is Our Product. If drugs are approved by regulators and endorsed in guidelines, the logic runs, dissenting viewpoints should not be aired lest the public be alarmed.

The standardisation of processes extended to clinical services in the 1990s and to professional bodies like the Royal College of Psychiatrists from around 2010. An installation of managers is one of the headline features of these changes, but these are not managers in the sense of people who manage conflict or who are entrepreneurial; they are, rather, bureaucrats ticking operational boxes. This is bad for drug discovery, inimical to health care and may toll a death knell for psychiatry as a profession. Psychiatrists, on current trends, are more likely to end up as middle-grade managers, ensuring that nurses and others who meet with patients adhere to guidelines and minimise risks to the organisation, than as clinicians who might exercise discretion or academics who might follow a serendipitous observation.

Key Summary Points
  • As of 1960, British academic psychiatry was ‘social’. Social meant epidemiological rather than committed to the idea that mental illness was social rather than biological in origin. The designation ‘biological psychiatry’ became current in the 1990s.

  • In the 1960s, after an astonishing flood of new drugs, a nascent pharmaceutical industry, previously run by chemists and clinicians, brought in management consultants to ensure the breakthroughs continued, but drug discovery in psychiatry has since dried up.

  • The pharmaceutical industry has colonised medical research, education and clinical practice. EBM and clinical guidelines have served to extend rather than contain the influence of the industry.

  • Antidepressants are now the second most prescribed drugs to teenage girls after contraceptives, in the face of thirty RCTs of antidepressants given to depressed minors – all negative.

  • While they came with drawbacks, through to 1990 the psychotropic drugs introduced from the late 1950s onwards extended the range of clinical capabilities and likely did more good than harm. It is difficult to make the same claims about developments since 1990.

Chapter 18 The Evolution of Psychiatric Practice in Britain

Allan Beveridge
Introduction

In 1960, the typical psychiatrist was male, white and British-born. He would spend most of his working day in a mental hospital. The majority of his patients would be compulsorily detained and many would have spent long periods incarcerated.1 He would not be a member of the Royal College of Psychiatrists as this organisation was not created until 1971. Instead, he would probably have attained a Diploma in Psychological Medicine (DPM) but would not have participated in a formal training scheme, as these did not exist either.2 In assessing his patients, he would have had a limited number of diagnoses at his disposal. Homosexuality was one of the diagnoses and he might use aversion therapy to treat the ‘condition’.3 He could prescribe antipsychotic medication, such as chlorpromazine and haloperidol, and antidepressants, such as imipramine, amitriptyline and phenelzine. He could also prescribe electroconvulsive therapy (ECT), though he might have to administer the anaesthetic himself. He could employ phenobarbitone and intramuscular paraldehyde for sedation, and he might resort to the use of the padded cell and the straitjacket, which were still available in some mental hospitals. He might have some expertise in psychodynamic psychotherapy, but this would depend on where he was working and whether the local psychiatric culture favoured such an approach. There was little in the way of specialisation, so he would be expected to deal with all the patients referred to him, the referrals coming almost exclusively from the general practitioner. At this time, the number of psychiatrists was small, and his contact with other colleagues outside his work would be at meetings of the Royal Medico-Psychological Association (RMPA) or in the pages of the Journal of Mental Science, which became the British Journal of Psychiatry in 1963.

By 2010, the typical psychiatrist was quite likely to be female and non-white, as, by this stage, many doctors had migrated to Britain from the Indian subcontinent, Africa, the Caribbean and the Middle East. She would be working in a community resource centre or in a psychiatric unit in a district general hospital and most of her patients would be voluntary. She would receive referrals from not only general practitioners but also psychologists, social workers, occupational therapists, the police and other agencies. The range of problems she encountered was much larger than in 1960 and the expectation of cure or relief of symptoms was much higher.4 She would be a member of the Royal College of Psychiatrists, having passed her exams and participated in a training scheme, which involved rotating around the different psychiatric specialties. She would have access to many more diagnoses, some of them contentious, such as attention deficit hyperactivity disorder (ADHD), though homosexuality would no longer be considered a psychiatric condition. She would have a greater range of medications from which to choose, such as lithium and other mood stabilisers, though there had been no major breakthrough in psychopharmacology since the 1950s. She would be versed in psychodynamic psychotherapy, which was now a mandatory part of her training, and she would also be familiar with cognitive behavioural therapy and, perhaps, dialectical behavioural therapy and interpersonal therapy. The number of psychiatrists in Britain in 2010 was considerably larger than it had been in 1960, and there were now many specialties, such as addictions, psychotherapy, old age psychiatry, forensic psychiatry, liaison psychiatry, community psychiatry, intellectual disability psychiatry and rehabilitation. There were more conferences at which to meet colleagues, and the British Journal of Psychiatry had expanded to include other publications, such as the Psychiatric Bulletin, which focused on everyday clinical matters.

What sources are available to understand the day-to-day clinical experience of British psychiatrists during this period? As well as the secondary literature, there are witness seminars,5 where clinicians discuss past events. There are also various interviews with psychiatrists, but these tend to be focused on white men and on those who trained in London, more particularly at the Maudsley.6 It is difficult to discover what clinical life was like for women, though Angela Rouncefield, who worked in Liverpool in the 1960s, remembers that psychiatry, like the rest of medicine then, was very male-dominated and that she felt she was constantly having to prove herself.7 It is also difficult to find the testimonies of people from ethnic minorities and those working in the so-called periphery. There tends to be much more written about England than other parts of Britain. Andrew Scull has criticised this exclusively Anglocentric approach, which, although he was referring to Scotland, could equally apply to Wales and Northern Ireland:

With few exceptions, English-centred historiography … largely neglected the very different Scottish approaches to the containment and treatment of the mad … the scholarship of the time embodied the presumption (infuriating for the Scots, and typical of the English) that either the only thing that mattered was what occurred south of the border; or alternatively and without giving the matter much thought, what happened in England was also what occurred in politically subordinate Scotland.8

Within the limitations of the archival record, this chapter will attempt to examine the evolution of psychiatry between 1960 and 2010, as it was experienced by psychiatrists of the era.

The Beginning of the Era

In 2019, the social psychiatrist Tom Burns remembered the 1960s thus (see also Chapter 20):

in the late 60s, when I was doing this, psychiatry was … quite prestigious. I mean nowadays it’s all looked down on, psychiatry, it’s a job that people who can’t get proper medical jobs do. It wasn’t like that then. It was the year of Ronnie Laing, and all that stuff, and but it was quite an exciting thing to do. And it did have its own glamour and prestige, you know, it wasn’t seen as something that people who couldn’t get other jobs did.9

The Mental Hospital

The old mental hospitals were still standing in 1960 but not all were the repressive institutions of folklore. Bill Boyd, who worked at Rosslynlee Hospital outside Edinburgh, remembers:

It was very comforting and comfortable because Rosslynlee was still traditional under a very excellent and benign Physician Superintendent, Dr Andy Hegarty, I found I was accepted there very easily. There was a very informal, warm atmosphere amongst doctors, nurses and all the staff and there was a lot going on there too – out patient clinics in the local towns, patients coming up as day-patients, looking back now the services were remarkably advanced.10

In contrast, Hugh Freeman had more unpleasant memories:

My first experience of it, though, was as a locum at Wakefield, immediately before the Army. The neuroleptic era was just beginning then, but this mental hospital was quite Hogarthian in many ways, and some of the staff seemed to me as peculiar as the patients. When I first arrived there, on a misty January night, it was like the opening of a Hammer film.11

Malcolm Campbell, a neurologist, remembers his shock in the early 1960s when he started work at Friern Hospital in London, with its quarter-of-a-mile-long corridor providing the entrance to more than thirty wards crammed with patients.12 A recent witness seminar demonstrated that the quality of mental hospitals varied greatly throughout the country, and there was a suggestion that the standard of care was higher in Scotland.13

Changes were afoot, however. The 1957 Report of the Royal Commission on the Law Relating to Mental Illness and Mental Deficiency (the Percy Commission) was a crucial turning point in mental health policy in the UK.14 It urged relocating mental health care from hospital to community settings and inspired the 1959 Mental Health Act, which empowered local authorities in England and Wales to establish community mental health provision.

Psychiatrists tended to see the 1959 Act as benign in its impact on services and patient experience, partly because it allowed them, through the procedure of voluntary admission to mental hospitals, to implement improvements in treatment and care which had been foreshadowed in the 1950s. Services were dominated institutionally and intellectually by psychiatrists.15 Although the 1959 (England and Wales) and 1960 (Scotland) Acts differed in some respects, there were enough similarities between the two to anticipate that the drive in England and Wales to close down psychiatric hospitals would be mirrored north of the border. However, as Victoria Long has pointed out, the Scots were not as convinced as their English counterparts that psychiatric hospitals could be rapidly emptied of their patients.16

Bill Boyd remembers:

I was Chairman of the Scottish Division, we were accused in Scotland of not being as advanced as the south in terms of cutting back on beds and putting people into the community. I remember very clearly writing to the Scotsman on behalf of the Division pointing out that psychiatrists were at the forefront of developing Care in the Community but that we as a group were not prepared to put our patients out of hospital until we were confident that community facilities and indeed public attitudes had matured to a level where it was reasonable to move our patients into Community Care.17

Rehabilitation

During the 1970s, the numbers of patients in psychiatric hospitals fell. This was the era of ‘deinstitutionalisation’. In 1970, Wing and Brown’s classic Institutionalism and Schizophrenia was published.18 The authors compared three psychiatric hospitals and found that the social environment had a considerable effect on the mental state of the patients: the more deprived the environment, the worse the outcome. However, too much stimulation could also have adverse effects on patients. The London social psychiatrist Jim Birley maintained that the goal of rehabilitation was to find the appropriate level for each patient.19 Diagnosis was not so important in predicting outcome for these patients. Developing living skills was more important, and here clinical psychologists and occupational therapists played a crucial part, while Industrial Rehabilitation Units helped patients get back into the way of work. However, Birley conceded that the early optimistic belief that patients could become independent was not borne out by experience. There was the danger of what he termed ‘transinstitutionalisation’, whereby patients were merely transferred from hospital to other deprived environments, such as boarding houses, nursing homes or the street. In addition, after the 1960s, employment fell and it became harder for the mentally ill to find jobs.

Birley also felt that many psychiatrists were not particularly interested in looking after the long-term mentally ill. By 1970, many acute psychiatric units had been built in general hospitals, and these proved more attractive to psychiatrists. Birley observed:

Psychiatrists, like most doctors, prefer to look after patients who get better … Psychiatric departments of medical schools, where most psychiatrists were trained, felt that they required a regular supply of acute and preferably ‘new’ cases for teaching.20

As a result, many psychiatrists had not been trained in the management of the chronically mentally ill, whom, Birley maintained, they often perceived as unattractive in appearance, behaviour and level of hygiene. In addition, many psychiatrists felt vulnerable outside the hospital, working with different staff with different approaches, though some greatly enjoyed it.

As well as the efforts to move the long-term psychiatric residents outside asylum walls, there was the movement to treat patients in the community, known as social or community psychiatry. As Tom Burns recalled, the flagship of social psychiatry was the therapeutic community movement, which encompassed Fulbourn, Napsbury, Dingleton and Henderson Hospitals.21 Although a pioneer and an enthusiast, he recognised some of the difficulties:

You’re stripped of the normal paraphernalia of status. You know, you’ve got an outpatient clinic in medicine, there are receptionists and nurses and medical students and being a consultant is good for your ego, you know, you’re the expert that everything circles around. If you do social psychiatry, it’s you and perhaps one of your colleagues, a nurse or social worker, plus the patient and their family on their territory.

… when I look back on it, the, particularly when social psychiatry started to move out more and more into sort of community-based work and stripping away all these structures that we normally relied on, I, I think we probably underestimated how important they are to sustain people in difficult jobs … So, I think one thing that we’ve gotten wrong a bit was that we turned our back a little bit too much on the benefits of institutional care. I mean, not for the patients, but for us.22

The lack of sufficient resources for community care of the mentally ill was already causing frustration to clinicians.23

Other Developments

Hugh Freeman recalled: ‘there was no tradition of multidisciplinary team working; that was one of the achievements of our efforts in the ’60s’,24 a sentiment which David Goldberg also expressed.25 Tom Burns felt that the sectorisation of psychiatric services into geographical ‘catchment areas’, which began in the 1960s, was a major and valuable development.26

Further, as Turner (see also Chapter 23) and colleagues observed:

The 1970s also saw significant innovations in treatment and service delivery, led by clinicians responding to these challenges. There was increasing use of psychological treatments with an evidence base and widespread acceptance that the services needed to acknowledge and counteract the social devaluation of their users.27

With the rundown of the old mental hospitals came the setting up of psychiatric units in general hospitals. Advocates claimed these units would reduce stigma, improve accessibility for patients and forge a closer alliance between psychiatry and general medicine, leading to improvements in patients’ physical health.28 For many psychiatrists, it was a great step forward. Maurice Silverman, working at Blackburn, commented:

I think the basic difference is that in the DGH [district general hospital] unit, as a psychiatrist, you have a very much more intimate relationship with other personnel and with the surrounding community. Under ‘other personnel’, I’m including other consultants in every specialty, the GPs in the area, and community social workers.29

Others, though, were less keen. Thomas Freeman, who worked at Gartnavel Royal in Glasgow, commented:

I was perhaps going to swim against the tide as by 1963–1964 there was already talk of the mental hospital becoming superfluous. Hope was now pinned on the new medications and on the psychiatric unit of the general hospital. I strenuously opposed this, pointing to the fact that as yet we were without aetiologically based treatments. Chronically ill patients would remain with us; where else were they to go? Today we have the legacy of these optimistic forecasts and the actions which were based on them.30

Professor Elaine Murphy (see also Chapter 12) also complained:

Psychiatrists pressed for more convenient and congenial facilities in DGHs – the time spent with the long-term patients decreased in asylums and in the community. By the time the Royal College of Psychiatrists was established in 1971, training was focused on short term and emergencies.31

Murphy contended that a two-tier system had emerged in England, with the new DGH units on the one hand and the old, less well-funded asylums on the other. Goldberg objected to the practice at the Maudsley of sending seriously mentally ill patients who had relapsed to local mental hospitals.32

There were significant differences, however, between England and Scotland in their attitude towards psychiatric units sited in general hospitals. According to Long:

[Health] Departmental officials in Scotland believed that general hospitals lacked a number of resources specially designed to assist psychiatric patients, and favoured upgrading mental hospital care by using the small drop in inpatient numbers to relieve overcrowding and close down wards in old, obsolete buildings. Psychiatric units in general hospitals, they believed, should not be developed at the expense of existing mental hospitals, and efforts should be made to integrate the two forms of provision to stop a two-tier service developing.33

The Royal College of Psychiatrists and the Development of Specialties

The Royal College of Psychiatrists was founded in 1971. As Bewley shows, the path to its establishment was tortuous and many psychiatrists as well as doctors from other disciplines opposed it.34 Trainee psychiatrists, who were worried about the entrance requirement of a formal examination, campaigned successfully for the provision of adequate ongoing education before they were expected to sit such an exam. Hugh Freeman discussed the lack of formal training before the advent of the College:

At that time, the psychiatric profession here was very small, compared with today, and the greater part of it consisted of doctors who had grown-up in mental hospitals under the apprenticeship tradition. There was very little alternative to that, as the Universities and teaching hospitals provided only very few places indeed for those who wanted to train in psychiatry.35

The period witnessed the growth of a rapidly ageing population – by 1990, half of all long-stay beds were occupied by old people with dementia.36 The response was the development of the psychiatric care of the elderly, which had its origins in the late 1960s (see also Chapter 22).37 R. A. Robinson at the Crichton Royal Hospital in Dumfries was an early pioneer. In 1970, the government urged that psychogeriatric assessment units should be set up in general hospitals. According to Arie and Jolley,38 the main progress was in the development of services, and they contended that the emphasis on bringing treatment to the patient’s home led the way for the rest of psychiatry. Further, the specialty’s links with geriatric medicine contributed to the reintegration of psychiatry into medicine, at least to some extent.

Following the 1959 Mental Health Act in England and Wales, there was a reduction in the number of locked wards in psychiatric hospitals.39 In the early 1970s, there were only eight forensic psychiatrists in England (see also Chapter 29); following the Butler Report in 1975, the number grew to more than 150 consultants. When he was interviewed in 1990, the Edinburgh forensic psychiatrist Derek Chiswick said:

Forty years ago a handful of forensic psychiatrists spent their time giving evidence in the various ‘hanging’ trials … which decided whether a murderer was to live or die. Today the picture has changed. Forensic psychiatrists are fully involved in both assessment and treatment (the latter very important) of a wide range of mentally abnormal offenders … You will find forensic psychiatrists in various clinical settings including the maximum security or special hospitals (of which there are five in Britain), in the new regional secure units which have developed in England (though not in Scotland), in ordinary psychiatric hospitals and clinics, and also visiting psychiatrists to prisons.40

However, following the killing of a social worker by a mentally ill individual in the 1980s, it became mandatory in England to set up a homicide inquiry whenever a mentally ill person committed a murder. As a result, all English psychiatrists, and indeed all mental health workers, practised within a ‘blame culture’ setting. This culture was also apparent, albeit to a lesser degree, in Scotland, which did not have the same mechanism of automatic inquiries (see also Chapters 8 and 28).

Dr Max Glatt inaugurated the first unit for alcoholism at a mental hospital in Britain in 1951; between then and 1973, more than twenty such units were established by the NHS (see also Chapter 25).41 However, research in the 1970s cast doubt on this approach, demonstrating that many patients improved with outpatient treatment alone or with brief interventions.

Bruce Ritson describes how the approach to alcohol problems evolved during this period:

A strongly held view at that time was that people who were alcoholic had become so because of some underlying psychological problem. The emphasis – and this is what attracted me in the first place – was on finding out what the psychodynamics of their particular addiction were and then trying to help, usually with group psychotherapy and sometimes with individual psychotherapy or couple therapy. The focus was on finding an underlying psychic cause, which I do not think I would really go along with now. Sometimes there is an underlying cause, but often it is the outcome of chronic exposure to excessive drinking and the psychological harm is secondary.42

There was a long-running campaign to persuade the College to accept that training in psychotherapy should be an integral part of the training of general psychiatrists. Heinz Wolff, a psychotherapist at University College London, observed:

I recall how hard we had to fight to have the first Guidelines for the Training in Psychotherapy accepted by the Council and other committees in 1971. For me it was less important exactly what the guidelines said but rather that the College should acknowledge the importance of training of psychiatric trainees in dynamic psychotherapy.43

In 1993, the College published guidelines making training in psychotherapy a mandatory requirement for qualification as a psychiatrist.44

Until the 1970s, dedicated liaison services were virtually unknown in Britain. A special interest group in liaison psychiatry was set up in the early 1980s.45 The specialties of intellectual disability, child psychiatry and others also evolved during this period.

Crisis of Confidence among Psychiatrists

The advent of the Conservative government in 1979 saw the rapid development of the New Right, which explicitly encouraged both privatisation and competition. The first service user movements appeared in the early 1970s, demanding civil and economic rights for patients in the community, and, in parallel, pressure groups such as Mind began to agitate for changes to the 1959 Act (see also Chapters 13 and 14).46 In Psychiatry in Dissent, published in 1980, Clare judged that contemporary psychiatry was in an unhealthy state, with problems of recruitment, lack of resources and its lowly status in the medical hierarchy.47 Writing in 1983, Sedgwick delineated many of the circumstances that undermined the authority of psychiatrists.48 He noted the failure to accept psychiatric expertise in the legal courts following the Peter Sutcliffe trial, the popularity of anti-psychiatric arguments and the collusion of this attitude with the political ‘New Right’. An editorial in 1985 in the Lancet claimed that psychiatry was a ‘discipline that had lost its way’.49 In 1986, Bhugra reported on the largely negative public perception of psychiatry.50

Writing in 1989, Tom Harrison, a psychiatrist from Birmingham, reported on the concerns of his consultant colleagues:51

First is the declining morale of the psychiatrist and second are the rising expectations of other mental health workers … Other professions in mental health have been influenced by a number of factors. These include: a broadening knowledge base and range of skills, increasing specialisation, more graduate nursing recruits, less acceptance of authoritarian management, often accompanied by idealistic enthusiasm, and increasing independence of operation with less direct supervision.

More generally, Hugh Freeman lamented:

The NHS was one of the best things that ever happened in Britain. I find that the shift in the philosophy of the service is, with a widespread loss of idealism and commitment, the most disturbing change of all. It derives mainly from the domination by managers and accountants, who seem to have no personal concern with the objectives of a Health Service, but of course, it’s also part of a general cultural shift away from the liberalism and sense of community of the post-war period.52

Psychiatry found itself criticised by patients, relatives, anti-psychiatrists and other mental health professionals. Birley observed:

Psychiatrists responded to these criticisms … in various ways. A positive approach was to view critics as potential allies … Another reaction was to strive to make psychiatry more scientific … but turning to medical science was liable to omit the social and behavioural disciplines.53

According to Rogers and Pilgrim, these opposing attitudes have continued into the twenty-first century (see also Chapters 5 and 20).54 In a paper in the British Journal of Psychiatry in 2008, Craddock and his colleagues warned:

British psychiatry faces an identity crisis. A major contributory factor has been the recent trend to downgrade the importance of the core aspects of medical care … Our contention is that this creeping devaluation of medicine is damaging our ability to deliver excellent psychiatric care. It is imperative that we specify clearly the key role of psychiatrists in the management of people with mental illnesses.55

In response, Bracken and his colleagues argued that a rigid adherence to the medical model was inappropriate in treating mental illness and instead favoured an approach focused on understanding the social and existential aspects of the patient’s experience.56

Conclusion

At the beginning of this chapter, the clinical world of the psychiatrist in 1960 was contrasted with that of his counterpart in 2010. It would be facile, however, to view the changes in psychiatry during this period as straightforward progress. Certainly, there were many improvements. The old asylums, many of which were overcrowded and untherapeutic, were largely closed or had their bed numbers reduced, and more patients were now cared for in the community. The patient’s voice was more likely to be heard and attended to than in the past. A witness seminar for English mental health workers concluded ‘that one of the most important and striking changes in the history of post-war British mental health care has been the rise of the service user perspective’.57 There were now more psychiatrists and they came from more diverse backgrounds than before, with many more women in the profession. However, morale had declined, though personal experience and anecdotal evidence suggest that this was less so in Scotland. There was disagreement as to whether psychiatry should be seen primarily as a branch of medicine or whether it should be more concerned with the psychological and social aspects of patients’ distress. Roy Porter noted a paradox: while psychiatry had reformed its old institutions and now offered a wider range of therapies, the general public had responded with a resurgence of suspicion and a lack of confidence in psychiatrists.58

Key Summary Points
  • The era saw the gradual closure of the mental hospitals and the development of care in the community – a process known as ‘deinstitutionalisation’.

  • Community care revealed its own problems: some patients were merely transferred to different kinds of institution; there was a lack of funding; and there was an unease among some psychiatrists at working outside the hospital.

  • There was the innovative development of multidisciplinary teams, sectorisation and psychiatric units in district general hospitals (DGHs).

  • The Royal College of Psychiatrists was founded in 1971, leading to formal education schemes for trainees and the creation of psychiatric specialties.

  • The psychiatric profession underwent a crisis of confidence as a result of several factors: the increasing privatisation of the health service; the challenge to its authority from both other mental health professionals and the emerging service user movement; and problems with recruitment.

Chapter 19 The Changing Roles of the Professions in Psychiatry and Mental Health: Psychiatric (Mental Health) Nursing

Kevin Gournay and Peter Carter
Introduction

To begin – a word about the term psychiatric (mental health) nursing. In this chapter, we use the term ‘mental health nursing’ as this is the legal term to describe the profession. A UK Department of Health review in 1994 recommended: ‘the title of mental health nurse be used both for nurses who work in the community and for those who work in hospital and day services’.1 Indeed, both authors, who were once ‘registered mental nurses’, became ‘registered mental health nurses’. Nevertheless, the terms ‘psychiatric nurse’ and ‘community psychiatric nurse’ (CPN) are still in common usage. The 1994 change in terminology was but part of a move to change the more general language used in psychiatry, leaving us with a somewhat oxymoronic term to describe what we do for people with psychiatric problems. In some parts of the chapter (particularly in describing the pre-1994 period), we refer to psychiatric nursing rather than mental health nursing.

This chapter describes some of the changes and key events that took place over this fifty-year period so as to illustrate the nature of that history. We therefore describe:

  • Psychiatric nursing in the 1960s.

  • The development of community mental health nursing.

  • A profession characterised by an increase in skills and knowledge: nurses as therapists, prescribers, researchers.

  • Inpatient care.

  • Nurses and other psychiatric professionals.

Psychiatric Nursing in the 1960s

Both authors commenced their careers in psychiatric nursing in the late 1960s and are therefore able to offer their observations and commentary on the changes in their profession over this fifty-year period. For readers interested in a complete history of the profession from the sixteenth century to the 1990s, there is perhaps no more authoritative account than that of Professor Peter Nolan.2

When we began our careers in the large asylums, in one sense it was like stepping back in time (see also Chapter 6). We were vaguely aware that the care and treatment of the mentally ill was undergoing what, with the benefit of hindsight, were enormous changes – such changes being so well described in Anthony Clare’s landmark book Psychiatry in Dissent.3 However, as mere student nurses at the bottom of a hierarchy, we were largely unaware of the detail of these changes, particularly because the wards where we began our careers were characterised by the use of rigid routines and a uniformity of approach. The medical superintendent still reigned, with doctors most certainly seen as a different species to nurses. The wards where we worked were dormitory-style, often overcrowded, with little space for personal possessions and certainly – for most patients – little privacy. Some hospitals had wards of up to 100 patients; many had central dining rooms and in some cases central bathing facilities, where patients would troop up once or twice a week for a bath or a shower. One of us recollects a charge nurse appointed to do nothing else but supervise bed-making – this task then being delegated to several long-stay patients. Charge nurses (and ward sisters on the female side of the hospital – the integration of male and female patients only beginning slowly at the end of the 1960s) supervised the cleaning and other domestic duties carried out by patients. Ward domestic staff only began to appear in the late 1960s; in some hospitals, later. As this was the era just after National Service, many of the male charge nurses had served in the military, and thus an ethos of authority pervaded the atmosphere of the wards. Ward sisters were similarly figures of authority. Our female counterparts were chastised for any incorrect wearing of their uniform, starched aprons still being the order of the day. It took many more years for uniforms to disappear from psychiatric inpatient care settings – the move to wearing ‘mufti’ not really beginning until the mid-1970s.

Many of the patients that we looked after were totally institutionalised, and in some of the ‘long-stay wards’ where we worked, we had little or no conversation with many of our patients. They were often over-medicated with chlorpromazine, haloperidol and similar drugs, many incapacitated by Parkinsonian symptoms. Some of the locked wards had ‘airing courts’ where patients would either stand and stare at nothing in particular or stride around apace with what, on reflection, was akathisia, the interminable restlessness caused by phenothiazine tranquillisers.

Patients were provided with a range of social and recreational outlets, including the hospital cinema and once-a-week dances, where the male and female patients could mix under supervision. Most hospitals had a patient football and cricket team, with games organised between hospitals. Musical patients joined the hospital band and many hospitals put on pantomimes (one for staff and one for patients). All of these activities came within the responsibility of a member of nursing staff who was often given the title ‘activities coordinator’. Such was the importance of this role that these nurses were employed at ward sister/charge nurse grade or above.

On the wards, within a few short weeks, we witnessed many patients receiving electroconvulsive therapy (ECT) for their depression or acute psychosis, and some patients who were deemed to require ‘building up’ were given morning courses of modified insulin therapy, the 1960s being the decade in which insulin coma therapy itself fell into disuse. Those of us who began nursing in the 1960s also remember patients being treated with modified narcosis, that is, being medicated with barbiturates so that sleep prevailed for much of the 24-hour period. Any sense of patient empowerment or ‘user involvement’ was years away.

Some of our experiences in the three years of training to become registered nurses involved placements in industrial therapy units within the hospital, where patients often worked thirty hours a week or more, for derisory levels of pay. It was here that one might find a ‘technical instructor’. Through the 1960s, hospitals began to employ occupational therapists; they were few in number but not as rare as clinical psychologists (179 in 1960; that number rising to 399 in 1970 and nearly 9,000 in 2010).4 In the 1960s, the value of work became formally recognised as a method of rehabilitating the seriously mentally ill – inspired by a number of psychiatrist pioneers, notably Dr Douglas Early,5 who developed one of the first industrial therapy units in Bristol. All patients who could work did so. Hospitals were often relatively self-sufficient with various workshops. In 1960, some hospitals ran thriving farms; indeed, until the 1970s, the NHS employed shepherds and farm hands to work alongside ‘farm nurses’ to oversee the work of the patients. These farms fell into disuse and were sold off, with proceeds going to the Exchequer rather than back into the NHS.

The 1960s were a time of full employment and some of the more able long-stay patients, whose only home was the hospital where they had resided for many years, began ‘working out’ in local factories and other industries. Their employers were very pleased to see workers who would do exactly what they were told and work without complaint. One of the ironies of this ‘working out’ population was the fact that some of these long-stay patients began to earn more money in factories than some members of the nursing staff. However, while pay for nurses was low, student and newly qualified nurses could aspire to a hospital house at subsidised rent and to a full pension at fifty-five. In the late 1960s, many nurses were institutionalised themselves, with their only social outlet being the use of a staff social club in the grounds of the hospital. The ‘Club’ (which served alcohol at much lower prices than in pubs) was the focus of not only staff social activity but often also the social activity of their families because many hospitals were at some distance from the local town. Male staff were encouraged to join the hospital’s football and cricket teams. In the London area, the London Mental Hospitals Sports Association ran a football league of teams formed from the staff of the many mental hospitals that ringed the capital. Some hospitals recruited nursing staff on the basis of their sporting prowess rather than any other attribute, such was the importance of the hospital team.

Nursing staff recruits, who trained in the schools of nursing situated in hospital grounds, came from the local area, often as part of a family tradition. Recruits also came from the Republic of Ireland and, increasingly, from a number of more distant countries, notably Mauritius, Malaysia and the Caribbean islands. At this time, nurses did not need to demonstrate the then marker of a good general education – five GCE O levels. Nurses without GCEs would instead pass a General Nursing Council test that examined general literacy, numeracy and general knowledge.

In our six-week introduction to our education and training as registered mental nurses in our respective schools of nursing, we learned a great deal about the 1959 Mental Health Act and came to understand the importance of the principle of treatment as an informal patient. In the hospital, we came across doctors and nurses who were trying to effect change, but they were in a minority. Some of our nurse tutors told us about Laing, Szasz, Cooper and Goffman. However, most qualified staff had never heard of such figures and were highly disparaging about the changes that were so clearly afoot following Enoch Powell’s 1961 ‘Water Tower’ speech (see also Chapter 1).6 By then, most hospitals had smaller short-stay units, often located away from the main hospital. The late 1960s saw the first drug addiction units and the start of units where medication was not the central approach. Day hospitals were beginning to appear. Nevertheless, most nurses were still largely engaged on ‘long-stay’ wards, their time spent in supervising activities of daily living and ensuring that good order prevailed. The growth of district general hospital psychiatric units was to follow from the mid-1970s onwards.

The idea that psychiatric nursing would one day become an all-graduate profession, or, indeed, a profession in its true sense at all, never crossed anyone’s mind. In the 1960s, none of us envisaged a future in which we could become ‘responsible clinicians’ under the Mental Health Act, independent prescribers of medicine or leaders of multidisciplinary teams that included consultant psychiatrists. Neither did we envisage a future in which psychiatric nurses would rise to other positions of influence and importance – for example, becoming chief executives of sprawling mental health trusts, responsible for managing budgets of many millions of pounds, leading national initiatives or chairing the development of NICE guidelines.

The Development of Community Psychiatric Nursing

In 1954, two psychiatric nurses working in a large psychiatric hospital, Warlingham Park in Surrey, were seconded to provide outpatient care and to assist patients discharged from the hospital to establish themselves in the community. This initiative was followed by the development of a service from Moorhaven Hospital in Devon in 1957. As White has described,7 community psychiatric nursing developed slowly and, by 1973, formal training for CPNs was established, with a first course at Chiswick College in London. The development of long-acting (depot) injections of antipsychotic medications led to an increase in the number of CPNs.8 To begin with, the role of the CPN was simply to administer the injection. However, it quickly became apparent that extrapyramidal side effects were a problem, and the role of the CPN expanded to the monitoring of side effects, alongside taking more responsibility for the assessment of mental state and risk. On a more negative note, CPNs were often relegated to running depot clinics, where literally dozens of patients would attend at a time for their fortnightly injection and spend only a brief period with the nurse. While these patients were in receipt of medications that would be of some benefit, the brief interactions with the CPNs did little to provide the patient and, importantly, the family with any meaningful input to address social or psychological needs.

CPN practice began to change in the 1970s, and by the 1980s many CPNs had begun to base themselves in primary care settings with GPs and to work largely with people with common mental disorders, such as general anxiety, relationship difficulties and ‘stress’. A survey of CPNs in England in 1989 showed that one-quarter of CPNs did not have a single client with a diagnosis of schizophrenia on their caseload.9 The CPN interventions used could be described as counselling rather than any specific evidence-based approach. From the early 1990s, the practice of CPNs took another turn, with CPNs returning to a focus on people with serious and enduring mental illness, such as schizophrenia. This renewed focus was prompted by the results of two research trials. One randomised controlled trial of CPNs in primary health care in North London showed that CPN intervention produced little or no benefit.10 An economic analysis based on that trial demonstrated that CPN interventions with this client group were far less cost-effective than interventions with patients with schizophrenia.11 At the same time, another trial, in Manchester, demonstrated that training CPNs to undertake psychosocial interventions with families caring for a relative with schizophrenia provided benefits to families. This CPN intervention also led to an improvement in both positive and negative symptoms in patients, with some evidence that CPN intervention reduced inpatient episodes.12 Thus began an era that extends to the present, in which the large majority of CPN work is focused on people with serious and enduring illnesses. The scope of training for CPNs widened to include mental state and risk assessments and a range of psychosocial interventions, including cognitive behaviour therapy and family interventions. Thus, by the early 2000s, CPNs had become a central resource in delivering high-quality community care, with important roles in crisis intervention and early intervention teams.

The wide dissemination of this work owes a great deal to the Sir Jules Thorn Trust, a charity that, in or around 1990, provided substantial funds to develop training in psychosocial interventions for CPNs at the Institute of Psychiatry, Psychology and Neuroscience, King’s College London and the University of Manchester; this was originally known as the ‘Thorn Nurse Programme’.13 The programme was disseminated across the UK and then increasingly opened up to all mental health professionals. By 2020, Thorn had evolved into the multidisciplinary training provided to community mental health teams. From its beginnings in 1990, Thorn nurse training was led by a multidisciplinary group that included one of the pre-eminent figures in psychiatry at the time, Dr Jim Birley, who had been Dean of the Institute of Psychiatry. Jim Birley was generous with his time and took a great interest in not only the programme but all those involved. As this chapter reflects throughout, many of the very positive developments in psychiatric nursing owe much to the contribution of a number of psychiatrists who recognised the importance of the nursing profession in the care of the mentally ill.

A Profession Characterised by an Increase in Skills, Knowledge and Responsibility

Between 1960 and 2010, there were significant developments in the education and training of mental health nurses. The syllabus for mental health nurse training was updated in 1964 to include psychology and sociology for the first time. It also suggested that student nurses should spend some time outside their hospital on community placements. This represented a major break with traditional training, which had been geared solely towards preparing nurses for work within institutions. By the turn of the century, all preregistration training was located in universities – the hospital training schools having closed in the early 1990s. By 2000, the syllabus covered a wide range of topics, both theoretical and practical, with an emphasis on evidence-based approaches.14 By 2010, degree-level education required students to acquire a breadth and depth of knowledge that was in great contrast to their counterparts in 1960. While their time in clinical placements was more limited, they were expected to critically analyse and reflect on the clinical practice that they observed. There was also a growing expectation that many students would go on to further study, acquiring specialist skills, with some continuing to master’s level.

Arguably, one of the most influential figures in the development of specialist nurse roles was Professor Isaac Marks, a distinguished psychiatrist rather than a nurse. Marks’s career has been devoted to three central topics: first, the adherence to an evidence-based approach for all that we do in psychiatry; second, the development of methods for treating common mental disorders (particularly those that cause considerable handicap, e.g. obsessive-compulsive disorder (OCD) and severe agoraphobia); and, finally, innovations that could extend treatment to wider populations. Marks was one of the first to develop computer-assisted treatments for common mental disorders.15 As a young psychiatrist, not long in the UK from South Africa and then at the Maudsley Hospital/Institute of Psychiatry, he identified the core elements of effective behavioural treatment for anxiety disorders, notably phobias and OCD. These core elements were exposure and, in the case of OCD, response prevention.16 Marks also realised that these therapies were, in one sense, the domain of practice of clinical psychologists. However, psychologists were few in number and it was clear that this workforce would be unable to deliver treatment to the tens, or hundreds, of thousands of patients who might be responsive to these new psychological treatments. He realised that psychiatric nurses might be suitable for training in these methods, particularly because of their background general experience of dealing with people with a wide variety of mental health problems in various settings. Thus, in 1972, Marks began the Nurse Therapy training programme. Over a pilot programme lasting three years, he developed a rigorous training for nurses to become autonomous therapists. The results of this study and subsequent research demonstrated very clearly that, both clinically and economically, nurses could be effective autonomous therapists.17 From 1975 onwards, Nurse Therapy Training (a full-time course of eighteen months) led to the graduation of literally hundreds of nurses from sites in the UK and the Republic of Ireland. Arguably, this programme seeded the developments for more widespread training in psychological methods, culminating in the Improving Access to Psychological Therapies programme that began in 2006, in which nurses have played leading roles in development, education, evaluation and dissemination.18

In another enormous change in the responsibilities of psychiatric nurses, 1992 saw changes in legislation which meant that nurses were able to prescribe from an extended formulary. Following a number of policy reviews and with a change in the NHS landscape,19 the law was changed so that, following an approved and comprehensive training programme, independent nurse prescribers would be able to prescribe any licensed medicine for any medical condition, provided it was within their expertise. Thus, towards the end of the period covered by this chapter, psychiatric nurses across the country were beginning to receive such training and the requisite legal authority. In practice, this meant, for example, that an independent nurse prescriber could make changes to the patient’s medication, enabling psychiatrists to spend more time with patients whose needs were complex and who might have significant physical comorbidities. Overall, nurse prescribing was welcomed by the psychiatric profession. However, some opposition came from nursing academics, who objected to nurses being enveloped by the ‘medical model’.20 By 2010, it had become clear that, while mental health nurse prescribing was growing slowly, the way in which nurse prescribing was used across the NHS varied considerably and there was still some disagreement regarding the most appropriate settings for this work.21

In 2007, the Mental Health Act was amended so that nurses could become a ‘responsible clinician’ or an ‘approved mental health professional’, important roles in the supervision and safeguarding of the rights of patients subject to the involuntary provisions of the Act. These legal changes were prompted by the fact that CPNs had, since 1991, become ‘care coordinators’, with a wide range of responsibilities under the Care Programme Approach.22

In another development that followed improvements in the education and training of psychiatric nurses, they acquired sufficient skills to be able to lead and conduct research trials that would meet the standard required of high-impact psychiatric journals. Thus, in the same edition of the British Journal of Psychiatry in 1994, the results of two research trials involving CPNs were published, both led by graduates of Marks’s Nurse Therapy Programme.23 By the year 2000, the Medical Research Council had begun funding postdoctoral fellowships for mental health nurses. This enabled nurses who already had a PhD and some research training to go on to complete postdoctoral courses in subjects such as epidemiology, statistics and trial design. Towards the end of the first decade of the present century, psychiatric nurses began to figure in the range of research studies funded by the National Institute for Health Research (NIHR). By this time, nurses had also begun to take leading parts in the National Institute for Health and Care Excellence (NICE) and the Cochrane Collaboration. The year 1995 saw the inauguration of the first chair in psychiatric nursing at the Institute of Psychiatry/Maudsley Hospital. This development was due to the efforts of Sir David Goldberg, at that time Professor of Psychiatry, who had become a great friend to nursing some years before in Manchester as a collaborator on the aforementioned trial of training nurses in psychosocial interventions.24

Changes in the Nursing of Inpatients

In 1960, there were around 150,000 patients in the large mental hospitals; by 2010, there were 23,000.25 The reduction in bed numbers over the years meant that nurses were providing care for a population in acute episodes of illness who posed significant challenges for nursing staff. Many patients had the additional problem of drug and alcohol use. Two important problems had become the source of great concern: patient suicide and violence. The National Confidential Inquiry into Suicide and Homicide by People with Mental Illness (established in 1996) published its first report, Safety First, in 2001.26 This reported that 16 per cent of all inquiry cases of suicide in England were psychiatric inpatients. These tragic deaths were most likely to be by hanging, most commonly from a curtain rail with a belt used as a ligature. Wards were also seeing higher levels of violence.27

The NHS responded to the matter of inpatient suicide by spending vast sums on making wards safer, resulting in a great reduction in suicide rates.28 At the same time, three nursing bodies – the United Kingdom Central Council for Nursing, Midwifery and Health Visiting (the predecessor body to the Nursing and Midwifery Council), the Royal College of Nursing and the Standing Nursing and Midwifery Advisory Committee – responded to the challenge with a number of surveys, literature reviews, visits to services and consultations with all interested parties to begin developing recommendations for changing nursing practice. Eventually, NICE guidelines on the management of disturbed and violent behaviour in mental health and emergency settings were published in 2005,29 leading to much-improved standards for the observation of patients at risk and to improved training in the prevention and management of violence.

Nursing and Other Professions in Psychiatric Settings

As noted, and for a range of reasons, in 1960 nurses were at the very bottom of a hierarchy. In 2010, most newly qualified nurses were graduates and could go on to develop a wide range of specialist skills. Importantly, the remuneration for nurses was, in relative terms, much improved. A nurse qualifying in 2010 could aspire to become a nurse consultant, a director of nursing or an NHS Trust chief executive. While pay is still a contentious issue, senior nurses earn salaries that are within the same range as social workers, occupational therapists and psychologists.

In clinical settings, by 2010 many clinical services and multidisciplinary teams were led by nurses. By this time, nurses were performing roles that had, only a few years before, belonged to other disciplines – for example, providing high-quality psychological treatments, prescribing medication or making recommendations in respect of detention in hospital. Arguably, this blurring of roles made for a more harmonious approach to patient care, with nurses, by 2010, being highly skilled and much-respected members of the professional psychiatric community.

Conclusion

It would not be an exaggeration to say that, in comparison with all other professions in psychiatry, nursing changed beyond recognition between 1960 and 2010. Nurses became members of an established profession. Over the years, the profession acquired a wide range of knowledge and skills that in most ways reflected the changing landscape in psychiatry, notably following the closure of the large hospitals. The question now posed is: what changes will take place in our profession in the years to come?

Key Summary Points
  • Psychiatric (mental health) nursing is a relatively young profession that developed with great speed over this fifty-year period. In 1960, nearly all nurses were employed in large mental hospitals.

  • While education and training were improving, nurses’ roles in the 1960s largely involved the care and supervision of institutionalised patients. The pay and status of nurses were low, with nursing at the bottom of a medically led hierarchy.

  • The 1970s saw a great expansion in community psychiatric nursing; the development of Nurse Therapy training; and the gradual emergence of multidisciplinary teams.

  • The education and training of nurses improved, as did pay conditions and status; and by 2010, nursing was becoming an all-graduate profession.

  • The end of the era saw nurses becoming independent prescribers and skilled clinicians. Changes in the Mental Health Act meant that nurses could assume additional roles by becoming ‘responsible clinicians’ or ‘approved mental health professionals’.

Chapter 20 Critical Friends: Anti-psychiatry and Clinical Psychology

Tom Burns and John Hall

The immediate post-war period in the UK was focused on the establishment of the welfare state. There was a broad consensus that the old order had failed and that those who had served and sacrificed should be heard. Mental health services had been absorbed into the NHS from the local authorities, although this had not been a foregone conclusion.1 This aligning with NHS practice threw the disparities in care into stark relief. The Percy Commission, which reported in 1957, had been established to enquire into the care of mentally ill and learning disabled patients. Its report resulted in the 1959 Mental Health Act, which also mandated social services spending on discharged patients. As well as tightening up the supervision of compulsory care, the Act brought health and social care together, enabling the development of geographical catchment area services and the growth of British community and social psychiatry.

These developments were accelerated by the discovery of antipsychotics in 1954 and the first tricyclic antidepressants in 1958 and 1960. Confidence in psychiatry’s future outside the asylum was high. This is nowhere more evident than in the health secretary Enoch Powell’s much-quoted ‘Water Tower’ speech to the Mind conference in 1961 and in the 1962 Hospital Plan.2

Despite its unprecedented progress (or perhaps because of it), psychiatry found its legitimacy challenged on two fronts from the 1960s onwards. In the 1960s and 1970s, a dramatic onslaught came from the celebrity ‘anti-psychiatrists’ (a term coined by David Cooper in 1967). Alongside this, a relative newcomer to mental health services – clinical psychology – was rapidly growing in power and influence.

The Anti-psychiatrists

The anti-psychiatry movement can be understood as one front in the baby boomers’ assault on the old order. Social turmoil and rebellion characterised the next two decades, from the civil rights movement in the United States, through the Vietnam protests and the student revolts that erupted in Paris in May 1968. The established order was challenged and in many aspects upended. Psychiatry’s prominence in this counterculture owes much to the anti-psychiatrists and, in particular, to four iconic books. These all appeared within eighteen months of each other in 1960–1.

Each of these very different books was entirely independent – their authors had no connection with each other before or after their publication. They came to be seen as the four seminal texts of the anti-psychiatry movement. This term was never accepted by the authors themselves but it has endured. All four spoke powerfully to psychiatry’s relationship with society, with an unflinching demand that both must change.

Laing

The first to appear was R. D. (Ronnie) Laing’s The Divided Self (1960),3 which went on to become an international campus bible. Laing was a Scottish psychiatrist who trained in psychoanalysis in London. He was heavily influenced by existentialism, particularly by Jean-Paul Sartre. He saw the struggle of the ‘psychotic patient’ through the existentialists’ lens of becoming rather than being. As agents of our own identity and destiny, we create ourselves by the choices we must constantly make (as Sartre wrote in 1946, ‘Man is condemned to be free’).4

Laing believed that psychotic patients were struggling against confusing and contradictory messages to make sense of their experiences. Their confusion communicates itself to us as a threat to our ontological security (our confidence in our own stable identity). We then contain this anxiety through diagnosis. Laing proposed an approach of ‘existential phenomenology’, which seeks to understand the patient’s struggle for identity rather than clarifying signs and symptoms as in classical phenomenology. This existential phenomenology requires full engagement rather than traditional professional distance.

Such direct engagement was to avoid ‘objectifying’ the patient. Sartre insisted that humans simply cannot be understood as objects because identity inheres in active agency (choices). A ‘snapshot’ diagnosis would miss the individual’s most important quality – their agency.

Laing was a wonderful writer. He had an ability to engage fully with very disturbed patients and conveyed these experiences vividly and humanely. A striking and charismatic speaker, his message was taken up enthusiastically by a youthful readership. His second book, Sanity, Madness and the Family (with Aaron Esterson, 1964), became an iconic film, Family Life, in 1971, spreading the message even wider.5

Laing was a restless individual in both his intellectual and his personal life. His later writing became increasingly obscure, partly coloured by his alcohol and drug abuse. His impact on psychiatry began to fade as he shifted his focus to the emerging global counterculture.

Foucault

Madness and Civilization is the abridged English translation of Foucault’s PhD thesis, first published in French in 1961.6 Foucault’s approach was historical and he argued that ‘madness’ was timeless but ‘the madman’ was a recent construction. Foucault ascribed this new identity to the ‘great confinement’, which occurred in France in the mid-seventeenth century. The enormous ‘grands hôpitaux’ had been established in Paris to contain the poor, the mad and the socially disruptive. They were a response to France’s rapid economic and social development, and new categories were required to facilitate this extrajudicial incarceration.

Foucault’s writings continue to have enormous influence in the social sciences and wider cultural circles. ‘Mental illness’ as a convenient label to remove uncomfortable and supposedly deviant individuals, who are unable to contribute economically, remains a pervasive trope. Franco Basaglia, responsible for Italy’s radical reforms in the 1970s, repeatedly insisted that psychiatric diagnoses were based on economic inutility and resulted in social exclusion. Current thinking around stigma derives much from these ideas and those of labelling theory – that an imposed identity, once accepted, becomes a self-fulfilling prophecy and an impediment to recovery and social reintegration.

Goffman

The Canadian-American sociologist Erving Goffman spent 1955–6 in participant observation (effectively ‘undercover’) in a large Washington mental hospital. Asylums: Essays on the Social Situation of Mental Patients and Other Inmates (1961) was the result, and it had an enormous impact on psychiatry.7 Unlike Russell Barton, whose 1959 book Institutional Neurosis8 explained the apathy of psychotic inpatients as a side effect of institutional care, Goffman believed it resulted from deliberate policy.

Goffman coined the term ‘total institutions’ to include mental hospitals, prisons, monasteries and even the military. Such all-embracing organisations needed to ‘manage’ large numbers of people efficiently. Goffman described a range of initiation rituals, rigid rules, hierarchies and roles that stripped away individual identity. The result was predictable and pliable individuals. Monotonous routines and the absence of personal choice generated dependency and apathy. This process of ‘institutionalisation’ served the institution’s needs, not the individual’s.

Goffman also observed that real power within the institution often originated low down in the organisational hierarchy. Rather than the doctors and senior nurses controlling the wards, the nursing aides, cleaners and even the more established patients made the day-to-day decisions. They ran the ward on simple moral concepts of good and bad behaviour, punishment and reward, rather than on notions of symptoms and treatment.

Like Laing, Goffman could write striking prose and captivate his reader. He, however, presented evidence, not just a theoretical revision. Despite its highly critical message, Asylums was readily taken up by the profession. Its message underscored by regularly occurring scandals, it became a central text in the deinstitutionalisation movement. His observations of hospital power structures fed into the development of modern multidisciplinary working.

Szasz

Thomas Szasz left Hungary for the United States at age eighteen, in 1938, and trained there in medicine and psychiatry. He remained active as a psychiatrist in both private and university practice for the rest of his life. He wrote more than thirty books, of which The Myth of Mental Illness (1961) is the most famous.9 He makes two basic points (reiterated throughout his subsequent books). The first is that mental illnesses are not ‘real’ illnesses because they have no physical markers (such as glucose levels in diabetes). The second, following on from this but clearly reflecting his lifelong hostility to state coercion, is that there is absolutely no justification for treating ‘mental patients’ against their will. As any special treatment is an abuse of power, no allowance can be made for any criminal or deviant behaviour. There is no ‘insanity clause’. So the murderer with psychosis goes to prison and can receive treatment there – but even then only if they consent.

Szasz had, and continues to have, a powerful influence outside the profession. His message is immediate and simple and resonates with liberalism and anti-authoritarianism. He was also an effective communicator and used vivid (albeit questionable) examples to make his points: ‘If I speak to God I am a Christian, if God speaks to me I am schizophrenic.’ Szasz became a figurehead for anti-psychiatry groups such as the Scientologists, and his ideas find a home with civil libertarians. However, because his message is essentially so simplistic, offering no striking insights or reinterpretations of established assumptions, his ideas rarely figure in professional discourse about psychiatry.

Anti-psychiatry’s Impact

Although often perceived as infuriating and superficial by many clinicians, anti-psychiatry certainly raised public interest in the profession. In the late 1960s and early 1970s, it was ‘cool’ to be a psychiatrist. Recruitment, ironically, improved. By this time, the British anti-psychiatrists were moving away into the broader worldwide counterculture. In 1967, they staged the Dialectics of Liberation Congress at the Roundhouse in London with keynote speeches from anti-psychiatrists, beat poets, black activists and more. By 1970, Kingsley Hall (the most high-profile manifestation of UK anti-psychiatry) had collapsed amidst acrimony and discord. The caravan had moved on.

British Psychiatry’s Response to the Anti-psychiatrists

The mainstream psychiatric response to the anti-psychiatrists was essentially to ignore or dismiss them.10 Some brave individuals engaged publicly with the argument, but such exchanges were rarely constructive or successful. The sight of the eminent Michael Shepherd spluttering with incredulity on TV at a young enthusiast’s proposal that psychiatry really had nothing to offer a psychotic young mother left most reluctant to enter what seemed a dialogue of the deaf. Responses based on pragmatism and experience simply had no impact on ideology. In their article about Basaglia’s reforms, Kathleen Jones and Alison Poletti record this impotence:

[we] remained mystified by the insistence of Psichiatria Democratica on ‘closing the mental hospital’ when it was patently obvious that mental hospitals were not being closed. … When the Trieste team say ‘We closed the mental hospital’ they do not mean ‘We closed the mental hospital’. They mean ‘We broke the power of the mental hospital over the patients’.11

In his 1976 book Psychiatry in Dissent, Anthony Clare tried to engage with the debate, particularly in his chapters ‘Concepts of mental illness’ and ‘What is schizophrenia?’12 He forensically examined Laing’s proposals and Joseph Berke’s portrayal of Mary Barnes’s ‘journey’ through psychosis. Experience is impotent, however, against what Andrew Scull calls ‘word-magic’.13 In addition, Clare was preaching to the converted – few who bought The Divided Self also bought Psychiatry in Dissent.

Radical (Critical) Psychiatry and Post-psychiatry

In understanding the impact of the anti-psychiatrists in the 1960s and 1970s, it is important to remember that psychiatry has never been without vociferous critics. In the following decades up to 2010, there have constantly been groups of dissident voices, both within and outside the profession. The most prominent of these critical groups are the radical psychiatry group and the post-psychiatry movement.

The radical psychiatrists include senior and respected figures, several with academic appointments. Joanna Moncrieff at University College London (UCL) has persistently challenged the claims for efficacy of prophylactic drug treatments such as lithium and maintenance antipsychotics.14 Derek Summerfield at the Institute of Psychiatry has written prolifically about the imposition of diagnoses such as post-traumatic stress disorder (PTSD) and depression on more diffuse human problems (and in particular the export of these ‘Western’ concepts).15

The post-psychiatry movement draws on the ideas of postmodernism.16 It registers the loss of faith in science as an effective solution to current problems and questions science’s status as a discourse privileged above all others; it emphasises social and cultural contexts and places ethics before technology.

Both the radical psychiatry group and the post-psychiatrists differ fundamentally from the earlier anti-psychiatrists in that they are focused on ‘improving’ psychiatry and limiting its poor (damaging) practice. This is fundamentally more a technological than an ideological attack despite some of the language. Both have been fuelled by the one-sidedness of the evolving biomedical model within psychiatry and the disquieting growth in power of ‘Big Pharma’. The muted response of orthodox psychiatry to these critics undoubtedly stems from similar shared concerns (albeit less extreme) being widespread throughout the profession. Psychiatry has learnt to coexist with its internal critical friends. This may reflect its need to pay urgent attention to a more cogent threat to its status growing alongside it in the multidisciplinary team.

Psychiatry and Psychology

Psychology and psychiatry relate to each other at three different levels: as distinct conceptual and methodological disciplines, with border territories of abnormal and medical psychology; as immediate clinical colleagues in day-to-day practice; and as professional bodies competing for political influence and funding. The British Psychological Society (BPS) was founded in 1901, gained its Royal Charter in 1965 and represents all psychologists, both academic and applied. At most of these levels, there has been co-operation and mutual acknowledgement of each other’s knowledge and skills – and there have been differences and mutual criticisms.

It is no surprise that the strongest advocates for psychological input to the new NHS were psychiatrists in the asylums and mental handicap hospitals. This built on pre-existing joint practice between psychiatrists and educational psychologists (and social workers) in the Child Guidance Clinics that were set up from the late 1920s and on the less well-known joint working in military settings during the Second World War, as colleagues in both military selection and training.

Psychiatrists initially wanted psychologists to bolster their search for a more measurable and scientific psychiatry. Consequently, the two main functions of the early psychologists were as psychometricians (‘test-bashers’) – mainly with ability, personality and projective tests – and to contribute to doctor-led research projects. Yet growth in numbers before 1960 was very slow; in that year, there were only around 180 clinical psychologists in England and Wales, with a few in Scotland.

Changes in Clinical Psychology, 1960–2010

The scope and nature of the work of clinical psychologists in Britain, and their numbers, have changed almost beyond recognition from that early period.17 Many of those early clinical psychologists were former educational psychologists who had moved into the NHS. In 1957, there were only three formal training courses in Britain, and it was also possible to qualify simply by a three-year apprenticeship.

Involvement in anything that could be called treatment was forbidden – not least by Aubrey Lewis at the Maudsley – though what could be called education or training might be permitted. In the mental handicap field, clinical psychologists had an early innovative role, as shown by the work of Jack Tizard and Neil O’Connor at the Medical Research Council’s Social Psychiatry Research Unit and of the Clarkes at the Manor Hospital at Epsom.18

The Todd Report on medical education led to the establishment of both new medical schools and new university departments of psychiatry from the 1960s. The new professors of psychiatry – many of them trained at the Maudsley – were keen to appoint clinical psychologists, and they were strong supporters of the new regional clinical psychology training courses that developed from that period.

Two factors then conspired to broaden the fields of activity of psychologists beyond psychiatry and so to loosen the control of psychiatrists over their work. In 1974, the creation of area health authorities meant that the management of clinical psychologists was often transferred to an area officer, and clinical psychologists began to be seen as a resource for other areas of clinical work, such as neurology and paediatrics. In 1977, the DHSS-sponsored committee on the role of psychologists in the health services, chaired by William Trethowan (then professor of psychiatry at Birmingham), recommended in its report19 that clinical psychologists ‘should have full professional status’ in the NHS and should develop specialist services beyond psychiatry. Some psychiatrists resented and vehemently opposed these proposals; others felt the recommendations did not go far enough.

A third factor was the explosive growth of behaviour therapy, which soon expanded into cognitive behavioural therapy (CBT) and changed the core role of clinical psychologists in all clinical fields.20 From the early 1960s, the increasing evidence for the effectiveness of behaviour therapy, initially for anxiety-related conditions, led to high levels of demand for psychological services – not least from GPs. From the 1980s, the major expectation of clinical psychologists was a contribution to psychologically informed treatments and to psychiatric rehabilitation, with increasing demand in outpatient and Community Mental Health Team (CMHT) settings and significant developments in work with older adults, for example.

Hans Eysenck, the deliberately provocative and now controversial professor of psychology at the Institute of Psychiatry until 1983, has often been seen as the leading representative of clinical psychology in Britain. While he was a prodigiously productive researcher, he did not himself see patients, and his influence on clinical and professional practice has been overstated,21 with Monte Shapiro and Jack Rachman instead having been central to developments in training and CBT at the Maudsley during the early part of this period.

Away from London and the Maudsley, David Smail in Nottinghamshire and Dorothy Rowe in Lincolnshire, for example, promoted more reflective and critical stances,22 alongside other firebrands such as Don Bannister at Bexley Hospital. Other centres – Birmingham, Edinburgh, Manchester and Oxford among them – developed their own highly productive research programmes.

A number of effective professional and academic leaders worked closely with the Department of Health, and with the Royal College, to influence policy and improve practice. These included the three clinical psychologist members of the Trethowan Committee – Alan Clarke, Gwynne Jones and May Davidson. In a later generation, psychologists such as Glenys Parry and Anne Richardson worked very effectively within the Department of Health.

Fifty years on, in 2010, there were 8,800 clinical psychologists in England and Wales, trained on one of the thirty-five highly selective three-year university doctoral programmes throughout the UK that adopted a ‘scientist-practitioner’ model of training. By then, the lead psychologist within a service was likely to be as experienced as the consultant psychiatrist. Psychologist-led research programmes mushroomed, and the BPS is now an equal partner with the Royal College in contributing to mental health–related NICE guidelines. With the advent of general management, clinical psychologists can be clinical directors or Trust CEOs and, under the 2007 Mental Health Act, can be responsible clinicians (RCs) in their own right.

In 1960, by contrast, there would typically be a single, relatively inexperienced psychologist in a mental hospital, working essentially as a subordinate scientific technician, at a time when psychiatric services were only just emerging from the hierarchical systems that existed before the 1959 Mental Health Act. The crucial underlying factor in the changing relationship between the two professions over the period is therefore the shift from a marked imbalance in power and experience to a position of equivalence in expertise within their own fields and of equality in experience and numbers.

So What Is Distinctive about Clinical Psychologists?

The distinctive characteristic of psychologists is their primary felt identity as psychologists, acquired during their initial education and training and sustained by their ongoing professional relationships and the exercise of distinctive skills and perspectives. First degrees in psychology are not vocational. Although the curriculum has to cover core issues required by the BPS, degrees vary significantly in their content and orientation. Many potential clinical psychologists choose to take courses in abnormal psychology that often adopt a critical approach to psychiatry.

Psychology has become one of the most popular undergraduate subjects in Britain, so entry to clinical psychology is highly competitive, with trainees already high academic achievers. They are in effect required to have several years’ ‘relevant’ experience before beginning their clinical training. One significant consequence is that most clinical psychology trainees begin their training several years older than medical students are when they enter medical school.

Simon Sinclair, himself a psychiatrist, explored from a social anthropological perspective the socialisation process which medical students undergo, forming what he considered a distinctive ‘embodied disposition’ or ‘medical habitus’.23 Clinical psychologists are undoubtedly socialised into their roles, but they have gone through a different intellectual and social journey to doctors. They have much in common with other psychological colleagues also working in mental health – health, child, counselling and forensic psychologists as well as clinical neuropsychologists.

Psychologists are, however, now also entering the mental health field in other guises. Completely unexpectedly, the Labour government elected in 1997 gave priority to mental health. The 1999 Adult Mental Health National Service Framework (NSF) identified the need to improve staffing and uncovered serious difficulties in recruitment.24 In 2000, a commitment was made to train a thousand ‘graduate primary care mental health workers’ – in practice, nearly all psychology graduates – to administer brief psychological therapy techniques in GP settings, and the 2007 Improving Access to Psychological Therapies (IAPT) programme brought in even more psychologists to NHS mental health services. Add in the psychology graduates who are now training as mental health nurses and occupational therapists, for example, and psychologists are now embedded in the NHS mental health services at every level.

Inter-professional Co-operation, Competition and Criticism

What psychologists in health care and psychiatrists share is their concern for their patients – or clients or service users, if you will – and their families. When working well as a team, or as immediate colleagues, these shared concerns lead to a fuller understanding of the needs of the patient and to a wider range of available interventions. With growing experience, with the assumption of key worker roles by mental health workers from all professions of origin, and with the cross-professional take-up of post-basic training in new therapeutic modalities, there has been a softening of professional differences. When workers from different disciplinary backgrounds have worked together for thirty years or more, coping with very demanding and distressing circumstances, mutual confidence and trust deepen.

The professional bodies collaborate in a number of areas: the 1995 joint Royal College and BPS policy on psychological therapies in the NHS is an early example.25 Yet tensions remain within the BPS, which was earlier primarily a learned society with an open membership: it recognised only in 1966 that it needed also to become a professional body, and it now fights to represent academic and research psychologists in a highly competitive funding environment.

There is competition between differing biomedical, psychological and social perspectives: applied psychologists do not privilege biomedical explanations for the myriad forms of distress, disability and dysfunction they encounter. R. E. Kendell, in his 2000 Royal College presidential lecture,26 explicitly saw clinical psychologists as direct professional competitors; he could ‘visualise a scenario in which clinical psychology might seem … to both general practitioners and to the Health Departments … to be both the most important source of therapeutic skills and professional advice in the mental health field’. An important example of such challenges to conventional psychiatric thinking is Richard Bentall’s book on the nature of psychosis.27

There is direct financial competition in the mental health private-practice marketplace, given the increased numbers of psychologists and other psychological therapists and counsellors. More psychologists now engage in the lucrative business of court work and in the sound-bite media commentary that influences public attitudes to mental health.

There can still be strains to relationships, usually when one or another person plays power games or holds rigidly to a particular position and is unwilling to compromise; but for every rigid controlling psychiatrist encountering bumptious young psychologists, there have been supportive psychiatrists mentoring their new colleagues.

Just as there is critical and radical psychiatry, so there is critical and radical psychology. There has been no shortage of clinical psychologists challenging their own profession, with David Pilgrim, for example, being a trenchant critic for thirty years.28 The sociologist Nikolas Rose has been a similarly trenchant critic of the ‘psy’ professions.29

Conclusion

The world of mental health has changed markedly over the past sixty years. Boundaries between normality and abnormality, sickness and health, are complicated by different ideas of distress and dysfunction. The voice of patients (experts by experience) now challenges our authority and mental health policy is firmly in the public and political arena.

To pretend that all is now well in the world of professional mental health practice is dangerous. Psychiatrists and clinical psychologists can be good clinical colleagues and conceptual sparring partners; however, both psychiatrists and clinical psychologists must be open to criticism, whether friendly or unfriendly.

Key Summary Points
  • Psychiatry has never been without vociferous critics.

  • Anti-psychiatry raised legitimate, albeit irritating, concerns about psychiatric practice.

  • Clinical psychologists in Britain now outnumber psychiatrists, with an enormously expanded clinical remit.

  • Lead psychologists are now as experienced as consultant psychiatrists and vie for leadership.

  • To pretend that all is well in the world of professional mental health practice and relationship is dangerous.
