A pressing question during the first half-decade of the third plague pandemic (1894–9) was what constituted a ‘suitable soil’ for the disease. The question related to plague’s perceived ability to disappear from a given city only to reappear at some future point; a phenomenon that became central to scientific investigations of the disease. Rather than carrying a merely metaphorical meaning, however, the debate around plague’s ‘suitable soil’ concerned the material reality of the soil itself. The prevalence of plague in the working-class neighbourhood of Taipingshan during the first major outbreak of the pandemic, in 1894 in Hong Kong, led to an extensive debate regarding the ability of the soil to harbour and even spread the disease. Scientific and administrative concerns over the soil, involving experiments seen as able to procure evidence for or against the demolition or even torching of the area, rendered it an unstable yet highly productive epistemic thing. The spread of plague to India further fuelled concerns over the ability of the soil to act as the medium of the disease’s so-called true recrudescence. Besides high-profile scientific debates, hands-on experiments on purifying the soil of infected houses by means of highly intrusive methods allowed scientists and administrators to act upon and further solidify plague’s supposed invisibility in the urban terrain. This paper demonstrates that, far from being a short-lived, moribund object of epidemiological concern, the soil played a crucial role in the development of plague as a scientifically knowable and actionable category for modern medicine.
Current policy and practice directed towards people with learning disabilities originates in the deinstitutionalisation processes, civil rights concerns and integrationist philosophies of the 1970s and 1980s. However, historians know little about the specific contexts within which these were mobilised. Although it is rarely acknowledged in the secondary literature, MIND was prominent in campaigning for rights-based services for learning disabled people during this time. This article sets MIND’s campaign within the wider historical context of the organisation’s origins as a main institution of the inter-war mental hygiene movement. The article begins by outlining the mental hygiene movement’s original conceptualisation of ‘mental deficiency’ as the antithesis of the self-sustaining and responsible individuals that it considered the basis of citizenship and mental health. It then traces how this equation unravelled, partly under the altered conditions of the post-war Welfare State and partly through the mental hygiene movement’s own theorising. The final section describes the reconceptualisation of citizenship that eventually emerged with the collapse of the mental hygiene movement and the emergence of MIND. It shows that representations of MIND’s rights-based campaigning (which have, in any case, focused on mental illness) as individualist, and fundamentally opposed to medicine and psychiatry, are inaccurate. In fact, MIND sought a comprehensive community-based service, integrated with the general health and welfare services and oriented around a reconstruction of learning disabled people’s citizenship rights.
After losing the importance it had held around 1900 both as a colonial power and in the field of tropical medicine, Germany searched for a new place in international health care during decolonisation. Under the aegis of early government ‘development aid’, which started in 1956, medical academics from West German universities became involved in several Asian, African and South American countries. The example selected for closer study is the support for the national hygiene institute in Togo, a former German ‘model colony’ and by then a stout ally of the West. Positioned between public health and scientific research, between ‘development aid’ and academia, and between West German and West African interests, the project required multiple arrangements that are analysed for their impact on the co-operation between the two countries. In a country like Togo, where higher education had been neglected under colonial rule, having qualified national staff became the decisive factor for the project. While routine services soon worked well, research required more sustained ‘capacity building’ and did not lead to joint work on equal terms. In West Germany, the arrangement with the universities was a mutually beneficial deal for government officials and medical academics. West German ‘development aid’ did not have to create permanent jobs at home for the consulting experts it needed; it improved its chances of finding sufficiently qualified German staff to work abroad, and it profited from the academic renown of its consultants. The medical scientists secured jobs and research opportunities for their postgraduates, received grants for foreign doctoral students, gained additional expertise and enjoyed international prestige. Independence from foreign politics was not an issue for most West German medical academics in the 1960s.
This paper explores the social, medical, institutional and enumerative histories of blindness in British India from 1850 to 1950. It begins by tracing the contours and causes of blindness using census records, and then outlines how colonial physicians and observers ascribed both infectious aetiologies and social pathologies to blindness. Blindness was often interpreted as the inevitable consequence of South Asian ignorance, superstition and backwardness. The paper also explores the social worlds of the Blind, with a particular focus on the figure of the blind beggar. It further interrogates missionary discourse on ‘Indian’ blindness, outlining how blindness served as a metaphor for the perceived civilisational inferiority and religious failings of South Asian peoples. Finally, it describes the introduction of institutions for the Blind, as well as of Braille and Moon technologies.
This article examines the research implications and uses of data for a large project investigating institutional confinement in Australia and New Zealand. The cases of patients admitted between 1864 and 1910 at four separate institutions, three public and one private, provided more than 4000 patient records to a collaborative team of researchers. The utility and longevity of this data, and the ways to continue to understand its significance and contents, form the basis of this article’s interrogation of data collection and methodological issues surrounding the history of psychiatry and mental health. It examines the themes of ethics and access, record linkage, categories of data analysis, comparison and record keeping across colonial and imperial institutions, and constraints and opportunities in the data itself. The aim of this article is to continue an ongoing conversation among historians of mental health about the role and value of data collection in this field, and to signal the relevance of international multi-sited collaborative research.
Medical historians have recently become interested in the veterinary past, investigating the development of animal health in countries such as France, the Netherlands, the United Kingdom and the United States. An appreciation of the German context, however, is still lacking – a gap in the knowledge that the present article seeks to fill. Providing a critical interpretation of the evolution of the veterinary profession, this investigation explains why veterinary and medical spheres intersected, drifted apart, then came back together; it also accounts for the stark differences in the position of veterinarians in Germany and Britain. Emphasis is placed on how diverse traditions, interests and conceptualisations of animal health shaped the German veterinary profession, conditioned its field of operation, influenced its choice of animals and diseases, and dictated the speed of reform. Owing to a state-oriented model of professionalisation, veterinarians became more enthusiastic about public service than private practice, perceiving themselves as equal in status to doctors and scientists rather than to animal healers or manual labourers. Building on their expertise in epizootics, veterinarians became involved in zoonoses following outbreaks of trichinosis. They achieved a dominant position in meat hygiene by refashioning abattoirs into sites for the construction of veterinary knowledge. Later, bovine tuberculosis helped veterinarians cement this position, successfully showcasing their expertise and contribution to society by saving as much meat as possible from diseased livestock. Ultimately, this article shows how veterinarians were heavily ‘entangled’ with the fields of medicine, food, agriculture and the military.
The influence of a range of actors is discernible in nutrition projects in the South Pacific during the period after the Second World War. These influences include international trends in nutritional science; changing ideas within the British establishment about state responsibility for the welfare of its citizens and the responsibility of the British Empire for its subjects; and the mixture of outside scrutiny and support for projects from post-war international and multi-governmental organisations, such as the South Pacific Commission. Nutrition research and projects conducted in Fiji for the colonial South Pacific Health Service and the colonial government also sought to address territory-specific socio-political issues, especially Fiji’s complex ethnic politics. This study examines the subtle ways in which nutrition studies and policies reflected and reinforced these wider socio-political trends. It suggests that historians should approach health research and policy as a patchwork of territorial, international and regional ideas and priorities, rather than looking for a single causality.
This article shows how funding research on Alzheimer’s disease became a priority for the British Medical Research Council (MRC) in the late 1970s and 1980s, thanks to work that isolated new pathological and biochemical markers and showed that the disease affected a significant proportion of the elderly population. In contrast to histories that focus on the emergence of new and competing theories of disease causation in this period, I argue that concerns over the use of different assessment methods ensured the MRC’s immediate priority was standardising the ways in which researchers identified and recorded symptoms of Alzheimer’s disease in potential research subjects. I detail how the rationale behind the development of standard assessment guidelines was less about arriving at a firm diagnosis and more about facilitating research by generating data that could be easily compared across the disciplines and sites that constitute modern biomedicine. Drawing on criticism of specific tests in the MRC’s guidelines, which some psychiatrists argued were ‘middle class biased’, I also show that debates over standardisation did not simply reflect concerns specific to the fields or areas of research that the MRC sought to govern. Questions about the validity of standard assessment guidelines for Alzheimer’s disease embodied broader concerns about education and social class, which ensured that distinguishing normal from pathological in old age remained a contested and historically contingent process.
In recent years there has been growing acknowledgement of the place of workhouses within the range of institutional provision for mentally disordered people in nineteenth-century England. This article explores the situation in Bristol, where an entrenched workhouse-based model was retained for an extended period in the face of mounting external ideological and political pressures to provide a proper lunatic asylum. It signified a contest between the modernising, reformist inclinations of central state agencies and local bodies seeking to retain their freedom of action. The conflict exposed contrasting conceptions regarding the nature of services to which the insane poor were entitled.
Bristol pioneered establishment of a central workhouse under the old Poor Law; ‘St Peter’s Hospital’ was opened in 1698. As a multi-purpose welfare institution its clientele included ‘lunatics’ and ‘idiots’, for whom there was specific accommodation from before the 1760s. Despite an unhealthy city centre location and crowded, dilapidated buildings, the enterprising Bristol authorities secured St Peter’s Hospital’s designation as a county lunatic asylum in 1823. Its many deficiencies brought condemnation in the national survey of provision for the insane in 1844. In the period following the key lunacy legislation of 1845, the Home Office and Commissioners in Lunacy demanded the replacement of the putative lunatic asylum within Bristol’s workhouse by a new borough asylum outside the city. The Bristol authorities resisted stoutly for several years, but were eventually forced to succumb and adopt the prescribed model of institutional care for the pauper insane.
This paper analyses how the 1950–61 conflict between Portugal and India over the territories that constituted Portuguese India (Goa, Daman and Diu) informed Portugal’s relations with the World Health Organization’s Regional Office for South East Asia (SEARO). The ‘Goa question’ determined the way international health policies were actually put into place locally and the meaning with which they were invested. This case study thus reveals the political production of SEARO as a dynamic space for disputes and negotiations between nation-states in decolonising Asia. In this context, health often came second in the face of contrasting nationalistic projects, both colonial and post-colonial.
How did the complex concepts of psychoanalysis become popular in early twentieth-century Britain? This article examines the contribution of educator and psychoanalyst Susan Isaacs (1885–1948) to this process, as well as her role as a female expert in the intellectual and medical history of this period. Isaacs was one of the most influential British psychologists of the inter-war era, yet historical research on her work is still limited. The article focuses on her writing as ‘Ursula Wise’, answering the questions of parents and nursery nurses in the popular journal Nursery World, from 1929 to 1936. Researched in depth for the first time, Isaacs’ important magazine columns reveal that her writing was instrumental in disseminating the work of psychoanalyst Melanie Klein in Britain. Moreover, Isaacs’ powerful rebuttals to behaviourist, disciplinarian parenting methods helped shift the focus of caregivers to the child’s perspective, encouraging them to acknowledge children as independent subjects and future democratic citizens. Like other early psychoanalysts, Isaacs was not an elitist; she was in fact committed to disseminating her ideas as broadly as possible. Isaacs taught British parents and child caregivers to ‘speak Kleinian’, translating Klein’s intellectual ideas into ordinary language and thus enabling their swift integration into popular discourse.
In the wake of the Second World War there was a movement to counterbalance the apparently increasingly technical nature of medical education. These reforms sought a more holistic model of care and to put people – rather than diseases – back at the centre of medical practice and medical education. This article shows that students often drove the early stages of education reform. Their innovations focused on relationships between doctors and their communities, and often took the form of informal discussions about medical ethics and the social dimensions of primary care. Medical schools began to pursue ‘humanistic’ education more formally from the 1980s onwards, particularly within the context of general practice curricula and with a focus on individual doctor–patient relationships. Overall from the 1950s to the 1990s there was a broad shift in discussions of the human aspects of medical education: from interest in patient communities to individuals; from social concerns to personal characteristics; and from the relatively abstract to the measurable and instrumental. There was no clear shift from ‘less’ to ‘more’ humanistic education, but rather a shift in the perceived goals of integrating human aspects of medical education. The human aspects of medicine show the importance of student activism in driving forward community and ethical medicine, and provide an important backdrop to the rise of competencies within general undergraduate education.
In recent decades, historians of English psychiatry have shifted their major concerns away from nineteenth-century asylums and psychiatrists. This is also seen in studies of twentieth-century psychiatry, where historians have debated the rise of psychology, eugenics and community care. This shift in interest, however, does not indicate that English psychiatrists became passive and unimportant actors in the last century. In fact, they promoted Lunacy Law reform for a less asylum-dependent mode of psychiatry, with a strong emphasis on professional development. This paper illustrates the historical dynamics around the professional development of English psychiatry by employing Andrew Abbott’s concept of professional development. Abbott redefines professional development as arising from both abstraction of professional knowledge and competition regarding professional jurisdiction. A profession, he suggests, develops through continuous re-formation of its occupational structure, mode of practice and political language in competing with other professional and non-professional forces. In early twentieth-century England, psychiatrists promoted professional development by framing political discourse, conducting a daily trade and promoting new legislation to defend their professional jurisdiction. This story began with the Lunacy Act of 1890, which caused a professional crisis in psychiatry and led to inter-professional competition with non-psychiatric medical service providers. In response, psychiatrists devised a new political rhetoric, ‘early treatment of mental disorder’, in their professional interests and succeeded in enacting the Mental Treatment Act of 1930, which re-instated psychiatrists as masters of English psychiatry.
The history of ‘electroshock therapy’ (now known as electroconvulsive therapy (ECT)) in Europe in the Third Reich is still a neglected chapter in medical history. Since Thomas Szasz’s ‘From the Slaughterhouse to the Madhouse’, prejudices have hindered a thorough historical analysis of the introduction and early application of electroshock therapy during the period of National Socialism and the Second World War. Contrary to the assumption of a ‘dialectics of healing and killing’, the introduction of electroshock therapy in the German Reich and occupied territories was neither especially swift nor radical. Electroshock therapy, much like the preceding ‘shock therapies’, insulin coma therapy and cardiazol convulsive therapy, contradicted the genetic dogma of schizophrenia, in which only one ‘treatment’ was permissible: primary prevention by sterilisation. However, industrial companies such as Siemens–Reiniger–Werke AG (SRW) embraced the new development in medical technology. Moreover, they knew how to use existing patents on the electrical anaesthesia used for slaughtering to maintain a leading position in the new electroshock therapy market. Only after the end of the official ‘euthanasia’ murder operation known as T4, in August 1941, did the psychiatric elite begin to promote electroshock therapy as a modern ‘unspecific’ treatment in order to reframe psychiatry as an ‘honorable’ medical discipline. War-related shortages hindered even the then politically supported production of electroshock devices. Research into electroshock therapy remained minimal and was mainly concerned with internationally shared safety concerns regarding its clinical application. Within the Third Reich, however, electroshock therapy was not only introduced in psychiatric hospitals, asylums and the Auschwitz concentration camp in order to get patients back to work, but was also modified for ‘euthanasia’ murder.
In 1970 the medical associations of South Africa and Rhodesia (now Zimbabwe) were expelled from the Commonwealth Medical Association. The latter had been set up, as the British Medical Commonwealth Medical Conference, in the late 1940s by the British Medical Association (BMA). These expulsions, and the events leading up to them, are the central focus of this article. The BMA’s original intention was to establish an organisation bringing together the medical associations of the constituent parts of the expanding Commonwealth. Among the new body’s preoccupations was the relationship between the medical profession and the state in the associations’ respective countries. It thus has to be seen as primarily a medico-political organisation rather than one concerned with medicine per se. There were, however, tensions from the outset regarding the membership of the Southern African medical associations. Such stresses notwithstanding, these two organisations remained in the BMA-sponsored body even after South Africa and Rhodesia had left the Commonwealth. This was not, however, a situation which could outlast the growing number of African associations that joined in the wake of decolonisation, and the hardening of attitudes towards apartheid. The article therefore considers: why the BMA set up this Commonwealth body in the first place and what it hoped to achieve; the history of the problems associated with South African and Rhodesian membership; and how their associations came to be expelled.
The Vietnam War has long been regarded as pivotal in the history of the Republic of Korea, although its involvement in this conflict remains controversial. While most scholarship has focused on the political and economic ramifications of the war – and allegations of brutality by Korean troops – few scholars have considered the impact of the conflict upon medicine and public health. This article argues that the war had a transformative impact on medical careers and public health in Korea, and that this can be most clearly seen in efforts to control parasitic diseases. These diseases were a major drain on military manpower and a matter of growing concern domestically. The deployment to Vietnam boosted research into parasitic diseases of all kinds and accelerated the domestic campaign to control malaria and intestinal parasites. It also had a formative impact upon the development of overseas aid.
Huelva’s copper mines (Spain) have been active for centuries, but in the second half of the nineteenth century extractive activities in Riotinto, Tharsis and other mines in the region were intensified until they reached world-leading levels. The method used in these mines for extracting copper from low-grade ores generated continuous emissions of fumes that were extremely controversial. The inhabitants had complained about the fumes for decades, but as activity intensified so did complaints. The killing of anti-fumes demonstrators in 1888 led to the passing of a Royal Decree banning the open-air roasting of ore and to the drafting of numerous reports on the hazards of the fumes. Major state and provincial medical institutions, as well as renowned hygienists and engineers, took part in the assessment, contributing to an unusually rich scientific controversy. In my paper I analyse the production and circulation of knowledge and ignorance about the impact of fumes on public health, as well as the role of medical experts and expertise in the controversy. The analysis focuses on the reports drafted between the 1888 ban and its 1890 repeal, and shows the changing nature of the expert assessment and the numerous paths followed by experts in producing ignorance. The paper concludes by considering other stakeholders, whose roles may shed light on the reasons behind the performance of the medical experts.
In 2014 the World Health Organization (WHO) was widely criticised for failing to anticipate that an outbreak of Ebola in a remote forested region of south-eastern Guinea would trigger a public health emergency of international concern (PHEIC). In explaining the WHO’s failure, critics have pointed to structural constraints on the United Nations organisation and a leadership ‘vacuum’ in Geneva, among other factors. This paper takes a different approach. Drawing on internal WHO documents and interviews with key actors in the epidemic response, I argue that the WHO’s failure is better understood as a consequence of Ebola’s shifting medical identity and of triage systems for managing emerging infectious disease (EID) risks. Focusing on the discursive and non-discursive practices that produced Ebola as a ‘problem’ for global health security, I argue that by 2014 Ebola was no longer regarded as a paradigmatic EID and potential biothreat so much as a neglected tropical disease. The result was to relegate Ebola to the fringes of biosecurity concerns just at the moment when the virus was crossing international borders in West Africa and triggering large urban outbreaks for the first time. Ebola’s fluctuating medical identity also helps explain the prominence of fear and rumours during the epidemic and social resistance to Ebola control measures. Contrasting the WHO’s delay over declaring a PHEIC in 2014 with its rapid declaration of PHEICs in relation to H1N1 swine flu in 2009 and polio in 2014, I conclude that such ‘missed alarms’ may be an inescapable consequence of pandemic preparedness systems that seek to rationalise responses to the emergence of new diseases.
This paper focuses on homeopaths’ strategies to popularise homeopathy from 1850 to 1870. I argue that homeopaths created a space for homeopathy in Mexico City in the mid-nineteenth century by facilitating patients’ access to medical knowledge, consultation and practice. In this period, when national and international armed conflicts limited the diffusion and regulation of academic medicine, homeopaths popularised homeopathy by framing it as a life-enhancing therapy with tools that responded to patients’ needs. Patients’ preference for homeopathy evolved into commercial endeavours that promoted the practice of homeopathy through the use of domestic manuals. Using rare publications and archival records, I analyse the popularisation of homeopathy in Ramón Comellas’s homeopathic manual, the commercialisation of Julián González’s family guides, and patients’ and doctors’ reception of homeopathy. I show that narratives of conversion to homeopathy relied on the different experiences of patients and trained doctors, and that patients’ positive experience with homeopathy weighed more than the doctors’ efforts to explain to the public how academic medicine worked. The fact that homeopaths and patients used a shared language to describe disease experiences framed the possibility of a horizontal transmission of medical knowledge, opening up the possibility for patients to become practitioners. By relying on the long tradition of domestic medicine in Mexico, the popularisation of homeopathy disrupted the professional boundaries that academic physicians had begun to build, making homeopaths the largest group that challenged the emergent medical academic culture and its diffusion in Mexico in the nineteenth century.
In the first decades of the twentieth century, a group of doctors under the banner of the social hygiene movement set out on what seemed an improbable mission: to convince American men that they did not need sex. This was in part a response to venereal disease. Persuading young men to adopt the standard of sexual discipline demanded of women was the key to preserving the health of the nation from the ravages of syphilis and gonorrhoea. But their campaign ran up against the doctrine of male sexual necessity, a doctrine well established in medical thought and an article of faith for many patients. Initially, social hygienists succeeded in rallying much of the medical community. But this success was followed by a series of setbacks. Significant dissent remained within the profession. Even more alarmingly, behavioural studies proved that many men simply were not listening. The attempt to repudiate the doctrine of male sexual necessity showed the ambition of Progressive-era doctors, but also their powerlessness in the face of entrenched beliefs about the linkage in men between sex, health and success.