This section of the volume explores the tools, processes and actors at play in regulating health research. Regulators rely on a number of tools or regulatory devices to strike a balance between promoting sound research and protecting participants. Some of the paradigmatic examples are (informed) consent and research ethics review of proposed projects; both are explored in this section. Other examples include intellectual property (especially patents), data access governance models, and benefit-sharing mechanisms. Much of the contemporary scholarship on and practice of health research regulation relies on, and criticises, these tools. Relatedly, and arguably, regulation itself is processual; it is about guiding human practices towards desirable endpoints while avoiding undesirable consequences. To date, there has been little discussion of this processual aspect of regulation, or of the specific processes at play in health research. Contributors in this section explore some of the most crucial processes, including risk–benefit analysis, research ethics review and data access governance mechanisms. Further, as becomes apparent, processes can themselves become tools or mechanisms for regulation. Finally, one cannot robustly explore the contours of health research regulation without a consideration of the roles regulatory actors play. Here, several contributors look at the institutional dimension of regulatory authorities and the crucial role experts and science advisory bodies play in constructing health research regulation.
Despite the breadth of topics explored within this section, an overarching theme emerges across the thirteen chapters: that technological change forces us to reassess the suitability of pre-existing tools, processes, and regulatory/governance ecosystems. While a number of tools and processes are long-standing features of health research regulation and are practised by a variety of long-standing actors, they are coming under increasing pressure in twenty-first-century research, driven by pluralistic societal values, learning healthcare systems, Big Data-driven analysis, artificial intelligence and international research collaboration across geographic borders that thrives on harmonised regulation. As considered by the authors, in some cases, new tools, processes or actors are advocated; in other cases, it may be more beneficial to reform them to ensure they remain fit for purpose and provide meaningful value to health research regulation.
Much of the discussion focusses, therefore, not only on the nature of these long-standing tools, processes, and actors, but also on how they might be sustained – if at all – well into the twenty-first century. For example, the digital-based data turn necessitates reconsidering fundamental principles like consent and developing new digital-based mechanisms to put participants at the heart of decision-making, as discussed by Kaye and Prictor (Chapter 10). Shabani, Thorogood and Murtagh (Chapter 19) also speak to the challenges that data-intensive research is presenting for governance and, in particular, the challenge of balancing the need to grant (open) access to databases with the need to protect the rights and interests of patients and participants.
This leads to another related theme emerging within this section: the need to examine more closely the participatory turn in health research regulation. Public and participant involvement is becoming an increasingly emphasised component of health research, as illustrated by public engagement exercises becoming mandatory within many research funding schemes. But, as Aitken and Cunningham-Burley (Chapter 11) note, many different forms of public engagement exist and we need to ask ‘why’ publics are engaged, rather than simply ‘how’ they are engaged. They suggest that framing public engagement as a political exercise can help us to answer this question. For Chuong and O’Doherty (Chapter 12), the process of participatory governance also necessitates unpacking, particularly due to the varied approaches taken towards embedding deliberative practices and including patients and participants as partners within health research initiatives. Both of these chapters help set up discussion and analysis to come later in this book, specifically the contribution from Burgess (Chapter 25), who makes a case for mobilising public expertise in the design of health research regulation.
Beyond the inclusion of publics and participants in decision-making, many authors in this section raise additional questions about decision-making tools and processes involving other regulatory actors. For example, Dove (Chapter 18) notes how research ethics committees have evolved into regulatory entities in their own right, suggesting that they can play an important role in stewarding projects towards an ethical endpoint. Similarly, McMahon (Chapter 21) explores the ways in which institutions (and their scaffolding) can shape and influence decision-making in health research and argues that this ought to be reflected when drafting legal provisions and guidance. On the question of guidance, Sethi (Chapter 17) lays out different implications that rules, principles and best-practice-based approaches can carry for health research, including the importance of capturing previous lessons learned within regulatory approaches. Sethi’s discussion of principles-based regulation helps round out the discussion to come later in this book, specifically Vayena and Blasimme’s contribution (Chapter 26) on Big Data and a proposed model of adaptive governance. Sethi’s chapter also engages with another key theme emerging within this section: the construction of knowledge-bases and expertise. For example, Flear (Chapter 16) suggests that current framings of regulatory harm as technological risk marginalise critical stakeholder knowledges of harm, in turn limiting knowledge-bases. Indeed, in considering how governments make use of expertise to inform health research regulation, Meslin (Chapter 22) concludes that it will be best served when different stakeholders are empowered to contribute to the process of regulation, and when governments are open to advice from experts and non-experts alike.
Many of the authors highlight the need to analyse how we anticipate and manage the outputs (beneficial and harmful) of health research. For example, Coleman (Chapter 13) questions the robustness and objectivity attributed to risk–benefit analysis, despite the heavy reliance placed upon it within health research. Similarly, benefit sharing has become a key requirement for many research projects but, as discussed by Simm (Chapter 15), there are practical challenges to deploying such a complex tool to distinct concrete projects. Patents are also a standard feature of health research and innovation. As considered by Nicol and Nielsen (Chapter 14), these can be used both as a positive incentive to foster innovation and, paradoxically, as a means to stifle collaboration and resource sharing.
Three final cross-cutting themes must be kept in mind as we continue to attempt to improve health research regulation. First, in closing this section, Nicholls (Chapter 20) reminds us that we must be mindful of the constant need to evaluate and adapt our approaches to the varying contexts and ongoing developments in health research regulation. Second, in recognition of the fragility of public trust and the necessity of public confidence for health research initiatives to succeed, we must continue to strive for transparency, fairness and inclusivity within our practices. Finally, as we seek to refine and develop new approaches to health research regulation, we must acknowledge that no one tool or process can provide a panacea for the complex array of values and interests at stake. All must be kept under constant review as part of a well-functioning learning system, as Laurie argues in the Afterword to this volume.
Informed consent is regarded as the cornerstone of medical research; a mechanism that respects human dignity and enables research participants to exercise their autonomy and self-determination. It is a widely accepted legal, ethical and regulatory requirement for most health research. Nonetheless, the practice of informed consent varies by context, is subject to exceptions, and, in reality, often falls short of the theoretical ideal.Footnote 1 The widespread use of digital technologies this century has revolutionised the collection, management and analysis of data for health research, and has also challenged fundamental principles such as informed consent. The previously clear boundaries between health research and clinical care are becoming blurred in practice, with implications for implementation and regulation. Through our analysis we have identified the key components of consent for research articulated consistently in international legal instruments. This chapter will: (1) describe the new uses of data and other changes in health research; (2) discuss the legal requirements for informed consent for research found in international instruments; and (3) discuss the challenges in meeting these requirements in the context of emerging research data practices.
10.2 The Changing Nature of Research
Health research is no longer a case simply of the physical measurement and intimate observation of patients. Rather, it increasingly depends upon the generation and use of data, and new analysis tools such as Artificial Intelligence (AI). Health research has been transformed by innovations in digital technologies enabling the collection, curation and management of large quantities of diverse data from multiple sources. The intangible nature of digital data means that it can be perfectly replicated indefinitely, instantly shared with others across geographical borders and used for multiple purposes, such as clinical care and research. The information revolution enables data to be pulled from different sources such as electronic medical records; wearables and smart phones monitoring chronic conditions; and datasets outside the health care system yielding inferences about an individual’s health. These developments have significant implications for informed consent.
New technologies have enabled the development of ambitious scientific agendas, new types of infrastructure such as biobanks and genomic sequencing platforms and international collaborations involving datasets of thousands of research participants. Much innovation is driven by collaborations between clinical and research partners that provide practical need and clinical data, and companies offering technical expertise and resources. Examples are: national genomic initiatives including Genomics England (UK), All of Us (USA), Aviesan (France), Precision Medicine Initiative (China); international research collaborations like the Human Genome Project, Global Alliance for Genomics and Health, the Personal Genome Project; and mission-orientated collaborations such as Digital Technology Supercluster (Canada) and the UK Health Data Research Alliance.
The greatest challenges emerge around informed consent in these new contexts where already-collected data can be used in ways not anticipated at the time of collection and data can be sent across jurisdictional borders. When data and tissue samples are being collected for multiple unknown future research uses, explicit informed consent to the research aims and methods may not be possible. In response to this practical challenge, the World Medical Association (WMA) adopted the Declaration of Taipei on Ethical Considerations regarding Health Databases and Biobanks (2002, revised 2016). It stipulates that instead of consenting to individual research, individuals may validly consent to the purpose of the biobank, the governance arrangements, privacy protections, risks associated with their contribution and so on. This form of ‘broad consent’ is really an agreement that others will govern the research, since determinations about appropriate uses of the data and biomaterials are decided by researchers with approval by research ethics committees or similar bodies.Footnote 2
10.3 The Basis for Informed Consent
The moral force of consent is not unique to health research; it is integral to many interpersonal interactions, as well as being entrenched in societal values. The key moral values at play in medical research are: autonomy – the right for an individual to make his or her own choice; beneficence – the principle of acting with the participant’s best interests in mind; non-maleficence – the principle that ‘above all, do no harm’; and justice – emphasising fairness and equality among individuals.Footnote 3 The concepts of voluntariness and transparency embedded in informed consent speak to the ethical value of respect for human beings, their autonomy, their dignity as free moral agents and their welfare. This respect for individuals has resulted in special protections for those who are not legally competent to provide informed consent. Beneficence requires that the probable benefits of the research project outweigh the harms. In the context of informed consent, non-maleficence demands that harm is minimised by researchers being attuned to participant welfare and fully disclosing likely benefits and risks to permit adequately informed choice. The principle of justice in the research setting requires that potential participants are equally provided with adequate information to make a knowledgeable decision, helping to avoid participant exploitation. Consideration of the ethical principles underpinning informed consent also requires reflection on cultural values, such as those pertaining to specific indigenous communities or ethnic groups. 
Cultural values may lead researchers to consider, for example, whether unique harms to cultural integrity and heritage could accrue to certain groups through specific research projects, and whether respect for human beings should be seen through a lens of collective, as well as individual, autonomy and well-being.Footnote 4 These ethical principles underpin informed consent in health research practice, but not all of them have been implemented into law.
10.4 Legal Requirements for Informed Consent
The requirements for informed consent emerged from a range of egregious examples of physical experimentation on humans. Among the most notable examples were the Nuremberg trials following World War II, although concern about harmful research practices internationally had surfaced decades earlier.Footnote 5 The trial of Nazi doctors produced a ten-point Code that became the foundation of modern health research ethics. Voluntary consent was its first and arguably most emphasised principle.Footnote 6 It has since been espoused in declarations by international and non-governmental organisations. A key instrument is the WMA’s Declaration of Helsinki (1964, as amended) setting out the basic requirements for informed consent for research.
In medical research involving human subjects capable of giving informed consent, each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study. The potential subject must be informed of the right to refuse to participate in the study or to withdraw consent to participate at any time without reprisal.Footnote 7
Crucial to this formulation is the need to communicate and provide detailed information to the ‘human subject’. While this information should be comprehensive enough for participants to make an informed decision, it positions the researcher as the information provider and the subject as a passive recipient. Yet, the Declaration also posits ongoing engagement as an essential requirement as the participant can withdraw consent at any time.
The principle of free consent also forms part of the United Nations’ International Covenant on Civil and Political Rights (Article 7). Further guidelines and conventions promulgated by international organisations such as the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH),Footnote 8 the Council for International Organizations of Medical Sciences,Footnote 9 and the Council of Europe,Footnote 10 endorse and explain these principles. The ICH Good Clinical Practice Guideline considers consent in the context of human clinical trials; it establishes a unified quality standard for the European Union, Japan and the USA. The Oviedo Convention and the 2005 Additional Protocol relating to biomedical research similarly foreground consent, stipulating that it be ‘informed, free, express, specific and documented’.Footnote 11 The European General Data Protection Regulation (GDPR) has raised the bar for informed consent for data use worldwide. In Australia, the National Health and Medical Research Council’s National Statement on Ethical Conduct in Human Research (2007, updated 2018) is the principal guiding document for health research. From these documents, several key components can be discerned, such as competence, transparency and voluntariness, and that consent must be informed.
Only ‘human subjects capable of giving informed consent’ are the subject of the Helsinki Declaration statement about consent. Ethicists have described competent people as those who have ‘the capacity to understand the material information, to make a judgement about the information in light of his or her values, to intend a certain outcome, and to freely communicate his or her wish to caregivers or investigators’.Footnote 12 Special protections pertain to those not competent to give consent, such as some young children and some people who are physically, mentally or intellectually incapacitated. These protections centre upon authorisation by a research ethics committee and consent provided by a legal representative. The potential participant may still be asked to assent to the research.
Assessing competence represents a challenge in relation to biobanks and other longitudinal research endeavours where people contributing data or tissue samples may have shifting competence over time; for instance people who were enrolled into research as children will become competent to provide consent for themselves as they reach adulthood.Footnote 13 People impacted by cognitive decline or mental illness may lose competence to provide consent, either temporarily or indefinitely. Periodically revisiting consent for participants is an ethically appropriate, yet logistically demanding, response.
As indicated above, the Nuremberg Code and the Declaration of Helsinki outline a range of information that potential research participants are to be given to enable them to be informed before making a choice about enrolment. The ICH Guideline goes into further detail regarding clinical trials, stating that the information should be conveyed orally and in writing (4.8.10), and that the explanation should include:
Whether the expected benefits of the research pertain to the individual participants;
What compensation is available if harm results;
The extent to which the participant’s identity will be disclosed;
The expected duration of participation;
How many participants are likely to be involved in the research.
National or regional statutes and guidelines stipulate the required informational elements for consent to health research in their jurisdictions, mirroring the elements contained in the international instruments to varying degrees.Footnote 14
Limited disclosure of information may sometimes be permitted, for instance in a study of human behaviour where the research aims would be frustrated by full disclosure to participants.Footnote 15 It may also be a necessary consequence of the difficulty of comprehensive disclosure in the context of Big Data science, where not all the uses of the data (that may not be collected directly from the individual) can be anticipated when the data are collected.
The Declaration of Helsinki requirement that research participants must be ‘adequately informed’ points to further consideration of how best to communicate the complex information described above. This is the focus of much recent law and guidance.Footnote 16 Research has shown repeatedly that participants often do not understand the investigative purpose of clinical trials, key concepts such as randomisation and the risks and benefits of participation.Footnote 17 Using simple language and providing enough time to consider the information can help, as well as tailoring information to participant age and educational level. Researchers have evaluated tools to assist with communicating information in ways that support understanding.Footnote 18 Complex, heterogeneous and changing research endeavours that cross geographic boundaries and blur the lines between clinical care, daily life and research pose an additional challenge to the requirement for transparency.
A consistent requirement of international conventions, law and guidelines for ethical research is that for consent to be valid, it must be voluntary.Footnote 19 The Nuremberg Code obliges researchers to avoid ‘any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion’.Footnote 20 Beyond the problem of overt coercion by another person, other considerations in evaluating voluntariness include: deference to the perceived power of the researcher or institution;Footnote 21 the mere existence of a power imbalance;Footnote 22 the existence of a dependent relationship with the researcher;Footnote 23 and the amount paid to participants.Footnote 24 On power and vulnerabilities, see further Brassington, Chapter 9, this volume.
These concerns are largely associated with duress as a result of specific relationships developed through personal interactions. In Big Data or AI analysis, the concept of voluntariness must be reconsidered, as often the data users are not known to the data subject and the nature of the duress may not be straightforwardly attributed to particular relationships. An example is companies that provide direct-to-consumer genetic tests, where the provision of test results also enables the companies to use the data for purposes including marketing and research. This is a different kind of duress as people lured through the fine print in click-wrap contracts are then enrolled into research.Footnote 25
Traditionally, valid informed consent occurs before the participant’s involvement in the research;Footnote 26 no specific timing is recommended as long as there is time for the person to acquire sufficient understanding of the research. In selected circumstances, ‘deferred’ consent – where individuals do not know they are enrolled in a clinical trial so that the sample is not biased and they are asked for consent later onFootnote 27 – a waiver of consent or an opt-out approach might be justifiable. These are typically addressed within relevant guidance.Footnote 28 Once-off informed consent before a project starts may, however, be insufficient to acquit researchers’ responsibilities in the context of longitudinal data-intense research infrastructures. Modalities that permit ongoing or at least repeated opportunities to refresh consent, such as staged consent and Dynamic Consent, considered below, are a developing response to this issue.
It is a key principle of health research, traceable back to the Declaration of Helsinki, that potential research participants have a right to decline the invitation to participate without giving a reason and should not incur any disadvantage or discrimination as a consequence.Footnote 29 Further, people who have consented must be free to withdraw consent at any time without incurring disadvantage. The GDPR stipulation that ‘It shall be as easy to withdraw as to give consent’,Footnote 30 has energised research into technology-based tools to facilitate seamless execution of a withdrawal decision, or even to support shifting levels of participation over time.Footnote 31
Newer research methods and infrastructures characterised by open-ended research activities and widespread data sharing add complexity to the interpretation of ‘withdrawing consent’. International guidelines have acknowledged that withdrawal in this context might equate to no new data collection while raising a question over whether existing samples and data must be destroyed or remain available for research.Footnote 32
10.5 The Limitations of Consent
In research involving human participants, the informed consent process is foregrounded. Yet it is also recognised that, as a legal mechanism intended to protect human subjects in the way envisaged by international instruments, consent may be insufficient. People often do not understand what they have agreed to participate in, retain the information about the research or even recall that they agreed to be involved.Footnote 33 Consent is not the only legal basis for conducting health research. While there is variation between jurisdictions, broadly speaking research involving data or tissue may be able to proceed without consent in certain circumstances. These include if: there is an overriding public interest and consent is impracticable; there is a serious public health threat; the participant is not reasonably identifiable; or the research carries low or negligible risk. Many researchers have sought to augment traditional modes of consent at the point of entry to research, to support informed decision-making by potential participants. New consent processes seek to enable truly informed consent rather than doing away with this fundamental requirement.
Traditionally, consent is operationalised as a written document prepared by the researcher setting out the information described above. The participant’s agreement is indicated by their signature and date on the document. Concerns about participant problems with reading and understanding the form have led to initiatives including simplified written materials, extra time and the incorporation of multimedia tools.Footnote 34 More nuanced consent modalities might encompass different tiers of information – with simple, minimally compliant information presented first, linking to more comprehensive explanation – and different staging of information, for instance with new choices being presented to participants at a later time.Footnote 35
Scholars have also considered when and how it might be appropriate to diverge from the notion of the individual human subject as the autonomous decision-maker for health research participation, towards a communitarian approach informed by ethical considerations pertaining to culture and relationships. The concept of informed consent must, in this context, expand to incorporate the possibility of family and community members at least being consulted, perhaps even deciding jointly. Osuji’s work on relational autonomy in informed consent points to decisions ‘made not just in relation to others but with them, that is, involving them: family members, friends, relations, and others’.Footnote 36 This approach might particularly suit some groups, with extensive examples deriving from Australian Aboriginal and other Indigenous communities,Footnote 37 family members with shared genetic heritageFootnote 38 and some Asian and African cultures.Footnote 39 Communitarian-based consent processes may not meet legal requirements for informed consent to research, but may nevertheless be a beneficial adjunct to standard processes in some instances.
10.6 New Digital Consent Mechanisms
The pervasion of technology into all aspects of human endeavour has transformed health research activities and the consent processes which support them. Electronic consent may mean simply transferring the paper form to a computerised version. Internationally, electronic signatures are becoming generally accepted as legally valid in various contexts.Footnote 40 These may comprise typewritten or handwritten signatures on an electronic form, digital representations such as fingerprints or cryptographic signatures. Progress is being made on so-called digital, qualified or advanced electronic signatures which can authenticate the identity of the person signing, as well as the date and location.Footnote 41
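The core promise of such signatures – binding together the content of the consent form, the signer’s identity and the time of signing so that later tampering is detectable – can be illustrated with a minimal sketch. This is an assumption-laden simplification: real qualified or advanced electronic signatures rely on asymmetric cryptography and accredited certificate authorities, whereas the symmetric HMAC used here (with a hypothetical shared key) merely demonstrates the tamper-evidence idea.

```python
import hashlib
import hmac
import json
import time

def sign_consent(form_text: str, signer_id: str, secret_key: bytes) -> dict:
    """Bind the consent form's content, the signer's identity and a
    timestamp into a single record, then seal it with an HMAC."""
    record = {
        "form_hash": hashlib.sha256(form_text.encode()).hexdigest(),
        "signer": signer_id,
        "signed_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(secret_key, payload, "sha256").hexdigest()
    return record

def verify_consent(record: dict, form_text: str, secret_key: bytes) -> bool:
    """Check that neither the signed record nor the form text was altered."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["form_hash"]
                == hashlib.sha256(form_text.encode()).hexdigest())
```

Because the record includes a hash of the form text, any later edit to the wording a participant actually agreed to causes verification to fail – the property that distinguishes a digital signature from a scanned handwritten one.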
Semi-autonomous consent is emerging in computer science; it refers to an approach in which participants record their consent preferences up-front, a computer enacts these preferences in response to requests – for instance, invitations to participate in research – and the participants review the decisions, refine their expressed preferences and provide additional information.Footnote 42 This could be a way to address consent fatigue by freeing participants from the need to make numerous disaggregated consent decisions. It is a promising development at a time when increasing uses of people’s health data for research may overwhelm traditional tick-box consent.
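The decision loop described above – preferences recorded up-front, enacted automatically against incoming requests, and queued for later participant review – can be sketched as follows. All field names and preference categories here are illustrative assumptions, not drawn from any actual semi-autonomous consent system.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentPreferences:
    """A participant's up-front consent preferences (illustrative)."""
    allowed_purposes: set = field(default_factory=set)   # e.g. {"cancer"}
    allow_commercial: bool = False
    allow_cross_border: bool = False
    # Every automated decision is logged for the participant to review,
    # so preferences can be refined over time.
    pending_review: list = field(default_factory=list)

    def evaluate(self, request: dict) -> str:
        """Enact stored preferences against an incoming research request;
        cases the preferences do not clearly cover are referred back to
        the participant rather than decided automatically."""
        if request["purpose"] not in self.allowed_purposes:
            decision = "declined"
        elif request.get("commercial") and not self.allow_commercial:
            decision = "declined"
        elif request.get("cross_border") and not self.allow_cross_border:
            decision = "referred"
        else:
            decision = "granted"
        self.pending_review.append((request["study_id"], decision))
        return decision
```

In use, a participant would periodically inspect `pending_review`, confirm or reverse the automated decisions, and adjust the stored preferences – the review-and-refine half of the loop that keeps the human, not the computer, as the ultimate decision-maker.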
Dynamic Consent is an approach to consent developed to accommodate the changes in the way that medical research is conducted. It is a personalised, digital communication interface that connects researchers and participants, placing participants at the heart of decision-making. The interface facilitates two-way communication to stimulate a more engaged, informed and scientifically-literate participant population where individuals can tailor and manage their own consent preferences.Footnote 43 In this way it meets many of the requirements of informed consent as stipulated in legal instrumentsFootnote 44 but also allows for the complexity of data flows characterising health research and clinical care. The approach has been used in the PEER project,Footnote 45 CHRIS,Footnote 46 the Australian Genomics Health AllianceFootnote 47 and the RUDY project.Footnote 48 It seems appropriate to have digital consent forms for a digital world that allow for greater flexibility and engagement with patients when the uses of data for research purposes cannot be predicted at the time of collection.
The organisation and execution of health research has undergone considerable change due to technological innovations that have escalated in the twenty-first century. Despite this, the requirements of informed consent enshrined in the Nuremberg Code are still the basic standard for health research. These requirements were formulated specifically in response to atrocities that occurred through physical experimentation. They continue to be applied to data-based research that is very different in scope and nature, and in the issues it raises for individuals, from the physically-based research that was the template for the consent requirements found in international instruments. The process for obtaining and recording consent has undergone little change over time and is still recorded through paper-based systems, reliant on one-to-one interactions. While this works well for single projects with a focus on the prevention of physical, rather than informational, harm, it is less suitable when data are used in multiple settings for diverse purposes.
Paper-based systems are neither flexible nor responsive and cannot provide people with the information that is needed in a changing research environment. Digital systems such as Dynamic Consent provide the tools for people to be given information as the research evolves and to be able to change their mind and withdraw their consent. However, given the complexity and scale of research, when data are collected from a number of remote data points it is difficult for consent to effectively respond to all of the issues associated with data-intensive research. Collective datasets that concern communal or public interests are difficult to govern through individual decision-making mechanisms such as consent.Footnote 49
Consent is only one of the many governance mechanisms that should be brought into play to protect people involved in health research. Additionally, attention should be given to the ecosystem of research and informational governance that consists of legal requirements, regulatory bodies and best practice, and that provides the protective framework wrapped around health research. Despite its shortcomings, informed consent remains fundamental to health research, but both its strengths and its limitations must be recognised. More consideration is needed of how to develop better ways to enable the basic requirements of informed consent to be enacted through digital mechanisms that are responsive to the characteristics of data-intensive research. Further research needs to be directed to how the governance of health research should adapt to this new complexity.
1 C. Grady, ‘Enduring and Emerging Challenges of Informed Consent’, (2015) New England Journal of Medicine, 372(9), 855–862.
2 S. Boers et al., ‘Broad Consent Is Consent for Governance’, (2015) American Journal of Bioethics, 15(9), 53–55.
3 T. Beauchamp and J. Childress, Principles of Biomedical Ethics, 4th Edition (Oxford University Press, 1994).
4 For instance: National Health and Medical Research Council [Australia], ‘Ethical Conduct in Research with Aboriginal and Torres Strait Islander Peoples and Communities’, (NHMRC, 2018); L. Jamieson et al., ‘Ten Principles Relevant to Health Research among Indigenous Australian Populations’, (2012) Medical Journal of Australia, 197(1), 16–18.
5 A. Dhai, ‘The Research Ethics Evolution: From Nuremberg to Helsinki’, (2014) South African Medical Journal, 104(3), 178–180.
6 Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10 [Nuremberg Code] (1949) para. 1.
7 World Medical Association, ‘Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects’, (World Medical Association, 1964, 2013 version), para. 26. [hereafter ‘Declaration of Helsinki’]
8 International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), ‘Guideline for Good Clinical Practice’, (ICH, 1996).
9 Council for International Organizations of Medical Sciences, ‘International Ethical Guidelines for Biomedical Research Involving Human Subjects’, (CIOMS, 2002, updated 2016).
10 Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine, Oviedo, 04/04/1997, in force 01/12/1999, ETS No. 164.
11 Additional Protocol to the Convention on Human Rights and Biomedicine, Concerning Biomedical Research, Strasbourg, 21/05/2005, in force 01/09/2007, CETS No. 195, Article 14.1.
12 Beauchamp and Childress, Principles of Biomedical Ethics, p. 135.
13 M. Taylor et al., ‘When Can the Child Speak for Herself?’, (2018) Medical Law Review, 26(3), 369–391.
14 For example, Human Biomedical Research Act 2015, sec. 12 (Singapore); National Health and Medical Research Council, Australian Research Council, and Universities Australia, ‘National Statement on Ethical Conduct in Human Research’, (NHMRC, 2007), ch 2.2. [hereafter ‘NHMRC National Statement’]; Health Research Authority, ‘Consent and Participant Information Guidance’, (HRA) (UK); Federal Policy for the Protection of Human Subjects (‘Common Rule’), 45 CFR part 46, para. 46.114, (1991); The Medicines for Human Use (Clinical Trials) Regulations 2004 No. 1031, Schedule 1 (UK).
15 NHMRC National Statement, chap. 2.3.
16 NHMRC National Statement, para. 5.2.17; Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1 Recital 58.
17 M. Falagas et al., ‘Informed Consent: How Much and What Do Patients Understand?’, (2009) American Journal of Surgery, 198(3), 420–435; On risk-benefit analysis, see also Coleman, Chapter 13 in this volume.
18 For example: A. Synnot et al., ‘Audio-Visual Presentation of Information for Informed Consent for Participation in Clinical Trials’, (2014) Cochrane Database of Systematic Reviews, (5); J. Flory and E. Emanuel, ‘Interventions to Improve Research Participants’ Understanding in Informed Consent for Research: A Systematic Review’, (2004) JAMA, 292(13), 1593–1601.
19 Additional Protocol to the Convention on Human rights and Biomedicine, Article 14.1; ICH, ‘Guideline for Good Clinical Practice’, paras 2.9 and 3.1.8; NHMRC National Statement, para. 2.2.9; General Data Protection Regulation, Article 4(11); Declaration of Helsinki, para. 25.
20 Nuremberg Code, para. 1.
21 NHMRC National Statement, para. 2.2.9.
22 General Data Protection Regulation, Recital 43.
23 Declaration of Helsinki, para. 27.
24 ICH, ‘Guideline for Good Clinical Practice’, para. 3.1.8; NHMRC National Statement, para. 2.2.10.
25 A. Phillips, Buying your Self on the Internet: Wrap Contracts and Personal Genomics (Edinburgh University Press, 2019).
26 ICH, ‘Guideline for Good Clinical Practice’, para. 2.9.
27 L. Johnson and S. Rangaswamy, ‘Use of Deferred Consent for Enrolment in Trials is Fraught with Problems’, (2015) BMJ, 351.
28 NHMRC National Statement, chap. 2.3; N. Songstad et al., on behalf of the HIPSTER trial investigators, ‘Retrospective Consent in a Neonatal Randomized Controlled Trial’, (2018) Pediatrics, 141(1), e20172092, presents an example of deferred consent.
29 Declaration of Helsinki, paras 26, 31.
30 General Data Protection Regulation, Article 7(3).
31 K. Melham et al., ‘The Evolution of Withdrawal: Negotiating Research Relationships in Biobanking’ (2014) Life Sciences, Society and Policy, 10(1), 1–13.
32 Council for International Organizations of Medical Sciences and World Health Organization, ‘International Ethical Guidelines for Epidemiological Studies’, (CIOMS, 2009) p. 48.
33 J. Sugarman et al., ‘Getting Meaningful Informed Consent From Older Adults: A Structured Literature Review of Empirical Research’, (1998) Journal of the American Geriatrics Society, 46(4), 517–524; P. Fortun et al., ‘Recall of Informed Consent Information by Healthy Volunteers in Clinical Trials’, (2008) QJM: An International Journal of Medicine, 101(8), 625–629; R. Broekstra et al., ‘Written Informed Consent in Health Research Is Outdated’, (2017) European Journal of Public Health, 27(2), 194–195; Falagas et al., ‘Informed Consent’; H. Teare et al., ‘Towards “Engagement 2.0”: Insights From a Study of Dynamic Consent with Biobank Participants’, (2015) Digital Health, 1, 1–13.
34 A. Nishimura et al., ‘Improving Understanding in the Research Informed Consent Process’, (2013) BMC Medical Ethics, 14(1), 1–15; Synnot et al., ‘Audio-Visual Presentation’; B. Palmer et al., ‘Effectiveness of Multimedia Aids to Enhance Comprehension of Research Consent Information: A Systematic Review’, (2012) IRB: Ethics & Human Research, 34(6), 1–15; S. McGraw et al., ‘Clarity and Appeal of a Multimedia Informed Consent Tool for Biobanking’, (2012) IRB: Ethics & Human Research, 34(1), 9–19; C. Simon et al., ‘Interactive Multimedia Consent for Biobanking: A Randomized Trial’, (2016) Genetics in Medicine, 18(1), 57–64.
35 E. Bunnik et al., ‘A Tiered-Layered-Staged Model for Informed Consent in Personal Genome Testing’, (2013) European Journal of Human Genetics, 21(6), 596–601.
36 P. Osuji, ‘Relational Autonomy in Informed Consent (RAIC) as an Ethics of Care Approach to the Concept of Informed Consent’, (2017) Medicine, Health Care and Philosophy, 21(1), 101–111, 109.
37 F. Russell et al., ‘A Pilot Study of the Quality of Informed Consent Materials for Aboriginal Participants in Clinical Trials’, (2005) Journal of Medical Ethics, 31(8), 490–494; P. McGrath and E. Phillips, ‘Western Notions of Informed Consent and Indigenous Cultures: Australian Findings at the Interface’, (2008) Journal of Bioethical Inquiry, 5(1), 21–31.
38 J. Minari et al., ‘The Emerging Need for Family-Centric Initiatives for Obtaining Consent in Personal Genome Research’, (2014) Genome Medicine, 6(12), 118.
39 H3Africa Working Group on Ethics, ‘Ethics and Governance Framework for Best Practice in Genomic Research and Biobanking in Africa’, (H3Africa, 2017).
40 United Nations, ‘United Nations Convention on the Use of Electronic Communications in International Contracts’, (UNCITRAL, 2005) Article 9(3); Electronic Transactions Act 1999 (Cth) sec. 8(1); Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (2014); CFR Code of Federal Regulations Title 21 Part 11, (1997) (USA); Electronic Signatures in Global and National Commerce Act 2000, Pub. L. No. 106-229, 114 Stat. 464 (2000) (USA).
41 Health Research Authority and Medicines and Healthcare Products Regulatory Agency, ‘Joint Statement on Seeking Consent by Electronic Means’, (HRA and MHRA, 2018) p. 5.
42 R. Gomer et al., ‘Consenting Agents: Semi-Autonomous Interactions for Ubiquitous Consent’, Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (Seattle, Washington: ACM Press, 2014), pp. 653–58.
43 J. Kaye et al., ‘Dynamic Consent: A Patient Interface for Twenty-First Century Research Networks’, (2015) European Journal of Human Genetics, 23, 141–146.
44 M. Prictor et al., ‘Consent for Data Processing Under the General Data Protection Regulation: Could “Dynamic Consent” be a Useful Tool for Researchers?’, (2019) Journal of Data Protection and Privacy, 3(1), 93–112.
45 Genetic Alliance, ‘Platform for Engaging Everyone Responsibly’, www.geneticalliance.org/programs/biotrust/peer.
46 CHRIS eurac research, ‘Welcome to the CHRIS study!’, (CHRIS), www.de.chris.eurac.edu.
47 Australian Genomics Health Alliance, ‘Introducing CTRL’, www.australiangenomics.org.au/introducing-ctrl-a-new-online-research-consent-and-engagement-platform.
48 H. Teare et al., ‘The RUDY Study: Using Digital Technologies to Enable a Research Partnership’, (2017) European Journal of Human Genetics, 25, 816–822.
49 J. Allen, ‘Group Consent and the Nature of Group Belonging: Genomics, Race and Indigenous Rights’, (2009) Journal of Law, Information and Science, 20(2), 28–59.
Public engagement (PE) is part of the contemporary landscape of health research and innovation and considered a panacea for what is often characterised as a problem of trust in science or scientific research, as well as a way to ward off actual or potential opposition to new developments. This is quite a weight for those engaging in engagement to carry, and all the more so since PE is often underspecified in terms of purpose. PE can mean and involve different things but such flexibility can come at the price of clarity. It may allow productive creativity but can limit PE’s traction.Footnote 1
In this chapter we provide a synthesis of current conceptualisations of PE. We then consider what kinds of publics are ‘engaged with’ and what this means for the kinds of information exchanges and dialogues that are undertaken. Different forms of PE ‘make up’ different kinds of publics: engagements do not, indeed cannot, start with a clean sheet – neither with a pure public nor through a pure engagement.Footnote 2 As Irwin,Footnote 3 among others, has noted, PE is a political exercise and this wider context serves to frame what is engaged about. It is therefore all the more important to reflect on the practice of PE and what it is hoped will be achieved. We argue that clarity and transparency about the intention, practice and impact of PE are required if PE is to provide an authentic and meaningful tool within health research governance.
11.2 Engaging with Critique
PE has been a subject of debate for many years, particularly in the Science and Technology Studies literature, through what is termed critical public understanding of science. From Wynne’sFootnote 4 seminal work onwards, this critique has championed the range of expertise that can come to bear on matters scientific and has provided analytical verve to critiques of the institutional arrangements of both science and PE. Criticisms of top-down models of PE were dominant throughout the 1990s, and the ‘deficit model of public understanding’ was roundly debunked, not least for suggesting that public ignorance of science was a fundamental cause of loss of trust. This critique played an important role in bringing about a new emphasis on two-way processes of PE that went beyond ‘educating the public’.Footnote 5 New commitments to dialogue and engagement – ‘the participatory turn’ – have become more commonplace and mainstream.Footnote 6 However, as Stilgoe and colleagues have commented, the shift from deficit model approaches to dialogic PE has been only partially successful:
It has been relatively easy to make the first part of the argument that monologues should become conversations. It has been harder to convince the institutions of science that the public are not the problem. The rapid move from doing communication to doing dialogue has obscured an unfinished conversation about the broader meaning of this activity.Footnote 7
Herein lie further threats to the integrity of PE.
PE is now a component of much health research where engagement or patient and public involvement is often a funding requirement. This is particularly pronounced in the UK where public understanding and engagement in science has gained increasing institutional traction since the House of Lords report in 2000. For some, the deficit model of public understanding has simply been replaced with a deficit model of public trust, to which ‘more understanding’ and, even, ‘more dialogue’ remain a solution.Footnote 8 So, on the one hand the deficit model of publics in need of education about science lingers on, sometimes under the guise of trust. Yet, on the other, we see PE being taken up across sectors – and there is evidence of PE, sometimes, bringing science and its governance to account.
PE can be productive, as many commentators have posited.Footnote 9 The task for health research governance is to ensure that participatory practices are not skewed towards institutional ends but allow diverse voices into the policy-making process so that they can make a difference to how health research is conducted, regulated and held accountable to the very publics it purports to serve. As Braun and Schultz note:
The question that is increasingly discussed in public understanding of science (PUS) today is not so much whether there is a trend towards participation but what we are to make of it, how to assess it, how to understand the dynamics propelling it, how to systematise and interpret the different forms and trajectories it takes, what the benefits, pitfalls or unintended side-effects of these forms and trajectories are and for whom.Footnote 10
11.3 Forms of Public Engagement
Enthusiasm for, and professed commitment to, PE does not easily translate into meaningful engagement in practice. This is in no small part due to the fact that the term ‘public engagement’ can be interpreted in many different ways and PE is undertaken for a variety of reasons.
Key challenges around PE are that the different ideas about its role and value manifest in a variety of purposes and rationales, whether implicit or explicit. PE can be underpinned by normative, substantive or instrumental rationales.Footnote 11 A normative position suggests that PE should be conducted as it is ‘the right thing to do’ – something that is part and parcel of both public and institutional expectations. An instrumental position regards PE as a means to particular ends. For example, PE might be conducted to secure particular outcomes such as greater public support for a policy or project. Such a position aligns PE closely with institutional aims and objectives: it promotes public support through understanding and addressing public concerns. A substantive position suggests that the goal of PE is to lead to benefits for participants or wider publics: this can include empowering members of the public, enhancing skills or building social capital.Footnote 12 While these varying rationales are not mutually exclusive, they lead to different understandings and expectations regarding the objectives and role of PE, as well as different ideas of what it means for such processes to be ‘successful’.
Rowe and Frewer argue that public involvement ‘as widely understood and imprecisely defined can take many forms, in many different situations (contexts), with many different types of participants, requirements, and aims (and so on), for which different mechanisms may be required to maximize effectiveness (howsoever this is defined)’.Footnote 13 Choosing between different forms requires consideration of purpose and an awareness of the wider context within which engagement is taking place; its effectiveness is more than a matter of method. Academic and practitioner literatures on PE contain many different typologies and classifications of forms of engagement. These often take as their starting point Arnstein’sFootnote 14 ladder of public participation. This sets out eight levels of participation, in the form of a hierarchy of engagement. On the bottom rung of the ladder (non-participation), engagement is viewed instrumentally as an opportunity to educate the public and/or engineer support, a common effort when seeking to fill a knowledge deficit or garner social support for a new development. In the middle of the ladder, tokenistic forms of participation include informing and consulting members of the public, where consultation does not involve a two-way process but rather positions the public as having views and attitudes that might be helpful to seek as part of policy development. Again, this is not an unusual mode of engagement in the context of health research. Arnstein suggested that both of these could be valuable first steps towards participation but that they are limited by the lack of influence that participants have. Consultation is described as a cosmetic ‘window-dressing ritual’ with little impact, although the extent of impact would depend on how the results of any consultation are subsequently used, rather than being intrinsic to the method itself.
The top rungs of the ladder, which move towards empowerment and ownership of process, require redistribution of power to members of the public; while the participatory turn gestures towards such an approach, institutional practices often militate against its enactment.
Arnstein’s model has been adapted by a large number of individuals and organisations in developing alternative classification systems and models. This has resulted in a proliferation of typologies, tool kits and models which can be referred to in designing and/or evaluating PE approaches. Aitken has observed that these models, whilst adopting varying terminology and structures, typically follow common patterns:
Each starts with a ‘bottom’ layer of engagement which is essentially concerned with information provision […] They then have one (or more) layer(s) with limited forms of public feedback into decision-making processes (consultation), and finally they each have a ‘top’ layer with more participatory forms of PE which give greater control to participants.Footnote 15
Forms of engagement classified as ‘awareness raising’ are essentially concerned with the dissemination of information. Where awareness raising is conducted on its own (i.e. where this represents the entirety of a PE approach) this represents a minimal form of PE. It may even be argued that awareness raising on its own – as one-sided and unidirectional information provision – should not be considered PE. Rowe and Frewer note that at this level, ‘information flow is one-way: there is no involvement of the public per se in the sense that public feedback is not required or specifically sought’.Footnote 16 Awareness raising is limited in what it can achieve, but the focus on increasing understanding of particular issues may be a prerequisite for the deliberative approaches discussed below.
Examples of PE activities focussed on awareness raising include campaigns by national public health bodies such as Public Health England’s ‘Value of Vaccines’,Footnote 17 or the creation and dissemination of videos and animations to explain the ways that people’s health data are used in research.Footnote 18
Consultation aims to gather insights into the views, attitudes or knowledge of members of the public in order to inform decisions. It can involve – to varying degrees – two-way flows of information. Wilcox contends that: ‘Consultation is appropriate when you can offer some choices on what you are going to do – but not the opportunity [for the public] to develop their own ideas or participate in putting plans into action’.Footnote 19 Consultation provides the means for public views to be captured and taken into consideration, but does not necessarily mean that these views, or public preferences and/or concerns will be acted on or addressed.
Consultation can be either a one-way or two-way process. In a one-way process, public opinion is sought on pre-defined topics or questions, whereas a two-way process can include opportunities for respondents to reflect on and/or question information provided by those running engagement exercises.Footnote 20 Such two-way processes can ensure the questions asked, and subsequently the responses given, reflect the interests and priorities of those being engaged. It can also facilitate dialogue and ‘deeper’ forms of engagement with the aim of characterising, in all their complexity, public attitudes and perspectives.
It is widely recognised that consultation will be best received and most effective when it is perceived to be meaningful. This means that participants want to know how their views are taken into account and what impact the consultation has had (i.e. how has this informed decision-making). Davidson and colleagues caution that: ‘Consultation can be a valuable mechanism for reflecting public interests, but can also lead to disappointment and frustrations if participants feel that their views are not being taken seriously or that the exercise is used to legitimise decisions that have already been made’.Footnote 21 Again, we see that choice of method is no guarantee of meaningful engagement in terms of influence on the practices of research and its governance.
Approaches taken to consultation include: public consultations where any member of the public is able to submit a written response; surveys and questionnaires with a sample which aims to be representative of the wider population (or key groups within it); and, focus groups, deliberative engagement or community-based participatory methods to engage more deeply with communities to shape both research processes and outcomes.
Approaches to PE that can be classified under the heading of empowerment are those that would be positioned at the top of Arnstein’s ladder of participation. These approaches involve the devolution of power to participants and the creation of benefits for participants and/or wider society. This can be achieved through public-led forms of engagement where public members themselves design the process and determine its objectives, topics of relevance and scope or through partnership approaches.Footnote 22 It might also be achieved through engagement approaches that bring together public members in ways that build relationships and social capital that will continue after the engagement process ends.Footnote 23 Both invited and uninvitedFootnote 24 forms of engagement can involve empowerment, so it is possible to engineer a flattening of hierarchies of knowledge and expertise as well as respond to efforts of publics to come together to define and debate issues of concern.
Empowering forms of engagement can lead to outcomes of increased relevance to communities and that most accurately reflect public interests and values. However, they can also be more expensive than traditional forms of engagement, given that they necessitate more open and flexible timeframes and may require extra skills related to facilitation and negotiation. Certainly, they may confront the more uncomfortable social, political and economic consequences and drivers of health research.
One example of engaging with some of the wider issues raised by health data and research is the dialogue commissioned by the Scottish Government to deliberate about private and third sector involvement in data sharing.Footnote 25
While a hierarchical classification, such as Arnstein’s, serves to highlight the importance of how the public are positioned in different modes of engagement, each broad approach described above can add different value and play important roles in PE. In practice it may be most appropriate for PE to use a range of methods reflecting different rationales and objectives. Rather than conceptualising them hierarchically, it is more helpful to think of these methods as overlapping and often working alongside each other within any PE practice or strategy.
11.4 Types of Publics
PE and involvement professionals, policy documents and critical scholars increasingly refer to ‘publics’ as a way to problematise and differentiate within and between different kinds of public. The adoption of such a term signifies that publics are diverse and that we cannot talk of a homogeneous public. However, beyond that, the term may obscure more than it reveals: what kinds of publics are we talking about when we talk about PE, and how are these related to particular forms of engagement? As Braun and Schultz note ‘“The public,” we argue, is never immediately given but inevitably the outcome of processes of naming and framing, staging, selection and priority setting, attribution, interpellation, categorisation and classification’.Footnote 26 How members of ‘the public’ are recruited is more than a practical matter: the process embodies the assumptions, aims and priorities of those designing the engagement.
On the whole, publics are constructed or ‘come into being’ within PE practices rather than being self-forming. As with types of PE, different categorisations of publics have been developed. Degeling and colleagues highlight three different types: citizens (ordinary people who are unfamiliar with the issues, a kind of pure public); consumers (those with relevant personal experience, a kind of affected public) and advocates (those with technical expertise or partisan interests).Footnote 27 And each of these was linked to different types of PE. Citizens were treated as a resource to increase democratic legitimacy; consumers were directed to focus on personal preferences; advocates were most commonly used as expert witnesses in juries – directly linked to policy processes. However, overall the ‘type’ of public sought was often not explicit, and their role not specified.
Braun and SchultzFootnote 28 elaborate a four-fold distinction: the general public, the pure public, the affected public and the partisan public. Different PE methods serve to construct different kinds of publics. The general public is a construct required for opinion polls and surveys; pure publics for citizen conferences and juries; affected publics for consultative panels; partisan publics for stakeholder consultations. However, as with the different types of PE, in practice there will be overlaps across these dimensions and subject positions will shift as expertise is crafted through the processes of engagement and facilitation.Footnote 29 Different types of expertise are presumed here too: the general public gives policy makers knowledge about people’s attitudes; the pure public creates a ‘mature’ citizen who becomes knowledgeable and can develop sophisticated arguments; affected publics bring expertise to ‘educate’ the expert – very common in health research regulation; and a partisan public may be deliberately configured to elicit viewpoints ‘out there’ in society to assess the ‘landscape of possible argument’.Footnote 30
Types of PE and the categorisation of different publics involve processes of inclusion and exclusion and the legitimacy of PE can easily be challenged because of who participates: some voices may be prioritised over others, and challenges may be made to participants’ expertise. We turn now to a case study of how PE is being enacted in one area of health research to explore how we might deal with these problematics of how and who.
11.5 Public Engagement in Data Intensive Health Research: Principles for an Inclusive Approach
The digitisation of society has led to an explosion of interest in the potential uses of more and more population data in research; this is particularly true in relation to health research.Footnote 31 However, recent years have also brought a number of public controversies, particularly regarding proposed uses of health data. Two high-profile examples from England are the failed introduction of the care.data scheme to link hospital and GP recordsFootnote 32 and Google DeepMind’s involvement in processing health data at an NHS Trust in London.Footnote 33 The introduction of Australia’s National Electronic Health Record Systems (NEHRS) also foundered, demonstrating the importance of taking account of how such programmes reflect, or jar with, public values.Footnote 34 Such controversies have drawn attention to the importance of engaging with members of the public and stakeholders to ensure that data are used in ways which align with public values and interests and to ensure that public concerns are adequately addressed.
The growing interest in potential uses of population data, and the increasing recognition of the importance of ensuring a social licence for their use, have resulted in considerable interest in understanding public attitudes and views on these topics.Footnote 35 With the expansion of research uses of (health) data, public acceptability has attracted growing attention. As Bradwell and Gallagher have suggested, ‘personal information use needs to be far more democratic, open and transparent’ and this means ‘giving people the opportunity to negotiate how others use their personal information in the various and many contexts in which this happens’.Footnote 36 PE is seen as key to the successful gathering and use of health data for research purposes.
As a recent consensus statement on PE in data intensive health research posits, there are particular reasons to promote PE in data intensive health researchFootnote 37 including its scale – here the wider public is an ‘affected’ public and the distance is increased between researchers and those from whom data are gathered, thus requiring a new kind of social licence.Footnote 38 This requires novel thinking about how best to engage publics in shaping acceptable practices and their effects.
As well as recognising diverse practices, aims and effects, and building reflexive critique into PE for health research regulation and governance, we need to articulate some common commitments that can help steer a useful path through this diversity and thereby challenge criticisms of institutional capture and tokenism.Footnote 39 These commitments must include clarity of purpose and transparency, which will help deal with the challenges of multiple but often implicit purposes and goals. Inclusion and accessibility will broaden reach, and two-way communication – dialogue – is a necessary but not sufficient condition for impact. The latter can only be achieved if there is institutional buy-in: a commitment to respond to and utilise PE in governance and research. Given the challenges of assessing whether or not PE is impactful, something we discuss in the conclusion below, PE should be designed with impact in mind and be evaluated throughout. It is clear that one cannot straightforwardly select the right public and the right mechanism and be assured of meaningful and impactful PE. The choices are complicated and inflected with norms and goals that need to be explicitly stated and indeed challenged.
The prominent emphasis on PE in relation to health research can be seen as a reflection of a wider resurgence of interest in PE in diverse policy areas.Footnote 40 For example, Coleman and Gotze have pointed to a widespread commitment to PE, conceived of as a mechanism for addressing problems in democratic societies.Footnote 41 For Wilsdon and Willis, the emphasis on engagement represents a wider pattern whereby the ‘standard response’ of government to public ambivalence or hostility towards technological, social or political innovation is ‘a promise to listen harder’.Footnote 42
PE is not straightforward, and fulfilling the commitments of PE presents challenges and dilemmas in practice. There are many different ways of approaching PE, and these lead to different ideas of what constitutes success. There is no agreed best practice in evaluation; different rationales lead to different approaches to evaluation. Approaches underpinned by normative rationales will evaluate the quality of PE processes (Was it done well?); instrumental rationales lead to a focus on outcomes (Was it useful? Did it achieve the objectives?); and substantive rationales will assess the value added for participants or wider society (Did participants benefit from the process? Were there wider positive impacts?). Evaluation following substantive rationales is typically focussed on longer-term outcomes, compared to evaluation following normative or instrumental rationales. Such longer-term outcomes may be indirect and difficult to quantify or measure.
While the literature on methods of doing PE continues to proliferate, evaluation of PE remains under-theorised and under-reported. The current evidence base is limited, but existing approaches to evaluating PE tend to reflect instrumental rationales and focus on direct outcomes of PE rather than substantive rationales and indirect, less tangible outcomes or impacts.Footnote 43 Wilson and colleaguesFootnote 44 have observed that there is a tendency to focus on ‘good news’ in evaluating PE and that positivist paradigms shaping research projects or programmes can limit the opportunities to fully or adequately evaluate the complexities of PE as a social process.
This is significant as it means that while a variety of rationales and purposes are acknowledged in relation to PE, there is very limited evidence of the extent to which these are realised. This in turn has negative implications for the recognition – and consequently, the institutional support – that PE receives. By providing evidence only of narrow and direct outcomes, instrumental approaches to evaluation obscure the varied and multiple benefits that can result from PE. While ‘the move from “deficit to dialogue” is now recognised and repeated by scientists, funders and policymakers […] for all of the changing currents on the surface, the deeper tidal rhythms of science and its governance remain resistant’.Footnote 45 Despite growing emphasis on dialogue and co-inquiry, simplistic views of the relationship between science and the public persistFootnote 46 and PE is often conducted in instrumental ways which seek to manufacture trust in science rather than foster meaningful dialogue. Greater reflection is required on the question of why publics are engaged rather than how they are engaged.
Finally, in designing, conducting and using PE in health research, we need to be reflective and critical, asking ourselves whether the issues are being narrowly defined and interpreted within existing frameworks (that often focus on privacy and consent). Does this preclude wider discussions of public benefit and the political economy of Big Data research for health? PE can and should improve health research and its regulation by questioning institutional practices and societal norms and using publics’ contributions to help shape solutions.
1 S. Parry et al., ‘Heterogeneous Agendas Around Public Engagement in Stem Cell Research: The Case for Maintaining Plasticity’, (2012) Science and Technology Studies, 12(2), 61–80.
2 K. Braun and S. Schultz, ‘“… A Certain Amount of Engineering Involved”: Constructing the Public in Participatory Governance Arrangements’, (2010) Public Understanding of Science, 19(4), 403–419.
3 A. Irwin, ‘The Politics of Talk: Coming to Terms with the “New” Scientific Governance’, (2006) Social Studies of Science, 36(2), 299–320.
4 B. Wynne, ‘May the Sheep Safely Graze? A Reflexive View of the Expert–Lay Knowledge Divide’ in S. Lash et al. (eds), Risk, Environment and Modernity: Towards a New Ecology (London: Sage, 1996).
5 M. Kurath and P. Gisler, ‘Informing, Involving or Engaging? Science Communication, in the Ages of Atom-, Bio- and Nanotechnology’, (2009) Public Understanding of Science, 18(5), 559–573.
6 Irwin, ‘The Politics of Talk’.
7 J. Stilgoe et al., ‘Why Should We Promote Public Engagement with Science?’, (2014) Public Understanding of Science, 23(1), 4–15, 8.
8 S. Cunningham-Burley, ‘Public Knowledge and Public Trust’, (2006) Community Genetics, 9(3), 204–210; B. Wynne, ‘Public Engagement as a Means of Restoring Public Trust in Science – Hitting the Notes, but Missing the Music?’, (2006) Community Genetics, 9(3), 211–220.
9 A. Irwin and M. Michael, Science, Social Theory and Public Knowledge (Berkshire: Open University Press, 2003).
10 Braun and Schultz, ‘… A Certain Amount of Engineering’, 404.
11 D. J. Fiorino, ‘Citizen Participation and Environmental Risk: A Survey of Institutional Mechanisms’, (1990) Science, Technology, & Human Values, 15(2), 226–243.
12 J. Wilsdon and R. Willis, See-Through Science: Why Public Engagement Needs to Move Upstream (London: Demos, 2004).
13 G. Rowe and L. J. Frewer, ‘Evaluating Public-Participation Exercises: A Research Agenda’, (2004) Science, Technology, and Human Values, 29(4), 512–556, 252.
14 S. R. Arnstein, ‘A Ladder of Citizen Participation’, (1969) Journal of the American Planning Association, 35(4), 216–224.
15 M. Aitken, ‘E-Planning and Public Participation: Addressing or Aggravating the Challenges of Public Participation in Planning?’, (2014) International Journal of E-Planning Research (IJEPR), 3, 38–53, 42.
16 G. Rowe and L. J. Frewer, ‘A Typology of Public Engagement Mechanisms’, (2005) Science, Technology, & Human Values, 30(2), 251–290, 255.
17 Public Health England, ‘Campaign Resource Centre’, (Public Health England), www.campaignresources.phe.gov.uk/resources/campaigns.
18 For example, those produced by Understanding Patient Data, ‘Data Saves Lives Animations’, (Understanding Patient Data), www.understandingpatientdata.org.uk/animations.
19 D. Wilcox, ‘The Guide to Effective Participation’, (Brighton: Partnerships, 1994), 11.
20 Rowe and Frewer, ‘Typology of Public Engagement’.
21 S. Davidson et al., ‘Public Acceptability of Data Sharing between the Public, Private and Third Sectors for Research Purposes’, (2013) Social Research Series (Edinburgh: Scottish Government), 4.30.
22 L. Belone et al., ‘Community-Based Participatory Research Conceptual Model: Community Partner Consultation and Face Validity’, (2016) Qualitative Health Research, 26(1), 117–135.
23 INVOLVE, ‘People and Participation: How to Put Citizens at the Heart of Decision-Making’ (INVOLVE, 2005), www.involve.org.uk/sites/default/files/field/attachemnt/People-and-Participation.pdf.
24 P. Wehling, ‘From Invited to Uninvited Participation (and Back?): Rethinking Civil Society Engagement in Technology Assessment and Development’, (2012) Poiesis & Praxis, 9(1), 43–60.
25 Davidson et al., ‘Public Acceptability’.
26 Braun and Schultz, ‘… A Certain Amount of Engineering’, 406.
27 C. Degeling et al., ‘Which Public and Why Deliberate?—A Scoping Review of Public Deliberation in Public Health and Health Policy Research’, (2015) Social Science & Medicine, 131, 114–121.
28 Braun and Schultz, ‘… A Certain Amount of Engineering’.
29 A. Kerr et al., ‘Shifting Subject Positions: Experts and Lay People in Public Dialogue’, (2007) Social Studies of Science, 37(3), 385–411.
30 Braun and Schultz, ‘… A Certain Amount of Engineering’, 414.
31 K. McGrail et al., ‘A Position Statement on Population Data Science: The Science of Data about People’, (2018) International Journal of Population Data Science, 3(1), 1–11.
32 P. Carter et al., ‘The Social Licence for Research: Why care.data Ran into Trouble’, (2015) Journal of Medical Ethics, 41(5), 404–409.
33 J. Powles and H. Hodson, ‘Google DeepMind and Healthcare in an Age of Algorithms’, (2017) Health and Technology, 7, 351–367.
34 K. Garrety et al., ‘National Electronic Health Records and the Digital Disruption of Moral Orders’, (2014) Social Science & Medicine, 101, 70–77.
35 M. Aitken et al., ‘Public Responses to the Sharing and Linkage of Health Data for Research Purposes: A Systematic Review and Thematic Synthesis of Qualitative Studies’, (2016) BMC Medical Ethics, 17(1), 73; Social Research Institute, ‘The One-Way Mirror: Public Attitudes to Commercial Access to Health Data’, (Wellcome Trust, 2016).
36 P. Bradwell and N. Gallagher, We No Longer Control What Others Know about Us, But We Don’t Yet Understand the Consequences …The New Politics of Personal Information (London: Demos, 2007), pp. 18–19.
37 M. Aitken et al., ‘Consensus Statement on Public Involvement and Engagement with Data Intensive Health Research’, (2019) International Journal of Population Data Science, 4(1), 1–11.
38 Carter et al., ‘The Social Licence for Research’.
39 Aitken et al., ‘Consensus Statement’.
40 M. Pieczka and O. Escobar, ‘Dialogue and Science: Innovation in Policy-Making and the Discourse of Public Engagement in the UK’, (2013) Science and Public Policy, 40(1), 113–126.
41 S. Coleman and J. Gotze, Bowling Together: Online Public Engagement in Policy Deliberation (London: Hansard Society, 2001).
42 Wilsdon and Willis, See-Through Science, p. 16.
43 J. P. Domecq et al., ‘Patient Engagement in Research: A Systematic Review’, (2014) BMC Health Services Research, 14(1), 89.
44 P. Wilson et al., ‘ReseArch with Patient and Public invOlvement: A RealisT evaluation – the RAPPORT study’, (2015) Health Services and Delivery Research, 3(38), 1–9.
45 Stilgoe et al., ‘Why Should We Promote Public Engagement with Science?’, 4.
46 Kurath and Gisler, ‘Informing, Involving or Engaging?’.
This chapter discusses participatory governance as a conceptual framework for engaging patients and members of the public in health research governance, with particular emphasis on deliberative practices. We consider the involvement of patients and members of the public in institutional mechanisms to enhance responsibility and accountability in collective decision-making regarding health research. We illustrate key principles using discussion of precision medicine, as this demonstrates many of the challenges and tensions inherent in developing participatory governance in health research more generally. Precision medicine aims to advance healthcare and health research through the development of treatments that are more precisely targeted to patient characteristics.
Our central argument in this chapter is that patients and broader publics should be recognised as having a legitimate role in health research governance. As such, there need to be institutional mechanisms for patients and publics to be represented among stewards of health research systems, with a role in articulating vision, identifying research priorities, setting ethical standards, and evaluation. We begin by reviewing relevant scholarship on patient and public engagement in health research, particularly in the context of the development and use of Big Data for precision medicine. We then examine conceptualisations of participatory governance and outline stewardship as a key function of governance in a health research system. Thereafter, we propose the involvement of patients and publics as stewards who share leadership and oversight responsibilities in health research, and consider the challenges that may occur, most notably owing to professional resistance. Finally, we discuss the conditions and institutional design elements that enable participatory governance in health research.
12.2 Patient and Public Engagement in Health Research
Beresford identifies two broad approaches that have predominated in public engagement in health and social research since the 1990s.Footnote 1 Consumerist approaches reflect a broad interest in the market and seek consumer feedback to improve products or enhance services; in contrast, democratic approaches are concerned with people having more say in institutions or organisations that have an impact on their lives. Unlike consumerist approaches, democratic approaches are explicit about issues of power, the (re)distribution of power and a commitment to personal and collective empowerment. Well-known examples of democratic approaches include the social movements initiated by people living with disability and HIV/AIDS, where these communities demanded greater inclusion in the development of scientific knowledge and health policy decisions.Footnote 2 Moral and ethical reasons based on democratic notions of patient empowerment and redistribution of power, and consequentialist arguments that patient and public engagement can improve research credibility and social acceptance, are also offered by health researchers.Footnote 3 It should be noted that patient and public engagement does not, in and of itself, constitute an active role for members of the public in health research and policy decision-making. Conceptual models have often highlighted the multiple forms that engagement can take, which vary in the degree to which members of the public are empowered to participate in an active role (see Aitken and Cunningham-Burley, Chapter 11).
In recent years, the potential to link large data sources and harness the breadth and depth of such Big Data has been hailed as bringing ‘a massive transformation’ to healthcare.Footnote 4 Data sources include those collected for health services (e.g. electronic health records), health research (e.g. clinical trials, biobanks, genomic databases), public health (e.g. immunisation registries, vital statistics), and other innovative sources (e.g. social media). Achieving the aims of precision medicine relies on the creation of networks of diverse data sources and scientific disciplines to capture a more holistic understanding of health and disease.Footnote 5 Conducting research using such infrastructure represents a shift from individual and isolated projects to research enterprises that span multiple institutions and jurisdictions. While the challenges of doing patient and public engagement well have been widely recognised, the emergence of precision medicine highlights the stakes and urgency of involving patients and publics in meaningful ways.
Biomedical research initiatives that involve large, networked research infrastructure rely on public support and cooperation. Rhetorical appeals to democratising scientific research, empowerment and public benefits have been employed in government-sponsored initiatives in the USA and UK in attempts to foster public trust and cultivate a sense of collective investment and civic duty to participate, notably to agree to data collection and sharing.Footnote 6 Such appeals have been explicit in the US Precision Medicine Initiative (PMI)Footnote 7 since its inception, whereas they were deployed post hoc in the NHS England care.data programme after public backlash. The failure of care.data illustrates the importance of effective and meaningful public engagement – rather than tokenistic appeals – in securing public trust and confidence in the oversight of large-scale, networked research. Established as a centralised data-sharing system that linked vast amounts of patient data, including electronic health records from general practitioners, care.data was suspended and eventually closed in 2016 after widespread public and professional concerns, including around its ‘opt-out’ consent scheme, transparency, patient confidentiality and privacy, and potential for commercialisation.Footnote 8 See further, Burgess, Chapter 25, this volume.
Research using Big Data raises many unprecedented social, ethical, and legal challenges. Data are often collected without clear indication of their uses in research (e.g. electronic health records) or under vague terms regarding their future research uses (e.g. biobanks). Challenges arise with regard to informed consent for future research that may not yet be conceived; privacy and confidentiality; potential harms from misuse; return of results and incidental findings; and ownership and benefit sharing, which have implications for social justice.Footnote 9 Because cross-border sharing of data raises the challenge of marked differences in regulatory approaches and social norms around privacy, there have been calls for an international comparative analysis of how data privacy laws might have affected biobank practices, and for the development of a global privacy governance framework that could serve as a set of foundational principles.Footnote 10 Arguments have been made that relying on informed consent – which was developed primarily for individual studies – is insufficient to resolve many of the social and ethical challenges in the context of large-scale, networked research; rather, the focus should be on the level of systemic oversight or governance.Footnote 11 Laurie proposes an ‘Ethics+’ governance approach that appraises biobank management in processual terms.Footnote 12 This approach focuses on the dynamics and interactions of stakeholders in deliberative processes towards the management of a biobank, and allows for adaptation to changes in circumstances, ways of thinking, and personnel.
12.3 Participatory Governance in Health Research Systems
The concept of governance has theoretical roots in diverse disciplines and has been used in a variety of ways, with a variety of meanings.Footnote 13 In the health sector, the concept of governance has been informed by a systems perspective, notably the World Health Organization’s framework for health systems.Footnote 14 In their review, Barbazza and Tello claim that: ‘Despite the complexities and multidimensionality inherent to governance, there does however appear to be general consensus that the governance function characterizes a set of processes (customs, policies or laws) that are formally or informally applied to distribute responsibility or accountability among actors of a given [health] system’.Footnote 15 Common values, such as ‘good’ or ‘democratic’, and descriptions of the type of accountability arrangement, such as ‘hierarchical’ or ‘networked’, may be used to denote how governance should be defined. The notion of distributed responsibility or accountability relates to the assertion that governance is about collective decision-making and involves various forms of partnership and self-governing networks of actors.Footnote 16
A systems perspective allows for a more integrated and coordinated view of health research activities that may be highly fragmented, specialised and competitive.Footnote 17 Strengthening the coordination of research activities promotes more effective use of resources and dissemination of scientific knowledge in the advancement of healthcare. The vision of a learning healthcare system, which was first proposed by the US Institute of Medicine (IOM), illustrates a cycle of continuous learning and care improvement that bridges research and clinical practice.Footnote 18 The engagement of patients, their families and other relevant stakeholders is identified as a fundamental element of a learning healthcare system.Footnote 19 Engaging patients as active partners in the cycle is argued to both secure the materials required for research (i.e. data and samples) and enhance patient trust.Footnote 20
Pang and colleagues propose stewardship as a key function within a health research system that has four components: defining a vision for the health research system; identifying research priorities and coordinating adherence to them; setting and monitoring ethical standards; and monitoring and evaluating the system.Footnote 21 Other key functions of a health research system include: financing, which involves securing and allocating research funds accountably; creating and sustaining resources including human and physical capacity; and producing and using research. An important question is therefore how to engage and incorporate the perspectives and values of patients and publics in governance, particularly in terms of stewardship.
Internationally, participatory governance has been explored in multiple reforms in social, economic, and environmental planning and development that varied in design, issue areas and scope.Footnote 22 Fung and Wright use the term ‘empowered participatory governance’ to describe how such reforms are ‘participatory because they rely upon the commitment and capacities of ordinary people to make sensible decisions through reasoned deliberation and empowered because they attempt to tie action to discussion’.Footnote 23 They outline three general principles: (1) a focus on solving practical problems that creates situations for participants to cooperate and build congenial relationships; (2) bottom-up participation, with laypeople being engaged in decision-making while experts facilitate the process by leveraging professional and citizen insights; and (3) deliberative solution generation, wherein participants listen to and consider each other’s positions and offer reasons for their own positions. A similar concept is collaborative governance, which is defined by Ansell and Gash as ‘a governing arrangement where one or more public agencies directly engage non-state stakeholders in a collective decision-making process that is formal, consensus-oriented, and deliberative and that aims to make or implement public policy or manage public programs or assets’.Footnote 24 The criterion of formal collaboration implies established arrangements to engage publics. Participatory governance is advocated to contribute to citizen empowerment, build local communities’ capacity, address the gap in political representation and power distribution, and increase the efficiency and equity of public services. However, successful implementation of participatory governance ideals is ‘a story of mixed outcomes’, with failures still outnumbering successes.Footnote 25
Yishai argues that the health sector has remained impervious to the practice of participatory governance: patients have not had a substantial voice in health policy decisions, even though they may enjoy the power to choose from different health services and providers as consumers.Footnote 26 Professional resistance to non-expert views and marginalisation of public interests by commercial interests are cited as some of the reasons for the limited involvement of patients. Similarly, there are concerns that public voices are not given the same weight as those of professionals in health research decision-making. Tokenism, engaging patients as merely a ‘tick-box exercise’ – for funding or regulatory requirements – and devaluing patient input in comparison to expert input are common concerns.Footnote 27 Furthermore, most engagement efforts are limited to preliminary activities and not sustained across the research cycle; the vast majority of biomedical research initiatives do not engage publics beyond informed consent for data collection and sharing.Footnote 28
Deliberative practices, such as community advisory boards and citizens’ forums, have been suggested as mechanisms to allow public input in the governance of research with Big Data.Footnote 29 Public deliberation has been used to engage diverse members of the public to explore, discuss and reach collective decisions regarding the institutional practices and governance of biobanks, and the use and sharing of linked data for research.Footnote 30 However, in many instances, public input is limited to the point in time at which the deliberative forum is convened. One example of ongoing input is provided by the Mayo Clinic Biobank deliberation, which was used as a seeding mechanism for the establishment of a standing Community Advisory Board. To address the challenge of moving from one-time input to ongoing, institutionalised public engagement, O’Doherty and colleagues propose four principles to guide adaptive biobank governance: (1) recognition of participants as a collective body, as opposed to just an aggregation of individuals; (2) trustworthiness of the biobank, with a reflexive focus of biobank leaders and managers on its practices and governance arrangements, as opposed to a focus on the trust of participants divorced from considerations of how such trust is earned; (3) adaptive management that is capable of drawing on appropriate public input for decisions that substantively affect collective patient or public expectations and relationships; and (4) fit between the particular biobank and specific structural elements of governance that are implemented.Footnote 31
There are also examples of multi-agency research networks that engage patients or research participants in governance. For instance, the Patient-Centered Outcomes Research Institute (PCORI) in the USA established multiple patient-powered research networks, each focusing on a particular health condition (www.pcori.org). In the UK, the Managing Ethico-social, Technical and Administrative issues in Data ACcess (METADAC) project was established as a multi-study governance infrastructure to provide ethics and policy oversight of data and sample access for multiple major population cohort studies. Murtagh and colleagues identify three key structural features: (1) independence and transparency, with an independent governing body that promotes fair, consistent and transparent practices; (2) interdisciplinarity, with the METADAC Access Committee comprising individuals with social, biomedical, ethical, legal and clinical expertise, and individuals with personal experience of participating in cohort studies; and (3) patient-centred decision-making, which means respecting study participants’ expectations, involving them in decision-making roles and communicating in a format that is clear and accessible.Footnote 32
12.4 Enabling Conditions and Institutional Designs
Fung and Wright propose that an enabling condition to facilitate participatory governance is ‘a rough equality of power, for the purposes of deliberative decision-making, between participants’.Footnote 33 Nonetheless, power and resource imbalances are a common problem in many cases of patient and public engagement. Patients and publics bring different forms of knowledge that could be seen as challenging traditional scientific knowledge production and the legitimacy of professional skills and knowledge. Such knowledge could be constructed positively by researchers, but it could also be constructed in ways that question its validity compared to professional/academic knowledge.Footnote 34 Furthermore, patients and publics may not always be capable of articulating their needs as researchable questions, which limits the uptake of their ideas in research prioritisation, or a perceived mismatch may lead to resistance from researchers to act upon priorities identified by patients and publics.Footnote 35
Articulating a vision for advancing patient and public engagement in a health research system is important, whether it is at an organisational or broader level.Footnote 36 We further propose recognition of patients and publics as having legitimate representation as stewards or governors, with a role in articulating vision, identifying research priorities, setting ethical standards, and evaluation. Moreover, we suggest that formal arrangements are required to enable patients and publics in their role as stewards and governors within institutional architecture. A range of innovative mechanisms have been explored and implemented. For instance, ArthritisPower, which is a patient-powered research network within PCORI, established a governance structure in which patients have representation and overlapping membership across the Executive Board, Patient Governor Group and Research Advisory Board. Clear communication of expectations, provision of well-prepared tools for engagement (e.g. work groups organised around particular tasks or topics, online platform for patient governors to connect) and regular assessments of patient governors’ viewpoints are found to be necessary to support and build patients’ capacity within a multi-stakeholder governance structure.Footnote 37
It should be recognised that members of the public vary in their capacity to participate, deliberate and influence decision-making. Those who are advantaged in terms of education, wealth or membership in dominant racial/ethnic groups often participate more frequently and effectively in deliberative decision-making.Footnote 38 Power and resource imbalances can result in the problem of co-optation, whereby stronger stakeholders are able to generate support for their own agendas. The lack of representation of certain groups – e.g. youth, Indigenous, Black and ethnic minority groups – has been noted in many efforts of patient and public engagement in health research,Footnote 39 which reflects structural barriers and/or historical discrimination and mistrust due to past ethical violations. This raises the challenge of how to promote and support inclusion and equity in decision-making. It also serves as a valuable counterpoint on power dynamics, as discussed by Brassington, Chapter 9, this volume.
There are also concerns that patients may become less able to represent broader patient perspectives as they become more trained and educated in research and more involved in the governance of research activities. For instance, Epstein documented the use of ‘credibility tactics’, such as the acquisition of the language of biomedical science by HIV/AIDS activists seeking acceptance in the scientific community. Similarly, among patient and caregiver participants in cancer research settings in England, Thompson and colleagues identified the emergence of professionalised lay experts who demonstrated considerable support for dominant scientific paradigms and privileged professional or certified forms of expertise.Footnote 40 To guard against this, the governance structure of ArthritisPower maintains a mix of veteran and new members by limiting patient governors’ memberships to three years.Footnote 41
Fung and Wright outline three institutional design elements that are necessary for participatory governance: (1) devolution of decision-making power to local units that are charged and held accountable with implementing solutions; (2) centralised supervision and coordination to connect the local units, coordinate and distribute resources, reinforce quality of local decision-making, and diffuse learning and innovation; and (3) transformation of formal governance procedures to institutionalise the ongoing participation of laypeople.Footnote 42 At a national level, devolution of power implies that the state solicits local units, such as community organisations and local councils, to devise and implement solutions. Members of the public are engaged at a local level through these organisations as stakeholders who are affected by the targeted problems. Within a health research system, network or organisation, patients and publics may serve on advisory boards and committees as members within a multi-stakeholder governance structure.
In this section, we discuss factors that may facilitate or impede the participation of patients and publics in the governance structures of health research systems, networks or organisations. It is important to consider multilevel engagement strategies for matching participation opportunities to varying interests, capacities and goals of patients and publics.Footnote 43 These strategies may range from patients and publics having one-time input into a targeted issue, to serving in leadership roles as members of a research team or governing body. Involving patients and publics in governance structures in an ongoing manner requires relationship building over much longer periods of time.
Clarity of roles and purposes of patient and public engagement is needed for relationship building, as well as for developing and maintaining trust. Participatory forms of governance are more feasible when stakeholders have opportunities to identify mutual gains in collaboration. However, pre-existing relationships can discourage stakeholders from seeing the value of collaboration. In health research that spans multiple sites, approaches and willingness to engage patients and publics may differ considerably across the participating sites.Footnote 44 Some sites may consider establishing new relationships with patients as partners too risky, fearing that doing so could jeopardise existing relationships.
Additionally, engagement activities that focus on ‘patients’, ‘citizens’ or ‘members of a community’ may each carry different sets of assumptions. Patients often have a personal connection to the health issue in question, whereas community members are selected to represent a collective experience and perspective. In national biomedical research initiatives, engagement as ‘citizens’ may lead to the exclusion of certain groups, such as advocacy groups and charities, from governing committees to avoid ‘special interests’.Footnote 45 While people may be able to navigate and draw on different aspects of their lives to inform research and policy, further exploration is needed to understand the common and distinctive aspects between different types of roles that people occupy.Footnote 46 In any case, clarity regarding roles and responsibilities, and transparency in the aims of engagement, are necessary for relationship and trust building.
Fung and Wright assert that centralised supervision and coordination is needed to stabilise and deepen the practice of participatory governance among local units.Footnote 47 At a national level, centralised coordination is a component of leadership capacity to ensure accountability, distribute resources, and facilitate communication and information sharing across local units. According to Ansell and Gash, facilitative leadership is important for bringing together stakeholders, promoting the representation of disadvantaged groups, and facilitating dialogue and trust-building in the collaborative processes.Footnote 48 Trust-building requires commitment and mutual recognition of interdependence, shared understanding of the problem in question and common values, and face-to-face dialogue. Senior leadership and supportive policy and infrastructure are recognised as building blocks for embedding patient and public engagement in a health research system.Footnote 49
In this chapter, we have discussed the potential and the challenges of involving patients and publics as stewards or governors of health research, whether within a broad health system, a research network, or a specific organisation. We have also outlined some of the conditions and institutional design elements that may impede or facilitate the engagement of patients and publics in governance structures, focusing on issues of power/resource imbalances, representativeness, relationships, trust and leadership support. Some conditions and institutional design elements are necessary for the implementation of participatory governance, but our discussion is not intended to be comprehensive or prescriptive. In particular, we are not proposing a specific governance structure or body as an ideal. Governance structures can vary in their purposes and constituencies. With rapid scientific advances and potential for unanticipated ethical and social issues, a multi-stakeholder governance structure needs to contain an element of reflexivity and adaptivity to evolve in ways that are respectful of diverse needs and interests while responding to changes. Moreover, the literature on patient and public engagement has documented the need for rigorous evaluation of the impact of engagement on healthcare and health research, especially given the problems of inconsistent terminology and lack of validated frameworks and tools to evaluate patient and public engagement.Footnote 50 Stronger evidence of the impact and outcomes, both intended and unintended, of patient and public engagement may help normalise the role of patients and publics as partners in health research regulation.
1 P. Beresford, ‘User Involvement in Research and Evaluation: Liberation or Regulation?’, (2002) Social Policy & Society, 1(2), 95–105.
2 C. Barnes, ‘What a Difference a Decade Makes: Reflections on Doing ‘Emancipatory’ Disability Research’, (2003) Disability & Society, 18(1), 3–17; S. Epstein, ‘The Construction of Lay Expertise: AIDS Activism and the Forging of Credibility in the Reform of Clinical Trials’, (1995) Science, Technology, & Human Values, 20(4), 408–437.
3 J. Thompson et al., ‘Health Researchers’ Attitudes towards Public Involvement in Health Research’, (2009) Health Expectations, 12(2), 209–220.
4 E. Vayena and A. Blasimme, ‘Health Research with Big Data: Time for Systemic Oversight’, (2018) Journal of Law, Medicine & Ethics, 46(1), 119–129.
5 Ibid., 120.
6 J. P. Woolley et al., ‘Citizen Science or Scientific Citizenship? Disentangling the Uses of Public Engagement Rhetoric in National Research Initiatives’, (2016) BMC Medical Ethics, 17(33), 1–17.
7 The US PMI was launched in 2015 with the aim of advancing precision medicine in health and healthcare. A cornerstone of the initiative is the All of Us Research Program, a longitudinal project aiming to enrol 1 million volunteers to contribute their genetic data, biospecimens and other health data to a centralised national database. ‘National Institutes of Health’, www.allofus.nih.gov/.
8 S. Sterckx et al., ‘“You Hoped We Would Sleep Walk into Accepting the Collection of Our Data”: Controversies Surrounding the UK care.data Scheme and Their Wider Relevance for Biomedical Research’, (2016) Medicine, Health Care, and Philosophy, 19(2), 177–190.
9 W. Burke et al., ‘Informed Consent in Translational Genomics: Insufficient without Trustworthy Governance’, (2018) Journal of Law, Medicine & Ethics, 46(1), 79–86; A. Cambon-Thomsen et al., ‘Trends in the Ethical and Legal Frameworks for the Use of Human Biobanks’, (2007) European Respiratory Journal, 30(2), 373–382; E. Wright Clayton and A. L. McGuire, ‘The Legal Risks of Returning Results of Genomic Research’, (2012) Genetics in Medicine, 14(4), 473–477.
10 E. S. Dove, ‘Biobanks, Data Sharing, and the Drive for a Global Privacy Governance Framework’, (2015) Journal of Law, Medicine & Ethics, 43(4), 675–689.
11 Burke et al., ‘Informed Consent’, 83–85; K. C. O’Doherty et al., ‘From Consent to Institutions: Designing Adaptive Governance for Genomic Biobanks’, (2011) Social Science & Medicine, 73(3), 367–374; Vayena and Blasimme, ‘Health Research with Big Data’, 123–127.
12 G. Laurie, ‘What Does It Mean to Take an Ethics+ Approach to Global Biobank Governance?’, (2017) Asian Bioethics Review, 9(4), 285–300.
13 G. Stoker, ‘Governance as Theory: Five Propositions’, (1998) International Social Science Journal, 50(155), 17–28.
14 E. Barbazza and J. E. Tello, ‘A Review of Health Governance: Definitions, Dimensions and Tools to Govern’, (2014) Health Policy, 116(1), 1–11; F. A. Miller et al., ‘Public Involvement in Health Research Systems: A Governance Framework’, (2018) Health Research Policy and Systems, 16(1), 1–15.
15 Barbazza and Tello, ‘Health Governance’, 3.
16 Stoker, ‘Governance as Theory’, 21–24.
17 T. Pang et al., ‘Knowledge for Better Health – A Conceptual Framework and Foundation for Health Research Systems’, (2003) Bulletin of the World Health Organization, 81(11), 815–820.
18 Institute of Medicine, Best Care at Lower Cost: The Path to Continuously Learning Health Care in America (Washington, DC: National Academies Press, 2013).
19 K. H. Chuong et al., ‘Human Microbiome and Learning Healthcare Systems: Integrating Research and Precision Medicine for Inflammatory Bowel Disease’, (2018) OMICS: A Journal of Integrative Biology, 22(2), 119–126; S. M. Greene et al., ‘Implementing the Learning Health System: From Concept to Action’, (2012) Annals of Internal Medicine, 157(3), 207–210; W. Psek et al., ‘Operationalizing the Learning Health Care System in an Integrated Delivery System’, (2015) eGEMs, 3(1), 1–11.
20 Psek et al., ‘Learning Health Care System’.
21 Pang et al., ‘Health Research Systems’, 816–818.
22 A. Fung and E. O. Wright (eds), Deepening Democracy: Institutional Innovations in Empowered Participatory Governance (New York, NY: Verso, 2003).
23 Ibid., p. 5.
24 C. Ansell and A. Gash, ‘Collaborative Governance in Theory and Practice’, (2008) Journal of Public Administration Research and Theory, 18(4), 543–571, 544.
25 F. Fischer, ‘Participatory Governance: From Theory to Practice’ in D. Levi-Faur (ed.), The Oxford Handbook of Governance (New York, NY: Oxford University Press, 2012), pp. 458–471.
26 Y. Yishai, ‘Participatory Governance in Public Health: Choice, but No Voice’ in D. Levi-Faur (ed.), The Oxford Handbook of Governance (New York, NY: Oxford University Press, 2012), pp. 527–539.
27 J. P. Domecq et al., ‘Patient Engagement in Research: A Systematic Review’, (2014) Health Services Research, 14(89), 1–9; G. Green, ‘Power to the People: To What Extent has Public Involvement in Applied Health Research Achieved This?’, (2016) Research Involvement and Engagement, 2(28), 1–13; P. R. Ward et al., ‘Critical Perspectives on ‘Consumer Involvement’ in Health Research: Epistemological Dissonance and the Know-Do Gap’, (2009) Journal of Sociology, 46(1), 63–82.
28 E. Manafo et al., ‘Patient Engagement in Canada: A Scoping Review of the ‘How’ and ‘What’ of Patient Engagement in Health Research’, (2018) Health Research Policy and Systems, 16(1), 1–11; Woolley et al., ‘Citizen Science’, 5.
29 Burke et al., ‘Translational Genomics’, 84; Vayena and Blasimme, ‘Health Research with Big Data’, 125.
30 S. M. Dry et al., ‘Community Recommendations on Biobank Governance: Results from a Deliberative Community Engagement in California’, (2017) PLoS ONE, 12(2), e0172582; K. C. O’Doherty et al., ‘Involving Citizens in the Ethics of Biobank Research: Informing Institutional Policy through Structured Public Deliberation’, (2012) Social Science & Medicine, 75(9), 1604–1611; J. E. Olson et al., ‘The Mayo Clinic Biobank: A Building Block for Individualized Medicine’, (2013) Mayo Clinic Proceedings, 88(9), 952–962; J. Teng et al., ‘Sharing Linked Data Sets for Research: Results from a Deliberative Public Engagement Event in British Columbia, Canada’, (2019) International Journal of Population Data Science, 4(1), 13.
31 O’Doherty et al., ‘Adaptive Governance’, 368.
32 M. J. Murtagh et al., ‘Better Governance, Better Access: Practising Responsible Data Sharing in the METADAC Governance Infrastructure’, (2018) Human Genomics, 12(1), 1–12.
33 Fung and Wright, Deepening Democracy, p. 24.
34 Thompson et al., ‘Health Researchers’ Attitudes’; Ward et al., ‘Critical Perspectives’.
35 F. A. Miller et al., ‘Public Involvement and Health Research System Governance: Qualitative Study’, (2018) Health Research Policy and Systems, 16(1), 1–15.
36 Miller et al., ‘Health Research Systems’, 4–5.
37 W. B. Nowell et al., ‘Patient Governance in a Patient-Powered Research Network for Adult Rheumatologic Conditions’, (2018) Medical Care, 56(10 Suppl 1), S16–S21.
38 Fung and Wright, Deepening Democracy, p. 34.
39 Miller et al., ‘Health Research System Governance’, 7; Green, ‘Power to the People’, 10.
40 Epstein, ‘The Construction of Lay Expertise’, 417–426; J. Thompson et al., ‘Credibility and the ‘Professionalized’ Lay Expert: Reflections on the Dilemmas and Opportunities of Public Involvement in Health Research’, (2012) Health, 16(6), 602–618.
41 Nowell et al., ‘Patient Governance’, S21.
42 Fung and Wright, Deepening Democracy, pp. 20–24.
43 For an example, see A. P. Boyer et al., ‘Multilevel Approach to Stakeholder Engagement in the Formulation of a Clinical Data Research Network’, (2018) Medical Care, 56(10 Suppl 1), S22–S26.
44 K. S. Kimminau et al., ‘Patient vs. Community Engagement: Emerging Issues’, (2018) Medical Care, 56(10 Suppl 1), S53–S57.
45 Woolley et al., ‘Citizen Science or Scientific Citizenship’, 11.
46 See Kimminau et al., ‘Patient vs. Community Engagement’, for a comparison of the two.
47 Fung and Wright, Deepening Democracy, pp. 21–22.
48 Ansell and Gash, ‘Collaborative Governance’, 554–555.
49 Miller et al., ‘Health Research System Governance’, 6–7.
50 Manafo et al., ‘Patient Engagement’, 4–7. Also, Aitken and Cunningham-Burley, Chapter 11, this volume.
This chapter explores the concept of risk-benefit analysis in health research regulation, as well as ethical and practical questions raised by identifying, quantifying, and weighing risks and benefits. It argues that the pursuit of objectivity in risk-benefit analysis is ultimately futile, as the very concepts of risk and benefit depend on attitudes and preferences about which reasonable people disagree. Building on the work of previous authors, the discussion draws on contemporary examples to show how entities reviewing proposed research can improve the process of risk-benefit assessment by incorporating diverse perspectives into their decision-making and engaging in a systematic analytical approach.
The term ‘risk’ refers to the possibility of experiencing a harm. The concept incorporates two different dimensions: (1) the magnitude or severity of the potential harm; and (2) the likelihood that this harm will occur. The significance of a risk depends on the interaction of these two considerations. Thus, a low chance of a serious harm, such as death, would be considered significant, as would a high chance of a lesser harm, such as temporary pain.
In the context of research, the assessment of risk focuses on the additional risks participants will experience as a result of participating in a study, which will often be less than the total level of risks to which participants are exposed. For example, a study might involve the administration of various standard-of-care procedures, such as biopsies or CT scans. If the participants would have received these same procedures even if they were not participating in the study, the risks of those interventions would not be taken into account in the risk-benefit analysis. As a result, it is possible that a study comparing two interventions that are routinely used in clinical practice could be considered low risk, even if the interventions themselves are associated with a significant potential for harm. This is the case with a significant proportion of research conducted in ‘learning health systems’, which seek to integrate research into the delivery of healthcare. Because many of the research activities in such systems involve the evaluation of interventions patients would be undergoing anyway, the risks of the research are often minimal, even when the risks of the interventions themselves may be high.Footnote 1
The risks associated with health-related research are not limited to potential physical injuries. For example, in some studies, participants may be asked to engage in discussions of emotionally sensitive topics, such as a history of previous trauma. Such discussions entail a risk of psychological distress. In other studies, a primary risk is the potential for unauthorised disclosure of sensitive personal information, such as information about criminal activity or stigmatised conditions such as HIV or mental disorders. If such disclosures occur, participants could suffer adverse social, legal, or economic consequences.
Research-related risks can extend beyond the individuals participating in a study. For example, studies of novel interventions for preventing or treating infectious diseases could affect the likelihood that participants will transmit the disease to third parties.Footnote 2 Similarly, studies in which psychiatric patients are taken off their medications could increase the risk that participants will engage in violent behaviour.Footnote 3 Third-party risks are an inherent feature of research on genetic characteristics, given that information about individuals’ genomes necessarily has implications for their blood relatives.Footnote 4 Thus, if a genetic study results in the discovery that a participant is genetically predisposed to a serious disease, other persons who did not consent to participate in the study might be confronted with distressing, and potentially stigmatising, information that they never wanted to know.
In some cases, third-party risks extend beyond individuals to broader social groups. As the Council for International Organizations of Medical Sciences (CIOMS) has recognised, research on particular racial or ethnic groups ‘could indicate – rightly or wrongly – that a group has a higher than average prevalence of alcoholism, mental illness or sexually transmitted disease, or that it is particularly susceptible to certain genetic disorders’,Footnote 5 thereby exposing the group to potential stigma or discrimination. One example was a study in which researchers took blood samples from members of the Havasupai tribe in an effort to identify a genetic link to type 2 diabetes. After the study was completed, the researchers used the blood samples for a variety of unrelated studies without the tribe members’ informed consent, including research related to schizophrenia, inbreeding and migration patterns. Tribe members claimed that the schizophrenia and inbreeding studies were stigmatising, and that they never would have agreed to participate in the migration research because it conflicted with the tribe’s origin story, which maintained that the tribe had originated in the Grand Canyon. The researchers’ institution reached a settlement with the tribe that included monetary compensation and a formal apology.Footnote 6
Despite the prevalence of third-party risks in research, most ethics codes and regulations do not mention risks to anyone other than research participants. This omission is striking given that some of these same sources explicitly state that benefits to non-participants should be factored into the risk-benefit analysis. A notable exception is the EU Clinical Trials Regulation, which states that the anticipated benefits of the study must be justified by ‘the foreseeable risks and inconveniences’,Footnote 7 without specifying that those risks and inconveniences must be experienced by the participants themselves.
In addition to omitting any reference to third-party risks, the US Federal Regulations on Research With Human Participants state that entities reviewing proposed research ‘should not consider possible long-range effects of applying knowledge gained in the research (e.g. the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility’.Footnote 8 This provision is intended ‘to prevent scientifically valuable research from being stifled because of how sensitive or controversial findings might be used at a social level’.Footnote 9
The primary potential benefit of research is the production of generalisable knowledge – i.e. knowledge that has relevance beyond the specific individuals participating in the study. For example, in a clinical trial of an investigational drug, data sufficient to establish the drug’s safety and efficacy would be a benefit of research. Data showing that an intervention is not safe or effective – or that it is inferior to the existing standard of care – would also count as a benefit of research, as such knowledge can protect future patients from potentially harmful and/or ineffective treatments they might otherwise undergo.
Whether a study has the potential to produce generalisable knowledge depends in part on how it is designed. The randomised controlled clinical trial (RCT) is often described as the ‘gold standard’ of research, as it includes methodological features designed to eliminate bias and control for potential confounding variables.Footnote 10 However, in some types of research, conducting an RCT may not be a realistic option. For example, if researchers want to understand the impact of different lifestyle factors on health, it might not be feasible to randomly assign participants to engage in different behaviours, particularly over a long period of time.Footnote 11 In addition, ethical considerations may sometimes preclude the use of RCTs. For example, researchers investigating the impact of smoking on health could not ethically conduct a study in which non-smokers are asked to take up smoking.Footnote 12 In these situations, alternative study designs may be used, such as cohort or case-control studies. These alternative designs can provide valuable scientific information, but the results may be prone to various biases, a factor that should be considered in assessing the potential benefits of the research.Footnote 13
A recent example of ethical challenges to RCTs arose during the Ebola outbreak of 2013–2016, when the international relief organisation Médecins Sans Frontières refused to participate in any RCTs of experimental Ebola treatments. The group argued that it would be unethical to withhold the experimental interventions from persons in a control group when ‘conventional care offers little benefit and mortality is extremely high’.Footnote 14 The difficulty with this argument was that, in the context of a rapidly evolving epidemic, the results of studies conducted without concurrent control groups would be difficult to interpret, meaning that an ineffective or even harmful intervention could erroneously be deemed effective. Some deviations from the ‘methodologically ideal approach’, such as the use of adaptive trial designs, could have been justified by the need ‘to accommodate the expectations of participants and to promote community trust’.Footnote 15 However, any alternative methodologies would need to offer a reasonable likelihood of producing scientifically valid information, or else it would not have been ethical to expose participants to any risk at all.
The potential benefit of scientific knowledge also depends on the size of a study, as studies with very small sample sizes may lack sufficient statistical power to produce reliable information. Some commentators maintain that underpowered studies lack any potential benefit, making them inherently unethical.Footnote 16 Others point out that small studies might be unavoidable in certain situations, such as research on rare diseases, and that their results can still be useful, particularly when they are aggregated using Bayesian techniques.Footnote 17
Often, choices about study design require trade-offs between internal and external validity. An RCT with tightly controlled inclusion and exclusion criteria is the most reliable way to establish whether an experimental intervention is causally linked to an observable result, giving it a high level of internal validity. However, if the study population does not reflect the diversity of patients in the real world, the results might have little relevance to clinical practice, leaving the study with a low level of external validity.Footnote 18 In assessing the potential benefits of a study, decision-makers should take both of these considerations into account.
In addition to the potential benefit of generalisable knowledge, some research also offers potential benefits to the individuals participating in the study. Benefits to study participants can be divided into ‘direct’ and ‘indirect’ (or ‘collateral’) benefits.Footnote 19 Direct benefits refer to those that result directly from the interventions being studied, such as an improvement in symptoms that results from taking an investigational drug. In some studies, there is no realistic possibility that participants will directly benefit from the study interventions; this would be the case in a Phase I drug study involving healthy volunteers, where the purpose is simply to identify the highest dose humans can tolerate without serious side effects. Indirect benefits include those that result from ancillary features of the study, such as access to free health screenings, as well as the psychological benefits that some participants receive from engaging in altruistic activities. Study participants may also consider any payments or other remuneration they receive in exchange for their participation as a type of research-related benefit.
Most commentators take the position that only potential direct benefits to participants and potential contributions to generalisable knowledge should be factored into the risk-benefit analysis. The concern is that, otherwise, ‘simply increasing payment or adding more unrelated services could make the benefits outweigh even the riskiest research’.Footnote 20 Other commentators reject this position on the ground that it is not consistent with the ethical imperative to respect participants’ autonomy, and that it could preclude studies that would advance the interests of participants, investigators, and society.Footnote 21 The US Food and Drug Administration has stated that payments to participants should not be considered in the context of risk-benefit assessment,Footnote 22 but it has not taken a position on consideration of other indirect benefits, such as access to free health screenings.
13.4 Quantifying Risks and Benefits
Once the risks and benefits of a proposed study have been identified, the next step is to quantify them. Doing this is complicated by the fact that the significance of a particular risk or benefit is highly subjective. For example, a common risk in health-related research is the potential for unauthorised disclosure of participants’ medical records. This risk could be very troubling to individuals who place a high degree of value on personal privacy, but for persons who share intimate information freely, the risk of unauthorised disclosure might be a minor concern. In fact, in some studies, the same experience might be perceived by some participants as a harm and by others as a benefit. For example, in a study in which participants are asked to discuss prior traumatic experiences, some participants might experience psychological distress, while others might welcome the opportunity to process past experiences with a sympathetic listener.Footnote 23
In addition to differing attitudes about the potential outcomes of research, individuals differ in their perceptions about risk-taking itself. Many people are risk averse, meaning that they would prefer to forego a higher potential benefit if it enables them to reduce the potential for harm. Others are risk neutral, or even risk preferring. Similarly, individuals exhibit different levels of willingness to trade harmful outcomes for good ones.Footnote 24 For example, some people are willing to tolerate medical treatments with significant side effects, such as chemotherapy, because they place greater value on the potential therapeutic benefits. Others place greater weight on avoiding pain or discomfort and would be disinclined to accept high-risk interventions even when the potential benefits are substantial.
Another challenge in attempting to quantify risks and benefits is that the way that risks and benefits are perceived can be influenced by a variety of cognitive biases. For example, one study asked subjects to imagine that they had lung cancer and had to decide between surgery and radiation. One group was told that 68 per cent of surgical patients survived after one year, while a second group was told that 32 per cent of surgical patients died after one year. Even though the information being conveyed was identical, framing the information in terms of a risk of death increased the number of subjects who chose radiation from 18 per cent to 44 per cent.Footnote 25 Another common cognitive bias is the ‘availability heuristic’, which leads people to attach greater weight to information that is readily called to mind.Footnote 26 For example, if a well-known celebrity recently died after being implanted with a pacemaker, the risk of pacemaker-related deaths may be perceived as greater than it actually is.
Individuals’ perceptions of risks and benefits can also be influenced by their level of social trust, which has been defined as ‘the willingness to rely on those who have the responsibility for making decisions and taking actions related to the management of technology, the environment, medicine, or other realms of public health and safety’.Footnote 27 In particular, research suggests that, when individuals are considering the risks and benefits of new technologies, their level of social trust has ‘a positive influence on perceived benefits and a negative influence on perceived risks’.Footnote 28 This is not surprising: those who trust that decision-makers will act in their best interests are less likely to be fearful of changes, while those who lack such trust are more likely to be worried about the potential for harm (see Aitken and Cunningham-Burley, Chapter 11, in this volume).
Compounding these subjective variables is the fact that risk-benefit analysis typically takes place against a backdrop of scientific uncertainty. This is true for all risk-benefit assessments, but it is especially pronounced in research, as the very reason research is conducted is to fill an evidentiary gap. While evaluators can sometimes rely on prior research, including animal studies, to identify the potential harms and benefits of proposed studies, most health-related research takes place in highly controlled environments, over short periods of time. As a result, prior research results are unlikely to provide much information about rare safety risks, long-term dangers or harms and benefits that are limited to discrete population subgroups.
13.5 Weighing Risks and Benefits
Those responsible for reviewing proposed research must ultimately weigh the risks and benefits to determine whether the relationship between them is acceptable. This process is complicated by the fact that risks and benefits often cannot be measured on a uniform scale. First, ‘risks and benefits for subjects may affect different domains of health status’,Footnote 29 as when a risk of physical injury is incurred in an effort to achieve a potential psychological benefit. Second, ‘risks and benefits may affect different people’;Footnote 30 risks are typically borne by the participants in the research, but most of the benefits will be experienced by patients in the future.
Several approaches have been suggested for systematising the process of risk-benefit analysis in research. The first, and most influential, approach is known as ‘component analysis’. This approach calls on decision-makers to independently assess the risks and potential benefits of each intervention or procedure to be used in a study, distinguishing those that have the potential to provide direct benefits to participants (‘therapeutic’) from those that are administered solely for the purpose of developing generalisable knowledge (‘non-therapeutic’). For therapeutic interventions, there must be genuine uncertainty regarding the relative therapeutic benefits of the intervention as compared to those of the standard of care for treating the participants’ condition or disorder (a standard known as ‘clinical equipoise’Footnote 31). For non-therapeutic interventions, the risks must be minimised to the extent consistent with sound scientific design, and the remaining risks must be reasonable in relation to the knowledge that is expected to result. In addition, when a study involves a vulnerable population, such as children or adults who lack decision-making capacity, the risks posed by non-therapeutic procedures may not exceed a ‘minor increase above minimal risk’.Footnote 32
Component analysis has been influential, but it is not universally supported. Some critics maintain that the distinction between therapeutic and non-therapeutic procedures is inherently ambiguous, as ‘all interventions offer at least some very low chance of clinical benefit’.Footnote 33 Others argue that the approach’s reliance on clinical equipoise rests on the mistaken assumption that researchers have a duty to promote each participant’s medical best interests, which conflates the ethics of research with those of clinical care.Footnote 34
One alternative to component analysis is known as the ‘net risk test’, which is based on the principle that the fundamental ethical requirement of research is ‘to protect research participants from being exposed to excessive risks of harm for the benefit of others’.Footnote 35 The approach has four elements. First, for each procedure involved in a study, the risks to participants should be minimised and the potential clinical benefits to participants enhanced, to the extent doing so is consistent with the study’s scientific design. Second, instead of clinical equipoise, the approach requires that, ‘when compared to the available alternatives, a research procedure must not present an excessive increase in risk, or an excessive decrease in potential benefit, for the participant’.Footnote 36 Third, to the extent particular procedures involve greater risks than benefits, those net risks ‘must be justified by the expected knowledge gained from using that procedure in the study’.Footnote 37 Finally, the cumulative net risks of all of the procedures in a study must not be excessive.Footnote 38
Both component analysis and the net risk test can add structure to the process of risk-benefit analysis by focusing attention on the risks and potential benefits of each intervention in a study. The advantage of this approach is that it reduces the likelihood that potential direct benefits from one intervention will be used as a justification for exposing participants to risks from unrelated interventions that offer no direct benefits. However, neither approach eliminates the need for subjective determinations. Under component analysis, the principle of clinical equipoise offers a benchmark for judging the risks and potential benefits of therapeutic procedures, but for non-therapeutic procedures, the only guidance offered is that the risks must be ‘reasonable’ in relation to the knowledge expected to result. The net risk test dispenses with clinical equipoise entirely, instead relying on a general principle of avoiding ‘excessive risk’. Whether a particular mix of risks and potential benefits is ‘reasonable’ or ‘excessive’ is ultimately left to the judgment of those charged with reviewing the study.
Most regulations and ethics codes provide little guidance on the process of weighing the risks and potential benefits of research. The primary exception is the CIOMS guidelines, which adopt what they describe as a ‘middle ground’ between component analysis and the net risk test. In most respects, the CIOMS approach reflects component analysis, including its reliance on clinical equipoise as a standard for evaluating interventions or procedures that have the potential to provide direct benefits to participants. However, the guidelines also call for a judgment that ‘the aggregate risks of all research interventions or procedures … must be considered appropriate in light of the potential individual benefits to participants and the scientific social value of the research’,Footnote 39 a requirement that mirrors the final step of the net risk test.
Neither component analysis nor the net risk test explicitly sets an upper limit on permissible risk, at least in studies involving competent adults. However, one of the developers of component analysis has stated that ‘the notion of excessive net risks, and the underlying ethical principle of non-exploitation, clearly impose a cap on the risks that individuals are allowed to assume for the benefit of others’.Footnote 40 The notion of an upper limit on risk also appears in several ethical guidelines. For example, the CIOMS guidelines state that ‘some risks cannot be justified, even when the research has great social and scientific value and adults who are capable of giving informed consent would give their voluntary, informed consent to participate in the study’.Footnote 41 Similarly, the European Commission has suggested that certain ‘threats to human dignity and shared values’ should never be traded against the potential scientific benefits of research, including ‘commonly shared values like privacy or free movement … certain perceptions of the integrity of a person (e.g. cloning, technological modifications) … [and] widely shared view[s] of our place in the world (e.g. inhumane treatment of animals or threat to biodiversity)’.Footnote 42
In light of the inherent ambiguities involved in weighing the risks and benefits of research, the results of risk-benefit assessments can be heavily influenced by the type of decision-making process used. The next section looks at these procedural issues more closely.
13.6 Procedural Issues in Risk-Benefit Analysis
In most health-related research, the process of risk-benefit assessment is undertaken by interdisciplinary bodies known as research ethics committees (RECs), research ethics boards (REBs), or institutional review boards (IRBs). These committees make judgments based on predictions about the preferences and attitudes of typical research participants, which do not necessarily reflect how the actual participants would react to particular risk-benefit trade-offs.Footnote 43 In addition, because few committees rely on formal methods of risk-benefit analysis, decisions are likely to be influenced by individual members’ personal attitudes and cognitive biases.Footnote 44 For this reason, it is not surprising that different committees’ assessments of the risks and potential benefits of identical situations exhibit widespread variation.Footnote 45
Some commentators have proposed techniques to promote greater consistency in risk-benefit assessments. For example, it has been suggested that committees issue written assessments that could be entered into searchable databases.Footnote 46 Others have called on committees to engage in a formal process of ‘evidence-based research ethics review’, in which judgments about risks and potential benefits would be informed by a systematic retrieval and critical appraisal of the best available evidence.Footnote 47
Outside of research ethics, a variety of techniques have been developed to systematise the process of risk-benefit analysis. For example, several quantitative approaches to risk-benefit assessment exist, such as the Quality-Adjusted Time Without Symptoms and Toxicity (Q-TWIST) test, which ‘compares therapies in terms of achieved survival and quality-of-life outcomes’,Footnote 48 or the ‘standard gamble’, which assigns utility values to health outcomes based on individuals’ stated choice between hypothetical health risks.Footnote 49 Committees reviewing proposed studies can draw on these quantitative analyses when relevant ones exist.
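The arithmetic behind these two measures can be illustrated briefly. The sketch below (in Python) is purely illustrative: the function names, utility weights and durations are hypothetical and are not drawn from the studies cited above.

```python
# Illustrative sketch of two quantitative risk-benefit measures discussed
# in the text. All weights and durations below are hypothetical.

def q_twist(tox_months, twist_months, rel_months, u_tox=0.5, u_rel=0.5):
    """Quality-adjusted Time Without Symptoms and Toxicity (Q-TWIST).

    Partitions survival into time with treatment toxicity (TOX), time
    without symptoms or toxicity (TWiST), and time after relapse (REL),
    discounting the burdened periods by utility coefficients in [0, 1].
    """
    return u_tox * tox_months + twist_months + u_rel * rel_months

def standard_gamble_utility(indifference_probability):
    """In the standard gamble, a respondent chooses between living in a
    given health state for certain and a gamble offering full health with
    probability p (and death otherwise). The value of p at which the
    respondent is indifferent is taken as the utility of that state."""
    return indifference_probability

# Comparing two hypothetical therapies: B trades more time with toxicity
# for longer symptom-free survival, and scores higher overall here.
therapy_a = q_twist(tox_months=2, twist_months=10, rel_months=6)   # 14.0
therapy_b = q_twist(tox_months=5, twist_months=14, rel_months=4)   # 18.5
```

Note that the comparison is only as defensible as the utility weights, which are themselves subjective judgments of the kind discussed throughout this section.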
In some cases, formal consultation with the community from which participants will be drawn can be an important component of assessing risks and benefits. For example, in the study of Havasupai tribe members discussed above, prior consultation with the community could have alerted researchers to the fact that research on migration patterns was threatening to the tribe’s cultural beliefs. In cancer research, consultation with patient advocacy groups may help identify concerns about potential adverse effects that might not have been sufficiently considered by the researchers.Footnote 50 Further lessons might be learned from the analysis by Chuong and O’Doherty in Chapter 12 of this volume.
Risk-benefit analysis is a critical part of the process of evaluating the ethical acceptability of health-related research. The primary challenge in risk-benefit assessment arises from the fact that perceptions about risks and potential benefits are inherently subjective. Those charged with assessing the ethical acceptability of research should make efforts to incorporate as many different perspectives into the process as possible, to ensure that their decisions do not simply reflect their own idiosyncratic views.
1 J. Lantos et al., ‘Considerations in the Evaluation and Determination of Minimal Risk in Pragmatic Clinical Trials’, (2015) Clinical Trials, 12(5), 485–493.
2 N. Eyal et al., ‘Risk to Study Nonparticipants: A Procedural Approach’, (2018) Proceedings of the National Academy of Sciences, 115(32), 8051–8053.
3 G. DuVal, ‘Ethics in Psychiatric Research: Study Design Issues’, (2004) Canadian Journal of Psychiatry, 49(1), 55–59.
4 A. McGuire et al., ‘Research Ethics and the Challenge of Whole-Genome Sequencing’, (2008) Nature Reviews Genetics, 9(2), 152–156.
5 Council for International Organizations of Medical Sciences, ‘International Ethical Guidelines for Health-Related Research Involving Humans’, (CIOMS, 2016), p. 13.
6 M. Mello and L. Wolf, ‘The Havasupai Indian Tribe Case: Lessons for Research Involving Stored Biologic Samples’, (2010) New England Journal of Medicine, 363(3), 204–207.
7 Article 28 of the European Union Clinical Trials Regulation 536/2014, OJ 2014 No. L 158/1.
8 The Federal Policy for the Protection of Human Subjects (‘Common Rule’), 45 C.F.R. § 46.111(a)(2) (1991).
9 A. London et al., ‘Beyond Access vs. Protection in Trials of Innovative Therapies’, (2010) Science, 328(5980), 829–830, 830.
10 J. Grossman and F. Mackenzie, ‘The Randomized Controlled Trial: Gold Standard, or Merely Standard?’, (2005) Perspectives in Biology & Medicine, 48(4), 516–534.
11 J. Younge et al., ‘Randomized Study Designs for Lifestyle Interventions: A Tutorial’, (2015) International Journal of Epidemiology, 44(6), 2006–2019.
12 C. J. Mann, ‘Observational Research Methods. Research Design II: Cohort, Cross Sectional, and Case-Control Studies’, (2003) Emergency Medicine Journal, 20(1), 54–60.
13 D. Grimes and K. Schulz, ‘Bias and Causal Associations in Observational Research’, (2002) Lancet, 359(9302), 248–252.
14 C. Adebamowo et al., ‘Randomised Controlled Trials for Ebola: Practical and Ethical Issues’, (2014) Lancet, 384(9952), 1423–1424, 1423.
15 C. Coleman, ‘Control Groups on Trial: The Ethics of Testing Experimental Ebola Treatments’, (2016) Journal of Biosecurity, Biosafety and Biodefense Law, 7(1), 3–24, 8.
16 E. Emanuel et al., ‘What Makes Clinical Research Ethical?’, (2000) JAMA, 283(20), 2701–2711.
17 R. Lilford and A. Stevens, ‘Underpowered Studies’, (2002) British Journal of Surgery, 89(2), 129–131.
18 B. Freedman and S. Shapiro, ‘Ethics and Statistics in Clinical Research: Towards a More Comprehensive Examination’, (1994) Journal of Statistical Planning and Inference, 42(1), 223–240.
19 N. King, ‘Defining and Describing Benefit Appropriately in Clinical Trials’, (2000) Journal of Law, Medicine & Ethics, 28(4), 332–343.
20 Emanuel et al., ‘What Makes Clinical Research Ethical?’, 2705.
21 See, e.g. A. Wertheimer, ‘Is Payment a Benefit?’, (2013) Bioethics, 27(2), 105–116.
22 US Food and Drug Administration, ‘Payment and Reimbursement to Research Subjects’, (US Food and Drug Administration, 2018), www.fda.gov/regulatory-information/search-fda-guidance-documents/payment-and-reimbursement-research-subjects.
23 T. Opsal et al., ‘“There Are No Known Benefits …” Considering the Risk/Benefit Ratio of Qualitative Research’, (2016) Qualitative Health Research, 26(8), 1137–1150.
24 C. Troche et al., ‘Evaluation of Therapeutic Strategies: A New Method for Balancing Risk and Benefit’, (2000) Value in Health, 3(1), 12–22.
25 P. Slovic, ‘Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield’, (1999) Risk Analysis, 19(4), 689–701.
26 T. Pachur et al., ‘How Do People Judge Risks: Availability Heuristic, Affect Heuristic, or Both?’, (2012) Journal of Experimental Psychology: Applied, 18(3), 314–330.
27 M. Siegrist et al., ‘Salient Value Similarity, Social Trust, and Risk/Benefit Perception’, (2000) Risk Analysis, 20(3), 353–362, 354.
28 Ibid., 358.
29 D. Martin et al., ‘The Incommensurability of Research Risks and Benefits: Practical Help for Research Ethics Committees’, (1995) IRB: Ethics & Human Research, 17(2), 8–10, 9.
30 Ibid., 8.
31 B. Freedman, ‘Equipoise and the Ethics of Clinical Research’, (1987) New England Journal of Medicine, 317(3), 141–145.
32 C. Weijer, ‘The Ethical Analysis of Risks and Potential Benefits in Human Subjects Research: History, Theory, and Implications for US Regulation’ in National Bioethics Advisory Commission, Ethical and Policy Issues in Research Involving Human Participants. Volume II – Commissioned Papers and Staff Analysis (Bethesda, MD: National Bioethics Advisory Commission), pp. 1–29, p. 24.
33 A. Rid and D. Wendler, ‘Risk-Benefit Assessment in Medical Research – Critical Review and Open Questions’, (2010) Law, Probability and Risk, 9(3–4), 151–177, 157.
34 Ibid., 158.
35 Ibid., 164.
38 D. Wendler and F. Miller, ‘Assessing Research Risks Systematically: The Net Risks Test’, (2007) Journal of Medical Ethics, 33(8), 481–486.
39 Council for International Organizations of Medical Sciences, ‘International Ethical Guidelines’, xi, 9.
40 Wendler and Miller, ‘Assessing Research Risks Systematically’, 165.
41 Council for International Organizations of Medical Sciences, ‘International Ethical Guidelines’, 10.
42 European Commission Directorate-General for Research and Innovation, ‘Research and Innovation, Research, Risk-Benefit Analyses, and Ethical Issues’, (European Union, 2013).
43 M. Meyer, ‘Regulating the Production of Knowledge: Research Risk-Benefit Analysis and the Heterogeneity Problem’, (2013) Administrative Law Review, 65(2), 241–242.
44 C. Coleman, ‘Rationalizing Risk Assessment in Human Subject Research’, (2004) Arizona Law Review, 46(1), 1–51.
45 T. Caulfield, ‘Variation in Ethics Review of Multi-Site Research Initiatives’, (2011) Amsterdam Law Forum, 3(1), 85–100.
46 Coleman, ‘Rationalizing Risk Assessment’, 1176–1179.
47 E. Anderson and J. DuBois, ‘Decision-Making with Imperfect Knowledge: A Framework for Evidence-Based Research Ethics’, (2012) Journal of Law, Medicine and Ethics, 40(4), 951–966.
48 Troche et al., ‘Evaluation of Therapeutic Strategies’, 13.
49 S. van Osch and A. Stiggelbout, ‘The Construction of Standard Gamble Utilities’, (2008) Health Economics, 17(1), 31–40.
50 N. Dickert and J. Sugarman, ‘Ethical Goals of Community Consultation in Research’, (2005) American Journal of Public Health, 95(7), 1123–1127.
Regulators must ensure that innovative health research is safe and undertaken in accordance with laws, ethical norms and social values, and that it is translated into clinical outcomes that are safe, effective and ethically appropriate. But they must also ensure that innovative health research and translation (IHRT) is directed towards the most important health needs of society. Through the patent system, regulators provide an incentive-based architecture for this to occur by granting a temporary zone of exclusivity around patented products and processes. Patents thus have the effect of devolving control over IHRT pathways to patentees and to those to whom patentees choose to license their patent rights.
The sage words of Stephen Hilgartner set the backdrop for this chapter: ‘Patents do not just allocate economic benefits; they also allocate leverage in negotiations that shape the technological and social orders that govern our lives’.Footnote 1 Patents have been granted for many – if not all – of the major recent innovations in health research, from the earliest breakthroughs like recombinant DNA technology, the polymerase chain reaction, the Harvard Oncomouse and the BRCA gene sequences, through to a whole variety of viruses, monoclonal antibodies, receptors and vectors, thousands of DNA sequences, embryonic stem cell technology, intron sequence analysis, genome editing technologies and many more.Footnote 2 These innovations have laid the foundations for whole new health research pathways, from basic research, through applied research, to diagnostic and therapeutic end points.Footnote 3 Broad patent rights over these fundamental innovations give patentees the freedom to choose how these research pathways will be progressed. Essentially then, the patent grant puts patentees in a position to assert significant private regulatory control over IHRT.
The first part of this chapter outlines this regulatory role of patents in IHRT. The chapter then considers the ways in which patentees choose to use their patent rights in IHRT, and the scope for government intervention. The chapter then explores recent actions by patentees that indicate a willingness to moderate the use of their patent rights by engaging in self-regulation and other forms of collaborative regulation. Finally, the chapter concludes with a call for greater government oversight of patent use in IHRT. Although self-regulation has merit in the absence of clear governmental direction, it is argued that private organisations should not have absolute discretion in deciding how to employ their patents in areas such as health, but that they must be held to account in exercising their state-sanctioned monopoly rights.
14.2 Patents as a Form of Private Regulation
In many markets, the regulation of market entry, prices, product availability and development is left to the market to varying degrees, there being at least some general consensus that competitive decision-making is a hallmark of market efficiency.Footnote 4 At the same time, granting patent rights removes an element of competition from a market in order to induce innovation and disclosure.Footnote 5 While it is unclear how much innovation is optimal, it has been suggested that there is unlikely to ever be too much from an economic welfare perspective.Footnote 6
Although primary innovators are arguably best placed to organise and control follow-on innovation,Footnote 7 vesting decision-making power in a single private entity has the potential to scuttle efficiency in much the same way as absolute government control. Nonetheless, conferring this power on individual entities through the grant of patents – and accompanying intellectual property (IP) rights – is generally justified on efficiency grounds.Footnote 8 However, non-efficiency goals such as distributive fairness may also be important drivers of private regulatory arrangements and may be incorporated either consciously or unconsciously in regulatory schemes.Footnote 9
Granting a patent gives a property right in an invention. As Mark Lemley observes, IP constitutes both a form of government regulation and a property right around which parties can contract,Footnote 10 and its confused identity partly explains why policy makers have grappled with exactly how to manage the delicate innovation balance. Studies have provided mixed evidence as to the necessity to grant IP rights: in some technology areas, patents are viewed as necessary in order to recoup research and development investment, but this is by no means universal.Footnote 11
The value of patents in IHRT has not been unequivocally established, although there is some evidence to suggest they are crucial for signalling purposes.Footnote 12 Patent law can be said to serve a ‘corrective’ function in the health context, particularly in relation to pharmaceuticals and biotechnology, where the development of clinical products is subject to substantial regulation.Footnote 13 Without patents, it is argued, researchers would not commit the considerable investment required to conduct research with the ultimate aim of a clinical outcome.
14.3 Use of Patent Rights in Innovative Health Research and Translation
Patentees can limit who enters a field by choosing who, if anyone, they will authorise to use their patents. This can create problems for broad breakthrough technologies, where insistence on exclusivity gives patentees and their licensees control over whole research pathways, allowing them to dictate how those pathways develop. Patentees and their licensees could choose to block others completely from using the technology, or restrict access, or charge excessive prices for use. Conversely, they could allow their patented technology to be used widely for minimal costs. The tragedy of the anticommons, posited by Michael Heller and Rebecca Eisenberg, adds further complexity: it speculates that a proliferation of patents in particular areas of technology exacerbates the problem because no one party has an effective privilege of use.Footnote 14 Rather, agreement with multiple patentees would be required in order to utilise a particular resource.
Fortunately, empirical studies have revealed little evidence of blocking or anticommons effects in IHRT,Footnote 15 suggesting that, on the whole, working solutions employed by researchers have allowed them to work around ‘problematic’ patents so that research and development may progress. ‘Working solutions’ here refers to strategies such as entering into licence agreements or other collaborative arrangements; inventing around problematic patents; relying on research exemptions; or challenging the validity of patents.Footnote 16 These working solutions can be viewed as facets of the regulatory scheme that encompasses the grant of patent rights. However, solutions that involve entering into a licence agreement or other collaborative arrangement also require a degree of cooperation on the part of a patentee. It may be fruitless to approach a patentee unless they are willing to negotiate, which takes time and effort on their part, as well as on the part of the licensee. Unless these processes can be streamlined, the incentive to license is low.
14.4 Scope for Government Intervention
Arguably, the fruits of all health-related research should be distributed openly, because of its vital social function of improving healthcare. However, this is hardly a realistic option for aspects such as drug development, where the enormous cost of satisfying regulatory requirements for marketing approval must be recoverable. For other aspects of IHRT, however, the case for more open access is compelling, particularly since it generally originates in public research laboratories, funded by governments from the public purse.Footnote 17 Yet the ensuing patents may ultimately be controlled by private parties, whether spin-offs or more established firms. This phenomenon has been referred to by Jorge Contreras and Jacob Sherkow as ‘surrogate licensing’.Footnote 18
Given the public contribution made to IHRT, the argument for open access, at least for research purposes, is appealing. Public funders are within their rights to insist on some form of open dissemination in such circumstances.Footnote 19 But what are the options when patentees or their licensees insist on exclusivity, even for the most fundamental research tools? If governments see patents as providing a broader social function beyond giving monopoly rights to patentees – albeit temporary in nature – they must ensure that, along with incentives to innovate, the patent system provides appropriate incentives to disseminate innovative outputs, or other regulatory mechanisms to compel the provision of access where needed.Footnote 20 Patents provide patentees with significant freedom to decide who can enter a particular field of research, and what they can do. Some jurisdictions do have legislative provisions allowing government or private providers to step in should patentees fail to work the invention.Footnote 21 Most countries exempt from infringement the steps needed for regulatory approval of generic pharmaceuticals and other chemicals.Footnote 22 Some also exempt use of the patent for experimental purposes, although the scope of protected experimental use remains unclear.Footnote 23 However, the reality is that the role of governments in regulating patent use is limited.
14.5 Emergent Self-Regulatory Models for Use of Patent Rights in Innovative Health Research and Translation
Recognising these limitations on government control of patent use, some promising developments are emerging in IHRT that indicate that patentees and their licensees are willing to consider a range of self-regulatory models in ensuring optimal patent utilisation. Some of the more prominent examples are discussed below.
Because foundational research tools are just that – foundational to whole new areas of research – best practice dictates they should be licensed non-exclusively. US funding agencies and universities agree; for example, the US National Institutes of Health released guidance to this effect in 1999 and 2005.Footnote 24 In 2007, the Association of University Technology Managers, recognising that ‘universities share certain core values that can and should be maintained to the fullest extent possible in all technology transfer agreements’, provided nine key points to consider in licensing university patents. Point 5 recommends ‘a blend of field-exclusive and non-exclusive licenses’.Footnote 25
Yet non-exclusive licensing is not cost-free. The problem that it presents to users is that it imposes a fee in return for not being sued for infringement, with little or no additional benefit for the user.Footnote 26 Inclusion of reach-through rights to future uses adds to the burden on follow-on researchers.Footnote 27 If governments were really concerned about the toll of research tool patent claims on IHRT, they could choose to exclude them, or to require them to be exchanged through some form of statutory licensing scheme, with minimal or no licensing fees and no other restrictive terms. For now, however, governments seem content to leave such decisions to patentees.
We are witnessing some interesting developments in this area, illustrating that government intervention may not yet be necessary. Non-profit organisations like Addgene and the BioBricks Foundation have been established as intermediaries to facilitate no-cost, non-exclusive patent licensing and sharing of research materials for genome editing and synthetic biology research, respectively.Footnote 28 There are also other examples of these types of intermediary arrangements, or ‘clearinghouses’ as they are sometimes called, in IHRT. Such arrangements appear to provide a valuable social function provided that fees are not excessive and that technology that is of real value to IHRT is included, so that the clearinghouse does not become a ‘market for lemons’.Footnote 29
Realistically, a more nuanced approach than the simple choice of exclusive or non-exclusive licensing is needed, involving a mix of licensing strategies for a single patented technology. Licensing of the clustered regularly interspaced short palindromic repeats (CRISPR) patents illustrates this point. CRISPR, as explained in Chapter 34, is a genome editing technology that has captivated the research world because of its ease of use and enhanced safety, owing to reduced incidence of off-target effects.Footnote 30
Already, we are witnessing the adoption of nuanced approaches for licensing CRISPR patents. For example, the Broad Institute, one of the giants of CRISPR technology, non-exclusively licenses CRISPR constructs freely for public sector research through Addgene, and charges a fee for use in more commercially oriented research. Broad exclusively licenses to its own spin-off company, Editas, for therapeutic product development. Broad describes this as an ‘inclusive innovation model’.Footnote 31 However, this model has been criticised by Oliver Feeney and colleagues on the basis that the decision whether to allow other uses for therapeutic purposes is left to Editas.Footnote 32 They see this as a ‘significant moral hazard’, because of the potential restrictions it imposes on therapeutic development. While Feeney and colleagues propose government-imposed time limitations on exclusivity as a means of addressing such hazards, it is doubtful, given past history, that governments would be persuaded to incorporate this level of post-grant regulatory intervention within the patent system.
Knut Egelie and colleagues, equally concerned about CRISPR patent licensing, argue that public research organisations should commit more fully to a self-regulatory model that balances social responsibilities with commercial activity.Footnote 33 Their ‘transparent licensing model’ would minimise fees and other restrictions for uses of patented subject matter as research tools, and would grant only narrow field-of-use exclusive licences for commercial development. They suggest government intervention as an alternative to this self-regulatory model, referring to some of the recently emerging contractual funding strategies in Europe. However, they themselves criticise both options, the former for lacking public control and the latter for over-regulation and unnecessary bureaucracy. More cooperative and collaborative strategies, involving both public sector and private sector organisations, might provide alternative models.
A greater commitment to social responsibility might be achieved by patentees and their licensees through entry into collaborative IP arrangements.Footnote 34 Patent pools have been used in some high technology areas – particularly information technology – to overcome patent thickets and cluttered patent landscapes.Footnote 35 In IHRT, however, complex arrangements such as patent pools have gained limited traction,Footnote 36 primarily because of the lack of need to date. Simpler strategies such as non-exclusive licensing and clearinghouses appear to be adequate at the present time, and predicted anticommons effects have not yet emerged.Footnote 37
Where patentees are reluctant to engage in collaborative strategies, there is some scope to mandate engagement. Patent pools, for example, have in some instances – especially in the USA – been established by government regulators in order to ease innovative burdens and address competition law concerns.Footnote 38 Mandatory arrangements are rarely embraced with enthusiasm, and the prospects for the sustainability of collaborative arrangements are probably significantly greater where they are voluntary. Patent pools are complex structures and involve many legal considerations. Although there has been some success in establishing patent pooling-type arrangements in public health emergencies like HIV/AIDS and other epidemics,Footnote 39 it is difficult to see what would motivate patentees to come together to create such complex structures in IHRT at the present time, particularly given the rapid pace of technological development and change.
Patent aggregation is another increasingly popular strategy, referring to the process of collecting suites of IP required to conduct research and development within a particular field of use. The process of patent aggregation has brought with it some negative press, because of concerns that aggregators could be ‘patent trolls’, whose sole motivation is extracting licensing revenue.Footnote 40 However, not all aggregators have this trolling motivation; some instead license out entire bundles of patents on a non-exclusive basis. To this extent, their role in advancing the research agendas in IHRT can be seen as broadly facilitative.Footnote 41
Aside from the social good associated with self-regulatory models of patent use discussed above, there are other ethical and social considerations that could be addressed through more public-focused approaches to licensing. For example, even where public sector organisations exclusively license to private partners, whether spin-offs or established firms, it is common practice for the licence terms to reserve rights for the organisation’s researchers to continue to conduct research using patented subject matter.Footnote 42 Reservation of the right to engage in broader sharing of patented subject matter for non-commercial research purposes might also be included in such agreements, effectively circumventing the lack of a statutory or common law research exemption in some jurisdictions.
Patent pledges and non-assertion covenants can be used to serve essentially the same purpose.Footnote 43 Reservation of rights could also extend to humanitarian uses, a possibility that has been mooted specifically in the context of agricultural biotechnology. As Alan Bennett notes, these voluntary measures can serve the purpose of meeting the humanitarian and commercial needs of developing countries in the absence of national policies to this effect.Footnote 44 Such measures could be equally effective in the context of humanitarian uses of innovative health technologies, an area which likewise suffers from a lack of clear government policy direction.
There has been recent discussion on the efficacy of introducing ethical terms into patent licences for the new genome editing technologies, particularly CRISPR. The emergence of this technology triggered a range of ethical debates in relation to its applications in agriculture, the natural environment – for example, in pest eradication through a combination of CRISPR and gene drives – and humans – for example, in genetic enhancement, germline genome modification and gene editing research using human embryos.Footnote 45
The Broad Institute, through Editas, and other public research organisations and their licensees are already using licences that exclude these types of ethically questionable uses, whether in human or non-human contexts. As Christi Guerrini and colleagues note, this approach has some obvious advantages: licence terms are enforceable; they can be tailored; and they are negotiated, leading to better buy-in.Footnote 46 Given that the regulation of genome editing varies widely across jurisdictions,Footnote 47 ethical licensing terms also have the advantage of creating enforceable obligations in every jurisdiction where the patent has been granted and the licence applies. Potentially, then, ethical licences could impose global standards on uses of CRISPR technology – standards that would otherwise remain conjectural if left to agreement between countries.
Despite the apparent attractiveness of ethical licensing, however, there is likely to be some unease with the notion of devolving decisions about what is or is not ethical to patentees.Footnote 48 In areas such as this, which are highly contentious, community consensus would usually be a precursor to government regulation. Is regulatory failure in this area significant enough to justify private action? Is this a step too far when it amounts to ceding regulation to private entities?
Patents play a key role in the progress of IHRT. By granting patents, governments devolve to patentees considerable decision-making power about who can enter particular fields of IHRT and what they can do. This chapter has shown that patentees can and do choose to exercise this power wisely, by engaging in open and collaborative models of patent use. However, not all choose to do so, and governments currently have limited regulatory tools with which to compel such engagement.
Patentees can decide to work collaboratively with other interested parties, or not. They can decide whether to share broadly, or not. They can even decide what types of uses are ethical or unethical. This is a significant set of delegated powers. Regulators have at their disposal various policy levers that could provide them with broad discretion to specify criteria for patent eligibility, periods of exclusivity and access.Footnote 49 Governments can assert regulatory control both pre-grant, by influencing the ways in which patents are granted, and post-grant, by constraining the ways in which patents are used. These tools could be used to impose limits on the delegated powers of patentees, but they are not being fully utilised at present.
The current situation is that non-enforceable guidelines have been issued in some jurisdictions to assist patentees in deciding how to exercise their powers, but not in others. Internationally, although the OECD has issued licensing guidelines,Footnote 50 for the most part there is no consensus on how best to set limits on the exercise of patent rights. This is not surprising in view of the diversity of technologies and actors involved and the discrepancies between jurisdictions. More research is needed to assist governments in finding optimal ways to support, guide and regulate public research organisations and private companies in their use of the patent system in IHRT.
1 S. Hilgartner, ‘Foundational Technologies and Accountability’, (2018) American Journal of Bioethics, 18(12), 63–65.
2 Organisation for Economic Cooperation and Development, ‘Key Biotechnology Indicators’, (OECD, 2019), www.oecd.org/innovation/inno/keybiotechnologyindicators.htm; Nuffield Council on Bioethics, ‘The Ethics of Patenting DNA’, (Nuffield Council on Bioethics, 2002), 39–44; D. Nicol, ‘Implications of DNA Patenting: Reviewing the Evidence’, (2011) Journal of Law, Information and Science 7, 21(1).
3 J. P. Walsh et al., ‘Effects of Research Tool Patents and Licensing on Biomedical Innovation’ in W. M. Cohen and S. A. Merrill (eds), Patents in the Knowledge-Based Economy (The National Academies Press, 2003), pp. 285–340, see particularly pp. 332–335.
4 F. M. Scherer and D. Ross, Industrial Market Structure and Economic Performance (Boston: Houghton Mifflin, 1990), p. 660; K. J. Arrow, ‘Economic Welfare and the Allocation of Resources for Invention’ in The National Bureau of Economic Research (eds), The Rate and Direction of Inventive Activity: Economic and Social Factors (Princeton University Press, 1962), pp. 609–626.
5 R. P. Merges, Justifying Intellectual Property (Cambridge, MA: Harvard University Press, 2011), p. 27; R. Mazzoleni and R. R. Nelson, ‘Economic Theories about the Benefits and Costs of Patents’, (1998) Journal of Economic Issues, 32(4), 1031–1052, 1039.
6 Federal Trade Commission, ‘To Promote Innovation: The Proper Balance of Competition and Patent Law and Policy’, (FTC, 2003), ch 2, their n30.
7 E. Kitch ‘The Nature and Functions of the Patent System’, (1977) Journal of Law and Economics, 20(2), 265–290; R. P. Merges, ‘Of Property Rules, Coase, and Intellectual Property’, (1994) Columbia Law Review, 94(8), 2655–2673, 2661; M. A. Lemley, ‘Ex Ante versus Ex Post Justifications for Intellectual Property’, (2004) University of Chicago Law Review, 71(1), 129–149.
8 R. Feldman, ‘Regulatory Property: The New IP’, (2016) Columbia Journal of Law & the Arts, 40(1), 53–103; F. K. Hadfield, ‘Privatising Commercial Law’, (2001) Regulation, 24(1), 40–45, 44; O. Feeney et al., ‘Patenting Foundational Technologies: Lessons from CRISPR and Other Core Biotechnologies’, (2018) The American Journal of Bioethics, 18(12), 36–48.
9 S. L. Schwarcz, ‘Private Ordering’, (2002) Northwestern University Law Review, 91(1), 319–350.
10 M. Lemley, ‘The Regulatory Turn in IP’, (2013) Harvard Journal of Law and Public Policy, 36(1), 109–115.
11 R. Levin et al., ‘Appropriating the Returns From Industrial Research and Development’, (1987) Brookings Papers on Economic Activity: Microeconomics, 3, 783–831; W. Cohen et al., ‘Protecting Their Intellectual Assets: Appropriability Conditions and Why US Manufacturing Firms Patent (or Not)’, (2000), Working Paper No. 7552, National Bureau of Economic Research. See also E. Mansfield, ‘Patents and Innovation: An Empirical Study’, (1986) Management Science, 32(2), 173–181.
12 E. Burrone, ‘Patents at the Core: The Biotech Business’, (WIPO, 2006), www.wipo.int/sme/en/documents/patents_biotech_fulltext.html.
13 Lemley, ‘The Regulatory Turn in IP’.
14 M. A. Heller and R. S. Eisenberg, ‘Can Patents Deter Innovation? The Anticommons in Biomedical Research’, (1998) Science, 280(5364), 698–701.
15 Walsh et al., ‘Effects of Research Tool Patents and Licensing’, pp. 285, 335; D. Nicol and J. Nielsen, ‘Patents and Medical Biotechnology: An Empirical Analysis of Issues Facing the Australian Industry’, (2003) Occasional Paper Series (6) Centre for Law and Genetics, 174–193; but note R. S. Eisenberg, ‘Noncompliance, Nonenforcement, Nonproblem? Rethinking the Anticommons in Biomedical Research’, (2008) Houston Law Review, 45(4), 1059–1099.
16 Nicol and Nielsen, ‘Patents and Medical Biotechnology’, 208–225.
17 L. Pressman et al., ‘The Licensing of DNA Patents by US Academic Institutions: An Empirical Study’, (2006) Nature Biotechnology, 24(1), 31.
18 J. L. Contreras and J. S. Sherkow, ‘CRISPR, Surrogate Licensing, and Scientific Discovery’, (2017) Science, 355(6326), 698–700; J. S. Sherkow, ‘Patent Protection for CRISPR: An ELSI Review’, (2017) Journal of Law and the Biosciences, 4(3), 565–576, 570–571.
19 A. K. Rai and B. N. Sampat, ‘Accountability in Patenting of Federally Funded Research’, (2012) Nature Biotechnology, 30(10), 953–956; K. J. Egelie et al., ‘The Ethics of Access to Patented Biotech Research Tools from Universities and Other Research Institutions’, (2018) Nature Biotechnology, 36(6), 495.
20 Referred to by some commentators as ‘carrots’ and ‘sticks’; see e.g. I. Ayres and A. Kapczynski, ‘Innovation Sticks: The Limited Case for Penalizing Failures to Innovate’, (2015) University of Chicago Law Review, 82(4), 1781–1852.
21 For example, US: 28 USC § 1498(a) (government use) (2011); Australia: Patents Act 1990 (Cth) section 133 (compulsory licensing), section 163 (government use).
22 For example, US: Roche Products Inc. v. Bolar Pharmaceuticals Co., 733 F.2d 858 (Fed. Cir. 1984), 35 USC § 271(e)(1); Australia: Patents Act 1990 (Cth) sections 119A and 119B.
23 R. Dreyfuss, ‘Protecting the Public Domain of Science: Has the Time for an Experimental Use Defense Arrived?’, (2004) Arizona Law Review, 946(3), 457–472; K. J. Strandburg, ‘What Does the Public Get? Experimental Use and the Patent Bargain’, (2004) Wisconsin Law Review, 2004(1), 81–155.
24 US Department of Health and Human Services, National Institutes of Health, ‘Principles and Guidelines for Recipients of NIH Research Grants and Contracts on Obtaining and Disseminating Biomedical Research Resources: Final Notice’, (1999) Federal Register 72090, 64(246); US Department of Health and Human Services, National Institutes of Health, ‘Best Practices for the Licensing of Genomic Inventions: Final Notice’, (2005) Federal Register 18413, 70(68); see also Organisation for Economic Co-Operation and Development, ‘Guidelines for the Licensing of Genetic Inventions’, (OECD, 2006).
25 Association of University Technology Managers, ‘In the Public Interest: Nine Points to Consider in Licensing University Technology’, (Association of University Technology Managers, 2007), www.autm.net/AUTMMain/media/Advocacy/Documents/Points_to_Consider.pdf.
26 A. D. So et al., ‘Is Bayh-Dole Good for Developing Countries? Lessons from the US Experience’, (2008) PLoS Biology, 6(10), e262.
27 J. Nielsen, ‘Reach-Through Rights in Biomedical Patent Licensing: A Comparative Analysis of their Anti-Competitive Reach’, (2004) Federal Law Review, 32(2), 169–204.
28 J. Nielsen et al., ‘Provenance and Risk in Transfer of Biological Materials’, (2018) PLoS Biology, 16(8), e2006031.
29 E. van Zimmeren et al., ‘Patent Pools and Clearinghouses in the Life Sciences’, (2011) Trends in Biotechnology, 29(11), 569–576; see also D. Nicol et al., ‘The Innovation Pool in Biotechnology: The Role of Patents in Facilitating Innovation’, (2014) Centre for Law and Genetics Occasional Paper No. 8. 249–250.
30 V. Iyer et al., ‘No Unexpected CRISPR-Cas9 Off-target Activity Revealed by Trio Sequencing of Gene-edited Mice’, (2018) PLoS Genetics, 14(7), p. e1007503.
31 Broad Institute, ‘Information About Licensing CRISPR Genome Editing Systems’, (Broad Institute, 2017), www.broadinstitute.org/partnerships/office-strategic-alliances-and-partnering/information-about-licensing-crispr-genome-edi.
32 Feeney et al., ‘Patenting Foundational Technologies’, 40.
33 K. J. Egelie et al., ‘The Emerging Patent Landscape of CRISPR–Cas9 Gene Editing Technology’, (2016) Nature Biotechnology, 3(10), 1025.
34 A. Krattiger and S. Kowalski, ‘Facilitating Assembly of and Access to Intellectual Property: Focus on Patent Pools and a Review of other Mechanisms’ in A. Krattiger et al. (eds), Intellectual Property Management in Health and Agricultural Innovation: A Handbook of Best Practices (MIHR, Oxford UK and PIPRA Davis California, US, 2007) p. 131; P. Gaulé, ‘Towards Patents Pools in Biotechnology?’, (2006) Innovation Strategy Today, 2, 123; G. Van Overwalle et al., ‘Models for Facilitating Access to Patents on Genetic Inventions’, (2006) Nature Reviews Genetics, 7(2), 143; van Zimmeren et al., ‘Patent Pools and Clearinghouses’; Organisation for Economic Cooperation and Development, ‘Collaborative Mechanisms for Intellectual Property Management in the Life Sciences’, (OECD, 2011); Nicol et al., ‘The Innovation Pool’.
35 R. P. Merges, ‘Institutions for Intellectual Property Transactions: The Case of Patent Pools’ in R. C. Dreyfuss et al. (eds), Expanding the Boundaries of Intellectual Property: Innovation Policy for the Knowledge Society (Oxford University Press; 2001), ch 6.
36 E. van Zimmeren et al., Patent Licensing in Medical Biotechnology in Europe: A Role for Collaborative Licensing Strategies? (Catholic University of Leuven Centre for Intellectual Property Rights; 2011), 82; Nicol et al., ‘The Innovation Pool’, 238–239, 250.
37 Gaulé, ‘Towards Patents Pools in Biotechnology?’, 123, 129; Nicol et al., ‘The Innovation Pool’, 238.
38 D. Serafino, ‘Survey of Patent Pools Demonstrates Variety of Purposes and Management Structures’, (2007) KEI Research Note 6, www.keionline.org/book/survey-of-patent-pools-demonstrates-variety-of-purposes-and-management-structures.
39 UNITAID, ‘The Medicines Patent Pool’, (UNITAID), www.unitaid.org/project/medicines-patent-pool/#en.
40 M. A. Lemley, ‘Are Universities Patent Trolls?’, (2008) Fordham Intellectual Property, Media and Entertainment Law Journal, 18(3), 611–631; A. Layne-Farrar and K. M. Schmidt, ‘Licensing Complementary Patents: “Patent Trolls”, Market Structure, and “Excessive” Royalties’, (2010) Berkeley Technology Law Journal, 25(2), 1121.
41 A. Wang, ‘Rise of the Patent Intermediaries’, (2010) Berkeley Technology Law Journal, 25(1), 159, 167, 173.
42 A. B. Bennett, ‘Reservation of Rights for Humanitarian Uses’ in A. Krattiger et al. (eds), Intellectual Property Management in Health and Agricultural Innovation: A Handbook of Best Practices (Oxford, UK: MIHR; and Davis, USA: PIPRA; 2007), p. 41.
43 J. Contreras, ‘Patent Pledges’, (2015) Arizona State Law Journal, 47(3), 543–608; A. Krattiger, ‘The Use of Nonassertion Covenants: A Tool to Facilitate Humanitarian Licensing, Manage Liability, and Foster Global Access’ in A. Krattiger et al. (eds), Intellectual Property Management in Health and Agricultural Innovation: A Handbook of Best Practices, (Oxford, UK: MIHR; and Davis, USA: PIPRA; 2007), p. 739.
44 Bennett, ‘Reservation of Rights’.
45 Sherkow, ‘Patent Protection for CRISPR’, 565–576, 572–573.
46 C. J. Guerrini et al., ‘The Rise of the Ethical License’, (2017) Nature Biotechnology, 25(1), 22; Sherkow, ‘Patent Protection for CRISPR’.
47 R. Isasi et al., ‘Editing Policy to Fit the Genome?’, (2016) Science, 351(6271), 337–339.
48 N. de Graeff et al., ‘Fair Governance of Biotechnology: Patents, Private Governance, and Procedural Justice’, (2018) American Journal of Bioethics, 18(12), 57–59, 58.
49 D. L. Burk and M. A. Lemley, ‘Policy Levers in Patent Law’, (2003) Virginia Law Review, 89(7), 1575–1696.
50 OECD, ‘Recommendation of the Council on the Licensing of Genetic Inventions’, (OECD/LEGAL/0342, 2007).
Benefit sharing pertains to the distribution of benefits and burdens arising from research. More specifically, it concerns what, if anything, is owed to individuals, communities or even populations that participate in research (benefits to investors, to other populations or the social value of research more generally understood are not the focus of benefit sharing).
Traditionally, health research has been concerned with compensating those participants who have been more or less directly involved. The practice of benefit sharing, especially in agriculture, introduced a perspective that recognised the contributions of communities and populations in safeguarding biological resources.Footnote 1 The issue is further complicated in human genetics as genetic information is by nature shared, and thus implicates individuals and communities who might not have participated in research in the traditional sense. At the same time, contemporary global research activities have increasingly been associated with for-profit companies. Some of their practices – ‘helicopter research’, ethics dumping – have given credence to broader political and social worries that have now been harnessed to the concept of benefit sharing, which was initially used within more limited research settings.
Framing benefit-sharing debates are several central concepts – the duty to avoid exploitation, the rights and interests of all research stakeholders, the requirements of fairness and compensation, and the various principles of distributive justice. In many ways, benefit sharing as an ethics and governance framework attempts to deal with most of these concerns and anxieties. Thus, responses to the question, ‘why is benefit sharing a duty?’ vary. In practical terms, benefit sharing is a thoroughly context-sensitive topic. It matters which risks and harms are involved in research (if any), who the investigators and funders are (for-profit, local, NGOs, etc.), where research takes place (developed or low- or middle-income countries), who is involved (e.g. vulnerable groups), what local needs are, and whether research is successful.
In what follows, I will give a brief overview of the ethical arguments and historical dynamics behind benefit-sharing practices, then outline major governance frameworks and discuss the potential problems around applying this concept in health research. The overall aim of this chapter is to highlight the complexity of benefit sharing and argue that success hinges on the careful balancing of universal research ethics duties with the particularities of concrete research projects taking place in distinct locations. Benefit sharing is no panacea for solving the inequalities of access and opportunities associated with global health research. Yet it can be a profoundly empowering tool, especially as the framework is shifting from compensation to collaboration.
Looking back, the rationale behind access and benefit sharing has been dynamic. The concept was originally employed in the context of agriculture and non-human biological resources (plants, animals, microorganisms). The 1992 UN Convention on Biological Diversity (CBD) acknowledged national sovereignty over genetic resources and called for ‘fair and equitable sharing of the benefits arising out of the utilization of genetic resources’.Footnote 2 As the majority of the world’s biological diversity is found in developing countries, benefit sharing was seen as a necessary instrument in guaranteeing these countries’ continuing interest in safeguarding this heritage and in curbing biopiracy (when indigenous knowledge and resources are patented or otherwise exploited by third parties without permission or compensation for the locals). The supplementary Nagoya Protocol on Access and Benefit-sharing (2010) is a legal framework that supports the implementation of the objectives of the CBD.Footnote 3
Since the 1990s, benefit sharing has emerged as an important component of health research and has made its appearance in various international documents (in the rest of the chapter, I focus on benefit sharing in health research only, excluding research on non-human materials and populations). The Human Genome Organisation (HUGO) Ethics Committee Statement on benefit sharing formulates it as follows:
A benefit is a good that contributes to the well-being of an individual and/or a given community (e.g. by region, tribe, disease-group …). Benefits transcend avoidance of harm (non-maleficence) in so far as they promote the welfare of an individual and/or of a community. Thus, a benefit is not identical with profit in the monetary or economic sense. Determining a benefit depends on needs, values, priorities and cultural expectations.Footnote 4
Benefits put forward by scientists, as well as the pharmaceutical industry, patients, investors and public health officials, span a wide array of potential valued ‘goods’, from improved health and science to financial gains and wider social benefits.Footnote 5 A fixed definition of what would constitute a benefit would be quite useless, or worse, unfair (an informative list of possible benefits regarding non-human research is available in the annex of the Nagoya Protocol). Potential benefits and harms arising from clinical trials would be rather different from those associated with population biobanks, for example. Benefits can be related to healthcare, but they could also encompass other socially important goals, such as support for infrastructure, development of local research capacities and the building of community resilience. The kind and scope of potential benefits have few limits, although the minimum threshold for satisfying the ‘reasonable availability’ requirement should surpass the simple licensing of drugs or interventions at market prices.Footnote 6
When is the appropriate time for benefit sharing? The question deserves consideration from the very earliest phases of research design. It is necessary to establish the characteristics and needs of potential research sites to ensure that the planned investigations, as well as the potential benefits, respond to those needs. Equally, benefit sharing could involve long-term follow-up of participants, or training and employment of community members that continues for years after research has ended.
The HUGO statement on benefit sharing mapped the following justifications for the concept in human genetic research:
1. Descriptive argument: there is an ‘emerging international consensus’Footnote 7 that benefits should be shared with participants.
2. Common heritage argument: we all share (in one sense) the same genome, so there is a shared interest in the genetic heritage of humankind; thus, the Human Genome Project should benefit all humanity.
3. Justice-based arguments: compensatory justice (compensation in return for contribution), procedural justice (fair procedures should be adhered to in benefit-sharing negotiations) and distributive justice (equitable allocation of, and access to, resources and goods) are all important aspects to consider.
4. Solidarity argument, on two levels: first, as a potential basis for benefit sharing among a group of research participants (communities, host populations); second, to foster health for wider communities and eventually the whole of humanity – thus, benefits should not be limited strictly to those participating in research.Footnote 8
Of these various justifications, the overall concern fuelling benefit-sharing debates has been justice, and the concept itself has been likened to a device in the toolbox of justice.Footnote 9 Yet justice is notoriously difficult to pin down, given that principles of justice vary – one can refer to equality as fundamental, or point to the importance of merit, and in healthcare contexts the principle of need has often served as central. Decisions about what justice requires (i.e. what principles are important in a particular context) can result in divergent benefit-sharing patterns and practices – how benefits are defined and by whom, as well as with whom the sharing is foreseen.Footnote 10 Certain justifications necessarily exclude or include specific groups or communities. For example, the compensatory logic associated with the principles of merit and desert would benefit those directly involved but could leave out those who did not directly participate yet are nevertheless part of the community. A focus on the shared human heritage of genetic resources, by contrast, tends to disregard the needs and deserts of the particular communities where research is undertaken. This is why, for example, in the agricultural and plant genetics context, the early global heritage model was quickly replaced by the nationalisation and property model of genetic resources.Footnote 11 The patenting practices through which ‘shared free resources’ were turned into private profits and property were eventually rejected, and the nationalisation of biological resources took over as the dominant framework.
To conclude, benefit-sharing negotiations always entail choices between some publics over others and upholding of certain principles before others. The above considerations about what justice requires have historically played a role in benefit-sharing discussions and none of them may be discounted as irrational or irrelevant. So how have these justice-related concerns been framed, operationalised, and translated into regulation and governance?
Ethically sound and respectful research practices not only benefit researchers, participants and science but also support public trust in research in general.Footnote 12 All approaches to benefit sharing assume the baseline of the usual ethics requirements for research (benefit sharing does not substitute for other ethics principles but is an additional one). In 1993, the Council for International Organisations of Medical Sciences (CIOMS) argued that ‘any product developed will be made reasonably available to inhabitants of the underdeveloped community in which the research was carried out’.Footnote 13 In the latest updated Guidelines, from 2016, exploitative research is defined as research that does not respond to the health needs of the community in which it takes place, or whose resulting product that community would later be unable to access or afford.Footnote 14
The prominence of benefit sharing as an ethics requirement in global health research is exemplified by the existence of many nationalFootnote 15 and international documents, statements and opinions. Both national and international health research organisations, policy think tanks and research funders have thought it important to discuss and state their views on the matter. Most discuss benefit sharing in the context of research in developing countries: the European Group on Ethics in Science and New Technologies to the European Commission’s Opinion on Ethical Aspects of Clinical Research in Developing Countries (2003), the Nuffield Council on Bioethics’ The Ethics of Research Related to Healthcare in Developing Countries (first paper in 2002), the US National Bioethics Advisory Commission’s Ethical and Policy Issues in International Research (2001), and the Wellcome Trust’s Statement on Research Involving People Living in Developing Countries: Position Statement and Guidance Notes for Applicants.Footnote 16 Even general health research frameworks have included references to benefit sharing in their more recent drafts – for example the WHO’s Good Clinical Practice, the World Medical Association’s Declaration of Helsinki (2013), and the UNESCO Universal Declaration on Bioethics and Human Rights (2005).Footnote 17
All of the above documents constitute what may be called soft law (i.e. non-binding instruments), yet a number of them have been influential in regulating health research practices (especially the WHO, CIOMS and funders’ guidelines). When applied routinely, such ethics regulations could be considered customary international law,Footnote 18 but there have also been calls to formulate dedicated legal instruments to provide stronger support for benefit-sharing negotiations.Footnote 19 The latest attempt to ensure that benefit sharing constitutes an important normative aspect of research is the Global Code of Conduct for Research in Resource-Poor Settings (2018), which the European Commission endorsed as a reference document for its research funding programme Horizon 2020.Footnote 20
While declarations and guidelines can highlight important principles and values for research, their interpretation and implementation are less straightforward. Over time, the developments in health research practices and the pressures from various stakeholders have resulted in a repeated re-framing of benefit sharing as various competing accounts have been promoted.
The earliest versions advanced a duty to benefit the particular people participating in research or a somewhat wider circle of beneficiaries (communities or populations in the case of Low and Middle Income Countries (LMICs)). This is the ‘reasonable availability model’ espoused by CIOMS, which has traditionally tied the benefits to products or interventions resulting from a particular research project. An ethical prerequisite here is that research should respond to the health needs of the community and therefore any positive results of research are directly relevant to those needs.
A somewhat overlapping concept of post-trial obligations has also been argued for and applied in the context of health research, especially clinical trials. The language of post-trial obligations has its roots in the 2000 edition of the Declaration of Helsinki (§30: ‘At the conclusion of the study, every patient entered into the study should be assured of access to the best proven prophylactic, diagnostic and therapeutic methods identified by the study.’).Footnote 21 Later versions of the Declaration specify this duty further. Post-trial obligations are often formulated as prior agreements signed between stakeholders before research begins, and there are a number of successful examples of post-trial access agreements globally.Footnote 22
The reasonable availability model has been roundly criticised for a variety of reasons.Footnote 23 Most importantly, it is said that the focus on types of benefits arising from particular research projects does not adequately remove the dangers of exploitation and it unnecessarily limits the scope of potential benefits. Thus, the alternative ‘fair benefits’ model was proposed, widening the scope of potential benefits as well as beneficiaries.Footnote 24 Benefits should not be limited to the results of particular research projects, and the distribution of benefits could take place both during as well as after research. Yet, while the increased flexibility in benefit-sharing discussions is a pragmatically useful development, it might also involve adverse side-effects. For example, a community might agree to participate in research that will not target their health needs at all, but will provide other benefits that they need.Footnote 25 This means that some of the fundamental ethical premises of research in LMICs have been effectively replaced. Perhaps this is acceptable – after all, such flexibility can be construed as less paternalistic and respectful of local needs. But it could also hint at the problematic infiltration of commercial bargaining rules into health research, which I discuss further below.
The latest re-framing, driven largely by funders, construes benefit sharing as a comprehensive cooperative tool for capacity-building that is justified via the larger framework of global health research and justice concerns.Footnote 26 In 2002, the Nuffield Council on Bioethics suggested that healthcare-related research in developing countries should proceed through genuine partnerships that provide transfer of knowledge and technology to strengthen the expertise of local partners. More recently, a group of influential research funders (NIH, Wellcome and the African Society of Human Genetics) have launched an H3Africa benefit-sharing vision where the more established avenues of ‘reasonable availability’ and ‘fair benefits’ have been replaced by straightforward requests for capacity building as the objective of collaborative research.Footnote 27 Such activities thus no longer constitute simply one of the options in the extensive list of potential benefits that parties to the benefit-sharing arrangement should consult and pick from. Benefit sharing is here no longer a positive side-effect or even an intended externality to a successful research project. Rather, it has been moved to the very core – it is one of the most important reasons the research collaboration should take place at all. In many ways, this is a welcome development, as benefit sharing has often been misunderstood as disbursement of tangible research ‘results’.
15.4 What, When and How: The Practicalities of Benefit Sharing
Much of the rationale for benefit sharing is articulated in the language of principles and values. Somewhat less guidance is given on the procedural aspects – how these principles and values are to be negotiated, prioritised and enforced. In most cases, a variety of potential benefits and beneficiaries can realistically be considered, based on diverse justificatory reasons and local needs. Obviously, the host population needs to be the judge of the value of benefits to itself.Footnote 28 A practical question is whom one should talk to when negotiating with communities; the answer should prioritise engagement with those who might bear the burdens of research but are not given a voice (this concerns especially the voice of women in LMICs – their meaningful participation in all phases of benefit-sharing negotiations should be requiredFootnote 29). At the same time, one needs to be conscious of – and transparent about – the fact that defining and refining participant categories or negotiation partners is already a highly selective, political act.Footnote 30
While community involvement is a crucial part of the benefit-sharing process, the mere fact of participation and consent does not necessarily guarantee the fairness of the agreement.Footnote 31 To ensure transparency, and to ensure that involved communities and populations have a fair chance to make up their minds about research participation, an influential statement recommended that publicly accessible repositories of previous benefit-sharing agreements be created.Footnote 32 This would provide a chance for stakeholders to assess the fairness of what they are offered and would support the procedural side of benefit sharing. Critics, however, have claimed that the principles and structures of transparency and fairness that the fair benefits approach supports might turn out to be an ‘ethical Trojan horse’.Footnote 33 The proposed auction-like model could make host communities compete against each other in offering services to global research contract organisations, turning benefit-sharing negotiations into ‘a race to the bottom’.Footnote 34 While the funders of non-profit research or even public–private partnerships could be held accountable for checking the fairness of the deals reached, much of for-profit research lacks such oversight structures.
15.5 Worries and Future Challenges
While benefit sharing is by now a relatively standard and well-established requirement regarding ethical research practices (especially in LMICs), I would like to draw attention to several critical points that problematise the appropriateness and scope of benefit sharing in research settings.
Some of the most discussed worries associated with sharing benefits with research participants concern the dangers of therapeutic misconception and undue inducement. Research has traditionally been about serving future generations and producing generalisable knowledge. Focus on benefiting research participants introduces the risk that they might volunteer because they expect research to benefit them directly. While research participants are often well cared for, this should not be mistaken for therapy.
Undue inducement concerns instances where benefit-sharing negotiations result in overly generous and disproportionate advantages to participants such that their ability to rationally weigh the benefits and harms of participation might be jeopardised. In the LMIC context, the local public health infrastructure might be minimal or lacking; clinical trials and other types of research often offer services that are not otherwise available. Access to medical services might motivate research participation and raise the potential of undue inducement. In these situations, a proper balance between potential risks and benefits is crucial to ensure fairness and to distinguish undue inducement from fair compensation.
A different kind of unease about the extensive employment of benefit-sharing language and practices in health research was voiced decades ago. Debates then revolved around benefit sharing as a side-effect of the unwelcome commercialisation of health research. Often focused on the patenting of the human genome,Footnote 35 the arguments ranged from the consequentialist (threats to scientific progress as commercialisation changes the altruistic motivation for scientific research) to the deontological (metaphysical dangers to the ‘ethical self-understanding of the species’Footnote 36). The worry was that benefit sharing as a conceptual framework had opened health research up to the vagaries of global commercial markets and had turned it into a shameless profit-driven activity, where the services of the participants were nothing but tradable commodities.
Over the past decade, we have grown used to the increasing prominence of for-profit health research. The noble idea of volunteering for research to support the project of science that may benefit humankind is neither easily applicable nor ethically acceptable in the context of global biomedical research, where powerful for-profit companies choose to do their research among possibly vulnerable populations in LMICs. While altruistic volunteering and even a gift-relationship dynamic might still be possible for health research within affluent and more sheltered communities, it would be distinctly unfair to insist on this rationale in other contexts. Even in developed countries, fierce battles regarding patenting and access to screening tests have taken place between those who contributed to research and those who were granted a patent (e.g. the Canavan disease controversy in the USA).
A different kind of worry is that if benefit sharing is motivated by the wider concerns of global justice (‘an effective way of helping people in LMIC’Footnote 37), then benefit-sharing practices and procedures are not well equipped to deal with the much larger and more complex challenges arising from global (and local) political, social and economic inequalities. Indeed, numerous funders have explicitly stated that too wide a scope for post-trial or benefit-sharing obligations (bordering on aid) is not to be required of investigators; some funders are, in fact, prohibited from funding healthcare provision. Furthermore, while it is clear that in many cases research is undertaken by for-profit companies who may go on to earn substantial benefits, there are also numerous trials and projects that do not translate into profits and may prove unsuccessful. Yet even such research constitutes valuable knowledge that is crucial to guide further research. The framework of benefit sharing as capacity-building gets around this challenge because it no longer focuses on, or depends on, tangible results, but rather on the cooperative aspects of research, where ‘negative’ results are also valuable for the local researchers involved.
Benefit sharing is an attempt to offer vulnerable and burdened communities a fair and well-earned chance to improve their situation. This means that benefit sharing can sometimes rightly be associated with tendencies to commodify relations and objects that, in a different world, would perhaps be guided by other, more altruistic and less monetised motives. Yet, from the perspective of LMICs, the dynamic of benefit-sharing logic over the past decades has enabled those countries themselves increasingly to have a say in steering benefit sharing. It should no longer be constrained by a particular research project, or be seen as a haphazard way of plugging holes in local scarcities by responding to the most desperate needs. Rather, benefit sharing is increasingly construed as a systematic tool within the wider project of collaboration, of taking control of one’s resources and setting one’s own research and health policies and priorities. In short, it is coming to be seen as crafting a space for a ‘lab of their own’.Footnote 38
Such an interpretation of benefit sharing frames it as part of a more general tendency of rethinking the function and practice of research and science in society. This has been visible, for example, in the European Commission’s funding guidelines. The requirement of transparency in setting research priorities, the democratising of science through involvement of various stakeholder groups (e.g. patients) in the early stages of research, and the rhetoric of responsible research and innovation are all instances of opening up research as a social practice, shifting away from a view of research as a boxed-up end-product. Perhaps some benefit-sharing partnerships might already be viewed as examples of such ‘power sharing’,Footnote 39 although one should remain cautious in terms of the concept’s ability to revolutionise health research around the globe.
Benefit sharing is not immune to the many changes happening in health research: learning healthcare systems are doing away with the once central distinctions between clinical and research ethics; multi-site research makes it difficult to assess the contributions of distinct locations and partners; and it is unclear what the relationship will be between benefit sharing and data sharing in the context of open data and the increased role of health-related data in health research. A certain flexibility has always been necessary for the successful implementation of benefit-sharing frameworks – the integration of universal ethical principles with particular research partnerships. That flexibility needs to continue to ensure that, at least as long as we live in an imperfect world of great inequalities, benefit sharing can successfully be integrated into the evolving practices of health research. Yet we need to be cautious about pinning too many hopes on that one framework.
Benefit sharing in health research is by now a well-established ethical requirement. There are a plethora of documents and established best practices to guide the researchers, funders and regulators, as well as communities and other stakeholders. The rationale for benefit sharing has evolved and continues to do so. Starting from the idea that individuals and communities taking certain risks and accepting potential harms deserve compensation and should not be exploited, we have now reached frameworks that view capacity-building and development support as one of the primary goals of research cooperation.
Benefit sharing is an activity that is grounded in potentially conflicting sets of justifications. While that might seem philosophically problematic (leading, e.g., to various inconsistencies and potentially contradictory duties), in pragmatic terms, detailed global agreements are not necessary. It is best to regard benefit sharing as a mandatory ethics frame(work) that is to be applied to all international research collaborations, as it highlights certain moral concerns and provides conceptual and governance resources for dealing with them. But the actual agreements need to be contracted by particular stakeholders, and the details of the planned research and the distinct context will determine which sets of concerns are paramount, which justifications make sense, what benefits are realistic, and who should be involved. There is a danger of relativism in such a governance framework, but only the combination of universal research norms with unique contextual components provides the sensitivity and flexibility that is needed for ethical health research as a collaborative enterprise.
1 Well-known examples of problematic research that motivated the international community to formulate a benefit-sharing framework were the Neem tree and Canavan disease controversies.
2 United Nations ‘Convention on Biological Diversity’, (United Nations, 1992).
3 Secretariat of the Convention on Biological Diversity, ‘Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity’, (United Nations Secretariat of the Convention on Biological Diversity, 2011).
4 Human Genome Organization Ethics Committee, ‘Genetic Benefit-Sharing’, (2000) Science, 290(5489), 49.
5 K. Simm, ‘Benefit-Sharing: An Inquiry Regarding the Meaning and Limits of the Concept in Human Genetic Research’, (2005) Genomics, Society and Policy, 1(2), 29–40.
6 E. J. Emanuel, ‘Benefits to Host Countries’ in E. J. Emanuel et al. (eds), The Oxford Textbook of Clinical Research Ethics (Oxford University Press, 2008), p. 722.
7 HUGO Ethics Committee, ‘Statement on Benefit-Sharing’, (Human Genome Organisation, 2000).
8 K. Simm, Benefit-Sharing: An Inquiry into Justification, PhD thesis, Tartu University, (2005).
9 D. Schroeder, ‘Benefit-Sharing: It’s Time for a Definition’, (2007) Journal of Medical Ethics, 33(4), 205–209.
10 K. Simm, ‘Benefit-Sharing: A Look at the History of an Ethics Concern’, (2007) Nature Reviews Genetics, 8(7), 496.
11 E. Tsioumani, ‘Beyond Access and Benefit-Sharing: Lessons from the Law and Governance of Agricultural Biodiversity’, (2018) The Journal of World Intellectual Property, 21(3–4), 106–122.
12 C. D. DeAngelis, ‘Conflict of Interest and the Public Trust’, (2000) JAMA, 284(17), 2237–2238.
13 Council for International Organizations of Medical Sciences (CIOMS), ‘International Ethical Guidelines for Biomedical Research Involving Human Subjects’, (CIOMS, 1993), 2nd version.
14 CIOMS, ‘International Ethical Guidelines for Health-Related Research Involving Humans’, (CIOMS, 2016), 4th edition.
15 An early example of national regulation on benefit-sharing comes from the Canadian provinces of Newfoundland and Labrador. E.g. D. Pullman and A. Latus, ‘Benefit-Sharing in Smaller Markets: The Case of Newfoundland and Labrador’, (2003) Community Genetics, 6(3), 178–181.
16 European Group on Ethics in Science and New Technologies to the European Commission, ‘Opinion on Ethical Aspects of Clinical Research in Developing Countries’, (European Group on Ethics in Science and New Technologies to the European Commission, 2003); Nuffield Council on Bioethics, ‘The Ethics of Research Related to Healthcare in Developing Countries’, (Nuffield Council on Bioethics, 2002); US National Bioethics Advisory Commission (NBAC), ‘Ethical and Policy Issues in International Research: Clinical Trials in Developing Countries: Report and Recommendations of the National Bioethics Advisory Commission’, (Rockville, MD: NBAC, 2001), Vol. 1; Wellcome Trust, ‘Research Involving People Living in Developing Countries: Position Statement and Guidance Notes for Applicants’, (Wellcome), www.wellcome.ac.uk/funding/guidance/guidance-notes-research-involving-people-low-and-middle-income-countries.
17 World Health Organization, ‘Handbook for Good Clinical Research Practice’, (WHO, 2002); WMA, ‘Declaration of Helsinki’, (WMA, 2000); UNESCO, ‘Universal Declaration on Bioethics and Human Rights’, (UNESCO, 2005).
18 P. Andanda et al., ‘Legal Frameworks for Benefit-Sharing: From Biodiversity to Human Genomics’ in D. Schroeder and J. Cook Lucas (eds), Benefit-sharing. From Biodiversity to Human Genetics (Springer, 2013), pp. 33–64.
19 B. Dauda and K. Dierickx, ‘Benefit-Sharing: An Exploration on the Contextual Discourse of a Changing Concept’, (2013) BMC Medical Ethics, 14(1), 36.
20 D. Schroeder et al., ‘Global Code of Conduct for Research in Resource-Poor Settings’ (GlobalCodeofConduct), www.globalcodeofconduct.org/.
21 WMA, ‘Declaration of Helsinki’.
22 E.g. J. M. Lavery, ‘The Obligation to Ensure Access to Beneficial Treatments for Research Participants at the Conclusion of Clinical Trials’ in E. J. Emanuel et al. (eds), The Oxford Textbook of Clinical Research Ethics (Oxford University Press, 2008), pp. 697–708; A. K. Page, ‘Prior Agreements in International Clinical Trials: Ensuring the Benefits of Research to Developing Countries’, (2002) Yale Journal of Health Policy, Law and Ethics, 3(1), 35–66.
23 Participants in the 2001 Conference on Ethical Aspects of Research in Developing Countries, ‘Moral Standards for Research in Developing Countries: From “Reasonable Availability” to “Fair Benefits”’, (2004) Hastings Center Report, 34(3), 17–27; Emanuel, ‘Benefits to Host Countries’, p. 723.
24 Participants, ‘Moral Standards’, 2004.
25 A. J. London and K. J. S. Zollmann, ‘Research at the Auction Block: Problems for the Fair Benefits Approach to International Research’, (2010) Hastings Center Report, 40(4), 36.
26 E.g. F. Mutapi, ‘Africa Should Set Its Own Health-Research Agenda’, (2019) Nature, 575(7784), 567.
27 B. Dauda and S. Joffe, ‘The Benefit-Sharing Vision of H3Africa’, (2018) Developing World Bioethics, 18(2), 165–170.
28 Participants, ‘Moral Standards’, 2004.
29 J. Cook Lucas and F. A. Castillo, ‘Fair for Women? A Gender Analysis of Benefit-Sharing’ in D. Schroeder and J. Cook Lucas (eds), Benefit-Sharing. From Biodiversity to Human Genetics (Springer, 2013), pp. 129–152.
30 C. Hayden, ‘Taking as Giving: Bioscience, Exchange, and the Politics of Benefit-Sharing’, (2007) Social Studies of Science, 37(5), 729–758.
31 S. Gbadegesin and D. Wendler, ‘Protecting Communities in Health Research from Exploitation’, (2006) Bioethics, 20(5), 252.
32 Participants, ‘Moral Standards’, 2004.
33 London and Zollmann, ‘Research at the Auction Block’, 44.
34 Ibid., 41.
35 R. Chadwick and A. Hedgecoe, ‘Commercialisation of the Human Genome’ in J. Burley and J. Harris (eds), A Companion to Genethics (Oxford: Blackwell, 2004), pp. 334–345.
36 J. Habermas, The Future of Human Nature (Cambridge: Polity, 2003), p. 71.
37 London and Zollmann, ‘Research at the Auction Block’, 37.
38 R. Benjamin, ‘A Lab of Their Own: Genomic Sovereignty as Postcolonial Science Policy’, (2009) Policy and Society, 28(4), 341–355.
39 D. E. Winickoff, ‘From Benefit-Sharing to Power Sharing: Partnership Governance in Population Genomics Research’ in J. Kaye and M. Stranger (eds), Principles and Practice in Biobank Governance (Routledge, 2016), pp. 53–65.
Failure in health research regulation is nothing new. Indeed, the regulation of clinical trials was developed in response to the Thalidomide scandal, which occurred some fifty years ago.Footnote 1 Yet, health research regulation is at the centre of recent failures.Footnote 2 Metal-on-metal hip replacements,Footnote 3 and, more recently, mesh implants for urinary incontinence and pelvic organ prolapse in women – often referred to as ‘vaginal mesh’ – have been the subject of intense controversy.Footnote 4 Some have even called the latter controversy ‘the new Thalidomide’.Footnote 5 In these cases, previously licensed medical devices were used to demonstrate the safety of supposedly analogous new medical devices, and obviate the need for health research involving humans.Footnote 6
In this chapter, I use health research regulation for medical devices to examine the regulatory framing of harm through the language of technological risk, i.e. relating to safety. My overall argument is that reliance on this narrow discourse of technological risk in the regulatory framing of harm may marginalise stakeholder knowledges of harm and produce a limited knowledge base. This limited knowledge base may underlie harm and, in turn, lead to the construction of failure.
I understand failure itself in terms of this framing of harm.Footnote 7 Failure is taken to be ontologically and normatively distinct from harm, and as implicating the design and functioning of the system or regime itself. Failure is understood as arising when harm is deemed to thwart expectations of safety built into technological framings of regulation. This usually occurs from stakeholder perspectives. Stakeholders include research participants, patients and other interested parties. However, the new force of failure in public discourse and regulation,Footnote 8 apparent in the way it ‘now saturates public life’,Footnote 9 ensures that the language of failure provides a means to integrate stakeholder knowledges of harm with scientific-technical knowledges.
In the next section, I use health research relating to medical devices to reflect on the role of expectations and harm in constructing failure. This sets the scene for the third section, where I outline the roots of failure in the knowledge base for regulation. Subsequently, I explain how the normative power of failure may be used to impel the integration of expert and stakeholder knowledges, improving the knowledge base and, in turn, providing a better basis on which to anticipate and prevent future failures. The chapter thus appreciates how failure can amount to a ‘failure of foresight’, which may mean it is possible to ‘organise’ failure and the harm it describes out of existence.Footnote 10
Failure has long been understood, principally though not exclusively, in Kurunmäki and Miller’s words, ‘as arising from risk rather than sin’.Footnote 11 Put differently, failure can be understood in principally consequentialist, rather than deontological, terms.Footnote 12 This understanding does not exclude legal conceptualisations of failure in tort law and criminal law, in which the conventional idea of liability is one premised on ‘sin’ or causal contribution.Footnote 13 However, within contemporary society and regulation, such deontological understandings are often overlaid with a consequentialist view of failure.Footnote 14
This is apparent in recent work by Carroll and co-authors. Through their study of material objects and failure, they describe failure as ‘a situation or thing as [sic] not being in accord with expectation’.Footnote 15 According to van Lente and Rip, expectations amount to ‘prospective structures’ that inform ‘statements, brief stories and scenarios’.Footnote 16 It is expectation, rather than anticipation or hope, then, that is central to failure. Unlike expectation, anticipation and hope do not provide a sense of how things ought to be, so much as how they could be, or how an individual or group would like them to be.Footnote 17 Indeed, as Bryant and Knight explain: ‘We expect because of what the past has taught us to expect … [Expectation] awakens a sense of how things ought to be, given particular conditions’.Footnote 18
This normative dimension distinguishes expectation from other future-oriented concepts and furnishes ‘a standard for evaluation’, for whether a situation is ‘good or bad, desirable or undesirable’,Footnote 19 and, relatedly, a failure. Indeed, for Appadurai ‘[t]he most important thing about failure is that it is not a fact but a judgment’.Footnote 20 Expectations rely on the past to inform a normative view of some future situation or thing, such as that it will be safe. When, through the application of calculative techniques that determine compliance with the standard for evaluation, this comes to be seen as thwarted, there is a judgment of failure.Footnote 21 Expectations, and hence a key ground for establishing failure, are built into regulatory framingsFootnote 22 and the targets of regulation.Footnote 23
These insights can be applied and developed through the example of health research regulation for medical devices. In this instance, technological risk, i.e. safety, provides the framing for medical devices within the applicable legislation and engenders an expectation of safety.Footnote 24 However, in respect of metal-on-metal hips and vaginal mesh, harm occurred, and the expectation of safety was thwarted downstream once these medical devices were in use.
Harm was consequent, seemingly in large part, on the classification of metal-on-metal hips and vaginal mesh as Class IIb devices. Class IIb devices are medium-to-high-risk devices, usually those installed within the body for thirty days or longer. This classification meant that manufacturers could rely on substantial equivalence to existing products to demonstrate conformity with general safety and performance requirements. These requirements set expectations for manufacturers and regulators to demonstrate safety, both for the device and for the person within whom it is implanted. Substantial equivalence obviates the need for health research involving humans via a clinical investigation.
It is noted in one BMJ editorial that this route ‘failed to protect patients from substantial harm’.Footnote 25 Heneghan et al. point out that in respect of approvals by the Food and Drug Administration in the USA, which are largely mirrored in the European Union (EU): ‘Transvaginal mesh products for pelvic organ prolapse have been approved on the basis of weak evidence over the last 20 years’.Footnote 26 This study traced the origins of sixty-one surgical mesh implants to just two original devices approved in the USA in 1985 and 1996. The reliance on substantial equivalence meant that safety and performance data came from implants that were already on the market, sometimes for decades, and that were no longer an accurate predicate. In other words, on the basis of past experience – specifically, of ‘substantially equivalent’ medical devices – there was an unrealistic expectation that safety would be ensured through this route, and that further research involving human participants was unnecessary.
Stakeholders reported adverse events including: ‘Pain, impaired mobility, recurrent infections, incontinence/urinary frequency, prolapse, fistula formation, sexual and relationship difficulties, depression, social withdrawal or exclusion/loneliness and lethargy’.Footnote 27 On this basis, stakeholders, including patient groups, demanded regulatory change. Within the EU, new legislation was introduced, largely in response to these events. The specific legislation applicable to the examples considered in this chapter, the Medical Devices Regulation (MDR),Footnote 28 came into force on 26 May 2020 (Article 123(2) MDR).
In respect of metal-on-metal hips and vaginal mesh, the legislation reclassifies them as Class III. Class III devices are high-risk and invasive long-term devices. Future manufacturers of these devices will, in general, have to carry out clinical investigations to demonstrate conformity with regulatory requirements (Recital 63 MDR). The EU’s new legislation devotes a whole chapter to clinical investigations, and thus to safety. The legislation is deemed to provide a ‘fundamental revision’ to ‘establish a robust, transparent, predictable and sustainable regulatory framework for medical devices which ensures a high level of safety and health whilst supporting innovation’ (Recital 1 MDR). One interpretation of the legislation is that it is a direct response to problems in health research for medical devices, and intended to provide ‘a better guarantee for the safety of medical devices, and to restore the loss of confidence that followed high profile scandals around widely used hip, breast, and vaginal mesh devices’.Footnote 29
As regards metal-on-metal hips and vaginal mesh, however, there has been little or no acknowledgement of failure by those formally responsible – those who might be held accountable if failure were admitted, perhaps especially if any plausible causal contribution by them towards harm could be established. Instead, the example of medical devices demonstrates how the construction of failure does not necessarily hinge on official accounts of harm as amounting to ‘failure’. This is apparent in the various quotations from non-regulators noted above. As Hutter and Lloyd-Bostock put it, these are ‘terms in which events are construed or described in the media or in political discourse or by those involved in the event’. As they continue, what matters is an ‘event’s construction, interpretation and categorisation’.Footnote 30
Failure is an interpretation and judgment of harm. Put differently, ‘failure’ arises through an assessment of harm undertaken through calculative techniques and judgments. Harm becomes refracted through these. At a certain point, the expectations of safety built into framing are understood by stakeholders as thwarted, and the harm becomes understood as a failure.Footnote 31 Official discourses are significant, not least because they help to set expectations of safety. But these discourses do not necessarily control stakeholder interpretations and knowledge of harm, or how they thwart expectations of safety, and lead to the construction of failure.Footnote 32
In what follows, I shift attention to the lacunae and blind spots in the knowledge base for the regulation of medical devices, which are made apparent by the harm and failure just described. I outline these missing elements before turning to discuss the significance of failure for improving health research regulation.
16.3 Using Failure to Address the Systemic Causes of Harm
Failure, at its root, emerges from the limited knowledge base for health research regulation: for medical devices, and other areas framed by technological risk, it is derived from an archive of past experience and scientific-technical knowledge. The focus on performance (i.e. the device performs as designed and intended, in line with a predicate) marginalised attention to effectiveness (i.e. producing a therapeutic benefit) and patient knowledge on this issue. Moreover, in relation to vaginal mesh implants, female knowledges and lived experiences of the devices implanted within them have tended to be sidelined or even overlooked. The centrality of the male body within research and models of pain, and gender-based presumptions about pain,Footnote 33 help to explain the time taken to recognise a safety problem in respect of medical devices, and the gaping hole in research and knowledge.
Another part of the explanation for the latter problem is that there was a lengthy delay in embodied knowledge and experiences of pain being reported and recognised – effectively sidelining and ignoring those experiences. New guidance on vaginal mesh in the United Kingdom (UK) has faced criticism on gender-based lines. Safety concerns are cited and it is recommended that vaginal mesh should not be used to treat vaginal prolapse. However, as the UK Parliament’s All Party Parliamentary Group on Surgical Mesh Implants said, the guidelines: ‘disregard mesh-injured women’s experiences by stating that there is no long-term evidence of adverse effects’.Footnote 34
The latter may amount to epistemic injustice, what Fricker describes as a ‘wrong done to someone specifically in their capacity as a knower’.Footnote 35 More than a harm in itself, epistemic injustice may limit stakeholders’ ability to contribute towards regulation, leading to other kinds of harm and failure. This is especially true in the case of health research regulation, where stakeholders may be directly or indirectly harmed by practices and decisions that are grounded on a limited knowledge base. Moreover, even in respect of the EU’s new legislation on medical devices, doubts remain as to whether it will prevent future harms, and thus failures, similar to those mentioned above. Indeed, the only medical devices that are required to evidence therapeutic benefit or efficacy in controlled conditions before marketing are those that incorporate medicinal products.Footnote 36
A deeper explanation for the marginalisation of stakeholder knowledges of harm, and a key underpinning for failure, lies in the organisation of knowledge production. Hurlbut describes how: ‘Framed as epistemic matters – that is, as problems of properly assessing the risks of novel technological constructions – problems of governance become questions for experts’.Footnote 37 This framing constructs a hierarchy of knowledge that privileges credentialised knowledge and expertise, while marginalising those deemed inexpert or ‘lay’. Bioethics plays a key role here. As a field, bioethics tends to focus on technological development within biomedicine and principles of individual ethical conduct or so-called ‘quandary ethics’, rather than systemic issues related to epistemic – or social – justice. Consequently, bioethics often privileges and bolsters scientific–technical knowledge, erases social context and renders ‘social’ elements as little more than ‘epiphenomena’.Footnote 38 In this setting, stakeholder knowledges and forms of expertise relating to harm are, as Foucault explained, ‘disqualified … [as] naïve knowledges, hierarchically inferior knowledges, knowledges that are below the required level of erudition or scientificity’.Footnote 39
The specific contemporary cultural resonance of the language of failure means that it can be used as a prompt to overcome this marginalisation and improve the knowledge base for regulation. Specifically, the language of failure can be used to generate a risk to organisational standing and reputation. Adverse public perceptions may cast failure as regulatory failure, effectively framing regulators as ‘part of the cause of disasters and crises’.Footnote 40 A perception of regulatory failure thus has key implications for the accountability and legitimacy of regulation and regulators – and such perception is therefore to be avoided by them. Relatedly, regulators want to avoid the shaming and blaming that often accompany talk of failure. Blaming can even amplifyFootnote 41 or extend the duration of an institutional risk to standing and reputation. This may produce a crisis for regulation, including for its legitimacy, quite apart from any interpretation and judgment of failure or regulatory failure.
The risk posed by failure to standing and reputation may prompt the integration of stakeholder knowledges with the scientific–technical knowledges that currently underpin regulation. The potential to use failure in this way is already apparent in the examples above, and perhaps especially vaginal mesh. Stakeholders have been largely successful in presenting their knowledges of harm, placing a spotlight on health research regulation and demanding change to prevent future failure.
Despite the limitations within much bioethics scholarship, there is a growing body of approaches to injustice, most recently and notably vulnerability, within which embodied risk and experiential knowledge are central.Footnote 42 These approaches are buttressed by a developing scientific understanding of the significance of environmental factors to genetic predisposition to vulnerability and embodied risk.Footnote 43 Further, within such approaches, the centrality of the human body and experience is foregrounded precisely to recast the objects of bioethical concern. The goal: to prompt a response from the state to fulfil its responsibilities in respect of rights.Footnote 44 In the context of health research, this scholarship can be leveraged to counter the lack of alertness and communicative failures for which institutions and powerful people must take responsibility,Footnote 45 and to expand the knowledges that count in regulation.
There are mechanisms to facilitate the integration of stakeholder with scientific–technical knowledges and improve health research for medical devices. Further attention to effectiveness could yield important additional data (i.e. on producing a therapeutic benefit) on top of performance (i.e. the device performs as designed and intended). Similar to clinical trials for medicines, which produce data to demonstrate safety, quality and efficacy, this would require far more involvement and data from device recipients. Recipient involvement and data could come pre- or post-marketing – or both. Involvement pre-marketing seems both desirable and possible:
The manufacturers’ argument that [randomised controlled trials] are often infeasible and do not represent the gold standard for [medical device] research is clearly refuted. As high-quality evidence is increasingly common for pre-market studies, it is obviously worthwhile to secure these standards through the [Medical Devices Regulation] in Europe and similar regulations in other countries.Footnote 46
One proposed model for long-term implantable devices, such as those discussed in this chapter, involves providing limited access to them through temporary licences that restrict use to within clinical evaluations, with long follow-up at a minimum of five years. Wider access could be provided once safety, performance and efficacy have been adequately demonstrated. In addition, wider public access to medical device patient registries, including the EU’s Eudamed database, could be provided so as to ensure transparency, open up public discourse around safety and tackle epistemic injustice.Footnote 47
In this chapter, I have described how failure is constructed and becomes recognised through processes that determine whether harm has thwarted the expectation of safety built into technological framings of regulation. Laurie is one of the few scholars to illuminate not only how health research regulation transforms its participants into instruments, but also how this may underlie failure:
if we fail to see involvement in health research as an essentially transformative experience, then we blind ourselves to many of the human dimensions of health research. More worryingly, we run the risk of overlooking deeper explanations about why some projects fail and why the entire enterprise continues to operate sub-optimally.Footnote 48
By looking at the organisation of knowledge that supports regulatory framings of medical devices, it becomes clear how the marginalisation of stakeholder knowledge may provide a deeper explanation for harm and failure. Failure can be used to prompt the take-up of stakeholder knowledges of harm in regulation, by recasting regulation or using its mechanisms differently in light of those knowledges, so as to better anticipate and prevent future harm and failure, and enable success. On users’ experiences, see further Harmon, Chapter 39, this volume.
Why, then, has more not been done to ensure epistemic integration as a way to enhance regulatory capacities to anticipate and prevent failure? Epistemic integration would involve bringing stakeholders within regulation via their knowledges. As such, epistemic integration would seem to undermine the dominant position of those deemed expert within extant processes. Knowledge of harm becomes re-problematised: what knowledges from across society are required by regulation in order to ensure its practices are ethical and legitimate? Integration of diverse knowledges might reveal to society at large the limits of current regulation to deal with risk and uncertainty. More deeply, epistemic integration would challenge modernist values on the import of empirically derived knowledge, and the efficacy of society’s technological ‘fixes’ in addressing its problems. However, scientific–technical knowledge and expertise would still be necessary in order to discipline ‘lay’ knowledges and ensure their integration within the epistemic foundations of decision-making. To resist epistemic integration is, therefore, essentially to bolster extant power relations. As the analysis in this chapter suggests, these relations are actually antithetical to addressing failure and maintaining the protections that are central to ethical and legitimate health research and regulation more generally.
* Many thanks to all those with whom I have discussed the ideas set out in this chapter, especially the editors and Ivanka Antova, Richard Ashcroft, Daithi Mac Sithigh, Katharina Paul and Barbara Prainsack. The discussion in this chapter is developed further in: Mark L Flear, ‘Epistemic Injustice as a Basis for Failure? Health Research Regulation, Technological Risk and the Epistemic Foundations of Harm and Its Prevention’, (2019) European Journal of Risk Regulation 10(4), 693–721.
1 In the United Kingdom, the scandal resulted in the Medicines Act 1968 and its related licensing authority. See E. Jackson, Law and the Regulation of Medicines (London: Hart Publishing, 2012), pp. 4–5.
2 Relatedly, see S. Macleod and S. Chakraborty, Pharmaceutical and Medical Device Safety (London: Hart Publishing, 2019).
3 C. Heneghan et al., ‘Ongoing Problems with Metal-On-Metal Hip Implants’, (2012) BMJ, 344(7846), 23–24.
4 See the articles comprising ‘The Implant Files’, (The Guardian), www.theguardian.com/society/series/the-implant-files.
5 H. Marsden, ‘Vaginal Mesh to Treat Organ Prolapse Should Be Suspended, Says UK Health Watchdog’, (The Independent, 15 December 2017).
6 The famous Poly Implant Prothèse silicone breast implants scandal concerned fraud rather than the kinds of problems with health research regulation discussed in this chapter – see generally C. Greco, ‘The Poly Implant Prothèse Breast Prostheses Scandal: Embodied Risk and Social Suffering’, (2015) Social Science and Medicine, 147, 150–157; M. Latham, ‘“If It Ain’t Broke Don’t Fix It”: Scandals, Risk and Cosmetic Surgery’, (2014) Medical Law Review, 22(3), 384–408.
7 This may extend beyond physical harm to social harm, environmental harm ‘and so on’ – see R. Brownsword, Rights, Regulation and the Technological Revolution (Oxford University Press, 2008), p. 119. Also see pp. 102–105.
8 For definition of ‘regulation’ see the Introduction to this volume.
9 L. Kurunmäki and P. Miller, ‘Calculating Failure: The Making of a Calculative Infrastructure for Forgiving and Forecasting Failure’, (2013) Business History, 55(7), 1100–1118, 1100. Emphasis added. More broadly, for comment on the ‘stream of failures’ since the 1990s, see M. Power, Organised Uncertainty (Oxford University Press, 2007), p. 5.
10 B. Turner, Man-Made Disasters (Wykeham, 1978). For application to organisations, see B. Hutter and M. Power (eds), Organisational Encounters with Risk (Cambridge University Press, 2005), p. 1. Some failures are ‘normal accidents’ and cannot be organised out of existence – see C. Perrow, Normal Accidents: Living with High-Risk Technologies (New York: Basic Books, 1984).
11 Kurunmäki and Miller, ‘Calculating Failure’, 1101. Emphasis added.
12 For discussion, see R. Brownsword and M. Goodwin, Law and the Technologies of the Twenty-First Century: Text and Materials (Cambridge University Press, 2012), p. 208.
13 Indeed, Poly Implant Prothèse silicone breast implants and vaginal mesh have been the subject of litigation – for discussion of each see, Macleod and Chakraborty, Pharmaceutical and Medical Device Safety, pp. 232–234 and pp. 259–263, respectively. For a recent case on vaginal mesh involving a class action against members of the Johnson & Johnson group in which the court found in favour of the claimants, see Gill v. Ethicon Sarl (No. 5) FCA 1905.
14 A. Appadurai, ‘“Introduction” to Special Issue on “Failure”’, (2016) Social Research, 83(3), xx–xxvii.
15 T. Carroll et al., ‘Introduction: Towards a General Theory of Failure’ in T. Carroll et al. (eds), The Material Culture of Failure: When Things Go Wrong (Bloomsbury, 2018), pp. 1–20, p. 15. Emphasis added.
16 H. van Lente and A. Rip, ‘Expectations in Technological Developments: An Example of Prospective Structures to be Filled in by Agency’ in C. Disco and B. van der Meulen (eds), Getting New Technologies Together: Studies in Making Sociotechnical Order (Berlin: De Gruyter, 1998), p. 205.
17 R. Bryant and D. Knight, The Anthropology of the Future (Cambridge University Press, 2019), p. 28 for anticipation and p. 134 for hope.
18 Ibid., p. 58. Emphasis added.
19 Ibid., p. 63.
20 Appadurai, ‘Introduction’, p. xxi. Emphasis added. Also see A. Appadurai, Banking on Words: The Failure of Language in the Age of Derivative Finance (University of Chicago Press, 2016).
21 Beckert lists past experience among the social influences on expectations – see J. Beckert, Imagined Futures: Fictional Expectations and Capitalist Dynamics (Cambridge, MA: Harvard University Press, 2016), p. 91.
22 Brownsword, Rights, Regulation and the Technological Revolution; K. Yeung, ‘Towards an Understanding of Regulation by Design’ in R. Brownsword and K. Yeung (eds), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (London: Hart Publishing, 2008), pp. 79–107.
23 T. Dant, Materiality and Society (Open University Press, 2005); D. MacKenzie and J. Wajcman (eds), The Social Shaping of Technology, 2nd Edition (Buckingham: Open University Press, 1999); L. Winner, ‘Do Artefacts Have Politics?’, (1980) Daedalus, 109(1), 121–136.
24 Medical devices are defined by their intended function, as determined by the manufacturer, for medical purposes – see Article 2(1) of the Medical Devices Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No. 178/2002 and Regulation (EC) No. 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC OJ 2017 L 117/1. On the classification of medical devices, see Point 1.3, Annex VIII.
25 C. Allan et al., ‘Europe’s New Device Regulations Fail to Protect the Public’, (2018) BMJ, 363, k4205, 1.
26 C. J. Heneghan et al., ‘Trials of Transvaginal Mesh Devices for Pelvic Organ Prolapse: A Systematic Database Review of the US FDA Approval Process’, (2017) BMJ Open, 7(12), e017125, 1. Emphasis added.
27 Macleod and Chakraborty, Pharmaceutical and Medical Device Safety, p. 238.
28 Medical Devices Regulation (EU) 2017/745. Implementation of this legislation is left to national competent authorities.
29 Allan et al., ‘Europe’s New Device Regulations’, 1. Emphasis added.
30 B. Hutter and S. Lloyd-Bostock, Regulatory Crisis: Negotiating the Consequences of Risk, Disasters and Crises (Cambridge University Press, 2017), p. 3. On understandings of failure, see S. Firestein, Failure: Why Science Is So Successful (Oxford University Press, 2016), pp. 8–9.
31 Kurunmäki and Miller, ‘Calculating Failure’, 1101. Cf I. Hacking, Historical Ontology (Cambridge, MA: Harvard University Press, 2002) – applied in e.g. B. Allen, ‘Foucault’s Nominalism’ in S. Tremain (ed.), Foucault and the Government of Disability (University of Michigan Press, 2018); D. Haraway, The Haraway Reader (New York: Routledge, 2004); D. Roberts, ‘The Social Immorality of Health in the Gene Age: Race, Disability and Inequality’ in J. Metzl and A. Kirkland (eds), Against Health (New York University Press, 2010), pp. 61–71.
32 Kurunmäki and Miller, ‘Calculating Failure’, 1101. Cf Hutter and Lloyd-Bostock, Regulatory Crisis, pp. 9–18 and pp. 19–21 for framing and routines.
33 See, for example, R. Hurley and M. Adams, ‘Sex, Gender and Pain: An Overview of a Complex Field’, (2008) Anesthesia & Analgesia, 107(1), 309–317. Also see M. Fox and T. Murphy, ‘The Body, Bodies, Embodiment: Feminist Legal Engagement with Health’ in M. Davies and V. E. Munro (eds), The Ashgate Research Companion to Feminist Legal Theory (London: Ashgate, 2013), pp. 249–265.
34 National Institute for Health and Care Excellence (NICE), ‘Urinary Incontinence and Pelvic Organ Prolapse in Women: Management, NICE Guideline [NG123]’, (NICE, 2019). This guidance was issued in response to the NHS England Mesh Working Group – see ‘Mesh Oversight Group Report’, (NHS England, 2017). Also see ‘Mesh Working Group’, (NHS), www.england.nhs.uk/mesh/. For criticism, see H. Pike, ‘NICE Guidance Overlooks Serious Risks of Mesh Surgery’, (2019) BMJ, 365, l1537.
35 M. Fricker, Epistemic Injustice: Power and the Ethics of Knowing (Oxford University Press, 2007), p. 1. Emphasis added. Also see I. J. Kidd and H. Carel, ‘Epistemic Injustice and Illness’, (2017) Journal of Applied Philosophy, 34(2), 172–190.
36 For discussion, see C. J. Heneghan et al., ‘Transvaginal Mesh Failure: Lessons for Regulation of Implantable Devices’, (2017) BMJ, 359, j5515.
37 J. B. Hurlbut, ‘Remembering the Future: Science, Law, and the Legacy of Asilomar’ in S. Jasanoff and S. Kim, Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (University of Chicago Press, 2015), p. 129. Original emphasis.
38 On ‘quandary ethics’, see P. Farmer, Pathologies of Power: Health, Human Rights, and the New War on the Poor (University of California, 2003), pp. 204–205. Also see D. Callahan, ‘The Social Sciences and the Task of Bioethics’, (1999) Daedalus, 128(4), 275–294, 276. On bioethics and social context, see J. Garrett, ‘Two Agendas for Bioethics: Critique and Integration’, (2015) Bioethics, 29(6), 440–447; A. Hedgecoe, ‘Critical Bioethics: Beyond the Social Science Critique of Applied Ethics’, (2004) Bioethics, 18(2), 120–143, 125. Also see B. Hoffmaster (ed.), Bioethics in Social Context (Philadelphia: Temple University Press, 2001).
39 M. Foucault, Society Must Be Defended (London: Penguin Books, 2004), p. 7.
40 Hutter and Lloyd-Bostock, Regulatory Crisis, p. 8. Emphasis added. For discussion, see M. Lodge, ‘The Wrong Type of Regulation? Regulatory Failure and the Railways in Britain and Germany’, (2002) Journal of Public Policy, 22(3), 271–297; R. Schwartz and A. McConnell, ‘Do Crises Help Remedy Regulatory Failure? A Comparative Study of the Walkerton Water and Jerusalem Banquet Hall Disasters’, (2009) Canadian Public Administration, 52(1), 91–112.
41 For discussion, see A. Boin et al. (eds), The Politics of Crisis Management: Public Leadership Under Pressure (Cambridge University Press, 2005); C. Hood, The Blame Game: Spin, Bureaucracy, and Self-Preservation in Government (Princeton University Press, 2011); N. Pidgeon et al., The Social Amplification of Risk (Cambridge University Press, 2003).
42 M. Fineman, ‘The Vulnerable Subject and the Responsive State’, (2010) Emory Law Journal, 60(2), 251–275. Also see work on: precarity (J. Butler, Precarious Life: The Power of Mourning and Violence (London: Verso, 2005)); the capabilities approach (M. Nussbaum, Creating Capabilities (Cambridge, MA: Harvard University Press, 2011); A. Sen, ‘Equality of What?’ in S. McMurrin (ed.), Tanner Lectures on Human Values, Volume 1 (Cambridge University Press, 1980), pp. 195–220); and a feminist approach to flesh (C. Beasley and C. Bacchi, ‘Envisaging a New Politics for an Ethical Future: Beyond Trust, Care and Generosity – Towards an Ethic of Social Flesh’, (2007) Feminist Theory, 8(3), 279–298).
43 This includes understanding in epigenetics and neuroscience – see N. Rose and J. Abi-Rached, Neuro: The New Brain Sciences and the Management of the Mind (Princeton University Press, 2013); D. Wastell and S. White, Blinded by Science: The Social Implications of Epigenetics and Neuroscience (Bristol: Policy Press, 2017).
44 Most notably, see Fineman, ‘The Vulnerable Subject’. For application to bioethics, see M. Thomson, ‘Bioethics & Vulnerability: Recasting the Objects of Ethical Concern’, (2018) Emory Law Journal, 67(6), 1207–1233.
45 For discussion, see A. Boin et al. (eds), The Politics of Crisis Management, especially p. 215 and p. 218. This responsibility is grounded in virtue theory. For discussion see Fricker, Epistemic Injustice.
46 S. Sauerland et al., ‘Premarket Evaluation of Medical Devices: A Cross-Sectional Analysis of Clinical Studies Submitted to a German Ethics Committee’, (2019) BMJ Open, 9(2), 6. Emphasis added. For a review of approaches to the collection of data, see D. B. Kramer et al., ‘Ensuring Medical Device Effectiveness and Safety: A Cross-National Comparison of Approaches to Regulation’, (2014) Food and Drug Law Journal, 69(1), 1–23. The EU’s new legislation on medical devices has sought to improve inter alia post-marketing data collection, such as through take-up of the Unique Device Identification. This is used to mark and identify medical devices within the supply chain. For discussion of this and other aspects of the EU’s new legislation, see A. G. Fraser et al., ‘The Need for Transparency of Clinical Evidence for Medical Devices in Europe’, (2018) Lancet, 392(10146), 521–530.
47 On licensing, see Heneghan et al., ‘Transvaginal Mesh Failure’. Also see B. Campbell et al., ‘How Can We Get High Quality Routine Data to Monitor the Safety of Devices and Procedures?’, (2013) BMJ, 346(7907), 21–22. On access to data, see M. Eikermann et al. (signatories of an open letter to the European Union), ‘Europe Needs a Central, Transparent, and Evidence Based Regulation Process for Devices’, (2013) BMJ, 346, f2771; Fraser et al., ‘The Need for Transparency’.
48 G. Laurie, ‘Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces In-Between?’ (2016) Medical Law Review, 25(1), 47–72, 71. Emphasis added.
In this chapter I consider some important implications of adopting rules, principles and supplementary guidance-based approaches to the regulation and governance of health research. This topic has not yet received sufficient attention, given the impact different regulatory approaches can have on health research. I suggest that each approach has strengths and limitations to be factored in when considering how we shape health research practices. I argue that while principles-based approaches can be well suited to typically complex health research landscapes, additional guidance is often required. I explore why this is so, highlighting in particular the added value of best practice, and noting that incorporating additional guidance within regulatory approaches demands its own important considerations, which are laid out in the final section.
17.2 The Significance of Regulatory Approaches
Determining which regulatory/governance approach (RGA)Footnote 1 to adopt is a recurring predicament spanning the diverse spectrum of health research activities. For example, a key challenge concerning emerging technologies is regulatory lapse – law’s inability to keep up with the fast pace of technological development and adoption.Footnote 2 Novel practices/technologies may be subsumed under pre-existing frameworks through processes of commensuration such as legislative analogy. Alternatively, it may be determined that entirely new frameworks are required.Footnote 3 Pre-existing frameworks may be too rigid and restrictive, or conversely, overly flexible and permissive.Footnote 4 Content included within RGAs may deviate substantially from what takes place ‘on the ground’, raising problems for those charged with interpreting regulation, leading to theory-practice gaps.Footnote 5 Similarly, current approaches may fail to reflect embodied experiences of the subjects affected by regulation.Footnote 6
Universally, then, important questions arise relating to what form RGAs should take. Should they manifest as specific prescriptive norms, which often appear in the form of rules? Or would high-level and more abstract norms, such as those typically communicated through principles, be more effective? Is additional guidance needed alongside rules and principles? If so, what form should this take? Each approach can have repercussions for the patients, researchers, regulators, developers, manufacturers, technologies and other key actors, subjects and objects constituting health research ecosystems. It is imperative that, prior to adopting a particular RGA, the respective benefits and limitations of different potential approaches are granted due regard.
Many spheres of health research are widely populated by rules, principles and supplementary guidance. These manifest in diverse forms including: international instruments, primary and secondary legislation, ethical frameworks, professional guidance, codes of conduct, best practice instantiations, recommendations and standards. Consider, for instance, use of patient health data for research purposes. UK-based researchers wishing to access such data must consider the requirements laid out within (among others) the General Data Protection Regulation (Regulation (EU) 2016/679) (GDPR), the UK Data Protection Act 2018 and the NHS Act 2006. Additionally, they must consider guidance from the Information Commissioner’s Office, and adhere to the Caldicott Principles and applicable professional guidance. Technical standards such as those set out by the International Organization for Standardization (ISO) must also be observed. Researchers may be required to obtain research ethics committee approval, and demonstrate due consideration and mitigation of the risks and privacy impacts of data uses and whether or not such uses carry social value and are in the public interest. Many additional spheres of health research can prove similarly labyrinthine.
Navigating such complex regulatory frameworks and interpreting provisions included within them is challenging. A balance must be sought between offering clear articulations of what is required, permitted and prohibited, while retaining sufficient flexibility to guarantee applicability across a wide array of contexts. This tension between specificity and flexibility is a recurring dilemma for regulation. Regardless of the technologies/activities under consideration, a further balance must be struck between providing adequate coverage of the range of pre-existing activities associated with a specific type of research and avoiding the risk of becoming obsolete when new applications of, or progressions in, those research practices/technologies appear. For example, one of the driving factors for the introduction of the GDPR was the drastic transformation in how data are used today as compared to when its predecessor, the European Directive 95/46/EC, was drafted. Given the challenges of navigating health research regulation, we ought also to consider how best to communicate norms, while supporting decision-makers in the inevitable exercise of discretion. These concerns lead us to engage with the two dominant RGAs in health research: rules-based and principles-based approaches. The next section considers these, placing an emphasis on principles-based approaches, which, I argue, can be especially helpful for complex regulatory landscapes.
17.3 Rules and Principles-Based Approaches
To understand what rules-based and principles-based approaches are, we should briefly define rules and principles. It may be more meaningful to talk of ‘rule and principle-type features’ than to attempt hard and fast definitions of rules and principles. These mean different things to different people in different contextsFootnote 7 and grey areas exist where differentiation based on any sole ‘typical’ characteristic is unhelpful. For instance, reliance upon high specificity of language as the identifying feature of rules is problematic because rules can be articulated in general terms. Conversely, principles are not always communicated through abstract language, despite frequently being described as such. Consider the Nuremberg Code 1947: the norms included within it, referred to as principles, are articulated through prescriptive language: ‘The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment’.Footnote 8 Another example is the CIOMS International Ethical Guidelines for Health-related Research Involving Humans, the content of which is collectively referred to as ‘rules and principles’. Upon closer inspection, it is unclear which of the guidelines are rules and which principles.Footnote 9 Given these definitional challenges, reference is made here to typical but not unequivocal features of rule and principle-like norms.
Rules are typically specific, prescriptive and fixed iterations of what to do.Footnote 10 They may be conceptualised in terms of rigidity, enforceability and whether they carry legal obligations. They can be characterised according to their pedigree or the manner in which they were adopted or developed.Footnote 11 Examples include rules contained within the GDPR and the UK Data Protection Act 2018. According to legal theorist Alexy, ‘rules are norms which are always either fulfilled or not. If a rule validly applies, then the requirement is to do exactly what it says, neither more nor less. In this way rules contain fixed points in the field of the factually and legally possible’.Footnote 12 Rules can be considered as norms that are applicable in an all-or-nothing fashion, i.e. barring an exception to a rule, they either apply to a scenario or they do not.Footnote 13 A rule-based approach (RBA) to regulation is dominated by such rule-like norms.
In contrast, principles are frequently characterised as high-level, general and abstract norms.Footnote 14 These may be ethical and/or legal, conceptualised as broad iterations of individual or sets of ethical values, such as those included within Beauchamp and Childress’ Four Principles (Principlism).Footnote 15 Accordingly, respect for beneficence and non-maleficence implies that health research should aim to provide benefit and to minimise foreseeable harm. The Four Principles are considered prima facie in nature, implying that they must be satisfied barring conflict between the principles. Within legal theory, it has been suggested that principles are optimisation requirements, i.e. norms that can be satisfied to varying degrees,Footnote 16 as opposed to the ‘valid or not’ quality of rule-like norms. Principles can be articulated in more general and less legally enforceable terms but, equally, breach of principles can lead to legal repercussions. For example, infringement of any of the seven principles included within the GDPR renders organisations subject to fines of up to €20 million or 4 per cent of total worldwide annual turnover. Regardless of whether enshrined within legislative provisions or guidance documents, principles have the potential to shape behaviour within the health research setting, given the various commitments – legal, moral, political and other – to which they can give rise. A principles-based approach (PBA) is dominated by such principle-like norms.
Choosing to adopt a PBA-dominant path in preference to an RBA, or vice versa, carries important repercussions for health research landscapes and the actors navigating them. The specific question of RBA versus PBA received attention within the context of financial market regulation during the shift from RBA to PBA in the 1990s.Footnote 17 Different categories of PBA were identified, including full, polycentric,Footnote 18 formal and substantive.Footnote 19 Their commonality lies in a preference for broad principle-like standards over detailed, prescriptive and specific rules for setting standards of behaviour.Footnote 20 In contrast, discussions within bioethics have centred on: (1) the specific content of particular rules/principles; (2) how principles ought to be balanced against each other when conflict arises – including whether certain principles ought to take priority over others; (3) which particular rules/principles ought to be in/excluded from ethical frameworks; and (4) how to extract action-guiding content from abstract principles.Footnote 21
Within health research regulation more specifically, consideration of PBA in contrast to RBA has been more limited. Some contributions exist in the contexts of regulating the use of stem cellsFootnote 22 and health dataFootnote 23 in health research. In those arenas, PBA has been preferred over RBA in recognition of the value of principles and the limitations of rules, but without concluding that approaches dominated by principles obviate the need for rules. Rules play pivotal roles in delineating ‘boundaries beyond which research ought not to stray and therefore over which society requires closer regulatory oversight’.Footnote 24 Rule-like norms may provide certainty to decision-makers given their typically detailed and prescriptive nature. The value of hard and fast rules, particularly as manifested via legislation, is not under dispute. Rather, as will become apparent, I suggest that their rigidity can leave RBA-dominated frameworks ill-suited to the demands of complex regulatory landscapes, especially rapidly evolving health research terrains. In contrast, PBA affords the flexibility that fast-paced technological change often necessitates. Principles can create and leave space for interpretation and the exercise of discretion, essential when dealing with difficult decisions and ensuring applicability across a variety of contexts.Footnote 25 The remainder of this section therefore focuses on PBA and, through exploration of several key functions principles can perform (often in contrast to rules), explains how they may be especially useful to health research regulation.
Principles can protect against the over/under-inclusiveness of activities or subjects of regulation, in contrast to rules. Where specific and prescriptive rules are employed, there is a risk that, by virtue of their rigidity, rules either fail to capture relevant activities within them, or are applied to activities that ought not to fall under their purview. It is impossible to legislate for every eventuality, particularly at the cutting edge of scientific research. Consider the proliferation of data-driven technologies that revealed the European Data Protection Directive 95/46/EC was no longer fit for purpose. Its replacement, the GDPR, seeks to better reflect the status quo vis-à-vis potential data use and applications, albeit that ongoing and rapid developments in Artificial Intelligence (AI), computing and analytics are generating new regulatory concerns.Footnote 26 The GDPR is, however, underpinned by seven high-level ‘principles’ to be factored into all interpretations of activities falling within its scope. These may have more longevity and reach than prescriptive rules because principles are less likely to be as detailed and technology-specific as rules tend to be. Of course, principles may also necessitate revision, for example to reflect changes in consensus around what the overarching principles ought to be, but they are more likely to outlast the technological changes that can frequently make prescriptively drafted rules obsolete.
A further strength of principles is their interpretive/guiding function, in communicating the spirit with which more specific norms – including rules – ought to be applied, especially where tensions exist within law, e.g. simultaneously restricting, banning and promoting behaviour. This function can be observed in the approach of the UK Human Fertilisation and Embryology Authority (HFEA). It includes within its Code of Practice a series of regulatory principles to be adhered to when licensed activities are carried out under the Human Fertilisation and Embryology Act 1990, as per s 8(1)(ca). For example, the first principle states licensed centres must ‘treat prospective and current patients and donors fairly, and ensure that all licensed activities are conducted in a non-discriminatory way’.Footnote 27 Such overarching principles can guide and assist decision-makers in all of their related activities.
The paramountcy of stakeholder engagement is a strong theme within this volume.Footnote 28 High-level principles can provide an effective dialogical tool for engagement with different stakeholders, enabling ongoing moral debate, and identifying interests and values at stake. As I argue elsewhere, PBA are more conducive to fostering meaningful dialogue because they avoid prescribing specifically (as rules often do), what ought to be done. Further, they ‘promote reflection precisely on this point … and in particular, they offer us the opportunity to lay out the core values which matter to us in the specific context. Rules, in contrast, can do the opposite, they can either prohibit something that might not be problematic or … grant licence where there is little’.Footnote 29 The UK care.data debacle illustrates the danger of overreliance on rules and failure to effectively engage in discussion of core principles of concern to stakeholders.Footnote 30
Relatedly, the legitimacy of RGAs that are not co-produced alongside the individuals/groups affected by them is problematic. For example, dominant policy framings can tend to portray innovation per se in a positive light,Footnote 31 but some innovation is high risk. Appropriate frameworks must be developed that are simultaneously responsive to the potential value and dangers of innovation.Footnote 32 Key to this is explicit acknowledgement from the outset of the imperative to shape the trajectory of research and innovation alongside and for society. Lipworth and Axler call for the development of a bioethics of innovation, which necessitates dialogue and engagement with stakeholders. The framework for Responsible Research and InnovationFootnote 33 advanced by Stilgoe and colleagues contains four ‘dimensions’ (anticipation, reflexivity, inclusion and responsiveness) that are akin to high-level principles and can serve as a helpful framing device through which to engage in dialogue. Indeed, as Devaney notes, PBA may have the capacity and potential to reflect, encompass and be facilitative of the process of innovation itself.Footnote 34
In this section, I have laid out some key strengths of PBA, illustrating why they may be better suited to complex health research landscapes than RBA. The discussion now advances to consider an equally important aspect of developing appropriate RGAs: the need for additional tools to support decision-making.
17.4 Rules and Principles: Necessary but Not Sufficient
PBA and RBA have limitations, and additional tools are necessary to guide decision-makers. For instance, resolving conflict between principles is challenging. Balancing, whereby each principle is assigned a weight, is a methodology through which it is suggested that competing principles may be prioritised. However, balancing implies that principles are commensurable,Footnote 35 which is problematic at a practical level. Further, balancing can give rise to subjectivism, decisionism and intuitionism, where decision-makers justify weighting according to preconceived prejudices.Footnote 36 For instance, Principlism has been criticised for prioritising respect for autonomy over other principles. Balancing is also a challenge for RBA: inter-rule conflict may equally arise as ‘[r]ules look more certain when they stand alone; uncertainty is crafted in the juxtaposition with other rules’.Footnote 37 Conflict between competing norms may be inevitable, and balancing requires both judgement and justification. Opting for one interpretation or resolution is legitimate only if it rests on well-reasoned and justifiable bases. Nonetheless, decision-makers require support in determining how to approach balancing. Likewise, concrete examples are required to elucidate how conflict between principles and rules ought to be addressed in practice.
Another criticism of high-level norms is that their abstract nature leaves too much interpretative space; extracting meaningful, action-guiding content becomes challenging. For instance, the Declaration of Helsinki states: ‘Groups that are underrepresented in medical research should be provided appropriate access to participation in research’.Footnote 38 This does not suggest what ‘appropriate access’ entails. Even rules – particularly when articulated broadly – are open to challenges of interpretative uncertainty, given the ‘open texture’ of language.Footnote 39 Content included within rules/principles may be interpreted overly cautiously for fear of potential regulatory repercussions, stifling important research which may actually be legally and ethically permissible, as has been the case in some data-sharing contexts.Footnote 40 Alternatively, interpretative latitude can leave room for creative compliance and exploitation of excessively abstract norms. Further, any potential certainty derived from RBA or PBA articulated in prescriptive language still necessitates shared understandings of the content – especially key terminology – of rules/principles and the overall objectives to be pursued, again suggesting the need for supplementary guidance to aid interpretation.
Theory-practice gaps and the need for context-sensitivity are also significant. Failure to adequately reflect the practical realities of conducting research ‘on the ground’ risks rendering norms ineffective. For instance, health research activities during global health emergencies have revealed disparities between what regulations demand and what is practically feasible or context-appropriate. Requirements to obtain timely ethical approval and adhere to randomised controlled trial protocols are not always possible/appropriate in time-sensitive settings and where proven therapeutics are lacking. Traditional distinctions between medical care/practice/treatment and research/innovation activities are blurred.Footnote 41 Discerning between practices primarily seeking to benefit the individual patient (treatment) and those aimed towards generating generalisable knowledge (research) is difficult, particularly regarding ‘innovative’ practices.
It is apparent from this discussion that rules and principles, while indispensable as regulatory tools, possess weaknesses that can limit their effectiveness in decision-making. As considered next, more is often required to support decision-makers, particularly in interpreting relevant norms, offering context-sensitivity and reflecting practical realities.
17.5 Supplementary Guidance and the Added Value of Best Practice
Additional supplementary guidance alongside RBA and PBA exists across many health research domains. This appears in myriad forms including: standards; guidelines; codes of practice; good practice; and, as will receive special attention below, best practice.
Clinical guidance in the form of guidelines proliferated from the 1970s onwards in the UK to achieve technical, procedural and administrative standardisation in medical practice and to maintain professional autonomy.Footnote 42 Good Clinical Practice Guidelines and evidence-based guidance from the National Institute for Health and Care Excellence are UK-based examples. Internationally, the World Health Organization (WHO) continually issues and updates guidelines on a variety of health topics, each designed to ensure the appropriate use of evidence in health policies and interventions, in accordance with the standards set out for guideline development.Footnote 43
The role of guidance in shaping health research practices also necessitates attention. Numerous international guidance documents exist, including the CIOMS Guidelines, which are supported by a ‘commentary’. Likewise, the fourteen guidelines included within the WHO Guidance for Managing Ethical Issues in Infectious Disease Outbreaks are each accompanied by questions illustrating the scope of ethical issues and ‘a more detailed discussion that articulates the rights and obligations of relevant stakeholders’.Footnote 44
Given that regulatory landscapes are frequently complex and already saturated with rules and principles (in their various forms), it is paramount that the introduction of supplementary guidance is approached with caution. Arguably, more guidance alone could suffer from the same criticisms as PBA and RBA: simply more norms requiring interpretation. Important considerations must be factored into the design and implementation of guidance to ensure its effectiveness. For example, the legitimacy of guidance is interlinked with the sources from which it has been generated, and we must ask to whom this gives power. As mentioned above, guidance may represent a means for actors to preserve autonomy and freedom from external interference/control. This raises questions of fairness, justice and transparency. Consider legal and ethical issues associated with current/anticipated uses of data and AI within health research. Concerns have been raised about technology firms developing their own guidance, facilitating creative compliance and self-regulation.Footnote 45 Even where guidance is drafted by independent committees, diverse interests must be balanced. For example, in the contexts of Big Data and AI, trade-offs are apparent in the UK between protection of the privacy rights of individual citizens and national ambitions of economic growth, international competitiveness and participation in the fourth industrial revolution.
Additionally, the form guidance takes carries important ramifications for legitimacy and uptake. Distinct categories of guidance exist, at times operating at different levels. Guidance comprises a broad category ranging from anecdotal to evidence-based guidelines. At the time of writing, COVID-19 is causing a global pandemic. As health systems around the world struggle to treat patients, a plethora of new guidance documents are emerging from multiple sources, based on varying degrees of evidence. These include guidelines to support decision-making around public health responses to containment and guidance to support frontline health workers in resource allocation. Where guidance diverges across different sources, challenges arise as to which guidance to follow and why. Due regard must be given to how guidelines interact with pre-existing regulation. Another important consideration is what the repercussions, if any, might be for non-observation of, or derogation from, guidance.
Relatedly, this generates considerations around how compliance with guidance is to be measured, incentivised or even enforced. It also follows that fundamental questions arise around the processes involved in drafting, endorsing, disseminating and implementing guidance. Central to these is the question of where public voices are in each of these processes, as considered elsewhere in this volume.Footnote 46
I have argued previously that best practice (BP), a form of supplementary guidance, can be particularly helpful for decision-makers within health researchFootnote 47 and in ways that are distinct from other forms of guidance. For example, BP, as co-produced through inclusive and consultative processes, can provide a platform for inclusion of public(s) and additional stakeholder perspectives.Footnote 48
One example of such an approach can be viewed again in the context of AI. In recognition of the widespread development and adoption of AI applications, the European Commission has developed Guidelines for Trustworthy AI. The Commission has taken a phased approach to piloting these, including wide consultation with various stakeholders. It is notable that alongside the guidelines, explanatory notes are offered and the Commission has established a Community of Best Practices for Trustworthy AI.Footnote 49 Through the European AI Alliance, registered participants could share their own best practices on achieving trustworthy AI.
Further, a recent report on the regulation and governance of health research lamented the ‘disconnect between those making high-level decisions on how regulations should be applied and those implementing them on the ground’.Footnote 50 BP instantiations also offer concrete examples of principles or rules ‘in action’, based on lessons learned from those experienced in interpreting and applying the relevant norms. BP can thus serve an important function of helping to bridge such problematic policy-practice divides.
BP instantiations can also support decision-makers in interpreting relevant legislative provisions and/or ethical frameworks and related obligations. They provide more detailed explanatory notes on the legislative or normative intent behind overarching principles/rules. They provide a mechanism through which to make explicit to intended users of guidance what the status of such guidance is, and how it relates to pre-existing rules, principles and additionally relevant guidance. BP also guides decision-makers in approaching resolution of conflicting principles or rules, which I identified earlier as a key challenge for PBA and RBA.
Finally, I noted earlier that rules can close down conversations. BP also carries such risks; reference to BP may decontextualise and thwart discussion/use of other practices. Arguably, use of the term ‘best practice’ or ‘good practice’ suggests a superlative and that derogations from BP interpretations are suboptimal. But best practices (as construed here) are subject to constant review and revision, thus by definition, always seeking what is ‘best’ in a given context. In turn, in order to remain fit for purpose, best practices require us to constantly revisit the underlying rules/principles to which they correspond. In this regard they drive a symbiotic relationship between all of the norms in play towards an optimal system of regulatory and governance approaches.
In this chapter I have outlined key considerations in adopting rules, principles and supplementary guidance-based approaches to health research regulation. In particular, I have laid out the suitability of principles for guiding decision-makers across complex regulatory landscapes. I suggested that the introduction of supplementary guidance could attend to the limitations of PBA and RBA. But, in turn, I stressed that generating new guidance must be approached with caution and due regard to additional concerns. Finally, I highlighted the added value that best practice – to be distinguished from other forms of supplementary guidance – can bring to complex regulatory landscapes.
1 I collectively refer to regulatory and governance approaches (RGA) in recognition of the fact that rules, principles and other guidance may manifest as legislation typically associated with regulation as well as other forms of guidance associated with governance. For more discussion on the relationships between regulation and governance, see the Introduction of this volume.
2 L. B. Moses, ‘Recurring Dilemmas: the Law’s Race to Keep Up with Technological Change’, (2007) University of Illinois Journal of Law, Technology and Policy, 2007(2), 239– 285; R. Brownsword and M. Goodwin, Law and the Technologies of the Twenty-First Century (Cambridge University Press, 2012).
3 A. Faulkner and L. Poort, ‘Stretching and Challenging the Boundaries of Law: Varieties of Knowledge in Biotechnologies Regulation’, (2017) Minerva, 55(2), 209–228.
4 Multiple examples are offered throughout this volume. See, for example, Kaye and Prictor’s (Chapter 10) discussion on the challenges of digital transformation for consent.
5 N. Sethi, ‘Research and Global Health Emergencies: On the Essential Role of Best Practice’, (2018) Public Health Ethics, 11(3), 237–250.
6 As explored by Flear in this volume (see Chapter 16).
7 K. Wildes, ‘Principles, Rules, Duties and Babel: Bioethics in the Face of Postmodernity’, (1992) Journal of Medicine and Philosophy, 17(5), 483–485.
8 Nuremberg Code, 1949.
9 CIOMS, ‘International Ethical Guidelines for Health-related Research Involving Humans’, (Council for the International Organization of Medical Sciences, 2016), xii.
10 S. Arjoon, ‘Striking a Balance Between Rules and Principles-based Approaches for Effective Governance: A Risk-based Approach’, (2006) Journal of Business Ethics, 68(1), 53–82; J. Braithwaite, ‘Rules and Principles: A Theory of Legal Certainty’, (2002) Australian Journal of Legal Philosophy, 27, 47–82; T. Beauchamp and J. Childress, Principles of Biomedical Ethics, 7th Edition (Oxford University Press, 2013).
11 As considered in the longstanding Hart-Dworkin debate on legal positivism. See H. Hart, The Concept of Law, 2nd Edition, P. Bulloch (ed.), (Oxford: Clarendon Press, 1994) and R. Dworkin, ‘The Model of Rules’, (1967) University of Chicago Law Review, 35(1), 14–46.
12 R. Alexy, A Theory of Constitutional Rights (Oxford University Press, 2002), p. 4.
13 Dworkin, ‘Model’; M. Redondo, ‘Legal Reasons: Between Universalism and Particularism’, (2005) Journal of Moral Philosophy, 2(1), 47–68.
14 D. Clouser and B. Gert, ‘A Critique of Principlism’, (1990) The Journal of Medicine and Philosophy, 5(2), 219–236; J. Raz, ‘Legal Principles and the Limits of the Law’, (1972) Yale Law Journal, 81(5), 823–854; Beauchamp and Childress, Principles; Dworkin, ‘Model’.
15 Beauchamp and Childress, Principles.
16 Alexy, Theory.
17 J. Black et al., ‘Making a Success of Principles-Based Regulation’, (2007) Law and Financial Markets Review, 1(3), 191–206.
18 J. Black, The Rise, Fall and Fate of Principles Based Regulation, (2010), LSE Law Society and Economy Working Papers (17/2010).
19 K. Alexander and N. Moloney, Law Reform and Financial Markets (Cheltenham: Edward Elgar Publishing, 2011).
20 Black et al., ‘Making a Success’.
21 H. Richardson, ‘Specifying, Balancing and Interpreting Bioethical Principles’, (2000) Journal of Medicine and Philosophy, 25(3), 285–307.
22 S. Devaney, ‘Regulate to Innovate: Principles-Based Regulation of Stem Cell Research’, (2011) Medical Law International, 11(1), 53–68.
23 G. Laurie and N. Sethi, ‘Towards Principles-Based Approaches to Governance of Health-Related Research Using Personal Data’, (2013) European Journal of Risk Regulation, 4(1), 43–57.
24 Devaney, ‘Innovate’, 60.
25 Black et al., ‘Success’.
26 For example, discussion within House of Lords Select Committee on Artificial Intelligence 2017–2019, ‘AI in the UK: Ready, Willing and Able?’, 16 April 2018, HL Paper 100; G. Hinton, ‘Deep Learning – A Technology with the Potential to Transform Health Care’, (2018) JAMA, 320(11), 1101–1102.
27 HFEA Code of Practice, Edition 9.0 (2019).
28 See, for example, Choung and O'Doherty, Chapter 12, this volume.
29 N. Sethi, ‘Reimagining Regulatory Approaches: On the Essential Role of Principles in Health Research Regulation’, (2015) SCRIPTed, 12 (2), 91–116, 110.
30 P. Carter et al., ‘The Social Licence for Research: Why care.data Ran into Trouble’, (2015) Journal of Medical Ethics, 41(5), 404–409; M. Quiroz-Aitken et al., ‘Consensus Statement on Public Involvement and Engagement with Data-Intensive Health Research’, (2019) International Journal of Population Data Science, 4(1). See also Burgess in Chapter 25 of this volume.
31 W. Lipworth, and R. Axler, ‘Towards a Bioethics of Innovation’, (2016) Journal of Medical Ethics, 42(7), 445–449.
32 Special issue, ‘Regulating Innovative Treatments: Information, Risk Allocation and Redress’, (2019) Law Innovation and Technology, 11(1).
33 J. Stilgoe et al., ‘Developing a Framework for Responsible Innovation’, (2013) Research Policy, 42(9), 1568–1580.
34 Devaney, ‘Innovate’.
35 H. Richardson, ‘Specifying, Balancing, and Interpreting Bioethical Principles’, (2000) Journal of Medicine and Philosophy, 25(3), 285–307.
36 R. Veatch, ‘Resolving Conflicts among Principles: Ranking, Balancing and Specifying’, (1995) Kennedy Institute of Ethics Journal, 5(3), 199–218.
37 Braithwaite, ‘Rules and Principles’.
38 WMA General Assembly, ‘Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects’, (WMA, 1964, as amended).
39 Hart, Concept.
40 N. Sethi and G. Laurie, ‘Delivering Proportionate Governance in the Era of eHealth: Making Linkage and Privacy Work Together’, (2013) Medical Law International, 13(2–3), 168–204.
41 A. Ganguli-Mitra and N. Sethi, ‘Conducting Research in the Context of GHEs: Identifying Key Ethical and Governance Issues’, (Nuffield Council on Bioethics, 2016); N. Sethi, ‘Regulating for Uncertainty: Bridging Blurred Boundaries in Medical Innovation, Research and Treatment’, (2019) Law, Innovation and Technology 11(1), 112–133.
42 G. Weisz et al., ‘The Emergence of Clinical Practice Guidelines’, (2007) Milbank Quarterly, 85(4), 691–727.
43 World Health Organization, Handbook for Guideline Development, 2nd Edition (WHO, 2014).
44 World Health Organization, ‘Guidance for Managing Ethical Issues in Infectious Disease Outbreaks’, (WHO, 2016).
45 P. Nemitz, ‘Constitutional Democracy and Technology in the Age of Artificial Intelligence’, (2018) Philosophical Transactions of the Royal Society, Series A 376(2133), 1–14.
47 N. Sethi, ‘Research and Global Health Emergencies: On the Essential Role of Best Practice’, (2018) Public Health Ethics 11(3), 237–250.
48 Laurie and Sethi, ‘Approaches’.
49 High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’, (European Commission, 2019), www.ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
50 Academy of Medical Sciences, ‘Regulation and Governance of Health Research: Five Years on’, (The Academy of Medical Sciences, 2016).
Across most jurisdictions today, researchers who propose to involve humans, their tissue and/or their data in a health research project must first submit an application form, which includes the research protocol and attendant documents (e.g. information sheets and consent forms), to one or several committees of experts and lay persons, who then assess the ethics of the proposed research. In some jurisdictions, this review, known as research ethics review, is mandated by law. In these cases, the law may be generalFootnote 1 or it may apply to specific kinds of health research, such as clinical trials of an investigational medicinal productFootnote 2 or health research involving adults lacking capacity.Footnote 3 In other jurisdictions, and depending on the type of research project, research ethics review may be required or expected by ‘softer’ forms of regulation, such as guidelines, policy or custom, with the processes for the review consequently less standardised – and more flexible – than in a rules-based regime.
The principal aim of these research ethics committees (RECs), also known as institutional review boards (IRBs) and research ethics boards (REBs),Footnote 4 is to protect the welfare and interests of prospective (and current) participants and to minimise risk of harm to them. Another aim is to promote ethical and socially valuable research. This phenomenon of evaluating the ethics of proposed health research and determining whether the research may proceed – and on what grounds – has been in existence largely since the 1960s.Footnote 5 Originally designed for review of clinical research involving healthy human volunteers, research ethics review has since expanded to cover all fields of health research, including social science-driven health research such as qualitative studies investigating patient experiences with a disease or treatments that they receive. Given their central role in determining the bounds of ethical research, it is unsurprising to learn that RECs have been subject to sustained scrutiny; in many quarters, this has resulted in criticism within the health research and academic community that, among other things, the process of research ethics review is not fit for purpose. The cumulative charge is that research ethics review by committees promotes a wicked combination of inexpert review, inconsistent opinions, duplicative work, mission creep and heavy-handed regulation of health research.
This chapter takes that charge as its focal point. I chart the process of research ethics review with a view to arguing that RECs have become regulatory entities in their own right and are very much a form of social control of science. As I detail, while RECs are far from perfect in terms of regulatory design and performance, they do perform, at least in principle, a valuable role in helping to steward research projects towards an ethical endpoint. In what follows, I analyse the nature and aims of research ethics review and the body of academic research regarding it. In so doing, this chapter also offers a critique of existing work and suggests some future directions both for the regulatory design of research ethics review and for researching the field itself.
Many scholars have long viewed evaluation of the ethics of a proposed research project, by a committee qualified in some way to assess it, as necessary – but not necessarily sufficient – for the successful functioning of, and the securing of public trust in, health research. RECs, it is said, reflect a pragmatic system of ‘social control’ by researchers’ academic and community peers. As William May opined in 1975: ‘The primary guarantee of protection of subjects against needless risk and abuse is in the review before the work is undertaken. […] [I]t is the only stage at which the subject can be protected against needless risk of injury, discomfort, or inconvenience’.Footnote 6 John Robertson similarly concluded in 1979: ‘The [REC] is an important structural innovation in the social control of science, and similar forms are likely to be developed for other such controversial areas’.Footnote 7 By influencing research in an event-licensing capacity – that is, by offering an opinion on and approval (or rejection) of a research project before it commences – RECs are seen to mitigate risks to researchers, participants and society. To this extent, research ethics review can be cast as a regulatory process.
As RECs have become more entrenched in the regulatory apparatus of health research over the past half-century, they have come to hold tremendous power over how research is shaped – and thus, influence over what knowledge is produced – as well as how the relationship between a researcher and a research participant is circumscribed. As Laura Stark observes, ethics committees ‘are empowered to turn a hypothetical situation (this project may be acceptable) into shared reality (this project is acceptable). […] [T]hey change what is knowable’.Footnote 8
But it remains unclear what exactly constitutes research ethics review. Indeed, we might ask whether RECs engage in ethics deliberation at all – and, just as critically, whether this matters to fulfilling their putative regulatory role of assessing the relevant ethics issues in a project. Perhaps the challenge lies with the term ‘research ethics review’. This suggests less of a focus on formulaic, bureaucratic – arguably synonymous with ‘regulatory’ – answers to questions (e.g. ‘Is there informed consent?’; ‘Have they used our consent form template?’) and more of a focus on seeking deeper, more philosophically engaged answers to penetrating questions, such as: ‘Do we really need informed consent here?’; ‘What sort of alternative and preferable safeguards might there be and why?’; ‘Is this research in the public interest?’; or ‘What public good might come from this research and is the financial and social cost commensurate?’.
What is reasonably clear is that a REC provides a favourable opinion only if it is assured that the ethics issues in the proposed research are appropriately addressed by the researcher – and sponsor – before the project proceeds. As the issues will vary depending on the research in question, REC members receive training and guidance about the issues they should consider, both in general and in particular cases. For example, according to the Governance Arrangements for Research Ethics Committees (GAfREC), which is a formal governance document for National Health Service (NHS) RECs in the UK: ‘The training and guidance reflect recognised standards for ethical research, such as the Declaration of Helsinki, and take account of applicable legal requirements’.Footnote 9 If REC members learn about what research ethics is supposed to entail according to ‘recognised standards’ and take account of ‘applicable legal requirements’, we might reasonably ask whether the REC meetings themselves reflect a kind of instantiated deliberative decision-making ethics – that is, ethics as input, process, and outcome – where members individually and collectively evaluate and come to decide on the ethical acceptability of research proposals by invoking and deliberating on standards and requirements more than (ethical) norms or principles. If this is so, the REC, as a form of a decision-making body, need not necessarily ‘do ethics’ at all.
Though clearly RECs are making firm recommendations to researchers in these [previously discussed] examples of both inconsistent and consistent advice, the source of ethical authority for the REC in coming to their conclusions is rarely explicit in the letters. GAfREC – which provides the framework within which RECs are expected to work – is not referred to in any of the letters in our sample. Specific ethical principles or even guidelines are rarely invoked explicitly, and when they are, it is to authenticate or legitimise the decisions of the committee […].Footnote 10
If the REC opinion letter is a reasonably accurate reflection of the contents of a REC meeting’s discussion, then there is some doubt as to whether ethical rules, norms or principles are openly discussed. Other empirical research has affirmed this doubt.Footnote 11
Yet, the names bestowed upon these bodies by many jurisdictions (‘ethics committees’ or ‘ethics boards’), and the related expectation that they should engage in research ethics review – and related criticism that they do not do enough of this – may, in fact, be somewhat misplaced. I have suggested through my own empirical research that as RECs become institutionalised and professionalised, acting as multi-faceted and multidisciplinary micro-regulators of health research, and as further national and international regulations come into force that impact health research, RECs might be expected to act more as risk-assessing ‘health research regulatory committees’ writ large.Footnote 12 Somewhat similarly, based on her own recent empirical research, Sarah Babb makes the case that IRBs in the USA have transformed from academic committees to ‘compliance bureaucracies’, where specialised administrative staff members define and apply federal regulations.Footnote 13 Even if RECs do not engage in something approaching truly substantive ethics deliberation, and this is (partly) accepted as an outcome of practical constraints (e.g. limited resources and pressed time), might they still be able to fulfil their aim of targeting areas of health research that pose moral concern, and might they still be able to mitigate the manifestation of those concerns?
Indeed, I would argue that it is not necessarily problematic to acknowledge that RECs rarely engage in deep ethics deliberation. RECs are a valuable regulator in health research, and so if there are criticisms of them, we should look to those criticisms that speak to their regulatory functions – procedures, performance and so on – more than the absence or presence of ethics deliberation per se. By focusing here, we may come to see that concerns about efficiency, effectiveness, proportionality, reduced burden and so on must be addressed more directly. Acknowledging this is not to say that RECs cannot spot and deal with thorny ethics issues when or if they arise, but it is arguably more accurate and honest to cast them as what they are: regulators with a gatekeeping and promotional role in getting safe and good science done.
18.3 REC Criticisms: Poor Design and Performance and the Fetishisation of Consent
For as long as they have existed, RECs have been subject to opprobrium from the research community and academic commentators, mainly because they are seen as under-, over- or simply mis-regulated bureaucratic bulwarks against otherwise ethical, minimally risky or non-risky research. For years, research into RECs has revealed a high level of variation of decision-making processes in RECsFootnote 14 and dissatisfaction from various stakeholders.Footnote 15 These criticisms can be grouped into concerns about (a) design and performance and (b) the fetishisation of consent.
Many of the problems scholars have identified with research ethics review stem both from weak regulation – which contributes to procedural and substantive inconsistency in decision-making – and from over-regulation – which contributes to duplicative review and to cumbersome and complex thickets of disproportionate regulation for research that presents minimal risk.Footnote 16 In their review of US IRBs, Emanuel and colleagues identified fifteen ‘problems’ and grouped them into three broad categories: (1) structural problems deriving from the organisation of the system as established by the US federal regulations, (2) procedural problems stemming from the ways in which individual IRBs operate, and (3) performance assessment problems resulting from the absence of systemic assessment of current protections.Footnote 17 Arguably, many of these structural, procedural and performance assessment problems could also be identified in RECs in other jurisdictions.
Indeed, the main design and performance concerns with the research ethics review process commentators have identified over the years include:
inconsistency in procedures and substantive decisions within and across committees;
delays or impediments to research due to slow-moving RECs that have no built-in efficiency incentive;
cumbersome bureaucratisation and standardisation of application forms that are ill-suited to different types of research, that slow and muddy the process of ethics review and that lead to heavy administrative burdens for researchers;
distortion of research methods imposed by RECs who may not be trained in research methods and are not qualified (or expected) to judge the scientific merit of applications;
over- or exclusive reliance on prior (ex ante) review that inadequately assures that the actual conduct of research is in accordance with ethical standards;
imposition of inappropriate consent requirements in certain types of research projects (e.g. surveys, behavioural intervention studies) that can lead to potential selection bias in participation and responses; and
increased risk of unethical research, due in part to the ever-growing length of information sheets that participants do not bother to read, and in part to lengthy application forms that researchers and REC members alike may complete hastily or fail to read adequately – in other words, the insidious growth of a ‘tick-box mentality’.Footnote 18
The cumulative account of these concerns suggests that better regulation is needed to improve the efficiency and effectiveness of research ethics review by RECs, and this may entail, among other things, streamlining existing regulation, enacting robust standard operating procedures (SOPs), designing templates tailored to the specific type of research project, and embedding in regulation and policy the emerging notion of stewardship. But before I address these ways forward, I now turn to a second persistent criticism of RECs, namely their fetishisation of consent.
Another major criticism of RECs centres on their putatively overbearing emphasis on consent forms and information sheets, and minute wordsmithing of both, which leads to the inevitable elongation of these documents and thereby an increased risk of non- or miscomprehension by participants – and this, ironically, may lead to other harms unrelated to the research, such as stigmatisation or disrespect. Since at least the 1960s,Footnote 19 commentators have argued that consent cannot and should not act as a stand-alone rampart to prevent unethical research. Yet many consider that RECs disproportionately fixate on consent as a locus for determining and setting researchers’ ethical behaviour, demonstrating ‘the acme of self-defeating ritual compliance’.Footnote 20 Perhaps it is because ‘these [consent] documents constitute one of the few aspects of researcher interactions with subjects – a very downstream process – that committees feel they can control.’Footnote 21
This bureaucratic addiction to procedure and process, coupled no doubt with an uptick in legal – albeit siloed – regulation of health research, has led to a legalisation of the workings of RECs, which is to say: a fetishisation of more forms, longer forms and an ongoing insistence on boilerplate language tacked on to information sheets and consent forms so that RECs and institutions protect themselves and others from liability. Consent is treated as a panacea for all ethical concerns,Footnote 22 a kind of Pollyanna-ish hope that, ‘If only we can inject all possible risks and relevant information into the form, then participants can truly exercise their autonomy’. This is not the ‘good kind’ of REC legalisation William Curran envisioned in 1969, replete with a common law-like generalisable body of precedents and principles of procedure and substance that allow the process of deliberation to flourish.Footnote 23 Instead, it is the troubling kind: rigid and overly standardised, treating ethics as a tick-box, form-ridden, technocratically structured event. Once again, this militates against ethics committees actually ‘doing ethics’ in the genuine sense of that discipline.
Given the groundswell of criticisms over the years, what, then, might be the future directions for research ethics review as a core process in health research regulation, and what might be the future directions for researching research ethics review to assess what is working well and not so well?
While many support the underlying idea of ex ante ethics review by a competent committee as a means to protect and promote the rights, interests and welfare of participants, as this chapter has observed, many also have expressed dissatisfaction with the structure and function of the ethics review system and the individual processes of RECs. Multiple regulatory techniques and instruments have been employed over the years in the hopes of remedying the myriad problems attributed to RECs, foremost the concerns of inefficiency and ineffectiveness.
Scholars have proposed a number of changes to the regulatory design of research ethics review. For the purposes of this chapter, I want to focus on three that have gained attention recently and may be among the most promising: streamlining, standardisation and stewardship.
A number of jurisdictions are now streamlining the process of research ethics review in at least two ways. First, they have introduced proportionate review systems, whereby a research project that is deemed by assessors to present no (or limited) material ethics issues undergoes a lighter-touch review. In the UK, for example, under the Health Research Authority’s (HRA) Proportionate Review Service, such projects are reviewed via email correspondence, teleconference or at a face-to-face meeting by a sub-committee – comprising experienced expert and lay members – rather than at a full meeting of a REC.Footnote 24 The final decision is notified to the applicant by email within twenty-one calendar days of receipt of a valid application, a faster turn-around time than for an application that goes to full-committee review. Second, efforts are underway internationally to streamline multiple REC review of multi-site research projects, which is seen as duplicative and disproportionate.Footnote 25 Since 2004, the UK has required only one NHS REC opinion per research project, even if the project involves multiple sites in the country. In the USA, since 2020, a revised rule in the Federal Policy for the Protection of Human Subjects – better known as the ‘Common Rule’ – generally requires US-based institutions that receive federal funding and are engaged in cooperative research projects (i.e. 
projects covered by the Common Rule that involve more than one institution in the USA) to use a single IRB for that portion of the research that takes place within the USA if certain requirements are met.Footnote 26 This ‘sIRB rule’ reflects a growing effort by regulators and policymakers in countries around the world – including Uganda,Footnote 27 CanadaFootnote 28 and AustraliaFootnote 29 – to reduce the procedural inefficiencies, redundancies, delays and research costs that have become synonymous with the absence of research ethics review mechanisms designed for multi-site health research projects.Footnote 30
A number of jurisdictions are also working on standardisation of the processes involved in ethics reviews, with the aim of achieving more consistent outcomes in review and fairness to applicants. The Care Act 2014 in the UK, for example, requires the HRA to co-operate with several other regulatory authorities in the exercise of their respective functions relating to health or social care research, ‘with a view to co-ordinating and standardising practice relating to the regulation of such research’.Footnote 31 Standardisation is accomplished through various means, including the introduction and maintenance of:
SOPs to ensure procedural consistency across RECs;
template research application forms – including information sheets, consent forms and research protocols – for researchers to devise more thorough and ethically robust applications;
template review forms for REC members to complete when reviewing applications; and
systems of accreditation, qualification or certification of RECs to encourage mutual trust in each REC’s processes of review.
It must be said, though, that while many commentators support standardisation as a way to drive consistency and fairness in ethics review, others blame standardisation for the growth of an undesirable ‘tick-box’ approach that many see as defining REC work today. This, however, might be a product of continued confusion over whether RECs are best seen as philosophically attuned ethics deliberation entities or as regulatory assessors situated within a wider health research ecosystem. I have argued above that the latter view is more accurate.
Third, the emerging concept of regulatory stewardship may have resonance in reforming the regulatory design of research ethics review to better account for the network of actors involved in bringing an application through the various regulatory thresholds in the research lifecycle. A key finding from recent empirical investigationFootnote 32 is the ability of actors within the health research regulatory space to serve as ‘regulatory stewards’. Research suggests that regulatory stewardship involves different actors – including RECs and others involved in the regulation of health research – helping researchers and sponsors navigate complex regulatory pathways and work through the thresholds of regulatory approvals. Collective responsibility, as a component of regulatory stewardship, requires relevant actors to work together to design and conduct research that is ethical and socially and scientifically valuable and that ultimately aims to improve human health. This can only be accomplished if a framework delineates how and when regulators and regulatees should communicate with one another and makes clear what responsibility and role (if any) each actor is to play at each stage in the research lifecycle.
The regulatory environment for research ethics review could be designed to provide clearer channels for RECs – and members within them who may have closer contact with researchers and sponsors – and their own managing regulators (e.g. institutions, ministries, regulatory authorities) to engage with researchers and sponsors in improving the quality of research protocols and applications, and in working through law, regulation, and regulatory approvals. These communicative channels may include online toolkits and more personalised support via email, telephone, or digital meetings.
All of this would have the added advantage of engaging multiple actors in the earlier stages of the research design process, including on any ethics issues that arise. Where these are considerable, downstream ethics review will still have a role to play; where they are minimal or negligible, however, they might be addressed sooner in the regulatory pathway, leaving the REC to undertake its regulatory role more efficiently and effectively.
Further empirical evidence is needed to investigate questions about extant research ethics review processes and to test new models that seek to improve REC efficiency and effectiveness. There have been few in-depth qualitative studies of RECs focusing on assessment of regulatory design. This undermines effective regulation, as policymakers and regulators – through state actors or otherwise – increasingly seek to develop regulation through intricately documented evidence of problems and the effects of regulation on society. There is a need for qualitative research that explores how and why RECs make the decisions they do, and how the nested dynamics of RECs and central ‘managing’ regulators play into decisions.Footnote 33
Documented problems of RECs have largely relied on evidence and anecdote proffered by researchers. While there is a welcome growing corpus of empirical literature on RECs,Footnote 34 more evidence is needed from regulatory scholars who can go inside RECs to test new models via pilot studies or randomised controlled trials; or who can examine how RECs, both as individual members and as a body, see themselves and their committee in a changing regulatory environment, and can go inside regulatory bodies to gather the regulators’ perspective on the roles of a REC within the health research regulatory space. Research ethics review thus remains an area ripe for investigation.
In this chapter, I have argued that RECs have become regulatory entities in their own right, governed by – depending on the jurisdiction – institutions, central regulatory agencies, administrative staff and offices, standardised forms and communications, and lengthy governance arrangements and SOPs. Just as some legal scholars speak of ‘juridification’,Footnote 35 which is an encroachment of law into ever more aspects of our society, so too might we speak of ethics review increasingly ‘colonising’ the health research regulatory space, structured according to the logic of its codes and customs. When RECs were first coming into being in the 1960s, Harvard Law Professor Louis Jaffe opined that ‘[a] general statutory requirement requiring institutional committees in any “experiment” would raise monstrous problems of interpretation, would unduly complicate medical practice, and would add unnecessary steps to experiments where the risks to the subject or patient are trivial.’Footnote 36
Yet this is where we stand today, with REC review required formally by law or informally by policy for an array of health research, from the trivial to the complex and risky, albeit with more proportionate review processes than occurred previously. Over time, like all of health research, the regulatory space in which RECs are situated has expanded, along with the paperwork and resources researchers must dedicate in order to pass over the ‘ethics hurdle’.
At the same time, scholars remind us that: ‘The role of the Research Ethics Committee is to advise. It does not itself authorise research. This is the responsibility of [another] body under whose auspices the research will take place’.Footnote 37 While technically accurate – at least in many jurisdictions – this fails to appreciate the power of a REC to control what knowledge can be produced and how that knowledge is shaped. RECs, as noted previously, are a form of social control of science. The ‘advisory’ role of a REC masks its profound ability to impact health research, which is precisely why RECs have faced such criticism and undergone reform. They are not minor actors in the health research regulatory space; on the contrary, they may be among the most important. And, as I have stressed, the obligations imposed on RECs have only increased over time as myriad regulation is brought to bear on them. Ethics and regulation must go hand-in-hand – indeed, one might say that the process of research ethics review must be co-produced with regulation, and regulation and ethical judgement are co-dependent. It is crucial that we appreciate the respective roles of each when it comes to entities such as the REC. This chapter has sought to reveal how we can better understand and deliver these dual roles.
1 See e.g. CC 810.30 Federal Act of 30 September 2011 on Research involving Human Beings (Switzerland).
2 See e.g. The Medicines for Human Use (Clinical Trials) Regulations 2004 No. 1031 (UK); Food and Drug Regulations (CRC, c 870), C.05 (Division 5 – Drugs for Clinical Trials Involving Human Subjects) (Canada).
3 See e.g. Mental Capacity Act 2005 (England and Wales) and Adults with Incapacity (Scotland) Act 2000.
4 Henceforth in this chapter I will use the terminology ‘REC’ as shorthand.
5 L. Stark, Behind Closed Doors: IRBs and the Making of Ethical Research (University of Chicago Press, 2012); A. Hedgecoe, Trust in the System: Research Ethics Committees and the Regulation of Biomedical Research (Manchester University Press, 2020).
6 W. May, ‘The Composition and Function of Ethical Committees’, (1975) Journal of Medical Ethics, 1(1), 23–29, 24.
7 J. Robertson, ‘Ten Ways to Improve IRBs’, (1979) Hastings Center Report, 9(1), 29–33, 29.
8 Stark, Behind Closed Doors, p. 5.
9 Health Research Authority, ‘Governance Arrangements for Research Ethics Committees’, (2020), para 5.3.1.
10 M. Dixon-Woods et al., ‘Written Work: The Social Functions of Research Ethics Committee Letters’, (2007) Social Science & Medicine, 65(4), 792–802, 796.
11 M. Fitzgerald et al., ‘The Research Ethics Review Process and Ethics Review Narratives’, (2006) Ethics & Behavior, 16(4), 377–395.
12 E. Dove, Regulatory Stewardship of Health Research: Navigating Participant Protection and Research Promotion (Cheltenham: Edward Elgar, 2020).
13 S. Babb, Regulating Human Research: IRBs from Peer Review to Compliance Bureaucracy (Palo Alto, CA: Stanford University Press, 2020).
14 See e.g. B. Barber et al., Research on Human Subjects: Problems of Social Control in Medical Experimentation (New York: Russell Sage Foundation, 1973). See also Dixon-Woods et al., ‘Written Work’, 796.
15 See e.g. G. Alberti, ‘Local Research Ethics Committees: Time to Grab Several Bulls by the Horns’, (1995) BMJ, 311(7006), 639–640; K. Jamrozik, ‘The Case for a New System for Oversight of Research on Human Subjects’, (2000) Journal of Medical Ethics, 26(5), 334–339; C. Warlow, ‘Clinical Research Under the Cosh Again’, (2004) BMJ, 329(7460), 241–242.
16 G. Laurie and S. Harmon, ‘Through the Thicket and Across the Divide: Successfully Navigating the Regulatory Landscape in Life Sciences Research’, in E. Cloatre and M. Pickersgill (eds), Knowledge, Technology and Law (London: Routledge, 2014), pp. 121–136.
17 E. Emanuel et al., ‘Oversight of Human Participants Research: Identifying Problems to Evaluate Reform Proposals’, (2004) Annals of Internal Medicine, 141(4), 282–291.
18 Many of these criticisms are explored in R. Klitzman, The Ethics Police? The Struggle to Make Human Research Safe (Oxford University Press, 2015).
19 H. Beecher, ‘Ethics and Clinical Research’, (1966) New England Journal of Medicine, 274(24), 1354–1360.
20 S. Burris and J. Welsh, ‘Regulatory Paradox in the Protection of Human Research Subjects: A Review of Enforcement Letters Issued by the Office for Human Research Protection’, (2007) Northwestern University Law Review, 101(2), 643–685, 678.
21 Klitzman, The Ethics Police, p. 139.
22 See e.g. S. Burris and K. Moss, ‘US Health Researchers Review Their Ethics Review Boards: A Qualitative Study’, (2006) Journal of Empirical Research on Human Research Ethics, 1(2), 39–58.
23 W. Curran, ‘Governmental Regulation of the Use of Human Subjects in Medical Research: The Approach of Two Federal Agencies’, (1969) Daedalus, 98(2), 542–594.
24 Health Research Authority, ‘Proportionate Review: Information and Guidance for Applicants’, www.hra.nhs.uk/documents/1022/proportionate-review-information-guidance-document.pdf.
25 See e.g. E. Dove et al., ‘Ethics Review for International Data-Intensive Research’, (2016) Science, 351(6280), 1399–1400.
26 The Federal Policy for the Protection of Human Subjects (‘Common Rule’), 45 C.F.R. § 46, Subpart A; The Federal Policy for the Protection of Human Subjects, 82 FR 7149, at 7265 (19 January 2017).
27 Uganda National Council for Science and Technology, ‘National Guidelines for Research involving Humans as Research Participants’, (UNCST, 2014), s. 4.5.5, para. c.
28 Clinical Trials Ontario, www.ctontario.ca/.
29 Victoria State Government, ‘National mutual acceptance’, (health.vic, 2018) www2.health.vic.gov.au/about/clinical-trials-and-research/clinical-trial-research/national-mutual-acceptance.
30 E. Dove, ‘Requiring a Single IRB for Cooperative Research in the Revised Common Rule: What Lessons Can Be Learned from the UK and Elsewhere?’, (2019) Journal of Law, Medicine & Ethics, 47(2), 264–282.
31 Care Act 2014, s. 111(1).
32 Dove, ‘Regulatory Stewardship’; see also G. Laurie et al., ‘Charting Regulatory Stewardship in Health Research: Making the Invisible Visible’, (2018) Cambridge Quarterly of Healthcare Ethics, 27(2), 333–347.
33 S. Nicholls et al., ‘A Scoping Review of Empirical Research Relating to Quality and Effectiveness of Research Ethics Review’, (2015) PLOS ONE, 10(7), e0133639; see also, for a US example of research in this area, AEREO: The Consortium to Advance Effective Research Ethics Oversight, www.med.upenn.edu/aereo/.
34 For empirical studies of IRBs in the USA, see e.g. Stark, Behind Closed Doors; Babb, Regulating Human Research; Klitzman, The Ethics Police; J. F. Jaeger, ‘An Ethnographic Analysis of Institutional Review Board Decision-Making’ (PhD thesis, University of Pennsylvania 2006). For empirical studies of RECs in the UK, see e.g. A. Hedgecoe et al., ‘Research Ethics Committees in Europe: Implementing the Directive, Respecting Diversity’, (2006) Journal of Medical Ethics, 32(8), 483–486; J. Neuberger, Ethics and Health Care: The Role of Research Ethics Committees in the United Kingdom (King’s Fund Institute, 1992).
35 G. Teubner, ‘Juridification: Concepts, Aspects, Limits, Solutions’ in G. Teubner (ed.), Juridification of Social Spheres (Berlin: Walter de Gruyter & Co, 1987).
36 L. Jaffe, ‘Law as a System of Control’, (1969) Daedalus, 98(2), 406–426, 412.
37 I. Kennedy and P. Bates, ‘Research Ethics Committees and the Law’ in S. Eckstein (ed.), Manual for Research Ethics Committees, 6th Edition (Cambridge University Press, 2003), pp. 15–17, p. 16.
Enabling researchers’ access to large volumes of health data collected in both research and healthcare settings can accelerate improvements in clinical practice and public health. Because the source and subject of those data are people, data access governance has been of concern to scientists, ethics and regulatory scholars, policymakers and citizens worldwide. While researchers have long provided colleagues access to data in an ad hoc fashion, many research funders – e.g. US National Institutes of Health, Wellcome, Bill and Melinda Gates Foundation, United Kingdom Research and Innovation, European Research Council – journals,Footnote 1 professional societies and associations,Footnote 2 and regulators now systematically promote the deposit of research data in repositories that aim to provide responsible and timely access to data. Data sharing aims to enable meta-analyses and creative (re)uses, reduce duplicative effort in data generation, and improve reproducibility through validation studies, so as to support data-intensive research and thereby improve human health. In many countries, routinely collected healthcare data are also increasingly being made available to researchers. In both research and healthcare contexts, technical and governance strategies for promoting responsible data sharing and access continue to evolve.
The broad sharing of health research data promises many benefits, but it can also involve risks. Health research data can reveal sensitive information about individuals (in legal terms, data subjects) and their relatives, posing risks to privacy and of discrimination and stigmatisation. Broad sharing of health research data can also raise professional concerns for the researchers or organisations who produce data in terms of receiving adequate credit and recognition for their efforts in collecting, curating and analysing data.Footnote 3 Likewise, commercial research companies may be concerned their data will be appropriated or misused by competitors. Data access governance aims to promote organisational, scientific and societal interests in data re-use, while protecting the rights and interests of the range of stakeholders with an interest in data. Data access governance manages who has access to data, for what purposes, and under what conditions. Governance mechanisms include policies, due diligence processes, data access agreements and monitoring. Data access governance is closely linked to the concept of data stewardship, where organisations aim to ensure data are shared widely in the interest of science and society, while also mitigating associated ethical, societal and privacy risks.Footnote 4
In contemporary data-driven science, data access governance often involves Data Access Committees (DACs) as the key institutional setting in which access decisions are made. DACs are diverse and may be composed of individuals with a range of relevant expertise, including familiarity with the scientific area, privacy and security, and research ethics.Footnote 5 As Lowrance notes, ‘…[s]ome DACs are formally constituted and appointed, while some are more casual. Some publish their criteria, decisions and decision rationales, but most don’t. Some directly advise the data custodians, who then make the yes/no (or revise-and-reapply) access decisions. But many DACs make binding decisions’.Footnote 6
Against this backdrop, this chapter examines the topic of data access governance. We discuss the underlying values and goals of data access governance, focusing in particular on the scientific and social implications of open access and data sharing, on the rights and interests of data subjects as well as those of data producers, and on the ethical conduct of data sharing. We contrast the general structural and normative components of open and controlled data access. We then present existing data access arrangements of organisations and repositories that exemplify varying modes of good practice. We argue these models exemplify the tension between promoting open access to databases on the one hand, and, on the other, protecting the rights and interests of the parties involved, including data subjects, researchers, funding organisations and commercial entities. We suggest that the principles of transparency, fairness and proportionality, applied in consideration of all stakeholders’ interests and values, are key to achieving this balance. We conclude by discussing existing challenges in data access governance, including potential conflicts between various stakeholders’ views and interests, resource issues, (mis)coordination between oversight bodies, and the need for better harmonisation of access policies and procedures.
19.2 Goals of Data Access Governance
Data access governance aims to strike a balance between protecting the rights and interests of data subjects and data producers, and promoting broad access to data to advance scientific research in the public interest.
Data access governance supports research ethics principles for research involving human subjects. Minimising privacy risks to participants, respecting participant autonomy, and holding researchers accountable for the scientific validity and ethical conduct of research through research ethics committee (REC) approval and oversight, are key goals of governance of data access.Footnote 7 These goals are increasingly furthered by engaging communities in the design of governance.
Privacy and security: Data access governance can protect participant privacy in several ways. Data access agreements, which are signed by data custodians and data users, typically include requirements regarding protecting privacy and security. Privacy safeguards include restrictions on unauthorised individual-level linkage of datasets, which may increase the re-identifiability of data, or prohibitions on attempting to re-identify participants. The greater the combinations of individual-level data for any given individual, the more likely re-identification becomes. Privacy rules in access processes are therefore often designed to control the level of individual-level data linkage. Security safeguards may include general or specific requirements to adopt physical, organisational, and technical protections, as well as data breach reporting obligations.
Respect for the provisions of ethical approvals: Data access governance models often aim to ensure users respect high standards of scientific integrity, and meet the ethical requirements related to compatibility of downstream use of data with the original consent obtained from the participants at the time of enrolment to a study and data collection. Where researchers have stated that data will only be used for certain kinds of research – e.g. disease-specific – this condition will inform the review of an access proposal by the relevant oversight bodies, notably DACs. Data access review may be informed by the following questions:Footnote 8 Does the application violate – or potentially violate – any of the ethical permissions granted to the study or any of the consent forms signed by the study participants or their guardians? Does the application run a significant risk of upsetting or alienating study participants and thereby reducing their willingness to remain as active participants in the research? Does the application run a significant risk of bringing disrepute to the study, repository or steward and thereby reducing participant trust and willingness to remain as active participants in the research?
Respect for communities and relevant stakeholders: Responding to the concerns of relevant stakeholders, including communities, and seeking to strike a balance between the views of different groups is fundamental to respecting those communities. This may mean championing the rights of less powerful groups, taking steps to seek out their views and actively responding to those views. In the context of data access, stakeholders include the study participants and communities who provide the data, the study managers and researchers who develop the data and related resources, the researchers who wish to access those data, the funders who support the studies that produce the data, and the public, who are the ultimate funders as well as beneficiaries of research. Each of these groups has a legitimate and vested interest in the responsible and respectful use of data and provides a unique perspective on how such governance can be achieved. For example, study participants and community representatives sitting on oversight committees such as DACs can provide a unique insight into what other study participants may view as acceptable uses of data.
One goal of access controls is to protect the rights and interests of the researchers or institutions generating data. Academically, researchers compete for high-impact publications and, in turn, for academic positions and promotions. Commercially, researchers and research institutions may compete to develop commercial applications from research findings. These considerations are often addressed through publication and commercialisation clauses in data access agreements.
Data access governance may include publication policies that seek to ensure that data producers are appropriately recognised for their contribution to science. Given that publication remains the major currency in academia, there may be a tendency for data producers to request co-authorship as a condition of access. This is discouraged for reasons of scientific freedom and accountability. Having independent DAC members adjudicate access is one remedy to the potential conflicts of interest in such practices. A compromise position is sometimes used whereby the data producer has a right to review manuscripts before publication, or at least to be informed in advance of forthcoming publications based on (re)analysis of shared datasets. Commercialisation policies aim to ensure that the data producer benefits from, or at least does not have its competitive position harmed by, downstream use of data.
Finally, responsible data access governance requires transparency, fairness and proportionality towards participants and other stakeholders. Transparency can be improved by the publishing of policies and procedures, as well as publication of approved data recipients and plain language summaries or abstracts of approved uses. Moreover, ensuring timely and consistent access review without imposing unnecessary constraints on data access is of salient importance with regard to fairness. Where data governance seeks to achieve competing goals of openness and privacy protection, as well as meeting social and participant expectations of data use, a proportionate balance needs to be struck. Proportionality may call for different types of access controls to be applied to different types of data. Increasingly, there is emphasis that the balance between public benefit and individual risks be evidence-based.Footnote 9
19.3 Data Access Governance: Policies, Processes, Agreements and Oversight
The values and goals of data access governance are operationalised through the policies and practices of DACs and various models of data access.
The nature of data – and the associated ethical, policy and legal issues – largely determines the access model, which can range from open to controlled to closed. Open access models generally make data available to any user, anywhere, over the internet, without financial or technical constraints. The Human Genome Project, for example, which sequenced the entire human genome, shared the sequence data openly. Subsequent publicly-funded projects sequenced more individuals and combined these data with richer social, demographic and clinical data, prompting concerns about the privacy of data subjects. Controlled access models emerged to ensure data could still be shared broadly with qualified and trusted researchers, while also protecting the privacy of data subjects and sometimes also the interests of researchers producing data. Under controlled access models, a REC or, increasingly, a specialised DAC reviews requests for data access. In this regard, DACs often carry out a due diligence review of access requests and may hold deliberations over the scientific, feasibility and ethical aspects of the request. This is in line with the recommendations issued by the Organisation for Economic Co-operation and Development’s (OECD) Council on Health Data Governance that review and approval processes should involve an evidence-based assessment and adhere to principles of transparency, objectivity and fairness. In addition, the OECD’s recommendations underline the importance of independent multi-disciplinary review with an ultimate aim of risk mitigation for individuals and society.Footnote 10
One component of both controlled and open access models is the data agreement (termed ‘data transfer’, ‘data access’ or ‘data use’ agreement), which establishes the conditions governing the accessing researcher’s use of the data. The terms of data access agreements typically address data subject protections, including prohibition on unauthorised linkage of individual-level data and attempts to re-identify participants, respect for consent-based use conditions and ensuring appropriate security safeguards are in place. The terms may also include protections for the rights and interests of the researchers producing data, such as publication embargoes to allow data producers the first attempt at publication or intellectual property clauses governing ownership of downstream commercialisation. Benefit-sharing clauses are important in countries with emergent research infrastructures. Other clauses may serve multiple stakeholders, such as obligations to only use data for specified purposes. Still other clauses may address the interests of science and society, such as requirements for open access publication, or to share analysis code or derived datasets. While data access agreements are legally binding if designed properly, their practical enforceability, especially across borders, is largely untested and remains a concern.Footnote 11 Especially where terms are associated with open access data, they are typically meant more as a means of communicating community norms to users.
DACs may additionally develop tools and mechanisms to maintain ongoing oversight of downstream data uses. For instance, data users may be required to provide periodic reports regarding the projects in which data are being used. In addition, data users may be asked to report to the DAC the publications resulting from the data use, or issues arising from special conditions of access, e.g. risk management strategies for sensitive or potentially ‘sensational’ research, or return of incidental findings. Such oversight may enable the DACs to check compliance of the data uses, but implementation requires infrastructure and human resources that may be burdensome for DACs that do not have dedicated funding. There may also be important burdens – e.g. reporting or transparency obligations – placed on data users that discourage frivolous use. Research teams releasing data or DACs may have little ability to monitor data users or to directly sanction them for misuse, except by withdrawing or refusing access in the future. Some level of accountability is available via community reporting and norms. Research institutions, funders, journals and databases themselves may have mechanisms to hold researchers accountable for respecting their commitments.Footnote 12
The constitution of DACs shapes how policies and governance mechanisms are implemented in practice. DACs are the site around which tensions between the competing interests of stakeholders may play out and, therefore, examining how they do or do not maintain transparency allows scrutiny of those governance processes. DAC members may be part of the scientific team that generated the data, though the independence of members is often advocated in order to avoid conflicts of interest. Real or perceived conflicts of interest may arise where the researcher who collected the data restricts access to potential competitors, described as data ‘hugging’ or hoarding by those advocating data sharing.Footnote 13 And yet, data producers have important expertise: they know the affordances and limits of the data as well as its provenance. In some DACs, this expertise is recognised by including members of the study team in an advisory role.Footnote 14 Furthermore, all stakeholders should have some representation in governance of data access, including as decision-making members of DACs. Stakeholder engagement may also comprise forms of transparency, for example through publication of high-quality plain language summaries to communicate how study data are, or will be, used.
Depending on the organisation or its specific needs, data access governance can emphasise different governance-related values and goals.
An example of the local access management model is the collection of study DACs under the framework of the European Genome-phenome Archive (EGA). The EGA is a database of all types of ‘sequence and genotype experiments, including case-control, population, and family studies, hosted at the European Bioinformatics Institute’.Footnote 15 According to the EGA website: ‘The EGA will serve as a permanent archive that will archive several levels of data including the raw data (which could, for example, be re-analysed in the future by other algorithms) as well as the genotype calls provided by the submitters.’Footnote 16 Data submitters to the EGA maintain control over the downstream uses of datasets via DACs located in the original study or consortium. An advantage of local data access review is that data generators who are familiar with the dataset can stay involved in the process of review and inform the access review procedure. The disadvantage of this model is that access control is left entirely to the local committees, making it hard, if not impossible, to track or audit whether all data access requests are handled in a timely manner.
In contrast, dbGaP exemplifies a centralised approach to managing data access requests. The database of Genotypes and Phenotypes (dbGaP) is designed by the National Institutes of Health (NIH) to archive and distribute the results of studies that have investigated the interaction of genotype and phenotype. Within this database, sixteen DACs ‘review requests for consistency with any data use limitations and approve, disapprove or return requests for revision’, except for large studies in which a local DAC leads access review.Footnote 17 The centralised access model seems advantageous for smaller research groups who lack the resources to establish their own data access review infrastructure. However, handling data access requests centrally may introduce latency in data access, due to complex administrative arrangements.
The International Cancer Genome Consortium (now called the 25K Initiative) was a large-scale genomics research initiative aiming to generate and share 25,000 whole genome sequences from fifteen jurisdictions to better understand the genetic changes occurring in different forms of cancer.Footnote 18 The International Cancer Genome Consortium (ICGC) adopted a tiered access approach, with open access for data unlikely to be linked to other data that could re-identify individual participants, and controlled access for more sensitive data such as raw sequence and genotype files – though the exact data types in these two categories evolved over time.Footnote 19 These more sensitive data can only be accessed through the Data Access Committee Office (DACO) to protect the privacy and reasonable expectations of study participants, uphold scientific community norms of attribution and publication priority, and ensure the impartiality of access decisions. The DACO reviews the purpose and relevance of research proposals, and the trustworthiness of applicants to protect participant privacy and data security. The ICGC adopted a plain language access agreement restricting users from establishing parasitic intellectual property on primary data or attempting to re-identify individual participants, with signatures from the principal investigator and institutional signing official. Recognising that requirements for ethics review vary from country to country, the DACO asks applicants to indicate if their study of ICGC data requires local ethics approval.
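A tiered access model of the kind the ICGC adopted can be sketched in outline. The following is a minimal illustration only: the data-type names and tier assignments below are hypothetical, and the ICGC’s actual classification of data types evolved over time and is not reproduced here.

```python
# Illustrative tiered access routing, in the spirit of the ICGC model:
# data unlikely to re-identify participants is open; raw sequence and
# genotype files require controlled access via the access office.
# Tier assignments here are hypothetical, not the ICGC's actual rules.
OPEN, CONTROLLED = "open", "controlled"

ACCESS_TIERS = {
    "aggregate_statistics": OPEN,       # unlikely to re-identify individuals
    "somatic_mutation_calls": OPEN,
    "raw_sequence_reads": CONTROLLED,   # sensitive raw sequence files
    "germline_genotypes": CONTROLLED,   # sensitive genotype files
}

def required_tier(data_types):
    """Return the strictest tier required by the requested data types.
    Unknown data types default to controlled access (fail safe)."""
    tiers = {ACCESS_TIERS.get(dt, CONTROLLED) for dt in data_types}
    return CONTROLLED if CONTROLLED in tiers else OPEN

def route_request(data_types):
    """Open-tier requests are released directly; anything touching
    controlled-tier data is referred to the access office for review."""
    if required_tier(data_types) == OPEN:
        return "release directly"
    return "refer to access office review"
```

Note the fail-safe design choice: a request mixing open and controlled data types, or naming an unrecognised data type, is routed to human review rather than released.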
19.4.4 Independent, Interdisciplinary Access Involving Stakeholder Participation in Decisions: METADAC (Managing Ethical, Socio-Technical and Administrative Issues in Data Access)Footnote 20
METADAC provides data access governance for only the most sensitive data and data combinations (as well as sample access). While separating access in this way produces a complex data governance setting for researchers, devolving differently risky data to different degrees of scrutiny frees resources for human-mediated decision-making where it is necessary, while allowing administrative or algorithm-based decisions for low-risk data types. The human-mediated decisions made by METADAC include a proportionate review process for routine-but-sensitive data access applications and full committee decision-making for the remaining sensitive data access applications. The METADAC committee is highly multidisciplinary, including study-facing members (currently drawn from the participants of longitudinal studies not regulated by METADAC), with non-voting representation from the studies (including their technical teams) and the funders of these studies. Data access under METADAC does not require additional ethical approval, as data sharing is based on tissue bank approval under the Human Tissue Act 2004,Footnote 21 study ethical approval and/or explicit participant consent to sharing. METADAC’s key criteria for access follow precisely the questions outlined in ‘Respect for the provisions of ethical approvals’ above. The METADAC committee does not review the scientific merit of data access applications except in the case of finite resources (i.e. samples).
ClinicalStudyDataRequest.com is a portal facilitating access to patient-level data from clinical studies carried out by pharmaceutical companies and academic researchers.Footnote 22 The portal involves independent review of proposals as well as protections for participant privacy and confidentiality. A major differentiator of this access model from the publicly funded genomic research context is protection of commercial interests. For pharmaceutical company-sponsored trials, the data sharing agreement requires users to keep all information provided confidential, in part to protect commercially sensitive information.Footnote 23 The user must also agree to give the sponsor an exclusive licence to any new intellectual property generated from the study. The agreement also requires users to publish or otherwise publicly disclose their results, which helps to ensure research is pursued for verification rather than commercial purposes.
In the late 2000s, in what would become an example of reflexive data access governance,Footnote 24 the UK Biobank revised its Ethics and Governance Framework to address challenges that were current at the time. More specifically, the UK Biobank had originally committed to destroying the data of participants who chose to withdraw from the biobank. However, it soon realised that it could not uphold this commitment due to technical issues.Footnote 25 These included IT systems, designed ‘to protect the integrity and security of those people who have taken part’, that made it impossible to destroy data completely.Footnote 26 One year after identifying these issues, the UK Biobank discussed and agreed with its Ethics and Governance Council to amend the scope of its commitment: rather than destroying participant data, the biobank would commit to ensuring these data would be made completely unusable. UK Biobank subsequently revised both the participant information materials and governance frameworks not only to reflect this change, but also to describe the underlying reasons. In effect, such transparency and reflexiveness could increase participant trust and, ultimately, participation in biobanks.
Not all research teams or repositories have the guidance, resources or expertise to establish responsible data access governance. Adequate support from funding agencies and institutions is key. This support may include establishing community data repositories to store and manage access on behalf of researchers.
Concerns regarding the workload of DACs in manually reviewing data access requests are the basis for emerging innovations around automation of at least some parts of data access review.Footnote 27 One example of such efforts has been to automate the review of the conformity of the proposed data use with any use restrictions attached to the dataset – e.g. a consent agreement restricting use to non-commercial or disease-specific research. In this regard, a recent initiative supported by the Global Alliance for Genomics and Health (GA4GH) developed a matrix for machine-readable consent forms. While these technical approaches will substantially support the work, there will likely always be a need for human review of the most sensitive or disclosive data access requests.
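The idea of automatically matching a proposed use against machine-readable use restrictions can be sketched as follows. This is a simplified illustration with hypothetical restriction fields, not the GA4GH matrix itself; real consent codes are considerably richer, and the ‘refer’ outcome reflects the continuing need for human review noted above.

```python
# Minimal sketch of automated consent-restriction matching. The
# restriction fields and decision rules are hypothetical illustrations
# of the approach, not any standard's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DatasetRestrictions:
    """Machine-readable restrictions derived from the original consent."""
    disease_specific: Optional[str] = None  # e.g. "cancer"; None = any disease
    non_commercial_only: bool = False

@dataclass
class AccessRequest:
    """Structured summary of a proposed data use."""
    disease: Optional[str]  # disease studied, if stated
    commercial: bool

def auto_review(request: AccessRequest,
                restrictions: DatasetRestrictions) -> str:
    """Return 'approve', 'reject', or 'refer' (to a human DAC reviewer)."""
    if restrictions.non_commercial_only and request.commercial:
        return "reject"
    if restrictions.disease_specific is not None:
        if request.disease is None:
            # Purpose unclear from the structured fields alone:
            # escalate to human review rather than auto-deciding.
            return "refer"
        if request.disease != restrictions.disease_specific:
            return "reject"
    return "approve"
```

The third outcome is the important design point: automation handles the clear-cut cases, while anything the structured fields cannot resolve is escalated to the DAC.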
Oversight of access to biomedical databases would benefit considerably from further coordination between the relevant oversight bodies, such as DACs and RECs.Footnote 28 A single data-intensive research project may require access to multiple resources governed by multiple DACs, meaning multiple forms, reviews and delays. Multi-study DACs, such as METADAC, address the problem of repeated and time-consuming access processes. Requirements for multiple approvals from both ethics committees and DACs are dealt with in different ways. In the UK, for example, ethics review under the Human Tissue Act 2004 provides for broad approval for data sharing at the biobank level if relevant consents and other ethical safeguards are in place; permission for specific data access requests then only needs approval from the relevant DAC. Where national legislation is not in place, local or consortia arrangements are possible. The ICGC has disentangled ethics review from data access request review. Indeed, the ICGC’s DACO consistently maintains that its DAC is not an ethics review committee and that it should not evaluate the consent forms of users or their research protocols, relying instead ‘on the local ethics processes of the data users without imposing another layer of ethics review requirements on them’.Footnote 29
Interoperability of data access governance supports an important goal of data science, which is to combine similar datasets together to increase statistical power and thereby produce greater scientific insight. Access arrangements are currently fragmented, differing across countries, institutions and databases. These fragmented access arrangements have the potential to undermine usability of databases and produce data silos as users battle to conform to a variety of – sometimes contradictory – access requirements and conditions. Undertaking multiple roughly similar access processes to access different databases is not only burdensome, it also does not necessarily improve participant/data subject protections. Different aspects of access review can be streamlined so that they do not have to be repeated every time a researcher seeks access. Interoperability and predictability can be improved where different data stewards adopt standard access criteria. Central access portals could accept single requests to multiple data resources. This may be possible even where there are differences between the access conditions applying to the datasets. A step further would be to delegate certain aspects of access review. A common authentication body, for example, could be responsible for establishing the identity and affiliation of researchers, who could then present a single set of credentials to different access bodies.Footnote 30
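The common authentication body idea above can be sketched as a signed researcher credential that different access bodies verify independently, so the researcher presents one set of credentials everywhere. The sketch below is illustrative only: it uses a shared-key HMAC as a stand-in for a signature, whereas a real deployment would use public-key signatures so that access bodies need not hold the issuer’s secret; all names are hypothetical.

```python
# Sketch of a single authentication body attesting to researcher
# identity and affiliation. HMAC with a shared key stands in for a
# real signature scheme; everything here is illustrative.
import hashlib
import hmac
import json

AUTH_BODY_KEY = b"shared-secret-for-illustration-only"  # hypothetical key

def issue_credential(researcher_id: str, affiliation: str) -> dict:
    """The authentication body signs a claim about the researcher."""
    claim = json.dumps({"id": researcher_id, "affiliation": affiliation},
                       sort_keys=True)
    sig = hmac.new(AUTH_BODY_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(credential: dict) -> bool:
    """Any access body checks the attestation rather than re-vetting the
    researcher itself. (Here verifier and issuer share a key; a real
    system would verify a public-key signature instead.)"""
    expected = hmac.new(AUTH_BODY_KEY, credential["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])
```

Because verification needs only the credential itself, each access body can accept the same attestation without repeating identity checks, which is the delegation step the text describes.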
The ultimate goal of data access governance is to take into account, and maintain a balance between, the rights and interests of the various stakeholders involved in data sharing. A central aim, of course, is to promote broad access to data to advance knowledge and improve human health. In doing so, it is essential to have a comprehensive overview of the rights and interests of the involved parties, which may conflict with one another, when establishing rules for data access review and approval.
In view of increasing data sharing among researchers, it is crucial to ensure that DACs and RECs have sufficient resources to achieve the ultimate goals of access review, namely transparency, fairness and proportionality. In doing so, adopting a number of already proposed approaches would be advantageous, including partly automating the process of access review and introducing light-touch forms of review when sharing non-sensitive data.
Technological advancements could lead to heightened risks of re-identification of individuals when sharing sensitive health-related data. Therefore, it is important to ensure that the adopted governance mechanisms include adequate safeguards for data sharing. In addition, in establishing governance mechanisms, attention should be paid to the social values underpinning data sharing. Accordingly, the focus of data governance should not be limited to protecting the individual rights and interests of the involved parties, but should also extend to fostering the social values that can arise from promoting responsible data sharing.
1 D. Taichman et al., ‘Sharing Clinical Trial Data: A Proposal from the International Committee of Medical Journal Editors’, (2016) Annals of Internal Medicine, 164(7), 505–506.
2 ACMG Board of Directors, ‘Laboratory and Clinical Genomic Data Sharing Is Crucial to Improving Genetic Health Care: A Position Statement of the American College of Medical Genetics and Genomics’, (2017) Genetics in Medicine, 19(7), 721–722.
3 M. Murtagh et al., ‘International Data Sharing in Practice: New Technologies Meet Old Governance’, (2016) Biopreservation and Biobanking, 14(3), 231–240.
4 The Expert Panel on Timely Access to Health and Social Data for Health Research and Health System Innovation, ‘Accessing Health and Health-Related Data in Canada’, (Council of Canadian Academies, 2015).
5 Murtagh et al., ‘Better Governance, Better Access: Practising Responsible Data Sharing in the METADAC Governance Infrastructure’, (2018) Human Genomics, 12(1), 24.
6 W. W. Lowrance, Privacy, Confidentiality, and Health Research (Cambridge University Press, 2012).
7 M. Aitken et al., ‘Consensus Statement on Public Involvement and Engagement with Data-Intensive Health Research,’ (2019) International Journal of Population Data Science, 4(1).
8 Murtagh et al., ‘METADAC Governance Infrastructure’, 24.
9 M. Shabani et al., ‘Who Should Have Access to Genomic Data and How Should They Be Held Accountable? Perspectives of Data Access Committee Members and Experts’, (2016) European Journal of Human Genetics, 24(12), 1671–1675; P. Burton et al., ‘Policies and Strategies to Facilitate Secondary Use of Research Data in the Health Sciences’, (2017) International Journal of Epidemiology, 46(6), 1729–1733.
10 OECD, ‘Recommendations on Health Data Governance’, (OECD), www.oecd.org/els/health-systems/health-data-governance.htm.
11 Burton et al., ‘Policies and Strategies to Facilitate Secondary Use of Research Data’; Global Alliance for Genomics & Health, ‘GA4GH Accountability Policy 2016’, (Global Alliance for Genomics & Health, 2016).
12 Global Alliance for Genomics & Health, ‘GA4GH Accountability Policy 2016’.
13 Murtagh et al., ‘International Data Sharing’.
14 Murtagh et al., ‘METADAC Governance Infrastructure’.
15 European Genome-phenome Archive (EGA), ‘Introduction’, (EGA, 2019), www.ega-archive.org/about/introduction.
17 D. Paltoo et al., ‘Data Use Under the NIH GWAS Data Sharing Policy and Future Directions’, (2014) Nature Genetics, 46(9), 934–938, 934.
18 International Cancer Genome Consortium, ‘About Us’, (International Cancer Genome Consortium, 2018), www.icgc.org/about-us.
19 Y. Joly et al., ‘Data Sharing in the Post-Genomic World: The Experience of the International Cancer Genome Consortium (ICGC) Data Access Compliance Office (DACO)’, (