
Part II - Reimagining Health Research Regulation

Published online by Cambridge University Press:  09 June 2021

Graeme Laurie, University of Edinburgh
Edward Dove, University of Edinburgh
Agomoni Ganguli-Mitra, University of Edinburgh
Catriona McMillan, University of Edinburgh
Emily Postan, University of Edinburgh
Nayha Sethi, University of Edinburgh
Annie Sorbie, University of Edinburgh

Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

Section IIA Private and Public Dimensions of Health Research Regulation: Introduction

Graeme Laurie

It is a common trope in discussions of human health research, particularly as to its appropriate regulation, to frame the analysis in terms of the private and public interests that are at stake. Too often, in our view, these interests are presented as being in tension with each other, sometimes irreconcilably so. In this section, the authors grapple with this (false) dichotomy, both by providing deeper insights into the nature and range of the interests in play and by inviting us to rethink attendant regulatory responses and responsibilities. This is the common theme that unites the contributions.

The section opens with the chapter from Postan (Chapter 23) on the question of the return of individually relevant research findings to health research participants. Here an argument is made – adopting a narrative identity perspective – that greater attention should be paid to the informational interests of participants, beyond the possibility that findings might be of clinical utility. Set against the ever-changing nature of the researcher–participant relationship, Postan posits that there are good reasons to recognise these private identity interests and, as a consequence, to reimagine the researcher as an interpretive partner in making sense of research findings. At the same time, the implications of all of this for the wider research enterprise are recognised, not only in resource terms but also with respect to striking a defensible balance of responsibilities to participants while seeking to deliver the public value of research itself.

As to the concept of public interest per se, this has been tackled by Sorbie in Chapter 6, and various contributions in Section IB have addressed the role and importance of public engagement in the design and delivery of robust health research regulation. In this section, several authors build on these earlier chapters in multiple ways. For example, Taylor and Whitton (Chapter 24) directly challenge the putative tension between public and private interests, arguing that each is implicated in the other’s protection. They offer a reconceptualisation of privacy through a public interest lens, raising important questions for existing laws of confidentiality and data protection. Their perspective requires us to recognise the common interest at stake. Most distinctively, however, they extend their analysis to show how group privacy interests currently receive short shrift in health research regulation, and they suggest that this dangerous oversight must be addressed because the failure to recognise group privacy interests might ultimately jeopardise the common public interest in health research.

Starkly, Burgess (Chapter 25) uses just such an example of threats to group privacy – the care.data debacle – to mount a case for mobilising public expertise in the design of health research regulation. Drawing on the notion of deliberative public engagement, he demonstrates not only how this process can counter asymmetries of power in the structural design of regulation but also how the resulting advice about what is in the public interest can bring both legitimacy and trustworthiness to resultant models of governance. This is of crucial importance, because as he states: ‘[i]t is inadequate to assert or assume that research and its existing and emerging regulation is in the public interest’. His contribution allows us to challenge any such assertion and to move beyond it responsibly.

The last two contributions to this section continue this theme of structural reimagining of regulatory architectures, set against the interests and values in play. Vayena and Blasimme (Chapter 26) offer the example of Big Data to propose a model of adaptive governance that can adequately accommodate and respond to the diverse and dynamic interests at stake. Following principles-based regulation as previously discussed by Sethi in Chapter 17, they outline a model involving six principles and propose key factors for their implementation and operationalisation into effective governance structures and processes. This form of adaptive governance mirrors the discussions by Kaye and Prictor in Chapter 10. Importantly, the factors identified by the current authors – of social learning, complementarity and visibility – not only lend themselves to full and transparent engagement with the range of public and private interests, they require it. In the final chapter of this section, Brownsword (Chapter 27) invites us to address an overarching question that is pertinent to this entire volume: ‘how are the interests in pushing forward with research into potentially beneficial health technologies to be reconciled with the heterogeneous interests of the concerned who seek to push back against them?’ His contribution is to push back against the common regulatory response when discussing public and private interests: namely, to seek a ‘balance’. While not necessarily rejecting the balancing exercise as a helpful regulatory device at an appropriate point in the trajectory of regulatory responses to a novel technology, he implores us to place this in ‘a bigger picture of lexically ordered regulatory responsibilities’. For him, morally and logically prior questions are those that ask whether any new development – such as automated healthcare – poses threats to human existence and agency. Only thereafter ought we to consider a role for the balancing exercise that is currently so prevalent in human health research regulation.

Collectively, these contributions significantly challenge the public/private trope in health research regulation, but they leave it largely intact as a framing device for engaging with the constantly changing nature of the research endeavour. This is helpful in ensuring that ongoing conversations are not disrupted in unproductive ways. By the same token, individually these chapters provide a plethora of reasons to rethink how we frame public and private interests, and this in turn allows us to carve out new pathways in the future regulatory landscape. Thus:

  • Private interests have been expanded as to content (Postan) and extended as to their reach (Taylor and Whitton).

  • Moreover, the implications of recognising these reimagined private interests have been addressed, and not necessarily in ways resulting in inevitable tension with appeals to public interest.

  • The content of public interest has been aligned with deliberative engagement in ways that can increase the robustness of health research regulation as a participative exercise (Burgess).

  • Systemic oversight that is adaptive to the myriad of evolving interests has been offered as proof of principle (Vayena and Blasimme).

  • The default of seeking balance between public and private interests has been rightly questioned, at least as to its rightful place in the stack of ethical considerations that contribute to responsible research regulation (Brownsword).

23 Changing Identities in Disclosure of Research Findings

Emily Postan
23.1 Introduction

This chapter offers a perspective on the long-running ethical debate about the nature and extent of responsibilities to return individually relevant research findings from health research to participants. It highlights the ways in which shifts in the research landscape are changing the roles of researchers and participants, the relationships between them, and what this might entail for the responsibilities owed towards those who contribute to research by taking part in it. It argues that a greater focus on the informational interests of participants is warranted and that, as a corollary to this, the potential value of findings beyond their clinical utility deserves greater attention. It proposes participants’ interests in using research findings in developing their own identities as a central example of this wider value and argues that these could provide grounds for disclosure.

23.2 Features of Existing Disclosure Guidance

This chapter is concerned with the questions of whether, why, when and how individually relevant findings, which arise in the course of health research, should be offered or fed-back to the research participant to whom they directly pertain.Footnote 1 Unless otherwise specified, what will be said here applies to findings generated through observational and hands-on studies, as well as those using previously collected tissues and data.

Any discussion of ethical and legal responsibilities for disclosure of research findings must negotiate a number of category distinctions relating to the nature of the findings and the practices within which they are generated. However, as will become clear below, several lines of demarcation that have traditionally structured the debate are shifting. A distinction has historically been drawn between the intended (pertinent, or primary) findings from a study and those termed ‘incidental’ (ancillary, secondary, or unsolicited). ‘Incidental findings’ are commonly defined as individually relevant observations generated through research, but lying outwith the aims of the study.Footnote 2 Traditionally, feedback of incidental findings has been presented as more problematic than that of ‘intended findings’ (those the study set out to investigate). However, the cogency of this distinction is increasingly questioned, to the extent that many academic discussions and guidance documents have largely abandoned it.Footnote 3 There are several reasons for this, including difficulties in drawing a bright line between the categories in many kinds of studies, especially those that are open-ended rather than hypothesis-driven.Footnote 4 The relevance of researchers’ intentions to the ethics of disclosure is also questioned.Footnote 5 For these reasons, this chapter will address the ethical issues raised by the return of individually relevant research results, irrespective of whether they were intended.

The foundational question of whether findings should be fed-back – or feedback offered as an option – is informed by the question of why they should. This may be approached by examining the extent of researchers’ legal and ethical responsibilities to participants – as shaped by their professional identities and legal obligations – the strength of participants’ legitimate interests in receiving feedback, or researchers’ responsibilities towards the research endeavour. The last of these includes consideration of how disclosure efforts might impact on wider public interests in the use of research resources and generation of valuable generalisable scientific knowledge, and public trust in research. These considerations then provide parameters for addressing questions of which kinds of findings may be fed-back and under what circumstances. For example, which benefits to participants would justify the resources required for feedback? Finally, there are questions of how, including how researchers should plan and manage the pathway from anticipating the generation of such findings to decisions and practices around disclosure.

In the past two decades, a wealth of academic commentaries and consensus statements have been published, alongside guidance by research funding bodies and professional organisations, making recommendations about approaches to disclosure of research findings.Footnote 6 Some are prescriptive, specifying the characteristics of findings that ought to be disclosed, while others provide process-focused guidance on the key considerations for ethically, legally and practically robust disclosure policies. It is not possible here to give a comprehensive overview of all the permutations of responses to the four questions above. However, some prominent and common themes can be extracted.

Most strikingly, in contrast to the early days of this debate, it is rare now to encounter the bald question of whether research findings should ever be returned. Rather the key concerns are what should be offered and how.Footnote 7 The resource implications of identifying, validating and communicating findings are still acknowledged, but these are seen as feeding into an overall risk/benefit analysis rather than automatically implying non-disclosure. In parallel with this shift, there is less scepticism about researchers’ general disclosure responsibilities. In the UK, researchers are not subject to a specific legal duty to return findings.Footnote 8 Nevertheless, there does appear to be a growing consensus that researchers do have ethical responsibilities to offer findings – albeit limited and conditional ones.Footnote 9 The justifications offered for these responsibilities vary widely, however, and indeed are not always made explicit. This chapter will propose grounds for such responsibilities.

When it comes to determining what kinds of findings should be offered, three jointly necessary criteria are evident across much published guidance. These are captured pithily by Lisa Eckstein et al. as ‘volition, validity and value’.Footnote 10 Requirements for analytic and clinical validity entail that the finding reliably measures and reports what it purports to. Value refers to usefulness or benefit to the (potential) recipient. In most guidance this is construed narrowly in terms of the information’s clinical utility – understood as actionability and sometimes further circumscribed by the seriousness of the condition indicated.Footnote 11 Utility for reproductive decision-making is sometimes included.Footnote 12 Although some commentators suggest that ‘value’ could extend to the non-clinical, subjectively determined ‘personal utility’ of findings, it is generally judged that this alone would be insufficient to justify disclosure costs.Footnote 13 The third necessary condition is that the participant should have agreed voluntarily to receive the finding, having been advised at the time of consenting to participate about the kinds of findings that could arise and having had the opportunity to assent to or decline feedback.Footnote 14

Accompanying this greater emphasis on the ‘which’ and ‘how’ questions is an increasing focus upon the need for researchers to establish clear policies for disclosing findings, explained in informed consent procedures, together with an accompanying strategy for anticipating, identifying, validating, interpreting, recording, flagging-up and feeding-back findings in ways that maximise benefits and minimise harms.Footnote 15 Broad agreement among scholars and professional bodies that – in the absence of strong countervailing reasons – there is an ethical responsibility to disclose clinically actionable findings is not, however, necessarily reflected in practice, where studies may still lack disclosure policies, or have policies of non-disclosure.Footnote 16

Below I shall advance the claim that, despite a greater emphasis upon, and normalisation of, feedback of findings, there are still gaps, which mean that feedback policies may not be as widely instituted or appropriately directed as they should be. Chief among these gaps are, first, a continued focus on researchers’ inherent responsibilities considered separately from participants’ interests in receiving findings and, second, a narrow conception of when these interests are engaged. These gaps become particularly apparent when we attend to the ways in which the roles of researchers and participants and relationships between them have shifted in a changing health research landscape. In the following sections, I will first highlight the nature of these changes, before proposing what these mean for participants’ experiences, expectations and informational interests and, thus, for ethically robust feedback policies and practices.

23.3 The Changing Health Research Landscape

The landscape of health research is changing. Here I identify three facets of these changes and consider how these could – and indeed should – have an effect on the practical and ethical basis of policies and practices relating to the return of research findings.

The first of these developments is a move towards ‘learning healthcare’ systems and translational science, in which the transitions between research and care are fluid and cyclical, and the lines between patient and participant are often blurred.Footnote 17 The second is greater technical capacity, and appetite, for data-driven research, including secondary research uses of data and tissues – sourced from patient records, prior studies, or biobanks – and linkage between different datasets. This is exemplified by the growth in large-scale and high-profile genomic studies such as the UK’s ‘100,000 Genomes’ project.Footnote 18 The third development is increasing research uses of technologies and methodologies, such as functional neuroimaging, genome-wide association studies, and machine-learning, which lend themselves to open-ended, exploratory inquiries rather than hypothesis-driven ones.Footnote 19 I wish to suggest that these three developments have a bearing on disclosure responsibilities in three key respects: erosion of the distinction between research and care; generation of findings with unpredictable or ambiguous validity and value; and a decreasing proximity between researchers and participants. I will consider each of these in turn.

Much of the debate about disclosure of findings has, until recently, been premised on there being a clear distinction between research and care, and what this entails in terms of divergent professional priorities and responsibilities, and the experiences and expectations of patients and participants. Whereas it has been assumed that clinicians’ professional duty of care requires disclosure of – at least – clinically actionable findings, researchers are often seen as being subject to a contrary duty to refrain from feedback if this would encourage ‘therapeutic misconceptions’, or divert focus and resources from the research endeavour.Footnote 20 However, as health research increasingly shades into ‘learning healthcare’, these distinctions become untenable.Footnote 21 It is harder to insist that responsibilities to protect information subjects’ interests do not extend to those engaged in research, or that participants’ expectations of receiving findings are misconceived. Furthermore, if professional norms shift towards more frequent disclosure, so the possibility that healthcare professionals may be found negligent for failing to disclose becomes greater.Footnote 22 These changes may well herald more open feedback policies in a wider range of studies. However, if these policies are premised solely on the duty of care owed in healthcare contexts to participants-as-patients, then the risk is that any expansion will fail to respond adequately to the very reasons why findings should be offered at all – to protect participants’ core interests.

Another consequence of the shifting research landscape, and the growth of data-driven research in particular, lies in the nature of findings generated. For example, many results from genomic analysis or neuroimaging studies are probabilistic rather than strongly predictive, and produce information of varying quality and utility.Footnote 23 And open-ended and exploratory studies pose challenges precisely because what they might find – and thus its significance to participants – is unpredictable and, especially in new fields of research, may be less readily validated. These characteristics are of ethical significance because they present obstacles to meeting the requirements (noted above) of securing validity and value and of ascertaining what participants wish to receive. And where validity and value are uncertain, robust analysis of the relative risks and benefits of disclosure is not possible. Given these challenges, it is apparent that meeting participants’ informational interests will require more than just instituting clear disclosure policies. Instead, more flexible and discursive disclosure practices may be needed to manage unanticipated or ambiguous findings.

Increasingly, health research is conducted using data or tissues that were collected for earlier studies, or sourced from biobanks or patient records.Footnote 24 In these contexts, in contrast to the closer relationships entailed by translational studies, researchers may be geographically, temporally and personally far-removed from the participants. This poses a different set of challenges when determining responsibilities for disclosing research findings. First, it may be harder to argue that researchers working with pre-existing data collections hold a duty of care to participants, especially one analogous to that of a healthcare professional. Second, there is the question of who is responsible for disclosure: is it those who originally collected the materials, those who manage the resource, or those who generate the findings? Third, if consent is only sought when the data or tissues are originally collected, it is implausible that a one-off procedure could address in detail all future research uses, let alone the characteristics of all future findings.Footnote 25 And finally, in these circumstances, disclosure may be more resource-intensive where, for example, much time has elapsed or datasets have been anonymised. These observations underscore the problems of thinking of ‘health research’ as a homogenous category in which the respective roles and expectations of researchers and participants are uniform and easily characterised, and ethical responsibilities attach rigidly to professional identities.

Finally, it is also instructive to attend to shifts in wider cultural and legal norms surrounding our relationships to information about ourselves and the increasing emphasis on informational autonomy, particularly with respect to accessing and controlling information about our health or genetic relationships. There is increased legal protection of informational interests beyond clinical actionability, including the interest in developing one’s identity, and in reproductive decision-making.Footnote 26 For example, European human rights law has recognised the right of access to one’s health records and the right to know one’s genetic origins as aspects of the Article 8 right to respect for private life.Footnote 27 And in the UK, the legal standard for information provision by healthcare professionals has shifted from one determined by professional judgement, to that which a reasonable patient would wish to know.Footnote 28

When taken together, the factors considered in this section provide persuasive grounds for looking beyond professional identities, clinical utility and one-off consent and information transactions when seeking to achieve ethically defensible feedback of research findings. In the next section, I will present an argument for grounding ethical policies and practices upon the research participants’ informational interests.

23.4 Re-focusing on Participants’ Interests

What emerges from the picture above is that the respective identities and expectations of researchers and participants are changing, and with them the relationships and interdependencies between them. Some of these changes render research relationships more intimate, akin to clinical care, while others make them more remote. And the roles that each party fulfils, or is expected to fulfil, may be ambiguous. This lack of clarity presents obstacles to relying on prior distinctions and definitions and raises questions about the continued legitimacy of some existing guiding principles.Footnote 29 Specifically, it disrupts the foundations upon which disclosure of individually relevant results might be premised. In this landscape, it is no longer possible or appropriate – if indeed it ever was – simply to infer what ethical feedback practice would entail from whether or not an actor is categorised as ‘a researcher’. This is due not only to ambiguity about the scope of this role and associated responsibilities. It also looks increasingly unjustifiable to give only secondary attention to the nature and specificity of participants’ interests: to treat these as if they are a homogenous group of narrowly health-related priorities that may be honoured, provided doing so does not get in the way of the goal of generating generalisable scientific knowledge. There is a need to revisit the nature and balance of private and public interests at stake. My proposal here is that participants’ informational interests, and researchers’ particular capacities to protect these interests, should comprise the heart of ethical feedback practices.

There are several reasons why it seems appropriate – particularly now – to place participants’ interests at the centre of decision-making about disclosure. First, participants’ roles in research are no less in flux than researchers’. While it may be true that the inherent value of any findings to participants – whether they might wish to receive them and whether the information would be beneficial or detrimental to their health, well-being, or wider interests – may not be dramatically altered by emerging research practices, their motivations, experiences and expectations of taking part may well be different. In the landscape sketched above, it is increasingly appropriate to think of participants less as passive subjects of investigation and more as partners in the research relationship.Footnote 30 This is a partnership grounded in the contributions that participants make to a study and in the risks and vulnerabilities incurred when they agree to take part. The role of participant-as-partner is underscored by the rise of the idea that there is an ethical ‘duty to participate’.Footnote 31 This idea has escaped the confines of academic argument. Implications of such a duty are evident in public discourse concerning biobanks and projects such as 100,000 Genomes. For example, referring to that project, the (then) Chief Medical Officer for England has said that to achieve ‘the genomic dream’, we should ‘agree to use of data for our own benefit and others’.Footnote 32 A further compelling reason for placing the interests of participants at the centre of return policies is that doing so is essential to building confidence and demonstrating trustworthiness in research.Footnote 33 Without this trust there would be no participants and no research.

In light of each of these considerations, it is difficult to justify the informational benefits of research accruing solely to the project aims and the production of generalisable knowledge, without participants’ own core informational interests inviting corresponding respect. That is, respect that reflects the nature of the joint research endeavour and the particular kinds of exposure and vulnerabilities participants incur.

If demonstrating respect was simply a matter of reciprocal recognition of participants’ contributions to knowledge production, then it could perhaps be achieved by means other than feedback. However, research findings occupy a particular position in the vulnerabilities, dependencies and responsibilities of the researcher–participant relationship. Franklin Miller and others argue that researchers have responsibilities to disclose findings that arise from a particular pro tanto ethical responsibility to help others and protect their interests within certain kinds of professional relationships.Footnote 34 These authors hold that this responsibility arises because, in their professional roles, researchers have both privileged access to private aspects of participants’ lives, and particular opportunities and skills for generating information of potential significance and value to participants to which they would not otherwise have access.Footnote 35 I would add to this that being denied the opportunity to obtain otherwise inaccessible information about oneself not only fails to protect participants from avoidable harms, it also fails to respect and benefit them in ways that recognise the benefits they bring to the project and the vulnerabilities they may incur, and trust they invest, when doing so.

None of what I have said seeks to suggest that research findings should be offered without restriction, or at any cost. The criteria of ‘validity, value and volition’ continue to provide vital filters in ensuring that information meets recipients’ interests at all. However, provided these three conditions are met, investment of research resources in identifying, validating, offering and communicating individually relevant findings may be ethically justified, even required, when receiving them could meet non-trivial informational interests. One question that this leaves unanswered, of course, is what counts as an interest of this kind.

23.5 A Wider Conception of Value: Research Findings as Narrative Tools

If responsibilities for feedback are premised on the value of particular information to participants, it seems arbitrary to confine this value solely to clinical actionability, unless health-related interests are invariably more critical than all others. It is not at all obvious that this is so. This section provides a rationale for recognising at least one kind of value beyond clinical utility.Footnote 36

It is suggested here that where research findings support a participant’s abilities to develop and inhabit their own sense of who they are, significant interests in receiving these findings will be engaged. The kinds of findings that could perform this kind of function might include, for example, those that provide diagnoses that explain longstanding symptoms – even where there is no effective intervention – susceptibility estimates that instigate patient activism, or indications of carrier status or genetic relatedness that allow someone to (re)assess or understand their relationships and connections to others.

The claim to value posited here goes beyond appeals to ‘personal utility’, as commonly characterised in terms of curiosity, or some unspecified, subjective value. It is unsurprising that, thus construed, personal utility is rarely judged to engage sufficiently significant interests to warrant the effort and resources of disclosing findings.Footnote 37 However, the claim here – which I have more fully discussed elsewhereFootnote 38 – is that information about the states, dispositions and functions of our bodies and minds, and our relationships to others (and others’ bodies) – such as that conveyed by health research findings – is of value to us when, and to the extent that, it provides constitutive and interpretive tools that help us to develop our own narratives about who we are – narratives that constitute our identities.Footnote 39 Specifically, this value lies not in contributing to just any identity-narrative, but one that makes sense when confronted by our embodied and relational experiences and supports us in navigating and interpreting these experiences.Footnote 40 These experiences include those of research participation itself. A coherent, ‘inhabitable’ self-narrative is of ethical significance, because such a narrative is not just something we passively and inevitably acquire. Rather, it is something we develop and maintain, which provides the practical foundations for our self-understanding, interpretive perspective and values, and thus our autonomous agency, projects and relationships.Footnote 41 If we do indeed have a significant interest in developing and maintaining such a narrative, and some findings generated in health research can support us in doing so, then my claim is that these findings may be at least as valuable to us as those that are clinically actionable. As such, our critical interests in receiving them should be recognised in feedback policies and practices.

In response to concern that this proposal constitutes an unprecedented incursion of identity-related interests into the (public) values informing governance of health research, it is noted that the very act of participating in research is already intimately connected to participants’ conceptions of who they are and what they value, as illustrated by choices to participate motivated by family histories of illness,Footnote 42 or objections to tissues or data being used for commercial research.Footnote 43 Participation already impacts upon the self-understandings of those who choose to contribute. Indeed, it may often be seen as contributing to the narratives that comprise their identities. Seen in this light, it is not only appropriate, but vital, that the identity-constituting nature of research participation is reflected in the responsibilities that researchers – and the wider research endeavour – owe to participants.

23.6 Revisiting Ethical Responsibilities for Feeding Back Findings

What would refocusing ethical feedback practices to encompass the kinds of identity-related interests described above mean for the responsibilities of researchers and others? I submit that it entails responsibilities both to look beyond clinical utility to anticipate when findings could contribute to participants’ self-narratives and to act as an interpretive partner in discharging responsibilities for offering and communicating findings.

It must be granted that the question of when identity-related interests are engaged by particular findings is a more idiosyncratic matter than clinical utility. This serves to underscore the requirement that any disclosure of findings is voluntary. And while this widening of the conception of ‘value’ is in concert with increasing emphasis on individually determined informational value in healthcare – as noted above – it is not a defence of unfettered informational autonomy, requiring the disclosure of whatever participants might wish to see. In order for research findings to serve the wider interests described above, they must still constitute meaningful and reliable biomedical information. There is no value without validity.Footnote 44

These two factors signal that the ethical responsibilities of researchers will not be discharged simply by disclosing findings. There is a critical interpretive role to be fulfilled at several junctures, if participants’ interests are to be protected. These include: anticipating which findings could impact on participants’ health, self-conceptions or capacities to navigate their lives; equipping participants to understand at the outset whether findings of these kinds might arise; and, if participants choose to receive these findings, ensuring that these are communicated in a manner that is likely to minimise distress, and enhance understanding of the capacities and limitations of the information in providing reliable explanations, knowledge or predictions about their health and their embodied states and relationships. This places the researcher in the role of ‘interpretive partner’, supporting participants to make sense of the findings they receive and to accommodate – or disregard – them in conducting their lives and developing their identities.

This role of interpretive partner represents a significant extension of responsibilities from an earlier era in which a requirement to report even clinically significant findings was questioned. The question then arises as to who will be best placed to fulfil this role. As noted above, dilemmas about who should disclose arise most often in relation to secondary research uses of data.Footnote 45 These debates err, however, when they treat this as a question focused on professional and institutional duties abstracted from participants’ interests. When we attend to these interests, the answer that presents itself is that feedback should be provided by whoever is best placed to recognise and explain the potential significance of the findings to participants. And it may in some cases be that those best placed to do this are not researchers at all, but professionals performing a role analogous to genetic counsellors.

Even though the triple threshold conditions for disclosure – validity, value and volition – still apply, any widening of the definition of value implies a larger category of findings to be validated, offered and communicated. This will have resource implications. And – as with any approach to determining which findings should be fed-back and how – the benefits of doing so must still be weighed against any resultant jeopardy to the socially valuable ends of research. However, if we are not simply paying lip-service to, but taking seriously, the ideas that participants are partners in, not merely passive objects of, research, then protecting their interests – particularly those arising through participation – is not supererogatory, but an intrinsic part of recognising their contribution to biomedical science, their vulnerability, trust and experiences of contributing. Limiting these interests to receipt of clinically actionable findings is arbitrary and out of step with wider ethico-legal developments in the health sphere. The fact that these findings arise in the context of health research is not on its own sufficient reason for interpreting ‘value’ solely in clinical terms.

23.7 Conclusion

In this chapter, I have argued that there are two shortcomings in current ethical debates and guidance regarding policies and practices for feeding back individually relevant findings from health research. These are, first, a focus on the responsibilities of actors for disclosure that remains insufficiently grounded in the essential questions of when and how disclosure would meet core interests of participants; and, second, a narrow interpretation of these interests in terms of clinical actionability. Specifically, I have argued that participants have critical interests in accessing research findings where these offer valuable tools of narrative self-constitution. These shortcomings have been particularly brought to light by changes in the nature of health research, and addressing them becomes ever more important as the role of participants evolves from that of objects of research to that of active members of shared endeavours. I have proposed that in this new health research landscape, there are strong grounds not only for widening feedback to include potentially identity-significant findings, but also for recognising the valuable role of researchers and others as interpretive partners in the relational processes of anticipating, offering and disclosing findings.

24 Health Research and Privacy through the Lens of Public Interest: A Monocle for the Myopic?

Mark Taylor and Tess Whitton
24.1 Introduction

Privacy and public interest are reciprocal concepts, mutually implicated in each other’s protection. This chapter considers how viewing the concept of privacy through a public interest lens can reveal the limitations of the narrow conception of privacy currently inherent to much health research regulation (HRR). Moreover, it reveals how the public interest test, applied in that same regulation, might mitigate risks associated with a narrow conception of privacy.

The central contention of this chapter is that viewing privacy through the lens of public interest allows the law to bring into focus more things of common interest than privacy law currently recognises. We are not the first to recognise that members of society share a common interest in both privacy and health research. Nor are we the first to suggest that public is not necessarily in opposition to private, with public interests capable of accommodating private and vice versa.Footnote 1 What is novel about our argument is the suggestion that we might invoke public interest requirements in current HRR to protect group privacy interests that might otherwise remain out of sight.

It is important that HRR takes this opportunity to correct its vision. A failure to do so will leave HRR unable to take into consideration research implications with profound consequences for future society, and will undermine its legitimacy. It is no exaggeration to say that the value of a confidential healthcare system may come to depend on whether HRR acknowledges the significance of group data to the public interest. It is group data that shapes health policies, evaluates success, and determines the healthcare opportunities offered to members of particular groups. Individual opportunity, and entitlement, is dependent upon group classification.

The argument here is three-fold: (1) a failure to take common interests into account when making public interest decisions undermines the legitimacy of the decision-making process; (2) a common interest in privacy extends to include group interests; (3) the law’s current myopia regarding group privacy interests in data protection law and the law of confidence can be corrected, to varying extents, by bringing group privacy interests into view through the lens of public interest.

24.2 Common Interests, Public Interest and Legitimacy

In this section, we seek to demonstrate how a failure to take the full range of common (group) interests into account when making public interest decisions will undermine the legitimacy of those decisions.

When Held described broad categories into which different theories of public interest might be understood to fall, she listed three: preponderance or aggregative theories, unitary theories and common interest theories.Footnote 2 When Sorauf earlier composed his own list, he combined common interests with values and gave the category the title ‘commonly-held value’.Footnote 3 We have separately argued that a compelling conception of public interest may be formed by uniting elements of ‘common interest’ and ‘common value’ theories of public interest.Footnote 4 It is, we suggest, through combining facets of these two approaches that one can overcome the limitations inherent to each. Here we briefly recap this argument before seeking to build upon it.

Fundamental to common interest theories of the public interest is the idea that something may serve ‘the ends of the whole public rather than those of some sector of the public’.Footnote 5 If one accepts the idea that there may be a common interest in privacy protection, as well as in the products of health research, then ‘common interest theory’ brings both privacy and health research within the scope of public interest consideration. However, it cannot explain how – in the case of conflict – they ought to be traded off against each other – or other common interests – to determine the public interest in a specific scenario.

In contrast to common interest theories, commonly held value theories claim the ‘public interest emerges as a set of fundamental values in society’.Footnote 6 If one accepts that a modern liberal democracy places a fundamental value upon all members of society being respected as free and equal citizens, then any interference with individual rights should be defensible in terms that those affected can both access and have reason to endorseFootnote 7 – with discussion subject to the principles of public reasoning.Footnote 8 Such a commitment is enough to fashion a normative yardstick, capable of driving a public interest determination. However, the object of measurement remains underspecified.

It is through combining aspects of common interest and common value approaches that a practical conception of the public interest begins to emerge: any trade-off between common interests ought to be defensible in terms of common value: for reasons that those affected by a decision can both access and have reason to endorse.Footnote 9

An advantage of this hybrid conception of public interest is its connection with (social) legitimacy.Footnote 10 If a decision-maker fails to take into account the full range of interests at stake, then they undermine not only any public interest claim but also the legitimacy of the decision-making process underpinning it.Footnote 11 Of course, this does not imply that the legitimacy of a system depends upon everyone perceiving the ‘public interest’ to align with their own contingent individual or common interests. Public-interest decision-making should, however, ensure that when the interests of others displace any individual’s interests, including those held in common, it should (ideally) be transparent why this has happened and (again, ideally) the reasons for displacement should be acceptable as ‘good reasons’ to the individual.Footnote 12 If the displaced interest is more commonly held, it is even more important for a system practically concerned with maintaining legitimacy to account transparently for that interest within its decision-making process.

Any failure to account transparently for common interests will undermine the legitimacy of the decision-making process.

24.3 Common Interests in (Group) Privacy

In this section, the key claim is that a common interest in privacy extends beyond a narrow atomistic conception of privacy to include group interests.

We are aware of no ‘real definition’ of privacy.Footnote 13 There are, however, many stipulative or descriptive definitions, contingent upon use of the term within particular cultural contexts. Here we operate with the idea that privacy might be conceived in the legal context as representing ‘norms of exclusivity’ within a society: the normative expectation that some states of information separation are, by default, to be maintained.Footnote 14 This is a broad conception of privacy extending beyond the atomistic one that Bennett and Raab observe to be the prevailing privacy paradigm in many Western societies.Footnote 15 It is not necessary to defend a broad conception of privacy in order to recognise a common interest in privacy protection. It is, however, necessary to broaden the conception in order to bring all of the possible common interests in privacy into view. As Bennett and Raab note, the atomistic conception of privacy

fails to properly understand the construction, value and function of privacy within society.Footnote 16

Our ambition here is not to demonstrate an atomistic conception to be ‘wrong’ in any objective or absolute sense, but rather to recognise the possibility that a coherent conception of privacy may extend its reach and capture additional values and functions. In 1977, after a comprehensive survey of the literature available at the time, Margulis proposed the following consensus definition of privacy:

[P]rivacy, as a whole or in part, represents control over transactions between person(s) and other(s), the ultimate aim of which is to enhance autonomy and/or to minimize vulnerability.Footnote 17

Nearly thirty years after the definition was first offered, Margulis recognised that his early attempt at a consensus definition

failed to note that, in the privacy literature, control over transactions usually entailed limits on or regulation of access to self (Allen, 1998), sometimes to groups (e.g., Altman, 1975), and occasionally to larger collectives such as organisations (e.g., Westin, 1967).Footnote 18

The adjustment is important. It allows a conception of privacy to recognise that there may be relevant norms, in relation to transactions involving data, that do not relate to identifiable individuals but are nonetheless associated with normative expectations of data flows and separation. Not only is there evidence that there are already such expectations in relation to non-identifiable data,Footnote 19 but data relating to groups – rather than just individuals – will be of increasing importance.Footnote 20

There are myriad examples of how aggregated data have led to differential treatment of individuals due to association with group characteristics.Footnote 21 Beyond the obvious examples of individual discrimination and stigmatisation due to inferences drawn from (perceived) group membership, there can be group harm(s) to collective interests including, for example, harm connected to things held to be of common cultural value and significance.Footnote 22 It is the fact that data relates to the group level that leaves cultural values vulnerable to misuse of the data.Footnote 23 This goes beyond a recognition that privacy may serve ‘not just individual interests but also common, public, and collective purposes’.Footnote 24 It is recognition that it is not only individual privacy but group privacy norms that may serve these common purposes. In fact, group data, and the norms of exclusivity associated with it, are likely to be of increasing significance for society. As Taylor, Floridi and van der Sloot note,

with big data analyses, the particular and the individual is no longer central. … Data is analysed on the basis of patterns and group profiles; the results are often used for general policies and applied on a large scale.Footnote 25

This challenges the adequacy of a narrow atomistic conception of privacy to account for what will increasingly matter to society. De-identification of an individual does not protect against the relevant harms they may suffer as a member of a group, including groups that may be created through the research and may not otherwise exist.Footnote 26 In the next section, we suggest not only that the concept of the public interest can be used to bring the full range of privacy interests into view, but also that a failure to do so will undermine the legitimacy of any public interest decision-making process.

24.4 Group Privacy Interests and the Law

The argument in this section is that, although HRR does not currently recognise the concept of group privacy interests, through the concept of public interest inherent to both the law of data protection and the duty of confidence, there is opportunity to bring group privacy interests into view.

24.4.1 Data Protection Law

The Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (hereafter, Treaty 108) (as amended)Footnote 27 cast the template for subsequent data protection law when it placed the individual at the centre of its object and purposeFootnote 28 and defined ‘personal data’ as:

any information relating to an identified or identifiable individual (‘data subject’)Footnote 29

This definition confines the scope of data protection law even more narrowly than ‘data relating to an individual’: data relating to unidentified or unidentifiable individuals fall outside its concern. This blinkered view is replicated through data protection instruments from the first through to the most recent: the EU General Data Protection Regulation (GDPR).

The GDPR is only concerned with personal data, defined in a substantively similar and narrow fashion to Treaty 108. In so far as its object is privacy protection, it is predicated upon a relatively narrow and atomistic conception of privacy. However, if the concerns associated with group privacy are viewed through the lens of public interest, then they may be given definition and traction even within the scope of a data protection instrument like the GDPR. The term ‘the public interest’ appears in the GDPR no fewer than seventy times. It has a particular significance in the context of health research. Health research, like criminal investigation, is an area in which the public interest has always been protected.

Our argument is that it is through the application of the public interest test to health research governance in data protection law that there is an opportunity to recognise, in part, common interests in group privacy. For example, any processing of personal data within the material and territorial scope of the GDPR requires a lawful basis. The legal bases most likely to be applicable to the processing of personal data for research purposes are either that the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller (Article 6(1)(e)), or that it is necessary for the purposes of the legitimate interests pursued by the controller (Article 6(1)(f)). In the United Kingdom (UK), where universities are considered to be public authorities, they are unable to rely upon ‘legitimate interests’ as a basis for lawful processing. Much health research in the UK will thus be carried out on the basis that it is necessary for the performance of a task in the public interest. Official guidance issued in the UK is that organisations relying upon the necessity of processing to carry out a task ‘in the public interest’

should document their justification for this, by reference to their public research purpose as established by statute or University Charter.Footnote 30

Mere assertion that a particular processing operation is consistent with an organisation’s public research purpose will provide relatively scant assurance that the operation is necessary for the performance of a task in the public interest. A more substantial approach would document the justification relevant to particular processing operations. Where research proposals are considered by institutional review boards, such as university or NHS ethics committees, independent consideration by such bodies of the public interest in the processing operation would provide that rationale. We suggest this provides an opportunity for group privacy concerns to be drawn into consideration. They might also form part of any privacy impact assessment carried out by the organisation. What is more, for the sake of legitimacy, any interference with group interests, or risk of harm to members of a group or to the collective interests of the group as a whole, should be subject to the test that members of the group be offered reasons to accept the processing as appropriate.Footnote 31 Such a requirement might support good practice in consumer engagement prior to the roll-out of major data initiatives.

Admittedly, while this may provide opportunity to bring group privacy concerns into consideration where processing is carried out by a public authority (and the legal basis of processing is performance of a task carried out in the public interest), this only provides limited penetration of group privacy concerns into the regulatory framework. It would not, for example, apply where processing was in pursuit of legitimate interests or another lawful basis. There are other limited opportunities to bring group privacy concerns into the field of vision of data protection law through the lens of public interest.Footnote 32 However, for as long as the gravitational orbit of the law is around the concept of ‘personal data’, the chances to recognise group privacy interests are likely to be limited and peripheral. By contrast, more fundamental reform may be possible in the law of confidence.

24.4.2 Duty of Confidence

As with data protection and privacy,Footnote 33 there is an important distinction to be made between privacy and confidentiality. However, the UK has successfully defended its ability to protect the right to respect for private and family life, as recognised by Article 8 of the European Convention on Human Rights (ECHR), by pointing to the possibility of an action for breach of confidence.Footnote 34 It has long been recognised that the law’s protection of confidence is grounded in the public interestFootnote 35 but, as Lord Justice Briggs noted in R (W,X,Y and Z) v. Secretary of State for Health (2015),

the common law right to privacy and confidentiality is not absolute. English common law recognises the need for a balancing between this right and other competing rights and interests.Footnote 36

The argument put forward here is consistent with the idea that the protection of privacy, and the protection of other competing rights and interests such as those associated with health research, are each in the public interest. The argument here is that, when considering the appropriate balance or trade-off between different aspects of the public interest, a broader view of privacy protection than English law has hitherto taken is necessary to protect the legitimacy of decision-making. Such judicial innovation is possible.

The law of confidence has already evolved considerably over the past twenty or so years. Since the Human Rights Act 1998Footnote 37 came into force in 2000, the development of the common law has been in harmony with Articles 8 and 10 of the ECHR.Footnote 38 As a result, as Lord Hoffmann put it,

What human rights law has done is to identify private information as something worth protecting as an aspect of human autonomy and dignity.Footnote 39

Protecting private information as an aspect of individual human autonomy and dignity might signal a shift toward the kind of narrow and atomistic conception of privacy associated with data protection law. This would be as unnecessary as it would be unfortunate. In relation to the idea of privacy, the European Court of Human Rights has itself said that

The Court does not consider it possible or necessary to attempt an exhaustive definition of the notion of ‘private life’ … Respect for private life must also comprise to a certain degree the right to establish and develop relationships with other human beings.Footnote 40

It remains open to the courts to recognise that the implications of group privacy concerns have a bearing on an individual’s ability to establish and develop relations with other human beings. Respect for human autonomy and dignity may yet serve as a springboard toward a recognition by the law of confidence that data processing impacts upon the conditions under which we live social (not atomistic) lives and our ability to establish and develop relationships as members of groups. After all, human rights are due to members of a group and their protection has always been motivated by group concerns.Footnote 41

One of us has argued elsewhere that English law took a wrong turn when R (Source Informatics) v. Department of HealthFootnote 42 was taken to be authority for the proposition that a duty of confidence cannot be breached through the disclosure of non-identifiable data. The ratio in Source Informatics may yet be re-interpreted and recognised to be consistent with the claim that legal duties may be engaged through the use and disclosure of non-identifiable data.Footnote 43 In some ways, this would simply be to return to the roots of the legal protection of privacy. In her book The Right to Privacy, Megan Richardson traces the origins and influence of the ideas underpinning the legal right to privacy. As she remarks, ‘the right from the beginning has been drawn on to serve the rights and interests of minority groups’.Footnote 44 Richardson recognises that, even in those cases where an individual was the putative focus of any action or argument,

Once we start to delve deeper, we often discover a subterranean network of families, friends and other associates whose interests and concerns were inexorably tied up with those of the main protagonist.Footnote 45

As a result, it has always been the case that the right to privacy has ‘broader social and cultural dimensions, serving the rights and interests of groups, communities and potentially even the public at large’.Footnote 46 It would be a shame if, at a time when we may need it most, the duty of confidence were to deny its own potential to protect reasonable expectations in the use and disclosure of information simply because misuse has the potential to affect more than one identifiable individual.

24.5 Conclusion

The argument has been founded on the claim that a commitment to the protection of common interests in privacy and the product of health research, if placed alongside the commonly held value of individuals as free and equal persons, may establish a platform upon which one can construct a substantive idea of the public interest. If correct, then a proper calculation of the public interest requires an understanding of the breadth of privacy interests that need to be accounted for if we are to avoid subjugating the public to governance, and to a trade-off between competing interests, that they have no reason to accept.

Enabling access to the data necessary for health research is in the public interest. So is the protection of group privacy. Recognising this point of connection can help guide decision-making where there is some kind of conflict or tension. The public interest can provide a common, commensurate framing. Where this framing has a normative dimension, it grounds the claim that the full range of common interests ought to be brought into view and weighed in the balance. One must capture all interests valued by the affected public, whether individual or common in nature, in order to offer them a reason to accept a particular trade-off between privacy and the public interest in health research. To do otherwise is to get the balance of governance wrong and to compromise its social legitimacy.

That full range of common interests must include interests in group data. An understanding of what the public interest requires in a particular situation is short-sighted if this is not brought into view. An implication is that group interests must be taken into account within an interpretation and application of public interest in data protection law. Data controllers should be accountable for addressing group privacy interests in any public interest claim. With respect to the law of confidence, there is scope for even more significant reform. If the legitimacy of the governance framework, applicable to health data, is to be assured into the future, then it needs to be able to see – so that it might protect – reasonable expectations in data relating to groups of persons and not just identifiable individuals. Anything else will be a myopic failure to protect some of the most sensitive data about people simply on the grounds that misuse does not affect a sole individual but multiple individuals simultaneously. That is not a governance model that we have any reason to accept and we have the concept of public interest at our disposal to correct our vision and bring the full range of relevant interests into view.

25 Mobilising Public Expertise in Health Research Regulation

Michael M. Burgess
25.1 Introduction

This chapter will develop the role that deliberative public engagement should have in health research regulation. The goal of public deliberation is to mobilise the expertise that members of the public have, and to explore their values in relation to specific trade-offs, with the objective of producing recommendations that respect diverse interests. Public deliberation requires that a small group be invited to a structured event that supports informed, civic-minded consideration of diverse perspectives on the public interest. Ensuring that perspectives that might otherwise be marginalised or silenced are included requires explicitly designing the small group in relation to the topic. Incorporating public expertise enhances the trustworthiness of policies and governance by explicitly acknowledging and negotiating diverse public interests. Trustworthiness is distinct from trust, so the chapter begins by exploring that distinction in the context of the example of care.data and the loss of trust in the English National Health Service’s (NHS) use of electronic health records for research. While better public engagement prior to the announcement might have avoided the loss of trust, subsequent deliberative public engagement may build trustworthiness into the governance of health research endeavours and contribute to re-establishing trust.

25.2 Trustworthiness of Research Governance

Some activities pull at the loose threads of social trust. These events threaten to undermine the presumption of legitimacy that underlies activities directed to the public interest. The English NHS care.data programme is one cautionary tale (NHS, 2013).Footnote 1 The trigger event was the distribution of a pamphlet to households. The pamphlet informed the public that a national database of patients’ medical records would be used for patient care and to monitor outcomes, and that research would take place on anonymous datasets. The announcement was entirely legitimate within the existing regulatory regime. The Editor-in-Chief of the British Medical Journal summarises, ‘But all did not go to plan. NHS England’s care.data programme failed to win the public’s trust and lost the battle for doctors’ support. Two reports have now condemned the scheme, and last week the government decided to scrap it.’Footnote 2

The stimulation of public distrust often begins with the political mobilisation of a segment of the public, but it may lead to a wider rejection of previously non-controversial trade-offs. In the case of care.data, the first response was to ensure better education about benefits and enhanced informed consent. The Caldicott Report on the care.data programme called for better technology standards, publication of disclosure procedures, an easy opt-out procedure and a ‘dynamic consent’ process.Footnote 3

There are good reasons to doubt that improved regulation and informed consent procedures alone will restore the loss, or sustain current levels, of public trust. It is unlikely that the negative reaction to care.data stemmed from an assessment of the adequacy of the regulations for privacy and access to health data. Moreover, everything proposed under care.data was perfectly lawful. It is far more likely that the reaction reflected a rejection of what was presented as a clear case of justified use of patients’ medical records. The perception was that the trade-offs were not legitimate, at least to some of the public and practitioners. The destabilisation of trust that patient information was being used in appropriate ways, even following what should have been an innocuous articulation of practice, suggests a shift in how the balance between access to information and privacy is perceived. Regulatory experts on privacy and informed consent may strengthen protection or recalibrate what is protected. But such measures do not develop an understanding of, and response to, how a wider public might assign proportionate weight to privacy and access in issues related to research regulation. Social controversy about the relative weight of important public interests demonstrates the loss of legitimacy of previous decisions and processes. It is the legitimacy of the programmes that requires public input.

The literature on public understanding of science also suggests that merely providing more detailed information and technical protections is unlikely to increase public trust.Footnote 4 Although alternative models of informed consent are beyond the scope of this chapter, it seems more likely that informed consent depends on relationships of trust, and that trust, or its absence, is more of a heuristic approach that serves as a context in which people make decisions under conditions of limited time and understanding.Footnote 5 Trust is often extended without assessment of whether the conditions justify trust, or are trustworthy. It also follows that trust may not be extended even when the conditions seem to merit trust. The complicated relationship between trust and trustworthiness has been discussed in another chapter (see Chuong and O’Doherty, Chapter 12) and in the introduction to this volume, citing Onora O’Neill, who encourages us to focus on demonstrating trustworthiness in order to earn trust.

The care.data experience illustrates how careful regulation within the scope of law and current research ethics, and communication of those arrangements to a wide public, was not sufficient for the plan to be perceived as legitimate and to be trusted. Regulation of health research needs to be trustworthy, yet distrust can be stimulated despite considerable efforts and on-going vigilance. If neither trust nor distrust is based on the soundness of the regulation of health research, then the sources of distrust need to be explicitly addressed.

25.3 Patients or Public? Conceptualising What Interests Are Important

It is possible to turn to ‘patients’ or ‘the public’ to understand what may stabilise or destabilise trust and legitimacy in health research. There is a considerable literature, and there are funding opportunities, related to involving patients in research projects and the associated improvements in outcomes.Footnote 6 The distinction between public and patients is largely conceptual, but it is important to clarify what aspects of participants’ lives we are drawing on to inform research and regulation, and then to structure recruitment and the events to emphasise that focus.Footnote 7 In their roles as patients, or as caregivers and advocates for family and friends in healthcare, participants can draw on their experiences to inform clinical care, research and policy. In contrast, decisions that allocate across healthcare needs, or broader public interests, require consideration of a wider range of experiences, as well as the values and practical knowledge that participants hold as members of the public. Examples of where it is important to achieve a wider ‘citizen’ perspective include funding decisions on drug expenditures and disinvestment, and balancing privacy concerns against benefits from access to health data or biospecimens.Footnote 8 Consideration of how to involve the public in research priorities is not adequately addressed by involving community representatives on research ethics review committees.Footnote 9

Challenges to trust and legitimacy often arise when there are groups who hold different interpretations of what is in the public interest. Vocal participants on an issue often divide into polarised groups. But there is often also a multiplicity of public interests, so there is no single ‘public interest’ to be discovered or determined. Each configuration of a balance of interests also has resource implications, and the consequences are borne unevenly across the lines of inequity in society. There is a democratic deficit when decisions are made without input from members of the public who will be affected by the policy but have not been motivated to engage. This deficit is best addressed by ‘actively seek(ing) out moral perspectives that help to identify and explore as many moral dimensions of the problem as possible’.Footnote 10 This rejects the notion that bureaucracies and elected representatives are adequately informed by experts and stakeholders to determine what is in the interests of all who will be affected by important decisions. These decisions are, in fact, about a collective future, often funded by public funds with opportunity costs. Research regulation, like biotechnology development and policy, must explicitly consider how, and by whom, the relative importance of benefits and risks is decided.

The distinction between trust and trustworthiness, between bureaucratic legitimacy and perceived social licence, gives rise to the concern that much patient and public engagement may be superficial and even manipulative.Footnote 11 Careful consideration must be given to how the group is convened, informed and facilitated, and to how conclusions or recommendations are formulated. An earlier chapter considered the range of approaches to public and patient engagement, and how different approaches are relevant for different purposes (see Aitken and Cunningham-Burley, Chapter 11).Footnote 12 To successfully stimulate trust and legitimacy, the process of public engagement requires working through these dimensions.

25.4 Conceptualising Public Expertise: Representation and Inclusiveness

The use of the term ‘public’ is normally intended to be as inclusive as possible, but it is also used to distinguish the call to public deliberation from other descriptions of members of society or stakeholders. There is a specific expertise called upon when people participate as members of the public as opposed to patients, caregivers, stakeholders or experts. Participants are sought for their broad life perspective. As perspective bearers coming from a particular structural location in society, with ‘experience, history and social knowledge’,Footnote 13 participants draw on their own social knowledge and capacity in a deliberative context that supports this articulation without presuming that their experiences are adequate to understand those of others, or that there is necessarily a common value or interest.

‘Public expertise’ is what we all develop as we live in our particular situatedness; in structured deliberative events it is blended with an understanding of other perspectives and directed towards developing collective advice on the controversial choices that are the focus of the deliberation. Adopting Althusser’s notion of hailing or ‘interpellation’ as the ideological construction of people’s role and identity, Berger and De Cleen suggest that calling people to deliberate ‘offers people the opportunity to speak (thus empowering them) and a central aspect of how their talk is constrained and given direction (the exercise of power on people)’.Footnote 14 In deliberation, the manifestation of public expertise is interwoven with the overall framing, together co-creating the capacity to consider the issues deliberated from a collective point of view.Footnote 15 Political scientist Mark Warren suggests that ‘(r)epresentation can be designed to include marginalized people and unorganized interests, as well as latent public interests’.Footnote 16 Citizens’ juries, one form of deliberation, capture in their name and process the way the courts have long drawn on the public to constitute a group of peers who must make sense of, and form collective judgments out of, conflicting and diverse information and alternative normative weightings.Footnote 17

Simone Chambers, in a classic review of deliberative democracy, emphasised two critiques from diversity theory and suggested that these would be central concerns for the next generation of deliberative theorists: (1) reasonableness and reason-giving; and (2) conditions of equality among participants in deliberative activities.Footnote 18 The facilitation of deliberative events is discussed below, but participants can be encouraged and given the opportunity to understand each other’s perspectives in a manner that may be less restrictive than theoretical discussions suggest. For example, the use of narrative accounts to explain how participants come to hold particular beliefs or positions provides important perspectives that might not be volunteered or considered if there were a strong emphasis on justifying one’s views with reasons in order for them to be considered.Footnote 19

The definition and operationalisation of inclusiveness is important because deliberative processes are rarely large scale, focussing instead on the way that small groups can demonstrate how a wider public would respond if they were informed and civic-minded.Footnote 20 Representation or inclusiveness is often the starting place for consideration of an engagement process.Footnote 21 Steel and colleagues have described three different types of inclusiveness that provide conceptual clarity about the constitution of a group for engagement: representative, egalitarian and normic diversity.Footnote 22

Representative diversity requires that the distribution of the relevant sub-groups in the sample reflects their distribution in the reference population. Egalitarian diversity, in contrast, ignores the size of each sub-group in the population and requires equal representation of each relevant sub-group, so that each perspective carries equal weight. Normic diversity requires the over-representation of sub-groups who are marginalised or overwhelmed by the larger, more influential or mainstream groups in the population. Each of these concepts aims for a symmetry, but the representative approach presumes that symmetry is the replication of the population, while the egalitarian and normic concepts directly consider asymmetry of power and voice in society.

Attempts to enhance the range of perspectives considered in determining the public interest(s) are likely to draw on the normic and egalitarian concepts of diversity, and to de-emphasise the representative notion. The goal of deliberative public engagement is to address a democratic deficit whereby some perspectives have dominated consideration of the issues, even if none has prevailed over the others. It seeks to include a wider range of input from diverse citizens about how to live together, given the different perspectives on what is ‘in the public interest’. Normic diversity suggests that dominant groups should be less present in the deliberating group, while egalitarian diversity suggests that it is important to have similar representation across the anticipated diverse perspectives. The deliberation must be informed about, but not subjugated by, dominant perspectives. One approach is to exclude dominant perspectives, including those of substance experts, from participating in the deliberation, but to introduce their perspectives and related information through materials and presentations intended to inform participants. Deliberative participants must exercise their judgement and critically consider a wide range of perspectives, while stakeholders are agents for a collective identity that asserts the importance of one perspective over others.Footnote 23 It is also challenging to identify the range of relevant perspectives that give particular form to the public expertise for an issue, although demographics may be used to ensure that participants reflect a range of life experiences.Footnote 24 Specific questions may also suggest that particular public perspectives are important to include in the deliberating group. For example, in Californian deliberations on biobanks it was important to include Spanish-only speakers because, despite accounting for the majority of births, they were often excluded from research regulation issues (normic diversity), and because they were an identifiable group who likely had unique perspectives compared to other demographic segments of the California population (egalitarian diversity).Footnote 25
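To make the contrast concrete, the following minimal sketch (in Python) shows one way the three concepts of diversity could, in principle, be operationalised as seat allocations for a deliberating group. All names and figures – the sub-groups, population counts, panel size and the ‘boost’ applied to marginalised groups – are hypothetical and purely illustrative; they are not drawn from the deliberations discussed here, and actual recruitment involves far more than arithmetic over demographic categories.

```python
# Illustrative sketch only: three ways of allocating seats on a deliberative
# panel, loosely following the representative / egalitarian / normic concepts.
# Rounding is not reconciled, so totals may differ slightly from panel_size.

def representative_quotas(population, panel_size):
    """Seats proportional to each sub-group's share of the population."""
    total = sum(population.values())
    return {g: round(panel_size * n / total) for g, n in population.items()}

def egalitarian_quotas(population, panel_size):
    """Equal seats per sub-group, regardless of sub-group size."""
    per_group = panel_size // len(population)
    return {g: per_group for g in population}

def normic_quotas(population, panel_size, marginalised, boost=2.0):
    """Over-represent marginalised sub-groups relative to their population share."""
    total = sum(population.values())
    weights = {g: (n / total) * (boost if g in marginalised else 1.0)
               for g, n in population.items()}
    weight_sum = sum(weights.values())
    return {g: round(panel_size * w / weight_sum) for g, w in weights.items()}

# Hypothetical figures, not drawn from the chapter or any real deliberation.
population = {"group_a": 700_000, "group_b": 250_000, "group_c": 50_000}
panel = 24

print(representative_quotas(population, panel))              # mirrors population shares
print(egalitarian_quotas(population, panel))                 # equal seats per group
print(normic_quotas(population, panel, marginalised={"group_c"}))  # tilts towards group_c
```

The point of the sketch is simply that representative quotas mirror population shares, egalitarian quotas ignore them, and normic quotas deliberately tilt the allocation towards marginalised sub-groups.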

25.5 Mobilising Public Expertise in Deliberation

As previously discussed, mobilising public expertise requires considerable support. To be credible and legitimate, a deliberative process must demonstrate that the participants are adequately informed and consider diverse perspectives. Participants must respectfully engage each other in the development of recommendations that focus on reasoned inclusiveness but fully engage the trade-offs required in policy decisions.

It seems obvious that participants in an engagement intended to advise research regulation must be informed about the activities to be regulated. This is far from a simple task. An engagement can easily be undermined if the information provided is incomplete or biased. It is important not only to provide complete technical details, but also to ensure that social controversies and stakeholder perspectives are fairly represented. This can be managed by having an advisory group of experts, stakeholders and potential knowledge users. Advisors can provide input into the questions and the range of relevant information that participants must consider to be adequately informed. It is also important to consider how best to provide information to support comprehension across participants with different backgrounds. One approach is to utilise a combination of a background booklet and a panel of four to six speakers.Footnote 26 The speakers, a combination of experts and stakeholders, are asked to be impassioned, explaining how or why they come to their particular view. This will establish that there are controversies, help draw participants into the issues and stimulate interest in the textual information.

Facilitation is another critical element of deliberative engagement. Deliberative engagement is distinguished by collective decisions supported by reasons from the participants – the recommendations and conclusions are the result of a consideration of the diverse perspectives reflected in the process and among participants. The approach to facilitation openly accepts that participants may re-orient the discussion and focus, and that the role of facilitation is to frame the discussion in a transparent manner.Footnote 27 Small groups of six to eight participants can be facilitated to develop fuller participation and articulation of different perspectives and interests than is possible in a larger group. Large-group facilitation can be oriented to giving the participants as much control over topic and approach as they are willing to assume, while supporting exploration of issues and suggesting statements where the group may be converging. The facilitator may also draw closure, to enable participants to move on to other issues, by suggesting that there is a disagreement that can be captured. Identifying places of deep social disagreement shows where setting policy will need to resolve controversy about what is genuinely in the public’s interest, and where there may be a need for more nuanced decisions on a case-by-case basis. The involvement of industry and commercialisation in biobanks is a general area that has frequently defied convergence in deliberation.Footnote 28

Even if recruitment succeeds in convening a diverse group of participants, sustaining diversity and participation requires careful facilitation. The deliberative nature of the activity is dynamic. Participants increase their level of knowledge and understanding of diverse perspectives as facilitation encourages them to shift from an individual to a collective focus. Premature insistence on justifications can stifle understanding of diverse perspectives, but later in the event justifications are crucial to produce reasons in support of conclusions. Discussion and conclusions can be inappropriately influenced by participants’ personalities, as well as by the tendency of some participants to position themselves as having authoritative expertise. It is well within the expertise of the public to consider whether claims to special knowledge, or forceful personalities, lack substantive support for the positions advanced. But self-reflective and respectful communication does not occur naturally, and deliberation requires skilled facilitation to avoid dominance by some participants and to encourage critical reflection and participation by quieter participants. The framing of the issues and information, as well as the facilitation, inevitably shapes the conclusions, and participants may not recognise that issues and concerns important to them have been ruled out of scope.

Assessing the quality of deliberative public engagement is fraught with challenges. Abelson and Nabatchi have provided good overviews of the state of deliberative civic engagement, its impacts and its assessment.Footnote 29

There are recent considerations of whether and under what conditions deliberative public engagement is useful and effective.Footnote 30 Because deliberative engagement is expensive and resource intensive, it needs to be directed to controversies where the regulatory bodies want, and are willing, to have their decisions and policies shaped by public input. Such authorities do not thereby give up their legitimate duties and freedom to act in the public interest or to consult with experts and stakeholders. Rather, activities such as deliberative public engagement are supplemental to the other sources of advice, and not determinative of the outcomes. This point is important for knowledge users, sponsors and participants to understand.

How, then, might deliberative public engagement have helped avoid the negative reaction to care.data? It is first important to distinguish trust from trustworthiness. Trust, sometimes considered as social licence, is usually presumed in the first instance. As a psychological phenomenon, trust is often a heuristic form of reasoning that supports economical use of intellectual and social capital.Footnote 31 There is some evidence that trust is particularly important with regard to research participation.Footnote 32 Based on previous experiences, we develop trust – or distrust – in people and institutions. There is a good chance that many people whose records are in the NHS would approach the use of their records for other purposes with a general sense of trust. Loss of trust often flows from the abrupt discovery that things are not as we presumed, which is what appears to have happened with care.data. Trustworthiness of governance, on the other hand, obtains when the governance system has characteristics that, if scrutinised, would show it to be worthy of trust.

Given this understanding, it might have been possible to demonstrate the trustworthiness of the governance of NHS data by holding deliberative public engagement and considering its recommendations for data management. Public trust might also not have been as widely undermined if the announcement that access would be extended to include commercial partners had provided a basis for finding the governance trustworthy. Of course, distrust on the part of critical stakeholders and members of the public will still require direct responses to their concerns.

It is important to note that the goal is trustworthiness that can stand up to scrutiny, rather than directing efforts at increasing trust. Since trust is given in many cases without reflection, it can often be manipulated. By aiming at trustworthiness, arrived at through considerations that include deliberative public input, the authorities demonstrate that their approach is worthy of trust. Articulating how controversies have been considered with input from informed and deliberating members of the public would have demonstrated that the trust presumed at the outset was, in an important sense, justified. Now, after trust has been lost and education and reinforced individual consent have not addressed the concerns, deliberation to achieve legitimate and trustworthy governance may have a more difficult time stimulating wide public trust, but it may remain the best available option.

25.6 Conclusion

Deliberative public engagement has an important role in the regulation of health research. Determining what trade-offs are in the public interest requires a weighing of alternatives and of the relative weights of different interests. Experts and stakeholders are legitimate advocates for the interests they represent, but their interests manifest an asymmetry of power. A well-designed process to include diverse public input can increase the legitimacy and trustworthiness of the resulting policies. Deliberative engagement mobilises a wider public to direct its collective experience and expertise. The resulting advice about what is in the public interest explicitly builds diversity into the recruitment of the participants and into the design of the deliberation.

Deliberative public engagement is helpful for issues where there is genuine controversy about what is in the public interest, but it is far from a panacea. It is an important complement to stakeholder and expert input. The deliberative approach starts with careful consideration of the issues to be deliberated and of how diversity is to be structured into the recruitment of a deliberating small group. Expert and stakeholder advisors, as well as the decision-makers who are the likely recipients of the conclusions of the deliberation, can help develop the range of information necessary for informed deliberation on the issues. Participants need to be supported by exercises and facilitation that help them develop a well-informed and respectful understanding of diverse perspectives. Facilitation then shifts to support the development of a collective focus and of conclusions with justifications. Diversity and asymmetry of power are respected through the conceptualisation and implementation of inclusiveness, the development of information, and through facilitation and respect for different kinds of warranting. There must be a recognition that the role of event structure and facilitation means that the knowledge is co-produced with the participants, and that it is very challenging to overcome asymmetries, even in the deliberation itself. Another important feature is the ability to identify persistent disagreements and not force premature consensus on what is in the public interest. In this quality, it mirrors the need for, and nature of, the regulation of health research to struggle with the issue of when research is in ‘the public interest’.

It is inadequate to assert or assume that research and its existing and emerging regulation are in the public interest. It is vital to ensure wide, inclusive consideration that is not overwhelmed by economic or other strongly vested interests. This is best accomplished by developing, assessing and refining ways to better include diverse citizens in informed reflection about what is in our collective interests, and about how best to live together when those interests appear incommensurable.

26 Towards Adaptive Governance in Big Data Health Research: Implementing Regulatory Principles

Effy Vayena and Alessandro Blasimme
26.1 Introduction

In recent times, biomedical research has begun to tap into larger-than-ever collections of different data types. Such data include medical history, family history, genetic and epigenetic data, information about lifestyle, dietary habits, shopping habits, data about one’s dwelling environment, socio-economic status, level of education, employment and so on. As a consequence, the notion of health data – data that are of relevance for health-related research or for clinical purposes – is expanding to include a variety of non-clinical data, as well as data provided by research participants themselves through commercially available products such as smartphones and fitness bands.Footnote 1 Precision medicine that pools together genomic, environmental and lifestyle data represents a prominent example of how data integration can drive both fundamental and translational research in important domains such as oncology.Footnote 2 All of this requires the collection, storage, analysis and distribution of massive amounts of personal information as well as the use of state-of-the art data analytics tools to uncover new disease-related patterns.

To date, most scholarship and policy on these issues has focused on privacy and data protection. Less attention has been paid to addressing other aspects of the wicked challenges posed by Big Data health research and even less work has been geared towards the development of novel governance frameworks.

In this chapter, we make the case for adaptive and principle-based governance of Big Data research. We outline six principles of adaptive governance for Big Data research and propose key factors for their implementation into effective governance structures and processes.

26.2 The Case for Adaptive Principles of Governance in Big Data Research

For present purposes, the term ‘governance’ alludes to a democratisation of administrative decision-making and policy-making or, to use the words of sociologist Anthony Giddens, to ‘a process of deepening and widening of democracy [in which] government can act in partnership with agencies in civil society to foster community renewal and development.’Footnote 3

Regulatory literature over the last two decades has formalised a number of approaches to governance that seem to address some of the defining characteristics of Big Data health research. In particular, adaptive governance and principles-based regulation appear well-suited to tackle three specific features of Big Data research, namely: (1) the evolving, and thus hardly predictable nature of the data ecosystem in Big Data health research – including the fast-paced development of new data analysis techniques; (2) the polycentric character of the actor network of Big Data and the absence of a single centre of regulation; and (3) the fact that most of these actors do not currently share a common regulatory culture and are driven by unaligned values and visions.Footnote 4

Adaptive governance is based on the idea that – in the presence of uncertainty, lack of evidence and evolving, dynamic phenomena – governance should be able to adapt to the mutating conditions of the phenomenon that it seeks to govern. Key attributes of adaptive governance are the inclusion of multiple stakeholders in governance design,Footnote 5 collaboration between regulating and regulated actors,Footnote 6 the incremental and planned incorporation of evidence in governance solutionsFootnote 7 and openness to cope with uncertainties through social learning.Footnote 8 This is attained by planning evidence collection and policy revision rounds in order to refine the fit between governance and public expectations; distributing regulatory tasks across a variety of actors (polycentricity); designing partially overlapping competences for different actors (redundancy); and by increasing participation in policy and management decisions by otherwise neglected social groups. Adaptive governance thus seems to adequately reflect the current state of Big Data health research as captured by the three characteristics outlined above. Moreover, social learning – a key feature of adaptive governance – can help explore areas of overlapping consensus even in a fragmented actor network like the one that constitutes Big Data research.

Principles-based regulation (PBR) is a governance approach that emerged in the 1990s to cope with the expansion of the financial services industry. Just as Big Data research is driven by technological innovation, financial technologies (the so-called fintech industry) have played a disruptive role for the entire financial sector.Footnote 9 Unpredictability, the accrual of new stakeholders and a lack of regulatory standards and best practices characterise this phenomenon. To respond to this, regulators such as the UK Financial Services Authority (FSA), backed up by a number of academic supporters of ‘new governance’ approaches,Footnote 10 have proposed principles-based regulation as a viable governance model.Footnote 11 In this model, regulation and oversight rely on broadly stated principles that reflect regulators’ orientations, values and priorities. Moreover, implementation of the principles is not entirely delegated to specified rules and procedures. Rather, PBR relies on regulated actors to set up mechanisms to comply with the principles.Footnote 12 Principles are usually supplemented by guidance, white papers and other policies and processes to channel the compliance efforts of regulated entities. See further on PBR, Sethi, Chapter 17, this volume.

We contend that PBR is helpful for setting up Big Data governance in the research space because it is explicitly focussed on the creation of some form of normative alignment between the regulator and the regulated; it creates conditions that can foster the emergence of shared values among different regulated stakeholders. Since compliance is rooted neither in box-ticking nor in respect for precisely specified rules, PBR stimulates experimentation with a number of different oversight mechanisms. This bottom-up approach allows stakeholders to explore a wide range of activities and structures to align with regulatory principles, favouring the selection of more cost-efficient and proportionate mechanisms. Big Data health research faces exactly this need to create stakeholders’ alignment and to cope with the wide latitude of regulatory attitudes that is to be expected in an innovative domain with multiple newcomers.

The governance model that we propose below relies on both adaptive governance – as to its capacity to remain flexible to future evolutions of the field – and PBR – because of its emphasis on principles as sources of normative guidance for different stakeholders.

26.3 A Framework to Develop Systemic Oversight

The framework we propose below provides guidance to actors that have a role in the shaping and management of research employing Big Data; it draws inspiration from the above-listed features of adaptive governance. Moreover, it aligns with PBR in that it offers guidance to stakeholders and decision-makers engaged at various levels in the governance of Big Data health research. As we have argued elsewhere, our framework will facilitate the emergence of systemic oversight functions for the governance of Big Data health research.Footnote 13 The development of systemic oversight relies on six high-order principles aimed at reducing the effects of a fragmented governance landscape and at channelling governance decisions – through both structures and processes – towards an ethically defensible common ground. These six principles do not predefine which specific governance structures and processes shall be put in place – hence the caveat that they represent high-order guides. Rather, they highlight governance features that shall be taken into account in the design of structures and processes for Big Data health research. Equally, our framework is not intended as a purpose-neutral approach to governance. Quite to the contrary; the six principles we advance do indeed possess a normative character in that they endorse valuable states of affairs that shall occur as a result of appropriate and effective governance. By the same token, our framework suggests that action should be taken in order to avoid certain kinds of risks that will most likely occur if left unattended. In this section, we will illustrate the six principles of systemic oversight – adaptivity, flexibility, monitoring, responsiveness, reflexivity and inclusiveness – while the following section deals with the effective interpretation and implementation of such principles in terms of both structures and processes.

Adaptivity: adaptivity is the capacity of governance structures and processes to ensure proper management of new forms of data as they are incorporated into health research practices. Adaptivity, as presented here, has also been discussed as a condition for resilience, that is, for the capacity of any given system to ‘absorb disturbances and reorganize while undergoing change so as to still retain essentially the same function, structure, identity and feedbacks.’Footnote 14 This feature is crucial in the case of a rapidly evolving field – like Big Data research – whose future shape, as a consequence, is hard to anticipate.

Flexibility: flexibility refers to the capacity to treat different data types depending on their actual use rather than their source alone. Novel analytic capacities are jeopardising existing data taxonomies, which rapidly renders regulatory categories constructed around them obsolete. Flexibility means, therefore, recognising the impact of technical novelties and, at a minimum, giving due consideration to their potential consequences.

Monitoring: risk minimisation is a crucial aim of research ethics. With the possible exception of highly experimental procedures, the spectrum of physical and psychological harms due to participation in health research is fairly straightforward to anticipate. In the evolving health data ecosystem described so far, however, it is difficult to anticipate upfront what harms and vulnerabilities research subjects may encounter due to their participation in Big Data health research. This therefore requires on-going monitoring.

Responsiveness: despite efforts in monitoring emerging vulnerabilities, risks can always materialise. In Big Data health research, privacy breaches are a case in point. Once personal data are exposed, privacy is lost. No direct remedy exists to re-establish the privacy conditions that were in place before the violation. Responsiveness therefore prescribes that measures are put in place to at least reduce the impact of such violations on the rights, interests and well-being of research participants.

Reflexivity: it is well known that certain health-related characteristics cluster in specific human groups, such as populations, ethnic groups, families and socio-economic strata. Big data are pushing the classificatory power of research to the next level, with potentially worrisome implications. The classificatory assumptions that drive the use of rapidly evolving data-mining capacities need to be put under careful scrutiny as to their plausibility, opportunity and consequences. Failing to do so will result in harms to all human groups affected by those assumptions. What is more, public support for, as well as trust in, scientific research may be jeopardised by the reputational effects that can arise if reflexivity and scrutiny are not maintained.

Inclusiveness: the last component of systemic oversight closely resonates with one of the key features of adaptive governance, that is, the need to include all relevant parties in the governance process. As more diverse data sources are aggregated, it becomes more difficult for research participants to exert meaningful control over the expanding cloud of personal data that is implicated by their participation.Footnote 15 Experimenting with new forms of democratic engagement is therefore imperative for a field that depends on resources provided by participants (i.e. data), but that, at the same time, can no longer anticipate how such resources will be employed, how they will be analysed and with which consequences. See Burgess, Chapter 25.

These six principles can be arranged to form the acronym AFIRRM: our model framework for the governance of Big Data health research.

26.4 Big Data Health Research: Implementing Effective Governance

While there is no universal definition of the notion of effective governance, it alludes in most cases to an alignment between purposes and outcomes, reached through processes that fulfil constituents’ expectations and which project legitimacy and trust onto the involved actors.Footnote 16 This understanding of effective governance fits well with our domain of interest: Big Data health research. In the remainder of this chapter, drawing on the literature on the implementation of adaptive governance and PBR, we discuss key issues to be taken into account in trying to derive effective governance structures and oversight mechanisms from the AFIRRM principles.

The AFIRRM framework endorses the use of principles as high-level articulations of what is to be expected of regulatory mechanisms for the governance of Big Data health research. Unlike the use of PBR in financial markets, where a single regulator expects compliance, PBR in the Big Data context responds to the reality that governance functions are distributed among a plethora of actors, such as ethics review committees, data controllers, privacy commissioners and access committees. PBR within the AFIRRM framework offers a blueprint for such a diverse array of governance actors to create new structures and processes to cope with the specific ethical and legal issues raised by the use of Big Data. Such principles have a generative function in the governance landscape that is in the process of being created to govern those issues.

The key advantage of principles in this respect is that they require making the reason behind regulation visible to all interested parties, including publics. This amounts to an exercise of public accountability that can bring about normative coherence among actors with different starting assumptions. The AFIRRM principles stimulate a bottom-up exploration of the values at stake and how compliance with existing legal requirements will be met. In this sense, the AFIRRM principles perform a formal, more than a substantive function, precisely because we assume the substantive ethical and legal aims of regulation that have already been developed in health research – such as the protection of research participants from the risk of harm – to hold true also for research employing Big Data. What AFIRRM principles do is to provide a starting point for deliberation and action that respects existing ethical standards and complies with pre-existing legal rules.

The AFIRRM principles do not envisage that actors in the space of Big Data research will self-regulate, but they do presuppose trust between regulators and regulated entities: regulators need to be confident that regulated entities will do their best to give effect to the principles in good faith. While some of the interests at stake in Big Data health research might be in tension – such as the interest of researchers in accessing and distributing data, and the interest of data donors in controlling what their personal data are used for – it is to the advantage of all interested parties to begin with conversations based on core agreed principles and to develop efficient governance structures and processes that meet stakeholders’ expectations. Practically, this requires all relevant stakeholders to have a say in the development and operationalisation of the principles at stake.

Adaptive governance scholarship has identified typical impediments to effective operationalisation of adaptive mechanisms. A 2012 literature review of adaptive governance, network management and institutional analysis identified three key challenges to the effective implementation of adaptive governance: ill-defined purposes and objectives, unclear governance context and lack of evidence in support of blueprint solutions.Footnote 17

Let us briefly illustrate each of these challenges and explain how systemic oversight tries to avoid them. In the shift from centralised forms of administration and decision-making, to less formalised and more distributed governance networks that occurred over the last three decades,Footnote 18 the identification of governance objectives is no longer straightforward. This difficulty may also be due to the potentially conflicting values of different actors in the governance ecosystem. In this respect, systemic oversight has the advantage of not being normatively neutral. The six principles of systemic oversight determinedly aim at fostering an ethical common ground for a variety of governance actors and activities in the space of Big Data research. What underpins the framework, therefore, is a view of what requires ethical attention in this rapidly evolving field, and how to prioritise actions accordingly. In this way, systemic oversight can provide orientation for a diverse array of governance actors (structures) and mechanisms (processes), all of which are supposed to produce an effective system of safeguards around activities in this domain. Our framework directs attention to critical features of Big Data research and promotes a distributed form of accountability that will, where possible, emerge spontaneously from the different operationalisations of its components. The six components of systemic oversight, therefore, suggest what is important to take into account when considering how to adapt the composition, mandate, operations and scope of oversight bodies in the field of Big Data research.

The second challenge to effective adaptive governance – unclear governance context – refers to the difficulty of mapping the full spectrum of rules, mechanisms, institutions and actors involved in a distributed governance system or systems. Systemic oversight requires mapping the overall governance context in order to understand how best to implement the framework in practice. This amounts to an empirical inquiry into the conditions (structures, mechanisms and rules) in which governance actors currently operate. In a recent study we showed that current governance mechanisms for research biobanks, for instance, are not aligned with the requirements of systemic oversight.Footnote 19 In particular, we showed that systemic oversight can contribute to improving the accountability of research infrastructures that, like biobanks, collect and distribute an increasing amount of scientific data.

The third and last challenge to effective operationalisation of adaptive mechanisms has to do with the limits of ready-made blueprint solutions to complex governance models. Political economist and Nobel Laureate Elinor Ostrom has written extensively on this. In her work on socio-ecological systems, Ostrom has convincingly shown that policy actors have the tendency to buy into what she calls ‘policy panaceas’,Footnote 20 that is, ready-made solutions to very complex problems. Such policy panaceas are hardly ever supported by solid evidence regarding the effectiveness of their outcomes. One of the most commonly cited reasons for their lack of effectiveness is that complexity entails high degrees of uncertainty as to the very phenomenon that policy makers are trying to govern.

We saw that uncertainty is characteristic of Big Data research too (see Section 26.2). That is why systemic oversight refrains from prescribing any particular governance solution. While not rejecting traditional predict-and-control approaches (such as informed consent, data anonymisation and encryption), systemic oversight does not put all the regulatory weight on any particular instrument or body. The systemic ambition of the framework lies in its pragmatic orientation towards a plurality of tools, mechanisms and structures that could jointly stabilise the responsible use of Big Data for research purposes. In this respect, our framework acknowledges that ‘[a]daptation typically emerges organically among multiple centers of agency and authority in society as a relatively self-organized or autonomous process marked by innovation, social learning and political deliberation’.Footnote 21

Still, a governance framework’s capacity to avoid known bottlenecks to operationalisation is a necessary but not a sufficient condition for its successful implementation. The further question is how the principles of the systemic oversight model can be incorporated into structures and processes in Big Data research governance. By structures we mean actors and networks of actors involved in governance, organised in bodies charged with oversight, organisational or policy-making responsibilities. Processes, instead, are the mechanisms, procedures, rules, laws and codes through which actors operate and bring about their governance objectives. Structures and processes define the polycentric, redundant and experimental system of governance that an adaptive governance model intends to promote.Footnote 22

26.5 Key Features of Governance Structures and Processes

Here we follow the work of Rijke and colleaguesFootnote 23 in identifying three key properties of adaptive governance structures: centrality, cohesion and density. While it is acknowledged that centralised structures can be effective as a response to crises and emergencies, centralisation is precisely a challenge in Big Data; our normative response is to call for inclusive social learning among the broad array of stakeholders, subject to challenges of incomplete representation of relevant interests (see further below). Still, this commitment can help to promote network cohesion by fostering discussion about how to implement the principles, while also promoting the formation of links between governance actors, as required by density. In addition, this can help to ensure that governance roles are fairly distributed among a sufficiently diverse array of stakeholders and that, as a consequence, decisions are not hijacked by technical experts.

The governance space in Big Data research is already populated by numerous actors, such as IRBs, data access committees and advisory boards. These bodies are not necessarily inclusive of a sufficiently broad array of stakeholders and therefore they may not be very effective at promoting social learning. Their composition could thus be rearranged in order to be more representative of the interests at stake and to promote continued learning. New actors could also enter the governance system. For instance, data could be made available for research by data subjects themselves through data platforms.Footnote 24

Networks of actors (structures) operating in the space of health research do so through mechanisms and procedures (processes) such as informed consent and ethics review, as well as data access review, policies on reporting research findings to participants, public engagement activities and privacy impact assessment.

Processes are crucial to effective governance of health research and are a critical component of the systemic oversight approach as their features can determine the actual impact of its principles. Drawing on scholarship in adaptive governance, we present three such features (components) that are central to the appropriate interpretation of the systemic oversight principles.

Social learning: social learning refers to learning that occurs by observing others.Footnote 25 In governance settings that are open to participation by different stakeholders, social learning can occur across different levels and hierarchies of the governance structures. According to many scholars, including Ostrom,Footnote 26 social learning represents an alternative to policy blueprints (see above) – especially when it is coupled with, and leads to, adaptive management. Planned adaptations – that is, previously scheduled rounds of policy revision in light of new knowledge – can be occasions for governance actors to capitalise on each other’s experience and learn about evolving expectations and risks. Such learning exercises can reduce uncertainty and lead to adjustments in mechanisms and rules. The premise of this approach is the realisation that in complex systems characterised by pronounced uncertainty, ‘no particular epistemic community can possess all the necessary knowledge to form policy’.Footnote 27 Social learning – be it aimed at gathering new evidence, at fostering capacity building or at assessing policy outcomes – is relevant to all of the six components of systemic oversight. The French law on bioethics, for instance, prescribes periodic rounds of nationwide public consultation – the so-called Estates General on bioethics.Footnote 28 This is an example of how social learning can be fostered. Similar social learning can be triggered even at smaller scales – for instance in local oversight bodies – in order to explore new solutions and alternative designs.

Complementarity: complementarity is the capacity of governance processes both to be functionally compatible with one another and to correspond procedurally to the phenomena they are intended to regulate. Functional complementarity refers to the distribution of regulatory functions across a given set of processes exhibiting partial overlap (see redundancy, above). This feature is crucial for both monitoring and reflexivity. Procedural complementarity, on the other hand, refers to the temporal alignment between governance processes and the activities that depend on such processes. One prominent example, in this respect, is the timing of ethics review processes, or of the processing of data access requests.Footnote 29 For instance, the European General Data Protection Regulation (GDPR) requires that personal data breaches be notified to the supervisory authority without undue delay and, where feasible, within 72 hours of the controller becoming aware of them. This provision is an example of procedural complementarity that would be of the utmost importance for the principle of responsiveness.
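
As a trivial illustration of this kind of temporal alignment, the 72-hour window under Article 33 GDPR can be expressed as a simple deadline computation; the sketch below is ours, and the timestamp is invented:

```python
# Minimal sketch: compute the point by which a supervisory authority should
# normally be notified, taken as 72 hours from when the controller becomes
# aware of the breach (Article 33 GDPR). The timestamp below is invented.
from datetime import datetime, timedelta, timezone

def notification_deadline(awareness_time: datetime) -> datetime:
    return awareness_time + timedelta(hours=72)

became_aware = datetime(2021, 6, 9, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(became_aware))  # 2021-06-12 09:30:00+00:00
```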

Visibility: governance processes need to be visible, that is, procedures and their scope need to be as publicly available as possible to whoever is affected by them or must act in accordance with them. The notion of regulatory visibility has recently been highlighted by Laurie and colleagues, who argue for regulatory stewardship within ecosystems to help researchers clarify values and responsibilities in health research and navigate the complexities.Footnote 30 Recent work also demonstrates that currently it is difficult to access policies and standard operating procedures of prominent research institutions like biobanks. In principle, fair scientific competition may militate against disclosure of technical details about data processing, but it is hard to imagine practical circumstances in which administrators of at least publicly funded datasets would not have incentives to share as much information as possible regarding the way they handle their data. Process visibility goes beyond fulfilling a pre-determined set of criteria (for instance, for auditing purposes). By disclosing governance processes and opportunities for engagement, actors actually offer reasons to be trusted by a variety of stakeholders.Footnote 31 This feature is of particular relevance for the principles of monitoring and reflexivity, and it also improves the effectiveness of inclusive governance processes.

26.6 Conclusion

In this chapter, we have defended adaptive governance as a suitable regulatory approach for Big Data health research by proposing six governance principles to foster the development of appropriate structures and processes to handle critical aspects of Big Data health research. We have analysed key aspects of implementation and identified a number of important features that can make adaptive regulation operational. However, one might legitimately ask: in the absence of a central regulatory actor endowed with clearly recognised statutory prerogatives, how can it be assumed that the AFIRRM principles will be endorsed by the diverse group of stakeholders operating in the Big Data health research space? Clearly, this question does not have a straightforward answer. However, to increase the likelihood of uptake, we have advanced AFIRRM as a viable and adaptable model for the creation of necessary tools that can deliver on common objectives. Our model is based on a careful analysis of regulatory scholarship vis-à-vis the key attributes of this type of research. We are currently undertaking considerable efforts to introduce AFIRRM to regulators, operators and organisations in the space of research or health policy. We are cognisant of the fact that the implementation of a model like AFIRRM need not be temporally linear. Different actors may take initiative at different points in time. It cannot be expected that a coherent system of governance will emerge in a synchronically orchestrated manner through the uncoordinated action of multiple stakeholders. Such a path could only be imagined if a central regulator had the power and the will to make it happen. Nothing indicates, however, that regulation will assume a centralised character anytime soon. Nevertheless, polycentricity is not in itself a barrier to the emergence of a coherent governance ecosystem. Indeed, the AFIRRM principles – in line with their adaptive orientation – rely precisely on polycentric governance to cope with the uncertainty and complexity of Big Data health research.

27 Regulating Automated Healthcare and Research Technologies: First Do No Harm (to the Commons)

Roger Brownsword
27.1 Introduction

New technologies, techniques, and tests in healthcare, offering better prevention, or better diagnosis and treatment, are not manna from heaven. Typically, they are the products of extensive research and development, increasingly enabled by high levels of automation and reliant on large datasets. However, while some will push for a permissive regulatory environment that is facilitative of beneficial innovation, others will push back against research that gives rise to concerns about the safety and reliability of particular technologies as well as their compatibility with respect for fundamental values. Yet, how are the interests in pushing forward with research into potentially beneficial health technologies to be reconciled with the heterogeneous interests of the concerned who seek to push back against them?

A stock answer to this question is that regulators, neither over-regulating nor under-regulating, should seek an accommodation or a balance of interests that is broadly ‘acceptable’. If the issue is about risks to human health and safety, then regulators – having assessed the risk – should adopt a management strategy that confines risk to an acceptable level; and, if there is a tension between, say, the interest of researchers in accessing health data and the interest of patients in both their privacy and the fair processing of their personal data, then regulators should accommodate these interests in a way that is reasonable – or, at any rate, not manifestly unreasonable.

The central purpose of this chapter is not to argue that this balancing model is always wrong or inappropriate, but to suggest that it needs to be located within a bigger picture of lexically ordered regulatory responsibilities.Footnote 1 In that bigger picture, the paramount responsibility of regulators is to act in ways that protect and maintain the conditions that are fundamental to human social existence (the commons). After that, a secondary responsibility is to protect and respect the values that constitute a group as the particular kind of community that it is. Only after these responsibilities have been discharged do we get to a third set of responsibilities that demand that regulators seek out reasonable and acceptable balances of conflicting legitimate interests. Accordingly, before regulators make provision for a – typically permissive – framework that they judge to strike an acceptable balance of interests in relation to some particular technology, technique or test, they should check that its development, exploitation, availability and application cross none of the community’s red lines and, above all, pose no threat to the commons.

The chapter is in three principal parts. First, in Section 27.2, we start with two recent reports by the Nuffield Council on Bioethics – one a report on the use of Non-Invasive Prenatal Testing (NIPT),Footnote 2 and the other on genome-editing and human reproduction.Footnote 3 At first blush, the reports employ a similar approach, identifying a range of legitimate – but conflicting – interests and then taking a relatively conservative position. However, while the NIPT report exemplifies a standard balancing approach, the genome-editing report implicates a bigger picture of regulatory responsibilities. Second, in Section 27.3, I sketch my own take on that bigger picture. Third, in Section 27.4, I speak to the way in which the bigger picture might bear on our thinking about the regulation of automated healthcare and research technologies. In particular, in this part of the chapter, the focus is on those technologies that power smart machines and devices, technologies that are hungry for human data but then, in their operation, often put humans out of the loop.

27.2 NIPT, Genome-Editing and the Balancing of Interests

In its report on the ethics of NIPT, the Nuffield Council on Bioethics identifies a range of legitimate interests that call for regulatory accommodation. On the one side, there is the interest of pregnant women and their partners in making informed reproductive choices. On the other side, there are interests – particularly of the disability community and of future children – in equality, fairness and inclusion. The question is: how are regulators to ‘align the responsibilities that [they have] to support women to make informed reproductive choices about their pregnancies, with the responsibilities that [they have] … to promote equality, inclusion and fair treatment for all’?Footnote 4 In response, the Council, being particularly mindful of the interests of future children – in an open future – and the interest in a wider societal environment that is fair and inclusive, recommends that a relatively restrictive approach should be taken to the use of NIPT.

In support of the Council’s approach and its recommendation, there is a good deal that can be said. For example, the Council consulted widely before drawing up the inventory of interests to be considered: it engaged with the arguments rationally and in good faith; where appropriate, its thinking was evidence-based; and its recommendation is not manifestly unreasonable. If we were to imagine a judicial review of the Council’s recommendation, it would surely survive the challenge.

However, if the Council had given greater weight to the interest in reproductive autonomy together with the argument that women have ‘a right to know’ and that healthcare practitioners have an interest in doing the best that they can for their patients,Footnote 5 leading to a much less restrictive recommendation, we could say exactly the same things in its support.

In other words, so long as the Council – and, similarly, any regulatory body – consults widely and deliberates rationally, and so long as its recommendations are not manifestly unreasonable, we can treat its preferred accommodation of interests as acceptable. Yet, in such balancing deliberations, it is not clear where the onus of justification lies or what the burden of justification is; and, in the final analysis, we cannot say why the particular restrictive position that the Council takes is more or less acceptable than a less restrictive position.

Turning to the Council’s second report, it hardly needs to be said that the development of precision gene-editing techniques, notably CRISPR-Cas9, has given rise to considerable debate.Footnote 6 Addressing the ethics of gene editing and human reproduction, the Council adopted a similar approach to that in its report on NIPT. Following extensive consultation – and, in this case, an earlier, more general, reportFootnote 7 – there is a careful consideration of a range of legitimate interests, following which a relatively conservative position is taken. Once again, although the position taken is not manifestly unreasonable, it is not entirely clear why this particular position is taken.

Yet, in this second report, there is a sense that something more than balancing might be at stake.Footnote 8 For example, the Council contemplates the possibility that genome editing might inadvertently lead to the extinction of the human species – or, conversely, that genome editing might be the salvation of humans who have catastrophically compromised the conditions for their existence. In these short reflections about the interests of ‘humanity’, we can detect a bigger picture of regulatory responsibilities.

27.3 The Bigger Picture of Regulatory Responsibilities

In this part of the chapter, I sketch what I see as the bigger – three-tier – picture of regulatory responsibilities and then speak briefly to the first two tiers.

27.3.1 The Bigger Picture

My claim is that regulators have a first-tier ‘stewardship’ responsibility for maintaining the pre-conditions for any kind of human social community (‘the commons’). At the second tier, regulators have a responsibility to respect the fundamental values of a particular human community, that is to say, the values that give that community its particular identity. At the third tier, regulators have a responsibility to seek out an acceptable balance of legitimate interests. The responsibilities at the first tier are cosmopolitan and non-negotiable. The responsibilities at the second and third tiers are contingent, depending on the fundamental values and the interests recognised in each particular community. Conflicts between commons-related interests, community values and individual or group interests are to be resolved by reference to the lexical ordering of the tiers: responsibilities in a higher tier always outrank those in a lower tier. Granted, this does not resolve all issues about trade-offs and compromises because we still have to handle horizontal conflicts within a particular tier. But, by identifying the tiers of responsibility, we take an important step towards giving some structure to the bigger picture.

27.3.2 First-Tier Responsibilities

Regulatory responsibilities start with the existence conditions that support the particular biological needs of humans. Beyond this, however, as agents, humans characteristically have the capacity to pursue various projects and plans whether as individuals, in partnerships, in groups, or in whole communities. Sometimes, the various projects and plans that they pursue will be harmonious; but often – as when the acceptability of the automation of healthcare and research is at issue – human agents will find themselves in conflict with one another. Accordingly, regulators also have a responsibility to maintain the conditions – conditions that are entirely neutral between the particular plans and projects that agents individually favour – that constitute the context for agency itself.

Building on this analysis, the claim is that the paramount responsibility for regulators is to protect, preserve, and promote:

  • the essential conditions for human existence (given human biological needs);

  • the generic conditions for human agency and self-development; and,

  • the essential conditions for the development and practice of moral agency.

These, it bears repeating, are imperatives in all regulatory spaces, whether international or national, public or private. Of course, determining the nature of these conditions will not be a mechanical process. Nevertheless, let me indicate how the distinctive contribution of each segment of the commons might be elaborated.

In the first instance, regulators should take steps to maintain the natural ecosystem for human life.Footnote 9 At minimum, this entails that the physical well-being of humans must be secured: humans need oxygen, they need food and water, they need shelter, they need protection against contagious diseases, if they are sick they need whatever treatment is available, and they need to be protected against assaults by other humans or non-human beings. When the Nuffield Council on Bioethics discusses catastrophic modifications to the human genome or to the ecosystem, it is this segment of the commons that is at issue.

Second, the conditions for meaningful self-development and agency need to be constructed: there needs to be sufficient trust and confidence in one’s fellow agents, together with sufficient predictability to plan, so as to operate in a way that is interactive and purposeful rather than merely defensive. Let me suggest that the distinctive capacities of prospective agents include being able: to form a sense of what is in one’s own self-interest; to choose one’s own ends, goals, purposes and so on (‘to do one’s own thing’); and to form a sense of one’s own identity (‘to be one’s own person’).

Third, the commons must secure the conditions for an aspirant moral community, whether the particular community is guided by teleological or deontological standards, by rights or by duties, by communitarian or liberal or libertarian values, by virtue ethics, and so on. The generic context for moral community is impartial between competing moral visions, values, and ideals; but it must be conducive to ‘moral’ development and ‘moral’ agency in the sense of forming a view about what is the ‘right thing’ to do relative to the interests of both oneself and others.

On this analysis, each human agent is a stakeholder in the commons where this represents the essential conditions for human existence together with the generic conditions of both self-regarding and other-regarding agency. While respect for the commons’ conditions is binding on all human agents, it should be emphasised that these conditions do not rule out the possibility of prudential or moral pluralism. Rather, the commons represents the pre-conditions for both individual self-development and community debate, giving each agent the opportunity to develop his or her own view of what is prudent, as well as what should be morally prohibited, permitted or required.

27.3.3 Second-Tier Responsibilities

Beyond the stewardship responsibilities, regulators are also responsible for ensuring that the fundamental values of their particular community are respected. Just as each individual human agent has the capacity to develop their own distinctive identity, the same is true if we scale this up to communities of human agents. There are common needs and interests but also distinctive identities.

In the particular case of the United Kingdom: although there is not a general commitment to the value of social solidarity, arguably this is actually the value that underpins the NHS. Accordingly, if it were proposed that access to NHS patient data – data, as Philip Aldrick has put it, that is ‘a treasure trove … for developers of next-generation medical devices’Footnote 10 – should be part of a transatlantic trade deal, there would surely be an uproar because this would be seen as betraying the kind of healthcare community that we think we are.

More generally, many nation states have expressed their fundamental (constitutional) values in terms of respect for human rights and human dignity.Footnote 11 These values clearly intersect with the commons’ conditions and there is much to debate about the nature of this relationship and the extent of any overlap – for example, if we understand the root idea of human dignity in terms of humans having the capacity freely to do the right thing for the right reason,Footnote 12 then human dignity reaches directly to the commons’ conditions for moral agency.Footnote 13 However, those nation states that articulate their particular identities by reference to their commitment to respect for human dignity are far from homogeneous. Whereas in some communities, the emphasis of human dignity is on individual empowerment and autonomy, in others it is on constraints relating to the sanctity, non-commercialisation, non-commodification and non-instrumentalisation of human life.Footnote 14 These differences in emphasis mean that communities take very different positions on a range of beginning-of-life and end-of-life questions, as well as on questions of acceptable health-related research, and so on.

Given the conspicuous interest of today’s regulators in exploring technological solutions, an increasingly important question will be whether, and if so, how far, a community sees itself as distinguished by its commitment to regulation by rule and by human agents. In some smaller-scale communities or self-regulating groups, there might be resistance to a technocratic approach because automated compliance compromises the context for trust and for responsibility. Or, again, a community might prefer to stick with regulation by rules and by human agents because it is worried that with a more technocratic approach, there might be both reduced public participation in the regulatory enterprise and a loss of flexibility in the application of technological measures.

If a community decides that it is generally happy with an approach that relies on technological measures rather than rules, it then has to decide whether it is also happy for humans to be out of the loop. Furthermore, once a community is asking itself such questions, it will need to clarify its understanding of the relationship between humans and robots – in particular, whether it treats robots as having moral status, or legal personality, and the like.

These are questions that each community must answer in its own way. The answers given speak to the kind of community that a group aspires to be. That said, it is, of course, essential that the fundamental values to which a particular community commits itself are consistent with (or cohere with) the commons’ conditions.

27.4 Automated Healthcare and the Bigger Picture of Regulatory Responsibility

One of the features of the NHS Long Term PlanFootnote 15 – in which the NHS is described as ‘a hotbed of innovation and technological revolution in clinical practice’Footnote 16 – is the anticipated role to be played by technology in ‘helping clinicians use the full range of their skills, reducing bureaucracy, stimulating research and enabling service transformation’.Footnote 17 Moreover, speaking about the newly created unit, NHSX (a new joint organisation for digital, data and technology), the Health Secretary, Matt Hancock, said that this was ‘just the beginning of the tech revolution, building on our Long Term Plan to create a predictive, preventative and unrivalled NHS’.Footnote 18

In this context, what should we make of the regulatory challenge presented by smart machines and devices that incorporate the latest AI and machine learning algorithms for healthcare and research purposes? Typically, these technologies need data on which to train and to improve their performance. While the consensus is that the collection and use of personal data needs governance and that big datasets (interrogated by state-of-the-art algorithmic tools) need it a fortiori, there is no agreement as to what might be the appropriate terms and conditions for the collection, processing and use of personal data or how to govern these matters.Footnote 19

In its recent final report on Ethics Guidelines for Trustworthy AI,Footnote 20 the European Commission’s (EC) independent high-level expert group on artificial intelligence takes it as axiomatic that the development and use of AI should be ‘human-centric’. To this end, the group highlights four key principles for the governance of AI, namely: respect for human autonomy, prevention of harm, fairness and explicability. Where tensions arise between these principles, then they should be dealt with by ‘methods of accountable deliberation’ involving ‘reasoned, evidence-based reflection rather than intuition or random discretion’.Footnote 21 Nevertheless, it is emphasised that there might be cases where ‘no ethically acceptable trade-offs can be identified. Certain fundamental rights and correlated principles are absolute and cannot be subject to a balancing exercise (e.g. human dignity)’.Footnote 22

In line with this analysis, my position is that while there might be many cases where simple balancing is appropriate, there are some considerations that should never be put into a simple balance. The group mentions human rights and human dignity. I agree. Where a community treats human rights and human dignity as its constitutive principles or values, they act – in Ronald Dworkin’s evocative terms – as ‘trumps’.Footnote 23 Beyond that, the interest of humanity in the commons should be treated as even more foundational (so to speak, as a super-trump).

It follows that the first question for regulators is whether new AI technologies for healthcare and research present any threat to the existence conditions for humans, to the generic conditions for self-development, and to the context for moral development. It is only once this question has been answered that we get to the question of compatibility with the community’s particular constitutive values, and, then, after that, to a balancing judgment. If governance is to be ‘human-centric’, it is not enough that no individual human is exposed to an unacceptable risk or actually harmed. To be fully human-centric, technologies must be designed to respect both the commons and the constitutive values of particular human communities.

Guided by these regulatory imperatives, we can offer some short reflections on the three elements of the commons and how they might be compromised by the automation of research and healthcare.

27.4.1 The Existence Conditions

Famously, Stephen Hawking remarked that ‘the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity’.Footnote 24 As the best thing, AI would contribute to ‘[the eradication of] disease and poverty’Footnote 25 as well as ‘[helping to] reverse paralysis in people with spinal-cord injuries’.Footnote 26 However, on the downside, some might fear that in our quest for greater safety and well-being, we will develop and embed ever more intelligent devices to the point that there is a risk of the extinction of humans – or, if not that, then a risk of humanity surviving ‘in some highly suboptimal state or in which a large portion of our potential for desirable development is irreversibly squandered’.Footnote 27 If this concern is well-founded, then communities will need to be extremely careful about how far and how fast they go with intelligent devices.

Of course, this is not specifically a concern about the use of smart machines in the hospital or in the research facility: the concern about the existential threat posed to humans by smart machines arises across the board; and, indeed, concerns about existential threats are provoked by a range of emerging technologies.Footnote 28 In such circumstances, a regulatory policy of precaution and zero risk is indicated; and while stewardship might mean that the development and application of some technologies that we value has to be restricted, this is better than finding that they have compromised the very conditions on which the enjoyment of such technologies is predicated.

27.4.2 The Conditions for Self-Development and Agency

The developers of smart devices are hungry for data: data from patients, data from research participants, data from the general public. This raises concerns about privacy and data protection. While it is widely accepted that our privacy interests – in a broad sense – are ‘contextual’,Footnote 29 it is important to understand not just that ‘there are contexts and contexts’ but that there is a Context in which we all have a common interest. What most urgently needs to be clarified is whether any interests that we have in privacy and data protection touch and concern the essential conditions (the Context).

If, on analysis, we judge that privacy reaches through to the interests that agents necessarily have in the commons’ conditions – particularly in the conditions for self-development and agency – it is neither rational nor reasonable for agents, individually or collectively, to authorise acts that compromise these conditions (unless they do so in order to protect some more important condition of the commons). As Bert-Jaap Koops has so clearly expressed it, privacy has an ‘infrastructural character’, ‘having privacy spaces is an important presupposition for autonomy [and] self-development’.Footnote 30 Without such spaces, there is no opportunity to be oneself.Footnote 31 On this reading, privacy is not so much a matter of protecting goods – informational or spatial – in which one has a personal interest, but protecting infrastructural goods in which there is either a common interest (engaging first-tier responsibilities) or a distinctive community interest (engaging second-tier responsibilities).

By contrast, if privacy – and, likewise, data protection – is simply a legitimate informational interest that has to be weighed in an all things considered balance of interests, then we should recognise that what each community will recognise as a privacy interest and as an acceptable balance of interests might well change over time. To this extent, our reasonable expectations of privacy might be both ‘contextual’ and contingent on social practices.

27.4.3 The Conditions for Moral Development and Moral Agency

As I have indicated, I take it that the fundamental aspiration of any moral community is that regulators and regulatees alike should try to do the right thing. However, this presupposes a process of moral reflection and then action that accords with one’s moral judgment. In this way, agents exercise judgment in trying to do the right thing and they do what they do for the right reason in the sense that they act in accordance with their moral judgment. Accordingly, if automated research and healthcare relieves researchers and clinicians from their moral responsibilities, even though well intended, this might result in a significant compromising of their dignity, qua the conditions for moral agency.Footnote 32

Equally, if robots or other smart machines are used for healthcare and research purposes, some patients and participants might feel that this compromises their ‘dignity’ – robots might not physically harm humans, but even caring machines, so to speak, ‘do not really care’.Footnote 33 The question then is whether regulators should treat the interests of such persons as a matter of individual interest to be balanced against the legitimate interests of others, or as concerns about dignity that speak to matters of either (first-tier) common or (second-tier) community interest.

In this regard, consider the case of Ernest Quintana whose family were shocked to find that, at a particular Californian hospital, a ‘robot’ displaying a doctor on a screen was used to tell Ernest that the medical team could do no more for him and that he would soon die.Footnote 34 What should we make of this? Should we read the family’s shock as simply expressing a preference for the human touch or as going deeper to the community’s constitutive values or even to the commons’ conditions? Depending on how this question is answered, regulators will know whether a simple balance of interests is appropriate.

27.5 Conclusion

In this chapter, I have argued that it is not always appropriate to respond to new technologies for healthcare and research simply by enjoining regulators to seek out an acceptable balance of interests. My point is not that we should eschew either the balancing approach or the idea of ‘acceptability’ but that regulators should respond in a way that is sensitised to the full range of their responsibilities.

To the simple balancing approach, with its broad margin for ‘acceptable’ accommodation, we must add the regulatory responsibility to be responsive to the red lines and basic values that are distinctive of the particular community. Any claimed interest or proposed accommodation of interests that crosses these red lines or that is incompatible with the community’s basic values is ‘unacceptable’ – but this is for a different reason to that which applies where a simple balancing calculation is undertaken.

Most fundamentally, however, regulators have a stewardship responsibility in relation to the anterior conditions for humans to exist and for them to function as a community of agents. We should certainly say that any claimed interest or proposed accommodation of interests that is incompatible with the maintenance of these conditions is totally ‘unacceptable’ – but it is more than that. Unlike the red lines or basic values to which a particular community commits itself – red lines and basic values that may legitimately vary from one community to another – the commons’ conditions are not contingent or negotiable. For human agents to compromise the conditions upon which human existence and agency is itself predicated is simply unthinkable.

Finally, it should be said that my sketch of the regulatory responsibilities is incomplete – in particular, concepts such as the ‘public interest’ and the ‘public good’ need to be located within this bigger picture; and, there is more to be said about the handling of horizontal conflicts and tensions within a particular tier. Nevertheless, the ‘take home message’ is clear. Quite simply: while automated healthcare and research might be efficient and productive, new technologies should not present unacceptable risks to the legitimate interests of humans; beyond mere balancing, new technologies should be compatible with the fundamental values of particular communities; and, above all, these technologies should do no harm to the commons’ conditions – supporting human existence and agency – on which we all rely and which we undervalue at our peril.

Section IIB Widening the Lens Introduction

Ganguli-Mitra Agomoni

The sheer diversity of topics in health research makes for a daunting task in the development, establishment, and application of oversight mechanisms and various methods of governance. The authors of this section illustrate how this task is made even more complex by emerging technologies, applications and context, as well as the presence of a variety of actors both in the research and the governance landscape. Nevertheless, key themes emerge, and these sometimes trouble existing paradigms and parameters, and shift and widen our regulatory lenses. A key anchor is the relationship between governance and time: be it the urgent nature of research conducted in global health emergencies; the appropriate weight given to historical data in establishing evidence, anticipating future risk, benefit or harm; or the historical and current forces that have shaped regulatory structures as we meet them today. The perspectives explored in this section can be seen to illustrate different kinds of liminality, which result in regulatory complexity but also offer potential for new kinds of imaginaries, norms and processes.

A first kind of shift in lens is created by the nature of research contexts: for example, whether research is carried out in labs, in clinical settings, traditional healing encounters or, indeed, in a pandemic. These spaces might be the site where values, interests or rules conflict, or they might be characterised by the absence of regulation. Additional tension might be brought about in the interaction of what is being regulated with how it is being regulated: emerging interventions in already established processes, traditional interventions in more recently developed but strongly established paradigms, or marginal interventions precipitated to the centre by outside forces (crises, economic profit, unexpected findings, imminent or certain injury or death). These shifts give rise to considerations of flexibility and resilience in regulation, of the legitimacy and authority of different actors, and of the epistemic soundness of the development and deployment of innovative, experimental, or less established practices.

In Chapter 28, Ho addresses the key concept of risk, and its role within the governance of artificial intelligence (AI) and machine learning (ML) as medical devices. Using the illustration of AI/ML as clinical decision support in the diagnosis of diabetic retinopathy, the author situates their position in qualified opposition to those who perceive governance as an impediment to development and economic gain and those who favour more oversight of AI/ML. In managing such algorithms as risk objects in governance, Ho advocates a governance structure that re-characterises risk management as an iterative learning process, rather than a rule-based one-time evaluation and regulatory approval based on the quantification of future risk.

The theme of regulation as obstacle is also explored in the following chapter (Chapter 29) by Lipworth et al., in the context of autologous mesenchymal stem cell-based interventions. Here, too, the perspective of the authors is set against those who see traditional governance and translational pathways as an impediment to addressing life-threatening and debilitating illnesses. They also resist the reimagination of healthcare as a marketplace (complete with aggressive marketing and dubious claims) where the patient is seen as a consumer, and the decision to access emerging and novel (unproven and potentially risky) interventions merely as a matter of shared decision-making between patient and clinician. The authors recommend strengthening a multipronged governance framework that includes professional regulation, marketplace regulation, regulation of therapeutic products, and research oversight.

In Chapter 30, Haas and Cloatre also explore the difficult task of aligning interventions and products within established regulatory and translational pathways. Here, however, the challenge is not novel or emerging interventions, but traditional or non-conventional medicine, which challenges established governance frameworks based on the biomedical paradigm, and yet which millions of patients worldwide rely on as their primary form of healthcare. Here, uncertainty relates to the epistemic legitimacy of non-conventional forms of knowledge gathering. Actors in conflict with established epistemic processes are informed by historical and contextual evidence and practices that far predate the establishment of current frameworks. Traditional and non-conventional interventions are, nevertheless, pushed towards hegemonic governance pathways, often in ‘scientised and commercial’ forms, in order to gain recognition and legitimacy.

When considering pathways to legitimacy, a key role is played by ethics, in its multiple forms. In Chapter 31, Pickersgill explores ethics in its multiple forms through the eyes of neuroscience researchers, who in their daily practice experience the ethical dimensions of neuroscience and negotiate ethics as a regulatory tool. Ethics can be seen as an obstacle to good science, and the (institutional) ethics of human research is often seen as prone to obfuscation and as lacking clear guidance. This results in novel practices and norms within the community, which are informed by a commitment to doing the right thing and by institutional requirements. In order to minimise potential subversion (even well-meant) of ethics in research, Pickersgill advocates the development of governance that arises not only from collaborations between scientists and regulators but also with those who can act as critical friends to both of these groups of actors.

Ethics guidance and ethical practices are also explored by Ganguli-Mitra and Hunt (Chapter 32), this time in the context of research carried out in global health emergencies (GHEs). These contexts are characterised by various factors that complicate ethical norms and practices, as well as trouble existing frameworks and paradigms. GHEs are sites of multiple kinds of practices (humanitarian, medical, public health, development) and of multiple actors, whose goals and norms of conduct might be in conflict in a context that is characterised by urgency and high risk of injury and death. Using the examples of recent emergencies, the authors explore the changing nature of ethics and ethical practices in extraordinary circumstances.

In the final chapter of this section (Chapter 33), Arzuaga offers an illustration of regulatory development, touching upon the many actors, values, interests, and forces explored in the earlier chapters. Arzuaga reports on the governance of advanced therapeutic medicinal products (ATMPs) in Argentina, moving from a situation of non-intervention on the part of the state, to the establishment of a governance framework. Here, the role of hard and soft law as adding both resilience and flexibility to regulation is explored, fostering innovation without abandoning ethical concerns. Arzuaga describes early, unsuccessful attempts at regulating stem cell-based interventions, echoing the concerns presented by Lipworth et al., before exploring a more promising exercise in legal foresighting, which included a variety of actors and collaboration, as well as a combination of top-down models and bottom-up, iterative processes.

28 When Learning Is Continuous: Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices

Calvin W. L. Ho
28.1 Introduction

The regulatory governance of Artificial Intelligence and Machine Learning (AI/ML) technologies as medical devices in healthcare challenges the regulatory divide between research and clinical care, a divide that is typical of the regulation of pharmaceutical products. This chapter considers the regulatory governance of an AI/ML clinical decision support (CDS) software for the diagnosis of diabetic retinopathy as a ‘risk object’ by the Food and Drug Administration (FDA) in the United States (US). The FDA’s regulatory principles and approach may play an influential role in how other countries govern this and other software as a medical device (SaMD). The disruptions that AI/ML technologies can cause are well publicised in the lay and academic media alike, although the more serious ‘risks’ of harm are still essentially anticipatory. In some quarters, there is a prevailing sense that a ‘light-touch’ approach to regulatory governance should be adopted to ensure that the advancement of AI – particularly in ways that are expected to generate economic gain – is not unduly burdened. Hence, in response to the question of whether regulation of AI is needed now, scholars like Chris Reed have responded with a qualified ‘No’. As Reed explains, the use of the technology in medicine is already regulated by the profession, and regulation will be adapted piecemeal as new AI technologies come into use anyway.Footnote 1 A ‘wait and see’ approach is likely to produce better long-term results than hurried regulation based on a very partial understanding of what needs to be regulated. It is also perhaps consistent with this mind-set that the commercial development and application of AI and AI-based technologies remain largely unregulated.

This chapter takes a different view on the issue, and argues that the response should be a qualified ‘Yes’ instead, partly because there is already an existing regulatory framework in place that may be adapted to meet anticipated challenges. As a ‘risk object’, an AI/ML medical device cannot be understood and managed separately from the broader ‘risk culture’ within which it is embedded. Contrary to what a ‘command-and-control’ approach suggests, regulatory governance of AI/ML medical devices should not be understood merely as the application of external forces to contain ills that must somehow be managed in order to derive the desired effects. Arguably, it is this limited conception of ‘risks’ and its relationship with regulation that give rise to liminality. As Laurie and others clearly explain,Footnote 2 a liminal space is created contemporaneously with the uncertainties generated by new and emerging technologies. Drawing on the works of Arnold van Gennep and Victor Turner, ‘liminality’ is presented as an analytic to engage with the processual and experiential dynamics of transitional and transformational inter-structural boundary or marginal spaces. It is itself an intermediary process in a three-part pattern of experience that begins with separation from an existing order, and concludes with re-integration into a new world.Footnote 3 Mapping liminal spaces and the changing boundaries entailed can help to highlight gaps in regulatory regimes.Footnote 4

Risk-based evaluation is often a feature of such liminal spaces, and when they become sites for battles of power and values, ethical issues arise. Whereas liminality has been applied to account for human experiences within regulated spaces, this chapter considers the epistemic quality of ‘risks’ and its situatedness within regulatory governance as a discursive practice and as a matter of social reality. In this respect, regulation is not necessarily extrinsic to its regulatory object, but constitutive of it. Concerns about ‘risks’ from technological innovations and the need to tame them have been central to regulatory governance.Footnote 5 Whereas governance has been a longstanding cultural phenomenon that relates to ‘the system of shared beliefs, values, customs, behaviours and artifacts that members of society use to cope with their world and with one another, and that are transmitted from generation to generation through learning’,Footnote 6 it is the regulatory turn that is especially instructive. Here, regulatory responses are taken to reduce uncertainty and instability by mitigating potential risks and harms and by directing or influencing actors’ behaviour to accord with socially accepted norms and/or to promote desirable social outcomes; regulation encompasses any instrument (legal or non-legal in character) that is designed to channel group behaviour.Footnote 7 The high connectivity of AI/ML SaMDs that are capable of adapting to their digital environment in order to optimise performance suggests that the research agenda persists beyond what may be currently limited to the pilot or feasibility stages of medical device trials. If continuous risk-monitoring is required to support the use of SaMDs in a learning healthcare system, more robust and responsive regulatory mechanisms are needed, not less.Footnote 8

28.2 AI/ML Software as Clinical Decision Support

In April 2018, the FDA granted approval for IDx-DR (DEN180001) to be marketed as the first AI diagnostic system that does not require clinician interpretation to detect greater than a mild level of diabetic retinopathy in adults diagnosed with diabetes.Footnote 9 In essence, this SaMD applies an AI algorithm to analyse images of the eye taken with a retinal camera that are uploaded to a cloud server. A screening decision is made by the device as to whether the individual concerned is detected with ‘more than mild diabetic retinopathy’ and, if so, is referred to an eye care professional for medical attention. Where the screening result is negative, the individual will be rescreened in twelve months. IDx-DR was reviewed under the FDA’s De Novo premarket review pathway and was granted Breakthrough Device designation,Footnote 10 as the SaMD is novel and of low to moderate risk. On the whole, the regulatory process did not depart substantially from the existing regulatory framework for medical devices in the USA. A medical device is defined broadly, encompassing everything from low-risk adhesive bandages to sophisticated implanted devices. A similar approach is adopted in the definition of the term ‘device’ in Section 201(h) of the Federal Food, Drug and Cosmetic Act.Footnote 11

For regulatory purposes, medical devices are classified based on their intended use and indications for use, degree of invasiveness, duration of use, and the risks and potential harms associated with their use. At the classification stage, a manufacturer is not expected to have gathered sufficient data to demonstrate that its proposed product meets the applicable marketing authorisation standard (e.g. data demonstrating effectiveness).  Therefore, the focus of the FDA’s classification analysis is on how the product is expected to achieve its primary intended purposes.Footnote 12 The FDA has established classifications for approximately 1700 different generic types of devices and grouped them into sixteen medical specialties referred to as ‘panels’. Each of these generic types of devices is assigned to one of three regulatory classes based on the level of control necessary to assure the safety and effectiveness of the device. The class to which the device is assigned determines, among other things, the type of premarketing submission/application required for FDA clearance to market. All classes of devices are subject to General Controls,Footnote 13 which are the baseline requirements of the FD&C Act that apply to all medical devices. Special Controls are regulatory requirements for Class II devices, and are usually device-specific and include performance standards, postmarket surveillance, patient registries, special labelling requirements, premarket data requirements and operational guidelines. For Class III devices, active regulatory review in the form of premarket approval is required (see Table 28.1).

Table 28.1. FDA classification of medical devices by risks

Class | Risk | Level of regulatory controls | Whether clinical trials required | Examples
I | Low | General | No | Gauze, adhesive bandages, toothbrush
II | Moderate | General and special | Maybe | Suture, diagnostic X-rays
III | High | General and premarket approval | Yes | Pacemakers, implantable defibrillators, spinal cord stimulators

Clinical trials of medical devices, where required, are often non-randomised, non-blinded, do not have active control groups, and lack hard endpoints, since randomisation and blinding of patients or physicians for implantable devices will in many instances be technically challenging and ethically unacceptable.Footnote 14 Table 28.2 shows key differences between clinical trials of pharmaceuticals in contrast to medical devices.Footnote 15 Class I and some Class II devices may be introduced into the US market without having been tested in humans through an approval process that is based on predicates. Through what is known as the 510(k) pathway, a manufacturer needs to show that its ‘new’ device is at least as safe and effective as (or substantially equivalent to) a legally marketed predicate device (as was the case for IDx-DR).Footnote 16

Table 28.2. Comparing pharmaceutical trial phases and medical device trial stages

Pharmaceuticals | | | Medical devices | |
Phase | Participants | Purpose | Stage | Participants | Purpose
0 (Pilot/exploratory; not all drugs undergo this phase) | 10–15 participants with disease or condition | Test very small (subtherapeutic) dosage to study effects and mechanisms | Pilot/early feasibility/first-in-human | 10–15 participants with disease or condition | Collect preliminary safety and performance data to guide development
I (Safety and toxicity) | 10–100 healthy participants | Test safety and tolerance; determine dosing and major adverse effects | Feasibility | 20–30 participants with disease or condition | Assess safety and efficacy of near-final or final device design; guides design of pivotal study
II (Safety and effectiveness) | 50–200 participants with disease or condition | Test safety and effectiveness; confirm dosing and major adverse effects | – | – | –
III (Clinical effectiveness) | >100–1000 participants with disease or condition | Test safety and effectiveness; determine drug–drug interaction and minor adverse effects | Pivotal | >100–300 participants with disease or condition | Establish clinical efficacy, safety and risks
IV (Post-approval study) | >1000 | Collect long-term data and adverse effects | Post-approval study | >1000 | Collect long-term data and adverse effects

The nature of regulatory control is changing; regulatory control does not arise solely through the exertion of regulatory power over a regulated entity but also acts intrinsically from within the entity itself. It is argued that risk-based regulation draws on different knowledge domains to constitute the AI/ML algorithm as a ‘risk object’, and not merely to subjugate it. Risk objectification renders the regulated entity calculable. Control does not thereby arise because the regulated entity behaves strictly in adherence to specific commands but rather because of the predictability of its actions. Where risk cannot be precisely calculated, however, liminal spaces may help to articulate various ‘scenarios’ with different degrees of plausibility. These liminal spaces are thereby themselves a means by which uncertainty is managed. Typically, owing to conditions that operate outside of direct regulatory control, liminal spaces can either help to maintain a broader regulatory space to which they are peripheral, or contribute to its re-configuration through a ‘domaining effect’. This aspect will be considered in the penultimate section of this chapter.

28.3 Re-embedding Risk and a Return to Sociality

The regulatory construction of IDx-DR as a ‘risk object’ is accomplished by linking the causal attributes of economic and social risks, and risks to human safety and agency, to its constitutive algorithms reified as a medical device.Footnote 17 This ‘risk object’ is made epistemically ‘real’ when integrated through a risk discourse, by which risk attributions and relations have come to define identities, responsibilities, and socialities. While risk objectification has been effective in paving a way forward to market approval for IDx-DR, this technological capability is pushed further into liminality. The study that supported the FDA’s approval was conducted under highly controlled conditions where a relatively small group of carefully selected patients had been recruited to test a diagnostic system with narrow usage criteria.Footnote 18 It is questionable whether the AI/ML feature was itself tested, since the auto-didactic aspect of the algorithm was locked prior to the clinical trial, which greatly constrained the variability of the range of outputs.Footnote 19 At this stage, IDx-DR is not capable of evaluating the most severe forms of diabetic retinopathy that require urgent ophthalmic intervention. However, IDx-DR is capable of ML, which is a subset of AI and refers to a set of methods that have the ability to automatically detect patterns in data in order to predict future data trends or for decision-making under uncertain conditions.Footnote 20 Deep learning (DL) is in turn a subtype of ML (and a subfield of representation learning) that is capable of delivering a higher level of performance, and does not require a human to identify and compute the discriminatory features for it. From the 1980s onwards, DL software has been applied in computer-aided detection systems, and the field of radiomics (a process that extracts large number of quantitative features from medical images) is broadly concerned with computer-aided diagnosis systems, where DL has enabled the use of computer-learned tumour signatures.Footnote 21 It has the potential to detect abnormalities, make differential diagnoses and generate preliminary radiology reports in the future, but only a few methods are able to manage the wide range of radiological presentations of subtle disease states. In the foreseeable future, unsupervised AI/ML will test the limits of conventional means of regulation of medical devices.Footnote 22 The challenges to risk assessment, management and mitigation will be amplified as AI/ML medical devices change rapidly and become less predictable.Footnote 23
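
The contrast between a ‘locked’ algorithm of the kind evaluated in the trial and a continuously learning one can be sketched in code. The example below is a toy illustration with synthetic data and the scikit-learn library; it is not the IDx-DR algorithm or any model reviewed by the FDA:

```python
# Toy contrast between a "locked" model (frozen before deployment) and an
# adaptive model that keeps updating on post-market data. Synthetic data only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))
y_train = rng.integers(0, 2, size=500)

# Locked model: trained once, then never updated; its outputs stay reproducible.
locked = SGDClassifier(random_state=0).fit(X_train, y_train)

# Adaptive model: same starting point, but it continues to learn incrementally.
adaptive = SGDClassifier(random_state=0)
adaptive.partial_fit(X_train, y_train, classes=np.array([0, 1]))

for month in range(12):  # e.g. monthly batches of real-world cases
    X_new = rng.normal(size=(50, 10))
    y_new = rng.integers(0, 2, size=50)
    adaptive.partial_fit(X_new, y_new)
    # The adaptive model's behaviour drifts as data accrue, which is precisely
    # what a one-time premarket evaluation of a locked algorithm cannot capture.
```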

Regulatory conservatism reflects a particular positionality and the related interests that are at stake. For many high-level policy documents on AI, competitive advantage for economic gain is a key interest.Footnote 24 This position appears to support a ‘light touch’ approach to the regulatory governance of AI in order to sustain technological development and advance national economic interests. If policymakers, as a matter of socio-political construction, regard regulation as impeding technological development, then regulatory governance is unlikely to see meaningful progression. Not surprisingly, the private sector has had a dominant presence in defining the agenda and shape of AI and related technologies. While this is not in and of itself problematic, the narrow regulatory focus and the absence of broader participation could be. For instance, it is not entirely clear to what extent the development of AI/ML algorithms is determined primarily by sectoral interests.Footnote 25

Initial risk assessment is essentially consequentialist in its focus on the intended use of the SaMD to achieve particular clinical outcomes. Risk characterisation is abstracted to two factors:Footnote 26 (1) the significance of the information provided by the SaMD to the healthcare decision; and (2) the state of the healthcare situation or condition. Risk is thereby derived from ‘objective’ information that is provided by the manufacturer on the intended use of the information provided by the SaMD in clinical management. Such use may be significant in one of three ways: (1) to treat or to diagnose, (2) to drive clinical management or (3) to inform clinical management. The significance of an intended use is then associated with a healthcare situation or condition (i.e. critical, serious or non-serious). Schematically, Table 28.3 presents the risk characterisation framework based on four different levels of impact on the health of patients or target populations. Level IV of the framework (e.g. SaMD that performs diagnostic image analysis for making treatment decisions in patients with acute stroke, or that screens for a mutable pandemic outbreak that can be highly communicable through direct contact or other means) relates to the highest impact, while Level I (e.g. SaMD that analyses optical images to guide the next diagnostic action for astigmatism) relates to the lowest.Footnote 27

Table 28.3. Risk characterisation framework for software as a medical device

State of healthcare situation or condition | Significance of information provided by SaMD to healthcare decision
                                           | Treat or diagnose | Drive clinical management | Inform clinical management
Critical                                   | IV                | III                       | II
Serious                                    | III               | II                        | I
Non-serious                                | II                | I                         | I
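Read as a decision rule, Table 28.3 maps two qualitative inputs – the state of the healthcare situation and the significance of the information provided – onto a single risk category. The following sketch encodes that mapping; the function and label names are ours, introduced purely for illustration, while the category assignments reproduce the table above.

```python
# Minimal sketch of the SaMD risk categorisation in Table 28.3.
# The function and label names are illustrative, not an official API;
# the category assignments reproduce the table above.
RISK_CATEGORY = {
    ("critical",    "treat or diagnose"):          "IV",
    ("critical",    "drive clinical management"):  "III",
    ("critical",    "inform clinical management"): "II",
    ("serious",     "treat or diagnose"):          "III",
    ("serious",     "drive clinical management"):  "II",
    ("serious",     "inform clinical management"): "I",
    ("non-serious", "treat or diagnose"):          "II",
    ("non-serious", "drive clinical management"):  "I",
    ("non-serious", "inform clinical management"): "I",
}

def categorise(state: str, significance: str) -> str:
    """Return the risk category for a given state of the healthcare
    situation and significance of the information provided by the SaMD."""
    return RISK_CATEGORY[(state.lower(), significance.lower())]

# Example: diagnostic image analysis used to make treatment decisions in
# acute stroke -> critical state, treat-or-diagnose significance -> Level IV.
assert categorise("Critical", "Treat or diagnose") == "IV"
```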

To counter the possible deepening of regulatory impoverishment, regulatory governance as both concept and process will need to re-characterise risk management as a form of learning and experimentation rather than a rule-based process, thus placing stronger reliance on human capabilities to imagine alternative futures than on quantitative ambitions to predict the future. Additionally, a regulatory approach based on the total product lifecycle needs to be taken up. This better accounts for modifications that will be made to the device through real-world learning and adaptation. Such adaptation enables a device to change its behaviour over time based on new data and to optimise its performance in real time with the goal of improving health outcomes. As the FDA’s conventional review procedures for medical devices discussed above are not adequately responsive for assessing adaptive AI/ML technologies, the FDA has proposed the development of a premarket review mechanism.Footnote 28 This mechanism seeks to introduce a predetermined change control plan in the premarket submission, in order to give effect to the IMDRF’s risk categorisation and risk management principles, as well as its total product lifecycle approach. The plan will include the types of anticipated modifications (or pre-specifications) and the associated methodology used to implement the changes in a controlled manner while allowing risks to patients to be managed (referred to as the Algorithm Change Protocol). In essence, the proposed changes will place on manufacturers a greater responsibility to monitor the real-world performance of their medical devices and to make the performance data available through periodic updates on what changes were made under the approved pre-specifications and the Algorithm Change Protocol. In totality, these proposed changes will enable the FDA to evaluate and monitor, collaboratively with manufacturers, AI/ML software as a medical device from premarket development to postmarket performance. The FDA’s regulatory oversight will also become more iterative and responsive in assessing the impact of device optimisation on patient safety.
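To make the logic of a predetermined change control plan more concrete, the sketch below checks a proposed modification against a set of pre-specified change types and performance bounds. The change types, thresholds and field names are hypothetical illustrations; they are not terms or figures taken from the FDA proposal or the IMDRF documents.

```python
# Hypothetical sketch of a predetermined change control check. The change
# types, thresholds and field names are invented for illustration; they do
# not reproduce the FDA's proposed framework or any manufacturer's plan.
from dataclasses import dataclass

@dataclass
class ProposedChange:
    change_type: str     # e.g. "retrain_on_new_data", "new_intended_use"
    sensitivity: float   # performance of the modified algorithm
    specificity: float

# Pre-specifications: the kinds of modification declared in advance in the
# premarket submission, and the performance bounds they must stay within.
PRE_SPECIFIED_CHANGE_TYPES = {"retrain_on_new_data", "input_device_update"}
MIN_SENSITIVITY = 0.85
MIN_SPECIFICITY = 0.80

def within_change_control_plan(change: ProposedChange) -> bool:
    """Return True if the modification falls within the approved
    pre-specifications; otherwise a new premarket review would be needed."""
    return (
        change.change_type in PRE_SPECIFIED_CHANGE_TYPES
        and change.sensitivity >= MIN_SENSITIVITY
        and change.specificity >= MIN_SPECIFICITY
    )

print(within_change_control_plan(
    ProposedChange("retrain_on_new_data", sensitivity=0.91, specificity=0.87)))  # True
print(within_change_control_plan(
    ProposedChange("new_intended_use", sensitivity=0.95, specificity=0.92)))     # False
```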

As the IMDRF also explains, every SaMD will have its own risk category according to its definition statement, even when it is interfaced with other SaMD or other hardware medical devices, or is used as a module in a larger system. Importantly, manufacturers are expected to have an appropriate level of control to manage changes during the lifecycle of the SaMD. The IMDRF labels any modifications made throughout the lifecycle of the SaMD, including its maintenance phase, as ‘SaMD Changes’.Footnote 29 Software maintenance is in turn defined in terms of post-marketing modifications that could occur in the software lifecycle processes identified by the International Organization for Standardization.Footnote 30 It is generally recognised that testing of software is not sufficient to ensure safety in its operation. Safety features need to be built into the software at the design and development stages, and supported by quality management and post-marketing surveillance after the SaMD has been installed. Post-market surveillance includes the monitoring, measurement and analysis of quality data: logging and tracking complaints, clearing technical issues, determining the causes of problems and the actions needed to address them, and identifying, collecting, analysing and reporting on critical quality characteristics of the products developed. However, monitoring software quality alone does not guarantee that the objectives for a process are being achieved.Footnote 31

As a matter of the Quality Management System (QMS), the IMDRF requires that maintenance activities preserve the integrity of the SaMD without introducing new safety, effectiveness, performance or security hazards. It recommends that a risk assessment – including considerations relating to patient safety, the clinical environment, and the technology and systems environment – should be performed to determine whether the changes affect the SaMD categorisation and the core functionality of the SaMD as set out in its definition statement. The proposed QMS complements the risk categorisation framework through its goal of incorporating good software quality and engineering practices into the device. The principles underscoring the QMS are set out in terms of an organisational support structure, lifecycle support processes, and a set of realisation and use processes for assuring safety, effectiveness and performance. These principles have been endorsed by the FDA in its final guidance, which describes an internally agreed understanding (among regulators) of clinical evaluation, the principles for demonstrating the safety, effectiveness and performance of the device, and the activities that manufacturers can undertake to clinically evaluate their device.Footnote 32

28.4 Regulatory Governance as Participatory Learning System

In this penultimate section of the chapter, it is argued that the regulatory approach considered in the preceding sections is intended to support a participatory learning system comprising at least two key features: (1) a platform and/or mechanisms that enable constructive engagement with, and participation of, members of society; and (2) the means by which a common fund of knowledges (explained below) may be pooled to generate anticipatory knowledge that could guide collective action. In some instances, institutionalisation could advance this agenda, but it is beyond the scope of this chapter to examine this possibility in any depth.

There is a diverse range of modalities through which the constituents of a society engage in collaborative learning. As Annelise Riles’s study of PAWORNET illustrates, each modality has its own goals, character, strengths and limitations. In her study, Riles observes that networkers did not understand themselves to share a set of values, interests or culture.Footnote 33 Instead, they understood themselves to be sharing their involvement in a certain network that was a form of institutionalised association devoted to information sharing. What defined networkers most of all was the fact that they were personally and institutionally connected or knowledgeable about the world of specific institutions and networks. In particular, it was the work of creating documents, organising conferences or producing funding proposals that generated a set of personal relations that drew people together and also created divisions of its own. In the author’s own study,Footnote 34 ethnographic findings illustrate how the ‘publics’ of human stem cell research and oocyte donation were co-produced with an institutionalised ‘bioethics-as-public-policy’ entity known as the Bioethics Advisory Body. In that context, the ‘publics’ comprised institutions and a number of individuals – often institutionally connected – that represented a diverse set of values, interests and perhaps cultures (construed, at the least, in terms of their day-to-day practices). These ‘publics’ resemble a network in a number of ways. They were brought into a particular set of relationships within a deliberative space created mainly by the consultation papers and reinforced through a variety of means that included public meetings, conferences and feedback sessions. Arguably, even individual feedback from a public outreach platform known as ‘REACH’ encompassed a certain kind of pre-existing (sub-)network that had been formed with a view to soliciting relatively more spontaneous and independent, uninvited forms of civil participatory action. But this ‘network’ is not a static one. It varied with, but was also shaped by, the broader phenomenon of science and expectations as to how science ought to be engaged. In this connection, Riles’s observation is instructive: ‘It is not that networks “reflect” a form of society, therefore, nor that society creates its artifacts … Rather, it is all within the recursivity of a form that literally speaks about itself’.Footnote 35

A ‘risk culture’ that supports learning and experimentation rather than rule-based processes must embed the operation of AI and related technologies as ‘risk objects’ within a common fund of knowledges. Legal processes are integral to how such risks come to be understood – for example, the risk of a repeat sexual offence under ‘Megan’s Law’, the US community notification statutes relating to sexual offenders.Footnote 36 Comprising three tiers, this risk assessment process determines the scope of community notification. In examining the constitutional basis of Megan’s Law, Mariana Valverde et al. observe that ‘the courts have emphasised the scientific expertise that is said to be behind the registrant risk assessment scale (RRAS) in order to argue that Megan’s Law is not a tool of punishment but rather an objective measure to regulate a social problem’.Footnote 37 However, reliance on Megan’s Law as grounded in objective scientific knowledge has given rise to an ‘intermediary knowledge in which legal actors – prosecutors and judges – are said not only to be more fair but even more reliable and accurate in determining a registrant’s risk of re-offence’.Footnote 38 In this way, the study also illustrates a translation from scientific knowledge and processes to legal ones, and how the ‘law’ may be cognitively and normatively open.

Finally, the articulation of possible harms and dangers as ‘risks’ involves the generation of ‘anticipatory knowledge’, which is defined as ‘social mechanisms and institutional capacities involved in producing, disseminating, and using such forms [as] … forecasts, models, scenarios, foresight exercises, threat assessments, and narratives about possible technological and societal futures’.Footnote 39 Like Ian Hacking’s ‘looping effect’, anticipatory knowledge is about knowledge-making about the future, and could operate as a means of gap-filling. Hugh Gusterson’s study of the Reliable Replacement Warhead (RRW) program, under which US weapons laboratories were to design new and highly reliable nuclear weapons that would be safe to manufacture and maintain, is illustrative of this point.Footnote 40 Gusterson shows that the struggle over the RRW Program, initiated by the US Congress in 2004, occurred across four intersecting ‘plateaus of nuclear calculations’ – geopolitical, strategic, enviropolitical, and technoscientific – each with its own contending narratives of the future. He indicates that ‘advocates must stabilise and align anticipatory knowledge from each plateau of calculation into a coherent-enough narrative of the future in the face of opponents seeking to generate and secure alternative anticipatory knowledges’.Footnote 41 Hence the interconnectedness of the four plateaus of calculation, including the trade-offs entailed, was evident in the production of anticipatory knowledge vis-à-vis the RRW program. In addition, the issues of performativity and the ‘social construction of ambiguity’ were also evident. Gusterson observes that, being craft items, no two nuclear weapons are exactly alike. However, the proscription of testing through detonation meant that both performativity and ambiguity over reliability became matters of speculation, determined through extrapolation from the past to fill knowledge ‘gaps’ in the present and future. This attempt at anticipatory knowledge creation also prescribed a form that the future was to take. Applying a similar analysis from a legal standpoint, Graeme Laurie and others explain that foresighting as a means of devising anticipatory knowledge is neither simple opinion surveying nor mere public participation.Footnote 42 It must instead be directed at the discovery of shared values, the development of shared lexicons, the forging of a common vision of the future and the taking of steps to realise that vision, with the understanding that this is being done from a position of partial knowledge about the future. As we considered earlier in this chapter, this visionary account captures well the approach that has been adopted by the IMDRF.

28.5 Conclusion

Liminality highlights the need for a processual mode of regulation, one that recognises the flexibility and fluidity of the regulatory context (inclusive of its objects and subjects) and the need for iterative interactions, and that possesses the capacity to provide non-directive guidance.Footnote 43 If one considers law as representing nothing more than certainty, structure and directed agency, then we should rightly be concerned as to whether the law can envision and support the creation of genuinely liminal regulatory spaces, which are typified by uncertainty, anti-structure and an absence of agency.Footnote 44 The crucial contribution of regulatory governance, however, is its conceptualisation of law as an epistemically open enterprise, one in which learning and experimentation are possible.

29 The Oversight of Clinical Innovation in a Medical Marketplace

Wendy Lipworth, Miriam Wiersma, Narcyz Ghinea, Tereza Hendl, Ian Kerridge, Tamra Lysaght, Megan Munsie, Chris Rudge, Cameron Stewart and Catherine Waldby
29.1 Introduction

Clinical innovation is ubiquitous in medical practice and is generally viewed as both necessary and desirable. While innovation has been the source of considerable benefit, many clinical innovations have failed to demonstrate evidence of clinical benefit and/or caused harm. Given uncertainty regarding the consequences of innovation, it is broadly accepted that it needs some form of oversight. But there is also pushback against what is perceived to be obstruction of access to innovative interventions. In this chapter, we argue that this pushback is misguided and dangerous – particularly because of the myriad competing and conflicting interests that drive and shape clinical innovation.

29.2 Clinical Innovation and Its Oversight

While the therapeutics lifecycle is usually thought of as one in which research precedes clinical application, it is common for health professionals to offer interventions that differ from standard practice, and that have either not (yet) been shown to be safe or effective or have been shown to be safe but not yet subjected to large phase 3 trials. This practice is often referred to as ‘clinical innovation’.Footnote 1 The scope of clinical innovation is broad, ranging from minor alterations to established practice – for example using a novel suturing technique – to more significant departures from standard practice – for example using an invasive device that has not been formally tested in any population.

For the most part, clinical innovation is viewed as necessary and desirable. Medicine has always involved the translation of ideas into treatment and it is recognised that ideas originate in the clinic as well as in the research setting, and that research and practice inform each other in an iterative manner.Footnote 2 It is also recognised that the standard trajectory of research followed by health technology assessment, registration and subsidisation may be too slow for patients with life-limiting or debilitating diseases and that clinical innovation can provide an important avenue for access to novel treatments.Footnote 3 There are also limitations to the systems that are used to determine what counts as ‘standard’ practice because it is up to – usually commercial – sponsors to seek formal registration for particular indications.Footnote 4

While many clinical innovations have positively transformed medicine, others have failed to demonstrate evidence of clinical benefit,Footnote 5 or exposed patients to considerable harm – for example, the use of transvaginal mesh for the treatment of pelvic organ prolapse.Footnote 6 Many innovative interventions are also substantially more expensive than traditional treatments,Footnote 7 imposing costs on both patients and health systems. It is therefore broadly accepted that innovation requires some form of oversight. In most jurisdictions, oversight of innovation consists of a combination of legally based regulations and less formal governance mechanisms. These, in turn, can be focused on:

1. the oversight of clinical practice by professional organisations, medical boards, healthcare complaints bodies and legal regimes;

2. the registration of therapeutic products by agencies such as the US Food and Drug Administration, the European Medicines Agency and Australia’s Therapeutic Goods Administration;

3. consumer protection, such as laws aimed at identifying and punishing misleading advertising; and

4. the oversight of research when innovation takes place in parallel with clinical trials or is accompanied by the generation of ‘real world evidence’ through, for example, clinical registries.

The need for some degree of oversight is relatively uncontroversial. But there is also pushback against what is perceived to be obstruction of access to innovative interventions.Footnote 8 There are two main arguments underpinning this position. First, it is argued that existing forms of oversight create barriers to clinical innovation. Salter and colleagues, for example, view efforts to assert external control over clinical innovation as manifestations of conservative biomedical hegemony that deliberately hinders clinical innovation in favour of more traditional translational pathways.Footnote 9 It has also been argued that medical negligence law deters clinical innovationFootnote 10 and that health technology regulation is excessively slow and conservative, denying patients the ‘right to try’ interventions that have not received formal regulatory approval.Footnote 11

Second, it is argued that barriers are philosophically and politically inappropriate on the grounds that patients are not actually ‘patients’, but rather ‘consumers’. According to these arguments, consumers should be free to decide for themselves what goods and services they wish to purchase without having their choices restricted by regulation and governance systems – including those typically referred to as ‘consumer’ (rather than ‘patient’) protections. Following this line of reasoning, Salter and colleaguesFootnote 12 argue that decisions about access to innovative interventions should respect and support ‘the informed health consumer’ who:

assumes she/he has the right to make their own choices to buy treatment in a health care market which is another form of mass consumption…Footnote 13

and who is able to draw on:

a wide range of [information] sources which include not only the formally approved outlets of science and state but also the burgeoning information banks of the internet.Footnote 14

There are, however, several problems with these arguments. First, there is little evidence to support the claim that there is, in fact, an anti-innovative biomedical hegemony that is creating serious barriers to clinical innovation. While medical boards can censure doctors for misconduct, and the legal system can find them liable for trespass or negligence, these wrongs are no easier to prevent or prove in the context of innovation than in any other clinical context. Product regulation is similarly facilitative of innovation, with doctors being free to offer interventions ‘off-label’ and patients being allowed to apply for case-by-case access to experimental therapies. The notion that current oversight systems are anti-innovative is therefore not well founded.

Second, it is highly contestable that patients are ‘simply’ consumers – and doctors are ‘simply’ providers of goods and services – in a free market. For several reasons, healthcare functions as a very imperfect market: there is often little or no information available to guide purchases; there are major information asymmetries – exacerbated by misinformation on the internet; and patients may be pressured into accepting interventions when they have few, if any, other therapeutic options.Footnote 15 Furthermore, even if patients were consumers acting in a marketplace, it would not follow that the marketplace should be completely unregulated, for even the most libertarian societies have regulatory structures in place to prevent bad actors misleading people or exploiting them financially (e.g. through false advertising, price fixing or offering services that they are unqualified to provide).

This leaves one other possible objection to the oversight of clinical innovation – that patients are under the care of professionals who are able to collaborate with them in making decisions through shared decision-making. Here, the argument is that innovation (1) should not be overseen because it is an issue that arises between a doctor and a patient, and (2) does not need to be overseen because doctors are professionals who have their patients’ interests at heart. These are compelling arguments because they are consistent with both the emphasis on autonomy in liberal democracies and with commonly accepted ideas about professionals and their obligations.

Two objections can, however, be raised. First, these arguments ignore the fact that professionalism is concerned not only with patient well-being but also with commitments to the just distribution of finite resources, furthering scientific knowledge and maintaining public trust.Footnote 16 The second problem with these arguments is that they are premised on the assumption that all innovating clinicians are consistently alert to their professional obligations and willing to fulfil them. Unfortunately, this assumption is open to doubt. To illustrate this point, we turn to the case of autologous mesenchymal stem cell-based interventions.

29.3 The Case of Autologous Mesenchymal Stem Cell Interventions

Stem cell-based interventions are procedures in which stem cells – cells that have the potential to self-replicate and to differentiate into a range of different cell types – or cells derived from stem cells are administered to patients for therapeutic purposes. Autologous stem cell-based interventions involve administering cells to the same person from whom they were obtained. The two most common sources of such stem cells are blood and bone marrow (haematopoietic) cells and connective tissue (mesenchymal) cells.

Autologous haematopoietic stem cells are extracted from blood or bone marrow and used to reconstitute the bone marrow and immune system following high dose chemotherapy. Autologous mesenchymal cells are extracted most commonly from fat and then injected – either directly from the tissue extracts or after expansion in the laboratory – into joints, skin, muscle, blood stream, spinal fluid, brain, eyes, heart and so on, in order to ‘treat’ degenerative or inflammatory conditions. The hope is that because mesenchymal stem cells may have immunomodulatory properties they may support tissue regeneration.

The use of autologous haematopoietic stem cells is an established standard of care therapy for treating certain blood and solid malignancies and there is emerging evidence that they may also be beneficial in the treatment of immunological disorders, such as multiple sclerosis and scleroderma. In contrast, evidence to support the use of autologous mesenchymal stem cell interventions is weak and limited to only a small number of conditions (e.g. knee osteoarthritis).Footnote 17 And even in these cases, it is unclear what the precise biological mechanism is and whether the cells involved should even be referred to as ‘stem cells’Footnote 18 (we use this phrase in what follows for convenience).

Despite this, autologous mesenchymal stem cell interventions (henceforth AMSCIs) are offered for a wide range of conditions for which there is no evidence of effectiveness, including spinal cord injury, motor neuron disease, dementia, cerebral palsy and autism.Footnote 19 Clinics offering these and other claimed ‘stem cell therapies’ have proliferated globally, primarily in the private healthcare sector – including in jurisdictions with well-developed regulatory systems – and there are now both domestic and international markets based on stem cell tourism.Footnote 20

While AMSCIs are relatively safe, they are far from risk-free, with harm potentially arising from the surgical procedures used to extract cells (e.g. bleeding from liposuction), the manipulation of cells outside of the body (e.g. infection) and the injection of cells into the bloodstream (e.g. immunological reactions, fever, emboli) or other tissues (e.g. cyst formation, microcalcifications).Footnote 21 Despite these risks, many of the practitioners offering AMSCIs have exploited loopholes in product regulation to offer these interventions to large numbers of patients.Footnote 22 To make matters worse, these interventions are offered without obvious concern for professional obligations, as evident in aggressive and misleading marketing, financial exploitation and poor-quality evidence-generation practices.

First, despite limited efficacy and safety, AMSCIs are marketed aggressively through clinic websites, advertisements and appearances in popular media.Footnote 23 This is inappropriate both because the interventions being promoted are experimental and should therefore be offered to the minimum number of patients outside the context of clinical trials, and because marketing is often highly misleading. In some cases, this takes the form of blatant misinformation – for example, claims that AMSCIs are effective for autism, dementia and motor neuron disease. In other cases, consumers are misled by what have been referred to as ‘tokens of legitimacy’. These include patient testimonials, references to incomplete or poor-quality research studies, links to scientifically dubious articles and conference presentations, displays of certification and accreditation from unrecognised organisations, use of meaningless titles such as ‘stem cell physician’ and questionable claims of ethical oversight. Advertising of AMSCIs is also rife with accounts of biological processes that give the impression that autologous stem cells are entirely safe – because they come from the patient’s own body – and possess almost magical healing qualities.Footnote 24

Second, AMSCIs are expensive, with patients paying thousands of dollars (not including follow-up care or the costs associated with travel).Footnote 25 In many cases, patients take drastic measures to finance access to stem cells, including mortgaging their houses and crowd-sourcing funding from their communities. Clinicians offering AMSCIs claim that such costs are justified given the complexities of the procedures and the lack of insurance subsidies to pay for them.Footnote 26 However, the costs of AMSCIs seem to be determined by the business model of the industry and by a determination of ‘what the market will bear’ – which, in the circumstances of illness, is substantial. Furthermore, clinicians offering AMSCIs also conduct ‘pay-to-participate’ clinical trials and ask patients to pay for their information to be included in clinical registries. Such practices are generally frowned upon as they exacerbate the therapeutic misconception and remove any incentive to complete and report results in a timely manner.Footnote 27

Finally, contrary to the expectation that innovating clinicians should actively contribute to generating generalisable knowledge through research, clinics offering AMSCIs have proliferated in the absence of robust clinical trials.Footnote 28 Furthermore, providers of AMSCIs tend to overstate what is known about efficacyFootnote 29 and to misrepresent what trials are for, arguing that they simply ‘measure and validate the effect of (a) new treatment’.Footnote 30 Registries that have been established to generate observational evidence about innovative AMSCIs are similarly problematic because participation is voluntary, outcome measures are subjective and results are not made public. There are also problems with the overall framing of the registries, which are presented as alternatives – rather than supplements – to robust clinical trials.Footnote 31 And because many AMSCIs are prepared and offered in private practice, there is a lack of oversight and independent evaluation of what is actually administered to the patient, making it impossible to compare outcomes in a meaningful way.Footnote 32

While it is possible that doctors offering autologous stem cell interventions simply lack awareness of the norms relating to clinical innovation, this seems highly unlikely, as many of these clinicians are active participants in policy debates about innovation and are routinely censured for behaviour that conflicts with accepted professional obligations. A more likely explanation, therefore, is that the clinicians offering autologous stem cell interventions are motivated not (only) by concern for their patients’ well-being, but also by other interests such as the desire to make money, achieve fame and satisfy their intellectual curiosity. In other words, they have competing and conflicting interests that override their concerns for patient well-being and the generation of valid evidence.

29.4 Implications for Oversight of Clinical Innovation

Unfortunately, the case of AMSCIs is far from unique. Other situations in which clinicians appear to be abusing the privilege of using their judgement to offer non-evidence-based therapies include orthopaedic surgeons over-using arthroscopies for degenerative joint disease,Footnote 33 assisted reproductive technology specialists who offer unproven ‘add-ons’ to traditional in-vitro fertilisationFootnote 34 and health professionals engaging in irresponsible off-label prescribing of psychotropic medicines.Footnote 35

Clinicians in all of these contexts are embedded in a complex web of financial and non-financial interests, such as the desire to earn money, create product opportunities, pursue intellectual projects, achieve professional recognition and career advancement, and develop knowledge for the good of future patientsFootnote 36 – all of which motivate their actions. Clinicians are also susceptible to biases such as the ‘optimism bias’, which might lead them to over-value innovative technologies, and they are subject to external pressures, such as industry marketingFootnote 37 and pressure from patients desperate for a ‘miracle cure’.Footnote 38

With these realities in mind, arguments against the oversight of innovation – or, more precisely, a reliance on consumer choice – become less compelling. Indeed, it could be argued that the oversight of innovation needs to be strengthened in order to protect patients from exploitation by those with competing and conflicting interests. That said, it is important that the oversight of clinical innovation does not assume that all innovating clinicians are motivated primarily by personal gain and, correspondingly, that it does not stifle responsible clinical innovation.

In order to strike the right balance, it is useful – following Lysaght and colleaguesFootnote 39 – for oversight efforts to be framed in terms of, and account for, three separate functions: a negative function (focused on protecting consumers and sanctioning unacceptable practices, such as through tort and criminal law); a permissive function (concerned with frameworks that license health professionals and enable product development, such as through regulation of therapeutic products); and a positive function (dedicated to improving professional ethical behaviour, such as through professional registration and disciplinary systems). With that in mind, we now present some examples of oversight mechanisms that could be employed.

Those with responsibility for overseeing clinical practice need to enable clinicians to offer innovative treatments to selected patients outside the context of clinical trials, while at the same time preventing clinicians from exploiting patients for personal or socio-political reasons. Some steps that could be taken to both encourage responsible clinical innovation and discourage clinicians from acting on conflicts of interest might include:

  • requiring that all clinicians have appropriate qualifications, specialisation, training and competency;

  • mandating disclosure of competing and conflicting interests on clinic websites and as part of patient consent;

  • requiring that consent be obtained by an independent health professional who is an expert in the patient’s disease (if necessary at a distance for patients in rural and remote regions);

  • ensuring that all innovating clinicians participate in clinical quality registries that are independently managed, scientifically rigorous and publicly accessible;

  • requiring independent oversight to ensure that appropriate product manufacturing standards are met;

  • ensuring adequate pre-operative assessment, peri-operative care and post-operative monitoring and follow-up;

  • ensuring that patients are not charged excessive amounts for experimental treatments, primarily by limiting expenses to cost-recovery; and

  • determining that some innovative interventions should be offered only in a limited number of specialist facilities.

Professional bodies (such as specialist colleges), professional regulatory agencies, clinical ethics committees, drugs and therapeutics committees and other institutional clinical governance bodies would have an important role to play in ensuring that such processes are adhered to.

There may also be a need to extend current disciplinary and legal regimes regarding conflicts of interest (or at least ensure better enforcement of existing regimes). Many professional codes of practice already require physicians to be transparent about, and refrain from acting on, conflicts of interest. And laws in some jurisdictions already recognise that financial interests should be disclosed to patients, that patients should be referred for independent advice and that innovating clinicians need to demonstrate concern for patient well-being and professional consensus.Footnote 40

With respect to advertising, there is a need to prevent aggressive and misleading direct-to-consumer advertising while still ensuring that all patients who might benefit from an innovative intervention are aware that such interventions are being offered. With this in mind, it would seem reasonable to strengthen existing advertising oversight (which, in many jurisdictions, is weak and ad hoc). It may also be reasonable to prohibit innovating clinicians from advertising interventions directly to patients – including indirectly through ‘educational’ campaigns and media appearances – and instead develop systems that alert referring doctors to the existence of doctors offering innovative interventions.

Those regulating access to therapeutic products need to strike a balance between facilitating timely access to the products that patients want, and ensuring that those with competing interests are not granted licence to market products that are unsafe or ineffective. In this regard, it is important to note that product regulation is generally lenient when it comes to clinical innovation and it is arguable that there is a need to push back against current efforts to accelerate access to health technologies – efforts that are rapidly eroding regulatory processes and creating a situation in which patients are being exposed to an increasing number of ineffective and unsafe interventions.Footnote 41 In addition, loopholes in therapeutic product regulation that can be exploited by clinicians with conflicts of interest should be predicted and closed wherever possible.

Although clinical innovation is not under the direct control of research ethics and governance committees, such committees have an important role to play in ensuring that those clinical trials and registries established to support innovation are not distorted by commercial and other imperatives. The task for such committees is to strike a balance between assuming that all researcher/innovators are committed to the generation of valid evidence and placing excessive burdens on responsible innovators who wish to conduct high-quality research. In this regard, research ethics committees could:

  • ensure that participants in trials and registries are informed about conflicts of interest;

  • ensure that independent consent processes are in place so that patients are not pressured into participating in research or registries; and

  • consider whether it is ever acceptable to ask patients to ‘pay to participate’ in trials or in registries.

Research ethics committees also have an important role in minimising biases in the design, conduct and dissemination of innovation-supporting research. This can be achieved by ensuring that:

  • trials and registries have undergone rigorous, independent scientific peer review;

  • data are collected and analysed by independent third parties (e.g. Departments of Health);

  • data are freely available to any researcher who wants to analyse them; and

  • results – including negative results – are widely disseminated in peer-reviewed journals.

While this chapter has focused on traditional ‘top-down’ approaches to regulation and professional governance, it might also be possible to make use of what Devaney has referred to as ‘reputation-affecting’ regulatory approaches.Footnote 42 Such approaches would reward those who maintain their independence or manage their conflicts effectively with reputation-enhancing measures such as access to funding and publication in esteemed journals. In this regard, other parties not traditionally thought of as regulators – such as employing institutions, research funders, journal reviewers and editors and the media – might have an important role to play in the oversight of clinical innovation.

Importantly, none of the oversight mechanisms we have suggested here would discourage responsible clinical innovation. Indeed, an approach to the oversight of clinical innovation that explicitly accounts for the realities of competing and conflicting interests could make it easier for well-motivated clinicians to obtain both the trust of individual patients and the broader social licence to innovate.

29.5 Conclusion

Clinical innovation has an important and established role in biomedicine and in the development and diffusion of new technologies. But it is also the case that claims about patients’ – or consumers’ – rights, and about the sanctity of the doctor–patient relationship, can be used to obscure both the risks of innovation and the vested interests that drive some clinicians’ decisions to offer innovative interventions. In this context, adequate oversight of clinical innovation is crucial. After all, attempts to exploit the language and concept of innovation not only harm patients, but also threaten legitimate clinical innovation and undermine public trust. Efforts to push back against the robust oversight of clinical innovation need, therefore, to be viewed with caution.

30 The Challenge of ‘Evidence’ Research and Regulation of Traditional and Non-Conventional Medicines

Nayeli Urquiza Haas and Emilie Cloatre
30.1 Introduction

Governments and stakeholders have struggled to find common ground on how to regulate research into different (‘proven’ or ‘unproven’) practices. Research on traditional, alternative and complementary medicines is often characterised as following weak research protocols and as producing evidence too poor to stand the test of systematic reviews, thus rendering the results of individual case studies insignificant. Although millions of people rely on traditional and alternative medicine for their primary care needs, the regulation of research into, and practice of, these therapies is governed by biomedical parameters. This chapter asks how, despite efforts to accommodate other forms of evidence, the regulation of research concerning traditional and alternative medicines remains ambiguous as to what sort of evidence – and therefore what sort of research – can be used by regulators when deciding how to deal with practices that are not based on biomedical epistemologies. Building on ideas from science and technology studies (STS), in this chapter we analyse different approaches to the regulation of traditional and non-conventional medicines adopted by national, regional and global governmental bodies and authorities, and we identify challenges to the inclusion of other modes of ‘evidence’ based on traditional and hybrid epistemologies.

30.2 Background

Non-conventional medicines are treatments that are not integrated into conventional medicine and are not necessarily delivered by a person with a degree in medical science. They may include complementary, alternative and traditional healers who may derive their knowledge from local or foreign knowledges, skills or practices.Footnote 1 For the World Health Organization (WHO), traditional medicine may be based on explicable or non-explicable theories, beliefs and experiences of different indigenous cultures.Footnote 2 That being said, traditional medicine is often included within the umbrella term of ‘non-conventional medicine’ in countries where biomedicine is the norm. However, this is often considered a misnomer insofar as traditional medicine may be the main source of healthcare in many countries, independent of its legitimate or illegitimate status. Given the high demand for traditional and non-conventional therapies, governments have sought to bring these therapies into the fold of regulation, yet the processes involved in accomplishing this task have been complicated by the tendency to rely on biomedicine’s standards of practice as a baseline. For example, the absence of, or limited, data produced by traditional and non-conventional medicine research, and the unsatisfactory methodologies that do not stand the test of internationally recognised norms and standards for research involving human subjects, have been cited as common barriers to the development of legislation and regulation of traditional and non-conventional medicine.Footnote 3 In 2019, the WHO reported that 99 out of 133 countries considered the absence of research one of the main challenges to regulating these fields.Footnote 4 At the same time, governments have been reluctant to recognise traditional and non-conventional medicines as legitimate forms of healthcare provision because their research is not based on the ‘gold standard’, namely multi-phase clinical trials.Footnote 5 Without evidence produced through conventional research methodologies, it is argued, people are at risk of falling prey to charlatans who peddle magical cures – namely placebos without any concrete therapeutic value – or money is wasted on therapies and products based on outdated or disparate bodies of knowledge rather than systematic clinical research.Footnote 6 While governments have recognised to some extent the need to accommodate traditional and non-conventional medicines for a variety of reasonsFootnote 7 – including the protection of cultural rights, consumer rights, health rights, intellectual property and biodiversityFootnote 8 – critics suggest that there is no reason why these modalities of medicine should be exempted from providing quality evidence.Footnote 9

Picking up on some of these debates, this chapter charts the challenges arising from attempts to regulate issues relevant to research in the context of traditional and alternative medicine. From the outset, it explores what kinds of evidence and what kinds of research are accepted in the contemporary regulatory environment. It outlines some of the sticky points arising out of debates about research into traditional and non-conventional medicines, in particular the role of placebo effects and evidence. Section 30.4 explores two examples of research regulation: WHO’s Guidelines for Methodologies on Research and Evaluation of Traditional Medicine and the European Directive on Traditional Herbal Medicine Products (THMPD). Both incorporate mixed methodologies into research protocols and allow the use of historical data as evidence of efficacy, thus recognising the specificity of traditional and non-conventional medicine. However, we argue that these strategies may themselves become subordinated to biomedical logics, calling into question the extent to which other epistemologies or processes are allowed to shape what is considered acceptable evidence. Section 30.5 focuses on the UK as an example of how other processes and rationalities, namely economic governmentalities, shape the spaces that non-conventional medicine can inhabit. Section 30.6 untangles and critically analyses the assumptions and effects arising out of the process of deciding what counts as evidence in healthcare research regulations. It suggests that, despite attempts to include different modalities, ambiguities persist due to the acknowledged and unacknowledged hierarchies of knowledge-production explored in this chapter. The last section opens up a conversation about what is at stake when the logic underpinning the regulation of research creates a space for difference, including different medical traditions and different understandings of what counts as evidence.Footnote 10

30.3 Evidence-Based Medicine and Placebo Controls

Evidence-based medicine (EBM) stands for the movement which holds that the scientific method allows researchers to find the best evidence available in order to make informed decisions about patient care. To find the best evidence possible – which essentially means that the many is more significant than the particular – EBM relies on multiple randomised controlled trials (RCTs), the evidence from which is eventually aggregated and compared.Footnote 11 Evidence is hierarchically organised: meta-reviews and systematic reviews based on RCTs stand at the top, followed by non-randomised controlled trials, observational studies with comparison groups, case series and reports, single case studies, expert opinion, community evidence and individual testimonies at the bottom. In addition to this reliance on quantity, the quality of the research matters. Overall, this means that the best evidence is based on data from blinded trials, which show a causal relation between a therapeutic intervention and its effect, and which isolate results from placebo effects.

From a historical perspective, the turn to blinded tests represented a significant shift in medical practice insofar as it diminished the relevance of expert opinion, which was itself based on a hierarchy of knowledge that tended to value authority and theory over empirical evidence. Physicians used to prescribe substances, such as mercury, that, although believed to be effective for many ailments, were later found to be highly toxic.Footnote 12 Thus, the notion of evidence arising out of blinded trials closed the gap between science and practice, and also partially displaced physicians’ authority. Blinded trials and placebo controls had other effects: they became a tool to demarcate ‘real’ medicine from ‘fake’ medicine, proper doctors from ‘quacks’ and ‘snake-oil’ peddlers. By exposing the absence of a causal relationship between a therapy and a physical effect, some therapies and the knowledges associated with them were rebranded as fraudulent or as superstitions. While the placebo effect might retrospectively explain why some of these discarded therapies were seen as effective, in practice EBM’s hierarchy of evidence dismisses patients’ subjective accounts.Footnote 13 While explanations about the placebo effect side-lined the role of autosuggestion in therapeutic interventions, they did not clarify either the source or the benefits of self-suggestion.

Social studies suggest that the role of imagination has been overlooked as a key element mediating therapeutic interactions. Phoebe Friesen argues that, rather than being an ‘obstacle’ that modern medicine needed to overcome, imagination ‘is a powerful instrument of healing that can, and ought to be, subjected to experimental investigations.’Footnote 14 At the same time, when the positive role of the placebo effect and self-suggestion has been raised, scholarship has pointed out dilemmas that remain unsolved. For example: is it ethical to give a person a placebo in the conduct of research on non-orthodox therapies, and when is it justifiable, and for which conditions? Or could public authorities justify the use of taxpayers’ money for so-called ‘sham’ treatments when people themselves, empowered by consumer choice rhetoric and patient autonomy, demand it? As elaborated in this chapter, some governments have been challenged for using public money to fund therapies deemed to be ‘unscientific’, while others have tightened control, fearing that self-help gurus, regarded as ‘cultish’ sect-leaders, are exploiting vulnerable patients.

To the extent that the physiological mechanisms of both placebo and nocebo effects are still unclear, there does not seem to be a place in mainstream public healthcare for therapies that do not fit the EBM model, because it is difficult to justify them politically and judicially, especially as healthcare regulations rely heavily on science to demonstrate public accountability.Footnote 15 And yet, while the importance of safety, quality and efficacy of therapeutic practices cannot be easily dismissed, the reliance on EBM as a method to demarcate effective from non-effective therapies dismisses too quickly the reasons why people are attracted to these therapies. When it comes to non-conventional medicines, biomedicine and the scientific method do not factor in issues such as patient choice or the social dimension of medical practice.Footnote 16 In that respect, questions as to how non-conventional medicine knowledges can demonstrate whether they are effective or not signal broader concerns. First, is it possible to disentangle public accountability from its reliance on science in order to solve the ethical, political, social and cultural dilemmas embedded in the practice of traditional and alternative medicine? Second, if we are to broaden the scope of how evidence is assessed, are there other processes or actors that shape what is considered effective from the perspective of healthcare regulation, for example, patient choice or consumer rights? And, finally, if science is not to be considered the sole arbiter of healing, what spaces are afforded for other epistemologies of healing? Without necessarily answering all of these questions, the aim of this chapter is to signpost a few sticky points in these debates. The next sections explore three examples, at different jurisdictional levels – national, regional and international – of how healthcare regulators have sought to provide guidelines on how to incorporate other types of evidence into research dealing with traditional and non-conventional medicine.

30.4 Integration as Subordination: Guidelines and Regulations on Evidence and Research Methodologies

Traditional medicine has been part of the WHO’s political declarations and strategies born in the lead-up to the 1978 Declaration of Alma Ata.Footnote 17 Since then, the WHO has been at the forefront of developing regulations aimed at carving out spaces for traditional medicines. However, the organisation has moved away from its original understanding of health, which was more holistic and focused on social practices of healing. Regional political mobilisations underpinned by postcolonial critiques of scientific universalism were gradually displaced by biomedical logics of health from the 1980s onwards.Footnote 18 This approach, favouring biomedical standards of practice, can be appreciated to some extent in the ‘General Guidelines for the Research of Non-Conventional Medicines’, which are prefaced by the need to improve research data and methodologies with a view to furthering the regulation and integration of traditional herbal medicines and procedure-based therapies.Footnote 19 The guidelines state that conventional methodologies should not hamper people’s access to traditional therapies and instead reaffirm the plurality of non-orthodox practices.Footnote 20 Noting the great diversity of practices and epistemologies framing traditional medicine, the guidelines re-organised them around two broad classifications – medicines and procedure-based therapies.

Based on these categories, the guidelines suggest that efficacy can be demonstrated through different research methodologies and types of evidence, including historical evidence of traditional use. To ensure safety and efficacy standards are met, herbal medicines ought first to be differentiated through botanical identification based on scientific Latin plant names. Meanwhile, the guidelines leave some room for the use of historical records of traditional use as evidence of efficacy and safety, which should be demonstrated through a variety of sources including literature reviews, the theories and concepts of systems of traditional medicine, as well as clinical trials. The guidelines also affirm that the post-marketing surveillance systems used for conventional medicines are relevant in monitoring, reporting and evaluating adverse effects of traditional medicine.

More importantly, the guidelines contemplate the use of mixed methodologies, whereby EBM can make up for gaps in the evidence of efficacy in traditional medicine. And where claims are based on different traditions – for example, Traditional Chinese Medicine (TCM) and Western Herbalism – the guidelines require evidence linking them together; where there is none, scientific evidence should be the basis. If there are any contradictions between them, ‘the claim used must reflect the truth, on balance of the evidence available’.Footnote 21 Although these research methodologies give the impression of integrating traditional medicine into the mainstream, the guidelines reflect policy transformations since the late 1980s, when plants appeared more clearly as medical objects in the Declaration of Chiang Mai.Footnote 22 Drawing on good manufacturing practice guidelines as tools to assess the safety and quality of medicines, WHO guidelines and declarations between 1990 and 2000 increasingly framed herbal medicines as an object of both pharmacological research and healthcare governance.Footnote 23

The WHO's approach resonates with contemporary European Union legislation, namely Directive 2004/24/EC on the registration of traditional herbal medicines.Footnote 24 This Directive also appears to be more open to qualitative evidence based on historical sources, but ultimately subordinates such evidence to the biomedical mantra of safety and quality that characterises the regulation of conventional medicines. Applications for traditional herbal medicines must demonstrate thirty years of traditional use of the herbal substances or combinations thereof, of which at least fifteen years should be within the European Union (EU). In comparison with conventional medicines, which require multiphase clinical trials in humans, the Directive simplifies the authorisation procedure by admitting bibliographic evidence of efficacy. However, applications must be supplemented with non-clinical studies – namely, toxicology studies – especially if the herbal substance or preparation is not listed in the Community Pharmacopeia.Footnote 25 In the end, these regulations subordinate traditional knowledges to the research concepts and methodologies of conventional medicine. Research centres for non-conventional medicines in the EU also align their mission statements with integration-based approaches, whereby the inclusion of traditional and non-conventional medicine is premised on their modernisation through science.Footnote 26 However, as we argue in the next section, science is not the sole arbiter of what comes to be excluded or not in the pursuit of evidence. Indeed, drawing on the UK as a case study, we argue that economic rationalities are part of the regulatory environment shaping what is or is not included as evidence in healthcare research.

30.5 Beyond Evidence: The Economic Reasoning of Clinical Guidelines

Despite there being no specific restrictions preventing the use of non-conventional treatments within the National Health Service (NHS), authorities involved in the procurement of health or social care have come under increasing pressure to define the hierarchy of scientific evidence in public affairs. For example, under the threat of judicial review, the Charity Commission opened a consultation that produced new guidance for legal caseworkers assessing applications from charities promoting the use of complementary and alternative medicine. Charities have to define their purpose and how this benefits publics. For example, if the declared purpose is to cure cancer through yoga, the charity will have to demonstrate evidence of public benefit, based on accepted sources of evidence and EBM's 'recognised scales of evidence'. Although observations, personal testimonies or expert opinion are not excluded per se, they cannot substitute for scientific medical explanation.Footnote 27 For the Commission, claims that fail this scientifically based standard are to be regarded as cultural or religious beliefs.

There have also been more conspicuous ways in which evidence, as understood through a 'scientific-bureaucratic-medicine' model, has been used to limit the space for non-conventional medicines.Footnote 28 Clinical guidelines are a key feature of this regulatory model – increasingly institutionalised in the UK since the 1980s. The main body charged with this task is the National Institute for Health and Care Excellence (NICE), a non-departmental public body with a statutory footing under the Health and Social Care Act 2012. The purpose of NICE clinical guidelines is to reduce variability in both the quality and the availability of treatments and care, and to confirm an intervention's effectiveness. Although not compulsory, compliance with the guidelines is the norm and exceptions are 'both rare and carefully documented'Footnote 29 because institutional performance is tied to their implementation and non-adherence may have a financial impact.Footnote 30 Following a campaign by 'The Good Thinking Society', an anti-pseudoscience charity, NHS bodies across London, Wales and the North of England have stopped funding homeopathic services.Footnote 31 Meanwhile, an NHS England consultation also led to a ban on the prescription of products considered to be of 'low clinical value', such as homeopathic and herbal products. Responding to critics, the Department of Health defended its decision to defund non-conventional medicine products, stating that they were neither clinically nor cost effective.Footnote 32 However, it is also worth noting that outside the remit of publicly funded institutions, traditional and non-conventional medicines have been tolerated, or even encouraged, as a solution to relieve the pressure arising from austerity healthcare policies. For example, the Professional Standards Authority (PSA) has noted that accredited registered health and social care practitioners – who include acupuncturists, sports therapists, aromatherapy practitioners and others – could help relieve critical demand for NHS services.Footnote 33 This raises questions about what counts as evidence and how different regulators respond to specific practices that are not based on biomedical epistemologies, particularly what sort of research is acceptable in healthcare policy-making.

What we have sought to demonstrate in this section is the extent to which, under the current regulatory landscape, the production of knowledge has become increasingly enmeshed with various layers of laws and regulations drafted by state and non-state actors.Footnote 34 Although the discourse has focused on problems with the kind of evidence and research methodologies used by advocates of non-conventional medicine, a bureaucratic application of EBM in the UK has limited access to traditional and non-conventional medicines in the public healthcare sector. In addition to policing the boundaries between 'fake' and 'real' medicines, clinical guidelines also delimit which therapies should or should not be funded by the state. Thus, this chapter has sketched the links between evidence-based medicine and law, and the processes that influence what kind of research and what kind of evidence are appropriate for the purpose of delivering healthcare. Regulation, whether through laws implementing the EU Directive on the registration of traditional herbal medicines or through clinical guidelines produced by NICE, can be seen as operating as a normative force shaping healthcare knowledge production.
The final section analyses the social and cultural dimensions of knowledge production and argues that the contemporary regulatory approaches discussed in the preceding sections assume that non-conventional knowledges follow a linear development. Premised upon notions of scientific progress and modernity, this view ultimately fails to grasp the complexity of knowledge production and the hybrid nature of healing practices.

30.6 Regulating for Uncertainty: Messy Knowledges and Practices

Hope for a cure, dissatisfaction with medical authority, highly bureaucratised healthcare systems and limited access to primary healthcare are among the many reasons that drive people to try the untested and unregulated pills and practice-based therapies of traditional and non-conventional medicines. While EBM encourages a regulatory environment averse to miracle medicines, testimonies of overnight cures and home-made remedies, Lucas Richert argues that 'unknown unknowns fail to dissuade the sick, dying or curious from experimenting with drugs'.Footnote 35 The problem, however, is the assumption that medicines, and also law, progress along a linear trajectory – in other words, that unregulated drugs become regulated through standardised testing and licensing regulations that carefully assess medicines' quality, safety and efficacy before and after they are approved onto the market.Footnote 36

Instead, medicines' legal status may not always follow this linear evolution. We have argued so far that the regulatory environment of biomedicine demarcates boundaries between legitimate knowledge-makers/objects and illegitimate ones, such as street/home laboratories and self-experimenting patients.Footnote 37 But 'evidence' also acts as a signpost for a myriad of battles between different stakeholders (patient groups, doctors, regulators, industry, etc.) to secure some kind of authority over what is or is not legitimate.Footnote 38 Thus, by looking beyond laboratories and clinical settings, and expanding the scope of research to the social history of drugs, STS scholarship suggests that the legal regulation of research and medicines is based on more fragmented and dislocated encounters between different social spaces where experimentation happens.Footnote 39 For example, Mei Zhan argues that knowledge is 'always already impure, tenuously modern, and permanently entangled in the networks of people, institutions, histories, and discourses within which they are produced'.Footnote 40 This means that neither 'Western' biomedical science nor 'traditional' medicines have ever been static, hermetically sealed spaces. Instead, therapeutic interventions and encounters are often 'uneven' and messy, linking dissimilar traditions and bringing together local and global healing practices, to the point that they constantly disturb assumptions about 'the Great Divides' in medicine. For example, acupuncture's commodification and marketisation in Western countries reflects how Traditional Chinese Medicine has been transformed through circulation across time and space, enlisting various types of actors from different professional healthcare backgrounds – such as legitimate physicians, physiotherapists, nurses, etc. – as well as lay people who have not received formal training in a biomedical profession. New actors with different backgrounds take part in the negotiations for medical legitimacy and authority that are central to the reinvention of traditional and non-conventional medicine. These are processes of 'translocation' – understood as the circulation of knowledges across different circuits of exchange value – which reconfigure healing communities worldwide.Footnote 41

So, in the process of making guidelines, decisions and norms about research on traditional and non-conventional medicines, the notion of 'evidence' could also signify a somewhat impermanent conclusion to a struggle between different actors. As a social and political space, the integration of traditional and non-conventional medicine is not merely a procedural matter dictated by the logic of the medical sciences. Instead, what is or is not accepted as legitimate is constantly 'remodelled' by political, economic and social circumstances.Footnote 42 In that sense, Stacey Langwick argues that evidence stands at the centre of ontological struggles, rather than simply contestations of authority, insofar as it is a 'highly politicized and deeply intimate battle over who and what has the right to exist'.Footnote 43 For her, the determination of what counts as evidence is at the heart of struggles of postcoloniality. When regulations based on EBM discard indigenous epistemologies of healing, or the hybrid practices of individuals and communities who pick up knowledge in fragmented fashion, they also categorise their experiences, histories and effects as non-events. This denial compounds the political and economic vulnerability of traditional and non-conventional healers insofar as their survival depends on their ability to adapt their practice to conventional medicine by mimicking biomedical practices and norms.Footnote 44 Hence, as Marie Andree Jacobs argues, the challenge for traditional and non-conventional medicines lies in translating 'the alternativeness of its knowledge into genuinely alternative research practices' and thereby contributing to the reimagining of alternative models of regulation.Footnote 45

30.7 Conclusion

This chapter has analysed how regulators respond to questions of evidence in traditional and non-conventional medicines. It argued that these strategies tend to subordinate data that is not based on EBM's hierarchies of evidence, allowing regulators to demarcate the boundaries of legitimate research as well as to situate the 'oddities' of non-conventional medicines outside of science (e.g. as 'cultural' or 'religious' issues in the UK's case). In order for these medicines to gain legitimacy and authority, as exemplified through the analysis of specific guidelines and regulations governing research on traditional and non-conventional medicines, the regulatory environment favours their translation and transformation into scientised and commercial versions of themselves. Drawing on STS scholarship, we suggested understanding these debates as political and social struggles reflecting changes in how people heal themselves and others in social communities that are in constant flux. More importantly, they reflect the struggles of healing communities seeking to establish their own viability and right to exist within the dominant scientific-bureaucratic model of biomedicine. This chapter has teased out the limits of research regulation on non-conventional medicines, insofar as practices and knowledges are already immersed in constantly shifting processes, transformed by the very efforts to pin them down into coherent and artificially closed-off systems. By pointing out the messy configurations of social healing spaces, we hope to open up a space of discussion with the chapters in this section. Indeed, how can we widen the lens of research regulation, and accommodate non-conventional medicines, without compromising the safety and quality of healthcare interventions? At the very minimum, research on regulation could engage with the social and political context of medicine-taking, and further the understanding of how and why patients seek one therapy over another.

31 Experiences of Ethics, Governance and Scientific Practice in Neuroscience Research

Martyn Pickersgill
31.1 IntroductionFootnote 1

Over the last decade or so, sociologists and other social scientists concerned with the development and application of biomedical research have come to explore the lived realities of regulation and governance in science. In particular, the instantiation of ethics as a form of governance within scientific practice – via, for instance, research ethics committees (RECs) – has been extensively interrogated.Footnote 2 Social scientists have demonstrated the reciprocally constitutive nature of science and ethics, which renders problematic any assumption that ethics simply follows (or stifles) science in any straightforward way.Footnote 3

This chapter draws on and contributes to such discussion through analysing the relationship between neuroscience (as one case study of scientific work) and research ethics. I draw on data from six focus groups with scientists in the UK (most of whom worked with human subjects) to reflect on how ethical questions and the requirements of RECs as a form of regulation are experienced within (neuro)science. The focus groups were conducted in light of a conceptual concern with how ‘issues and identities interweave’; i.e. how personal and professional identities relate to how particular matters of concern are comprehended and engaged with, and how those engagements themselves participate in the building of identities.Footnote 4 The specific analysis presented is informed by the work of science and technology studies (STS) scholar Sheila Jasanoff and other social scientists who have highlighted the intertwinement of knowledge with social order and practices.Footnote 5 In what follows, I explore issues that the neuroscientists I spoke with deem to be raised by their work, and characterise how both informal ideas about ethics and formal ethical governance (e.g. RECs) are experienced and linked to their research. In doing so, I demonstrate some of the lived realities of scientists who must necessarily grapple with the heterogenous forms of health-related research regulation the editors of this volume highlight in their Introduction, while seeking to conduct research with epistemic and social value.Footnote 6

31.2 Negotiating the Ethical Dimensions of Neuroscience

It is well known that scientists are not lovers of the bureaucracies of research management, which are commonly taken to include the completion of ethical review forms. This was a topic of discussion in the focus groups: one scientist, for instance, spoke of the ‘dread’ (M3, Group 5) felt at the prospect of applying for ethical approvals. Such an idiom will no doubt be familiar to many lawyers, ethicists and regulatory studies scholars who have engaged with life scientists about the normative dimensions of their work.

Research governance – specifically, ethical approvals – could, in fact, be seen as having the potential to hamper science without necessarily making it more ethical. In one focus group (Group 1), three postdoctoral neuroscientists discussed the different terms ethics committees had asked them to use in recruitment materials. One scientist (F3) expressed irritation that another (F2) had been required to alter a recruitment poster so that it clearly stated that participants would receive an 'inconvenience allowance' rather than be 'paid'. The scientists did not think that this would facilitate recruitment into a study, nor enable it to be undertaken any more ethically. F3 described how 'it's just so hard to get subjects. Also if you need to get subjects from the general public, you know, you need these tricks'. It was considered that changing recruitment posters would not make the research more ethical – but it might prevent it from happening in the first place.

All that being said, scientists also feel motivated to ensure their research is conducted ‘ethically’. As the power of neuroimaging techniques increases, it is often said that it becomes all the more crucial for neuroscientists to engage with ethical questions.Footnote 7 The scientists in my focus groups shared this sentiment, commonly expressed by senior scientists and ethicists. As one participant reflected, ‘the ethics and management of brain imaging is really becoming a very key feature of […] everyday imaging’ (F2, Group 4). Another scientist (F1, Group 2) summarised the perspectives expressed by all those who participated in the focus groups:

I think the scope of what we can do is broadening all the time and every time you find out something new, you have to consider the implications on your [research] population.

What scientists consider to be sited within the territory of the ‘ethical’ is wide-ranging, underscoring the scope of neuroscientific research, and the diverse institutional and personal norms through which it is shaped and governed. One researcher (F1, Group 2) reflected that ethical research was not merely that which had been formally warranted as such:

I think when I say you know ‘ethical research’, I don’t mean research passed by an ethics committee I mean ethical to what I would consider ethical and I couldn’t bring myself to do anything that I didn’t consider ethical in my job even if it’s been passed by an ethics committee. I guess researchers should hold themselves to that standard.

Conflicts about what was formally deemed ethical and what scientists felt was ethical were not altogether rare. In particular, instances of unease and ambivalence around international collaboration were reflected upon in some of the focus group discussions. Specifically, these were in relation to collaboration with nations that the scientists perceived as having relatively lax ethical governance as compared to the UK. This could leave scientists with a 'slight uneasy feeling in your stomach' (F2, Group 4). Despite my participants constructing some countries as being more or less 'ethical', no focus group participant described any collaborations having collapsed as a consequence of diverging perspectives on ethical research. However, the possibility that differences between nations exist, and that these differences could create problems in collaboration, was important to the scientists I spoke with. There was unease attached to collaborating with a 'country that doesn't have the same ethics' (F2, Group 4). To an extent, then, an assumption of a shared normative agenda seemed to have significance as an underpinning for cross-national team science.

The need to ensure confidentiality while also sharing data with colleagues and collaborators was another source of friction. This was deemed to be a particularly acute issue for neuroscience, since neuroimaging techniques were seen as being able to generate and collect particularly sensitive information about a person (given both the biological salience of the brain and the role of knowledge about it in crafting identities).Footnote 8 The need to separate data from anything that could contribute to identifying the human subject it was obtained from impacted scientists' relationships with their research. In one focus group (Group 3), M3 pointed out that scientists were no longer owners of data but, rather, responsible chaperones for it.

Fears were expressed in the focus groups that neuroscientific data might inadvertently impact upon research participants, for instance, affecting their hopes for later life, legal credibility and insurance premiums. Echoing concerns raised in both ethics and social scientific literatures, my participants described a wariness about any attempt to predict 'pathological' behaviours, since this could result in the 'labelling' (F1, Group 4) or 'compartmentalising' (F2, Group 4) of people.Footnote 9 As such, these scientists avoided involving themselves in research that necessarily entailed children, prisoners, or 'vulnerable people' (F2, Group 4). Intra-institutional tensions could emerge when colleagues were carrying out studies that the scientists I spoke with did not regard as ethically acceptable.

Some focus group participants highlighted the hyping of neuroscience, and argued that it was important to resist this.Footnote 10 These scientists nevertheless granted the possibility that some of the wilder promises made about neuroscience (e.g. ‘mind reading’) could one day be realised – generating ethical problems in the process:

there's definitely a lot of ethical implications on that in terms of what the average person thinks that these methods can do and can't do, and what they actually can do. And if the methods should get to the point where they could do things like that, to what extent is it going to get used in what way. (F1, Group 1)

Scientists expressed anxiety about 'develop[ing] your imaging techniques' but then being unable to 'control' the application of these (F2, Group 4). Yet, not one of my participants stated that limits should be placed on 'dangerous' research. Developments in neuroscience were seen neither as intrinsically good nor as essentially bad, with nuclear power sometimes invoked as a similar example of how, to their mind, normativity adheres to deployments of scientific knowledge rather than its generation. More plainly: the rightness or wrongness of new research findings was believed to 'come down to the people who use it' (F1, Group 1), not to the findings per se. Procedures almost universally mandated by RECs were invoked as a way of giving licence to research: 'a good experiment is a good experiment as long as you've got full informed consent, actually!' (F1, Group 3). Another said:

I think you can research any question you want. The question is how you design your research, how ethical is the design in order to answer the question you’re looking at. (F2, Group 2)

Despite refraining from some areas of work themselves, due to the associated social and ethical implications, my participants either found it difficult to think of anything that should not be researched at all, or asserted that science should not treat anything as 'off-limits'. One scientist laughed in mock horror when asked if there were any branches of research that should not be progressed: 'Absolutely not!' (F1, Group 3). This participant described how 'you just can't stop research', and prohibitions in the UK would simply mean scientists in another country would conduct those studies instead. In this specific respect, ethical issues seemed to be somewhat secondary to the socially produced sense of competition that appears to drive forward much biomedical research.

31.3 Incidental Findings within Neuroimaging Research

The challenge of what to do with incidental findings is a significant one for neuroscientists, and a matter that has exercised ethicists and lawyers (see Postan, Chapter 23 in this volume).Footnote 11 They pose a particular problem for scientists undertaking brain imaging. Incidental findings have been defined as ‘observations of potential clinical significance unexpectedly discovered in healthy subjects or in patients recruited to brain imaging research studies and unrelated to the purpose or variables of the study’.Footnote 12 The possibilities and management of incidental findings were key issues in the focus group discussions I convened, with a participant in one group terming them ‘a whole can of worms’ (F1, Group 3). Another scientist reflected on the issue, and their talk underscores the affective dimensions of ethically challenging situations:

I remember the first time [I discovered an incidental finding] ’cos we were in the scanner room we were scanning the child and we see it online basically, that there might be something. It’s a horrible feeling because you then, you obviously at this point you know the child from a few hours, since a few hours already, you’ve been working with the child and it’s … you have a personal investment, emotional investment in that already but the important thing is then once the child comes out of the scanner, you can’t say anything, you can’t let them feel anything, you know realise anything, so you have to be just really back to normal and pretend there’s nothing wrong. Same with the parents, you can’t give any kind of indication to them at all until you’ve got feedback from an expert, which obviously takes so many days, so on the day you can’t let anything go and no, yeah it was, not a nice experience. (F2, Group 2)

Part of the difficulty inherent in this ethically (and emotionally) fraught area lies in the relationality between scientist and research subject. Brief yet close relationships between scientists and those they research are necessary to ensure the smooth running of studies.Footnote 13 This intimacy, though, makes the management of incidental findings even more challenging. Further, the impacts of ethically significant issues on teamwork and collaboration are complex; for instance, what happens if incidental findings are located in the scans of co-workers, rather than previously unknown research subjects? One respondent described how these would be 'even more difficult to deal with' (F1, Group 1). Others reflected that they would refrain from 'helping out' by participating in a colleague's scan when, for instance, refining a protocol. This was due to the potential of neuroimaging to inadvertently reveal bodily or psychological information that they would not want their colleagues to know.

The challenge of incidental findings is one that involves a correspondence between a particular technical apparatus (i.e. imaging methods that could detect tumours) and an assemblage of normative imperatives (which perhaps most notably includes a duty of care towards research participants). This correspondence is reciprocally impactful: as is well known, technoscientific advances shift the terrain of ethical concern – but so too does the normative shape the scientific. In the case of incidental findings, for example, scientists increasingly felt obliged to cost an (expensive) radiologist into their grants, to inspect each participant's scan; a scientist might 'feel uncomfortable showing anybody their research scan without having had a radiologist look at it to reassure you it was normal' (F1, Group 3). Hence, 'to be truly ethical puts the cost up' (F2, Group 4). Not every scientist is able to command such sums from funders, who might also demand more epistemic bang for the buck when faced with increasingly costly research proposals. What we can know is intimately linked to what we can, and are willing to, spend. And if being 'truly ethical' indeed 'puts the cost up', then what science is sponsored, and who undertakes this, will be affected.

31.4 Normative Uncertainties in Neuroscience

Scientific research using human and animal subjects in the UK is widely felt to be an amply regulated domain of work. We might, then, predict that issues like incidental findings can be rendered less challenging to deal with through recourse to governance frameworks. Those neuroscientists who exclusively researched animals indeed regarded the parameters and procedures defining what was acceptable and legal in their work to be reasonable and clear. In fact, strict regulation was described as enjoining self-reflection about whether the science they were undertaking was 'worth doing' (F1, Group 6). This was not, however, the case for my participants working with humans. Rather, they regarded regulation in general as complicated, as well as vague: in the words of two respondents, 'too broad' and 'open to interpretation' (F1, Group 2), and 'a bit woolly' and 'ambiguous' (F2, Group 2). Take, for instance, the Data Protection Act: in one focus group (Group 3) a participant (F1) noted that a given university would 'take their own view' about what was required by the Act, with different departments and laboratories in turn developing further – potentially diverging – interpretations.

Within the (neuro)sciences, procedural ambiguity can exist in relation to what scientists, practically, should do – and how ethically valorous it is to do so. Normative uncertainty can be complicated further by regulatory multiplicity. The participants of one focus group, for example, told me about three distinct yet ostensibly nested ethical jurisdictions they inhabited: their home department of psychology, their university medical school and their local National Health Service Research Ethics Committee (NHS REC). The scientists I spoke with understood these to have different purviews, with different procedural requirements for research, and different perspectives on the proper enactment of ethical practices, such as obtaining informed consent in human subjects research.

Given such normative uncertainty, scientists often developed what we might term ‘ethical workarounds’. By this, I mean that they sought to navigate situations where they were unsure of what, technically, was the ‘right’ thing to do by establishing their own individual and community norms for the ethical conduct of research, which might only be loosely connected to formal requirements. In sum, they worked around uncertainty by developing their own default practices that gave them some sense of surety. One participant (F1, Group 2) described this in relation to drawing blood from people who took part in her research. To her mind, this should be attempted only twice before being abandoned. She asserted that this was not formally required by any research regulation, but instead was an informal standard to which she and colleagues nevertheless adhered.

In the same focus group discussion, another scientist articulated a version of regulatory underdetermination to describe the limits of governance:

not every little detail can be written down in the ethics and a lot of it is in terms of if you’re a researcher you have to you know make your mind up in terms of the ethical procedures you have to adhere to yourself and what would you want to be done to yourself or not to be done … (F2, Group 2)

Incidental findings were a key example of normative uncertainty and the ethical workarounds that resulted from this. Although 'not every little detail can be written down', specificity in guidelines can be regarded as a virtue in research that is seen to have considerable ethical significance, and where notable variations in practice are known to exist. The scientist quoted above also discussed how practical and ethical decisions must be made as a result of the detection of clinically relevant incidental findings, but that their precise nature was uncertain: scientists were 'struggling' due to being 'unsure' what the correct course of action should be. Hence, 'proper guidelines' were invoked as potentially useful, but these were seemingly considered to be hard to come by.

The irritations stimulated by a perceived lack of clarity on the ethically and/or legally right way to proceed are similarly apparent in the response of this scientist to a question about her feelings upon discovering, for the first time, a clinically relevant incidental finding in the course of her neuroimaging work:

It was unnerving! And also because it was the first time I wasn’t really sure how to deal with it all, so I had to go back in the, see my supervisor and talk to them about it and, try to find out how exactly we’re dealing now with this issue because I wasn’t aware of the exact clear guidelines. (F2, Group 2)

Different scientists and different institutions were reported to have ‘all got a different way of handling’ (F2, Group 4) the challenge of incidental findings. Institutional diversity was foregrounded, such as in the comments of F1 (Group 1). She described how when working at one US university ‘there was always a doctor that had to read the scans so it was just required’. She emphasised how there was no decision-making around this on behalf of the scientist or the research participant: it was simply a requirement. On the other hand, at a different university this was not the case – no doctor was on call to assess neuroimages for incidental findings.

An exchange between two researchers (F1 and F2, Group 2) also illustrates the problems of procedural diversity. Based in the same university but in different departments, they discussed how the complexities of managing incidental findings were related, in part, to practices of informed consent. The dialogue is too lengthy to reproduce in full here, but two key features stood out. First, differences existed in whether the study team would, in practice, inform a research subject's physician in the event of an incidental finding: in F2's case, it was routine for the physician to be contacted, but F1's participants could opt out of this. However, obtaining physician contact details was itself a tricky undertaking:

we don’t have the details of the GP so if we found something we would have to contact them [the participant] and we’d have to ask them for the GP contact and in that case they could say no, we don’t want to, so it’s up to them to decide really, but we can’t actually say anything directly to them what we’ve found or what we think there might be because we don’t know, ’cos the GP then will have to send them to proper scans to determine the exact problem, ’cos our scans are obviously not designed for any kind of medical diagnosis are they? So I suppose they’ve still got the option to say no. (F2, Group 2)

It is also worth noting at this point the lack of certitude of the scientists I spoke with about where directives around ethical practice came from, and what regulatory force these had. F1 (Group 1) and F2 (Group 2) above, for instance, spoke about how certain processes were ‘just required’ or how they ‘have to’ do particular things to be ‘ethical’. This underscores the proliferation and heterogeneity of regulation the editors of this volume note in their Introduction, and the challenges of comprehending and negotiating it in practice by busy and already stretched professionals.

31.5 Discussion

The ethical aspects of science often require discursive and institutional work to become recognised as such, and to be managed thereafter. In other words, for an issue to be regarded as specifically ethical, scientists and universities need to, in some sense, agree that it is; matters that ethicists, for instance, might take almost for granted as being intrinsically normative can often escape the attention of scientists themselves. After an issue has been characterised by researchers as ethical, addressing it can necessitate bureaucratic innovation, and the reorganisation of work practices (including new roles and changing responsibilities). Scientists are not always satisfied with the extent to which they are able, and enabled, to make these changes. The ethics of neuroscience, and the everyday conversations and practices that come into play to deal with them, can also have epistemic effects: ethical issues can and do shape scientists' relationships with their work, with research participants, and with processes of knowledge-production itself.

The scientists I spoke with listed a range of issues as having ethical significance, to varying degrees. Key among these were incidental findings. The scientists also engaged in what sociologist Steven Wainwright and colleagues call 'ethical boundary work'; i.e. they sometimes erected boundaries between scientific matters and normative concerns, but also collapsed these when equating good science with ethical science.Footnote 14 This has the effect of enabling scientists to present research they hold in high regard as being normatively valorous, while also bracketing off ethical questions they consider too administratively or philosophically challenging to deal with as being insufficiently salient to science itself to necessitate sustained engagement.

Still, though, ethics is part and parcel of scientific work and of being a scientist. Normative reflection is, to varying degrees, embedded within the practices of researchers, and can surface not only in focus group discussions but also in corridor talk and coffee room chats. This is, in part, a consequence of the considerable health-related research regulation to which scientists are subject. It is also a consequence of the fact that scientists are moral agents: people who live and act in a world with other persons, and who have an everyday sense of right and wrong. This sense is inevitably and essentially context-dependent; it inflects their scientific practice and is in turn contoured by it. It is these interpretations of regulation, in conjunction with the mundane normativity of daily life, that intertwine to constitute scientists' ethical actions within the laboratory and beyond, and in particular that cultivate their ethical workarounds in conditions of uncertainty.

31.6 Conclusion

In this chapter I have summarised and discussed data regarding how neuroscientists construct and regard the ethical dimensions of their work, and reflected on how they negotiate health-related research regulation in practice. Where does this leave regulators? For a start, we need more sustained, empirical studies of how scientists comprehend and negotiate the ethical dimensions of their research in actual scientific work, in order to ground the development and enforcement of regulation.Footnote 15 What is already apparent, however, is that any regulation that demands sharp changes in practice, to no clear benefit to research participants, scientists or wider society, is unlikely to invite adherence. Nor are frameworks likely to do so if they demand that scientists act in ways they consider unethical, or if they place unrealistic burdens on scientists (e.g. liaising with GPs without the knowledge of research participants) that leave them anxious and afraid that they are, for instance, 'breaking the law' when failing to act in a practically unfeasible way.

It is important to recognise that scientists bring their everyday ethical expertise to bear on their research, and it is vital that this expertise is worked with rather than ridden roughshod over. At the same time, it takes a particular kind of scientist to call into question the ethical basis of their research or that of close colleagues, not least given an impulse to conflate good science with ethical science. Consequently, developing regulation in close collaboration with scientists also needs the considered input of critical friends to both regulators and life scientists (including but not limited to social scientific observers of the life sciences). This would help mitigate the possibility of the inadvertent reworking, or even subverting, of regulation designed to protect human subjects by well-meaning scientists who inevitably want to do good (in every sense of the word) research.

32 Humanitarian Research: Ethical Considerations in Conducting Research during Global Health Emergencies

Agomoni Ganguli-Mitra and Matthew Hunt
32.1 Introduction

Global health emergencies (GHEs) are situations of heightened and widespread health crisis that usually require the attention and mobilisation of actors and institutions beyond national borders. Conducting research in such contexts is both ethically imperative and requires particular ethical and regulatory scrutiny. While global health emergency research (GHER) serves the crucial function of learning how to improve care and services for individuals and communities affected by war, natural disasters or epidemics, conducting research in such settings is also challenging at various levels. Logistics are difficult, funding is elusive, risks are elevated and likely to fluctuate, social and institutional structures are particularly strained, and infrastructure may be destroyed. GHER is diverse. It includes biomedical research, such as studies on novel vaccines and treatments, or on appropriate humanitarian and medical responses. Research might also include the development of novel public health interventions, or measures to strengthen public health infrastructure and capacity building. Social science and humanities research might also be warranted, in order to develop future GHE responses that better support affected individuals and populations. Standard methodologies, including those related to ethical procedures, might be particularly difficult to implement in such contexts.

The ethics of GHER relates to a variety of considerations. First are the ethical and justice-based justifications for conducting research at all in conditions of emergency. Second, the ethics of GHER considers whether research is designed and implemented in an ethically robust manner. Finally, ethical issues also relate to questions arising in the course of carrying out research studies. GHER is characterised by a heterogeneity (of risk, nature, contexts, urgency and scope) which itself gives rise to various kinds of ethical implications:Footnote 1 why research is done, who conducts it, where and when it is conducted, what kind of research is done and how. It is therefore difficult to fully capture the range of ethical considerations that arise, let alone provide a one-size-fits-all solution to such questions. Using illustrations drawn from research projects conducted during GHEs, we discuss key ethical and governance concerns arising in GHER – beyond those traditionally associated with biomedical research – and explore the future direction of oversight for GHER. After setting out the complex context of GHER, we illustrate the various ethical issues associated with justifying research, as well as considerations related to context, social value and engagement with affected communities. Finally, we explore some of the new orientations and lenses in the governance of GHER through recent guidelines and emerging practices.

32.2 The Context of Global Health Emergency Research

GHEs are large-scale crises that affect health and that are of global concern (epidemics, pandemics, as well as health-related crises arising from conflicts, natural disasters or forced displacement). They are characterised by various kinds of urgency, driven by the need to rapidly and appropriately respond to the needs of affected populations. However, effective responses, treatments or preventative measures require solid evidence bases, and the establishment of such knowledge is heavily dependent on findings from research (including biomedical research) carried out in contexts of crises.Footnote 2 As the Council for International Organizations of Medical Sciences (CIOMS) guidelines point out: 'disasters can be difficult to prevent and the evidence base for effectively preventing or mitigating their public health impact is limited'.Footnote 3 Generating relevant knowledge in emergencies is therefore necessary to enhance the care of individuals and communities, for example through treatments, vaccines or improved palliative care. Research can also consolidate preparedness for public health and humanitarian responses (including triage protocols) and contribute to capacity building (for example, by training healthcare professionals) in order to strengthen health systems in the long run. Ethical consideration and regulation must therefore adapt to immediate and urgent issues, as well as contribute to developing sustainable and long-term processes and practices.

Adding to this is the fact that responses to GHEs involve a variety of actors: humanitarian responders, health professionals, public health officials, researchers, media, state officials, armed forces, national governments and international organisations. Actors conducting both humanitarian work and research can encounter particular ethical challenges, given the very different motivations behind response and research. Such dual roles might, at times, pull in different directions and therefore warrant added ethical scrutiny and awareness, even where such actors might be best placed to deliver both aims, given their presence and knowledge of the context, and especially if they have existing relationships with affected communities.Footnote 4 Medical and humanitarian responses to GHEs are difficult contexts for ethical deliberation – for ethics review and those involved in research governance – where various kinds of motivations and values collide, potentially giving rise to conflicting values and aims, or to incompatible lines of accountabilityFootnote 5 (for example, towards humanitarian versus research organisations or towards national authorities versus international organisations).

Given the high level of contextual and temporal complexity, and the heightened vulnerability to harm of those affected by GHEs, there is a broad consensus within the ethics literature that research carried out in such contexts requires both a higher level of justification and careful ongoing ethical scrutiny. Attention to vulnerability is, of course, not new to research ethics. It has catalysed many developments in the field, such as the establishment of frameworks, principles and rules aiming to ensure that participants are not at risk of additional harm, and that their interests are not sacrificed to the needs and goals of research. It has also been a struggle in research governance, however, to find appropriate regulatory and oversight measures that are not overly protectionist; ones that do not stereotype and silence individuals and groups but ensure that their interests and well-being are protected. The relationship between research and vulnerability becomes particularly knotty in contexts of emergency. How should we best attend to vulnerability when it is pervasive?Footnote 6 On the one hand, all participants are in a heightened context of vulnerability when compared to populations under ordinary circumstances. On the other hand, those individuals who suffer from systematic and structural inequality, disadvantage and marginalisation will also see their potential vulnerabilities exacerbated in conditions of public health and humanitarian emergencies. The presence of these multiple sources and forms of vulnerability adds to the difficulty in determining whether research and its design are ethically justified.

32.3 Justifying Research: Why, Where and When?

While research is rightly considered an integral part of humanitarian and public health responses to GHEs,Footnote 7 and while there may indeed, as the WHO suggests, be an 'ethical obligation to learn as much as possible, as quickly as possible',Footnote 8 research must be ethically justified on various fronts. At a minimum, GHER must not impede current humanitarian and public health responses, even as it is deployed with the aim of improving future responses. Nor should it drain existing resources and skills. Additionally, the social value of such research derives from its relevance to the particular context and the crisis at hand.Footnote 9 Decisions regarding location, recruitment of participants, as well as study design (including risk–benefit calculations) must ensure that scientific and social value are not compromisedFootnote 10 in the process. The Working Group on Disaster Research Ethics (WGDRE), formed in response to the 2004 Indian Ocean tsunami, has argued that while ethical research can be conducted in contexts of emergency, such research must respond to local needs and priorities in order to avoid being opportunistic.Footnote 11 Similar considerations were reiterated during the 2014–2016 Ebola outbreak. Concern was expressed that 'some clinical sites could be perversely incentivized to establish research collaborations based on resources promised, political pressure or simply the powers of persuasion of prospective researchers – rather than a careful evaluation of the merits of the science or the potential benefit for patients. Some decision-makers at clinical sites may not have the expertise to evaluate the scientific merits of the research being proposed'.Footnote 12 This observation reflects considerations that have been identified in a range of GHE settings.

The question of social value is not only related to the ultimate or broad aims of research. Specific research questions can only be justified if they cannot be investigated in non-crisis conditions,Footnote 13 and, as specified above, where the answers to such questions are expected to benefit the community in question – or relevantly similar communities – be it during the current emergency or in the future. Relatedly, research should be conducted within settings that are most likely to benefit from the generation of such knowledge, perhaps because they are the site of cyclical disasters or endemic outbreaks that frequently disrupt social structures. Given the heightened precarity of GHE contexts, the risk of exposing study participants to additional harm is particularly salient, and such potential risk must therefore be systematically justified. If considerations of social value are key, these need to extend to priority-setting in GHER. Yet the funding and development of GHER is not immune to how research priorities are set globally. Consequently, this divergence (between the kind of research that is currently being funded and developed, and the research that might be required in specific contexts of crisis) will present particular governance challenges at the local, national and global levels. Stakeholders from contexts of scarce resources have warned that priority-setting in GHEs might mirror broader global research agendas, where the health concerns and needs of low- and middle-income countries (LMICs) are systematically given lower priority.Footnote 14 The global research agenda is not generally directed by the specific needs arising from crises (especially crises in resource-poor contexts), and yet the less well-funded and less resilient health systems of LMICs frequently bear the brunt and severity of crises. The ethical challenges associated with conducting research in contexts of crisis are therefore consistently present at all levels: from the broader global research agenda to the choice of context and participants, and from how research is designed and conducted to how research data and findings are used and shared.

32.4 Justifying Research: What and How?

GHER includes a wide range of activities, from the minimally invasive collection of dataFootnote 15 and systems research aimed at strengthening health infrastructure,Footnote 16 to more controversial procedures including the testing of experimental therapeutics and vaccines.Footnote 17 A common issue in GHER, one that has arisen prominently during recent epidemics and infectious disease outbreaks, is the challenge to long-established standards and trial designs, in particular to what is known as the 'gold standard': randomised, double-blind clinical trials as the standard developmental pathway for new drugs and interventions. The ethical intuitions and debates often pull in different directions. As discussed earlier in the chapter, the justification for conducting research in crises must be ethically robust, as must research design and deployment. Equally, in the context of the COVID-19 pandemic, a strong argument has been made for the need to ensure methodologically rigorous research design and not to accept lower scientific standards as a form of 'pandemic research exceptionalism'.Footnote 18 At the time of writing, human challenge trials – the intentional infection of research participants with the virus – proposed as a way to accelerate the development of a vaccine for the novel coronavirus, remain ethically and scientifically controversial. While some commentators have suggested that this may be a rapid and necessary route to vaccine development,Footnote 19 others have argued that the criteria for the ethical justification of human challenge studies, including social value and fair participant selection, are not likely to be met.Footnote 20

Such tensions are particularly heightened in contexts of high infection rates, morbidity and mortality. During the 2014–2016 Ebola outbreak in West Africa, several unregistered interventions were approved for use as investigational therapeutics. Importantly, while these were approved for emergency use, they were to be deployed under the 'monitored emergency use of unregistered and experimental interventions' (MEURI) scheme,Footnote 21 that is, through a process in which the results of an intervention's use are shared with the scientific and medical community, rather than under the medical label of 'compassionate use'. This approach allows clinical data to be compiled and thus contributes to the process of generating generalisable evidence. The deployment of experimental drugs was once again considered – alongside the deployment of experimental vaccines – early in the 2018 Ebola outbreak in the Democratic Republic of the Congo.Footnote 22 This time, regulatory and ethical frameworks were in place to approve access to five investigational therapeutics under the MEURI scheme,Footnote 23 two of which have since shown promise in the clinical trials conducted in 2018. The first Ebola vaccine, approved in 2019, was tested through ring vaccination trials first conducted during the 2014–2016 West African outbreak. Methods and study designs need to be aligned with the needs of the humanitarian response, and yet it is not an easy task to translate the values of humanitarian response into research design. How experimental interventions should be deployed under the MEURI scheme was heavily debated and contested by local communities, who saw these interventions as their only and last resort against the epidemic.

While success stories in GHER depend heavily on global cooperation, suitable infrastructure and, often, collaboration between the public and private sectors, such interventions are unlikely to succeed without the collaboration and engagement of local researchers and communities, and without establishing relationships based on trust. Engaging with communities and establishing relationships of trust and respect are key to successful research endeavours in all contexts, but are particularly crucial where social structures have broken down and where individuals and communities are at heightened risk of harm. Community engagement, especially for endeavours not directly related to response and medical care, is also particularly challenging to implement. These challenges are most significant in sudden-onset GHEs such as earthquakes,Footnote 24 if prior relationships do not exist between researchers and the communities. During the 2014–2016 Ebola outbreak, the infection and its spread caused 'panic in the communities by the lack of credible information and playing to people's deepest fears'.Footnote 25 Similarly, distrust arose during the subsequent outbreak in eastern DRC, a region already affected by conflict, where low trust in institutions and officials resulted in low acceptance of vaccines and the spread of the virus.Footnote 26 Likewise, in the aftermath of Hurricane Katrina there was widespread frustration with, and distrust of, the US federal response among those engaged in civil society and community-led responses.Footnote 27 However, such contexts have also given rise to new forms of solidarity and cooperation. The recent Ebola outbreaks, the aftermath of Katrina, the 2004 Indian Ocean tsunami and the Fukushima disaster have also given rise to unprecedented levels of engagement and leadership by members of the affected communities.Footnote 28 Given that successful responses to GHEs are heavily dependent on trust as well as on the engagement and ownership of response activities by local communities, there is little doubt that successful endeavours in GHER will also depend on establishing close, trustworthy and respectful collaborations between researchers, responders, local NGOs, civil society and members of the affected population.

32.5 Governance and Oversight: Guidelines and Practices

The difficulty of conducting GHER is compounded by much complexity at the level of regulation, governance and oversight. Those involved in research in these contexts are working in and around various ethical frameworks including humanitarian ethics, medical ethics, public health ethics and research ethics. Each framework has traditionally been developed with very different actors, values and interests in mind. Navigating these might result in various kinds of conflicts or dissonance, and at the very least make GHER a particularly challenging endeavour. Such concerns are then compounded by regulatory complexity, including existing national laws and guidelines, international regulations and guidance produced by different international bodies (for example, the International Health Regulations 2005 by the WHO and Good Clinical Practice by the National Institute for Health Research in the United Kingdom), all of which are engaged in a context of urgency, shifting timelines and rapidly evolving background conditions. Two recent pieces of guidance are worth highlighting in this context. The first is the set of revised CIOMS guidelines, published in 2016, which includes a newly added entry (Guideline 20) specifically addressing GHER. The CIOMS guidelines recognise that ‘[d]isasters unfold quickly and study designs need to be chosen so that studies will yield meaningful data in a rapidly evolving situation. Study designs must be feasible in a disaster situation but still appropriate to ensure the study’s scientific validity’.Footnote 29 While reaffirming the cornerstones of research ethics, Guideline 20 also refers to the need to ensure equitable distribution of risks and benefits; the importance of community engagement; the need for flexibility and expediency in oversight while providing due scrutiny; and the need to ensure the validity of informed consent obtained under conditions of duress. CIOMS also responds to the need for flexible and alternative study designs and suggests that GHER studies should ideally be planned ahead and that generic versions of protocols could be pre-reviewed prior to a disaster occurring.

Although acting at a different governance level to CIOMS, the Nuffield Council on Bioethics has also recently published a report on GHER,Footnote 30 engaging with emerging ethical issues and echoing the central questions and values reflected in current discussions and regulatory frameworks. Reflecting on the lessons learned from various GHEs over the last couple of decades, the report encourages the development of an ethical compass for GHER that focuses on respect, reducing suffering, and fairness.Footnote 31 The report is notable for recommending that GHER endeavours attend not just to whose needs are being met (that is, questions of social value and responsiveness) but also to who has been involved in defining those needs. In other words, the report reminds us that beyond principles and values guiding study design and implementation, ethical GHER requires attention to a wider ethics ecosystem that includes all stakeholders, and that upholding fairness is not only a feature of recruitment or access to the benefits of research, but must also exist in collaborative practices with local researchers, authorities and communities.

All guidelines and regulations need interpretation on the ground,Footnote 32 at various levels of governance, as well as by researchers themselves. The last couple of decades have seen a variety of innovative and adaptive practices being developed for GHER, including the establishment of research ethics committees specifically associated with humanitarian organisations. Similarly, many research ethics committees that are tasked with reviewing GHER protocols have adapted their standard procedures in line with the urgency and developing context of GHEs.Footnote 33 Such strategies include convening ad-hoc meetings, prioritising these protocols in the queue for review, waiving deadlines, having advisors pre-review protocols and conducting reviews by teleconference.Footnote 34 Another approach can be found in the development of pre-approved or pre-reviewed protocol templates, which allow research ethics committees to conduct an initial review of proposed research ahead of a crisis occurring, or to review generic policies for the transfer of samples and data. Following their experience in reviewing GHER protocols during the 2014–2016 Ebola outbreak, members of the World Health Organization Ethics Review Committee recommended the formation of a joint research ethics committee for future GHEs.Footnote 35 A need for greater involvement and interaction between ethics committees and researchers has been indicated by various commentators, pointing to the need for ethical review to be an ongoing and iterative process. One such model for critical and ongoing engagement, entitled ‘real-time responsiveness’,Footnote 36 proposes a more dynamic approach to ethics oversight for GHER, including more engagement between researchers, research ethics committees and advisors once the research is underway. An iterative review process has been proposed for research in ordinary contextsFootnote 37 but is particularly relevant to GHER, given the urgency and rapidly changing context.

It is important also to consider how to promote and sustain the ethical capacities of researchers in humanitarian settings. Such capacities include the following, which have been linked to ethical humanitarian action:Footnote 38 foresighting (the ability to anticipate potential harms), attentiveness (especially to the social and relational dynamics of particular GHE contexts), and responsiveness to the often-shifting features of a crisis and their implications for the conduct of the research. These capacities point to the role of virtues, in addition to guidelines and principles, in the context of GHER. As highlighted by O’Mathuna, humanitarian research calls for virtuous action on the part of researchers in crisis settings ‘to ensure that researchers do what they believe is ethically right and resist what is unethical’.Footnote 39 Ethics, therefore, is not merely a feature of approval or bureaucratic procedure. It must be actively engaged with at various levels and by all involved, including researchers themselves.

32.6 New Orientations and Lenses

As outlined above, GHEs present a distinctive context for the conduct of research. Tailored ethics guidance for GHER has been developed by various bodies, and it has been acknowledged that GHER can be a challenging fit for standard models of ethics oversight and review. As a result, greater flexibility in review procedures has been endorsed, while emphasising the importance of upholding rigorous appraisal of protocols. Particular attention has been given to the proportionality of ethical scrutiny to the concerns (risk of harm, issues of equity, situations of vulnerability) associated with particular studies. Novel approaches, such as the preparation and pre-review of generic protocols, have also been incorporated into more recent guidance documents (e.g. CIOMS) and implemented by research ethics committees associated with humanitarian organisations.Footnote 40 These innovations reflect the importance of temporal sensitivity in GHER and in its review. As well as promoting timely review processes for urgent protocols, scrutiny is also needed to identify research that does not need to be conducted in an acute crisis and whose initiation ought to be delayed.

Discussions about GHER, and on disaster risk reduction more broadly, also point to the importance of preparedness and anticipation. Sudden-onset events and crises often require quick response and reaction. Nonetheless, there are many opportunities to lay advance groundwork for research and also for research ethics oversight. In this sense, pre-review of protocols, careful preparation of standard procedures, and even research ethics committees undertaking their own planning procedures for reviewing GHER are all warranted. This also suggests that while methodological innovation and adaptive designs may be required, methodological standards should be respected in crisis research and can be promoted with more planning and preparation.

32.7 Conclusion

Research conducted in GHEs presents a particularly difficult context in terms of governance. While each kind of emergency presents its own particular challenges, there are recurring patterns, characterised by urgency in terms of injury and death, extreme temporal constraints, and uncertainty in terms of development and outcome. Research endeavours have to be ushered through a plethora of regulations at various levels, not all of which have been developed with GHER in mind. Several sectors are necessarily involved: humanitarian, medical, public health and political, to name just a few. Conducting research in these contexts is necessary, however, in order to contribute to a robust evidence base for future emergencies. Ethical considerations are crucial in the implementation and interpretation of guidance, and in rigorously evaluating the justification for research. Governance must find a balance between the protection of research participants, who find themselves in particular circumstances of precarity, and the need for flexibility, preparedness and responsiveness as emergencies unfold. Novel ethical orientations suggest the need, at times, to rethink established procedures, such as one-off ethics approval or gold-standard clinical trials, as well as to establish novel ethical procedures and practices, such as specially trained ethics committees and pre-approval of research protocols. However, the ethics of such research also suggests that time, risk and uncertainty should not work against key ethical considerations relating to social value and fairness in recruitment, or against meaningful and ongoing engagement with the community in all phases of response and research. A dynamic approach to the governance of GHER will also require supporting the ability of researchers, ethics committees and those governing research to engage with and act according to the ethical values at stake.

33 A Governance Framework for Advanced Therapies in Argentina: Regenerative Medicine, Advanced Therapies, Foresight, Regulation and Governance

Fabiana Arzuaga
33.1 Introduction

Research in the field of regenerative medicine, especially that which uses cells and tissues as therapeutic agents, has given rise to new products called ‘advanced therapies’ or advanced therapeutic medicinal products (ATMPs). These cutting-edge advances in biomedical research have generated new areas for research at both an academic and industrial level and have posed new challenges for existing regulatory regimes applicable to therapeutic products. The leading domestic health regulatory agencies in the world, such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA), have regulated therapeutic tissues and cells as biological medicines and are currently making efforts to establish a harmonised regulatory system that facilitates the process of approval and implementation of clinical trials.

In the mid-2000s, the Argentine Republic did not have any regulations governing ATMPs, and governance approaches to them were weak and diverse. Although the process of designing a governance framework posed significant challenges, Argentina started to develop a regulatory framework in 2007. After more than ten years of work, this objective was achieved thanks to local efforts and the support of academic institutions and regulatory agencies from countries with more mature regulatory frameworks. By 2019, Argentina was leading the creation of harmonised regulatory frameworks in Latin America.

In this chapter I will show how the framework was developed from a position of state non-intervention to the implementation of a governance framework that includes hard and soft law. I will identify the main objectives that drove this process, the role of international academic and regulatory collaborations, milestones and critical aspects of the construction of normative standards and the ultimate governance framework, and the lessons learned, in order to be able to transfer them to other jurisdictions.

33.2 The Evolution of Regulation of Biotechnology in Argentina: Agricultural Strength and Human Health Fragmentation

Since its advent in the middle of the 1990s, modern biotechnology has represented an opportunity for emerging economies to build capacity alongside high-income countries, thereby blurring the developed/developing divide in some areas (i.e. it represents a ‘leapfrog’ technology similar to mobile phones). For this to occur, and for maximum benefit to be realised, an innovation-friendly environment had to be fostered. Such an environment does not abdicate moral limits or public oversight but is characterised by regulatory clarity and flexibility.Footnote 1 The development of biotechnology in the agricultural sector in Argentina is an example of this. Although it had not been a technology-producing country, Argentina enjoyed a series of favourable conditions that allowed the rapid adoption of genetically modified crops.Footnote 2 At the same time, significant institutional decisions were made, especially with regard to biosecurity regulations, with the creation of the National Commission for Agricultural Biotechnology (CONABIA) in 1991.Footnote 3 These elements, together with the fact that Argentina has 26 million hectares of arable land, made the potential application of these technologies in Argentina – and outside the countries of origin of the technology, especially the USA – possible. This transformed Argentina into an exceptional ‘landing platform’ for the rapid adoption of these biotechnological developments. The massive incorporation of Roundup Ready (RR) soybean is explained by the reduction of its production costs and by the expansion of arable land. This positioned Argentina as the world’s leading exporter of genetically modified soybean and its derivatives.Footnote 4

The development of biotechnologies directed at human health was more complex and uncertain, and unfolded in a more contested and dynamic setting, which resulted in it evolving at a much slower pace, with regulation also developing more slowly and involving a greater number of stakeholders. This context, as will be demonstrated below, offered opportunities for developing new processual mechanisms aimed at soliciting and developing the views and concerns of diverse stakeholders.Footnote 5

33.3 First Steps in the Creation of A Governance Framework for Cell Therapies

The direct antecedent of stem cells for therapeutic purposes is the hematopoietic progenitor cell (HPC), which has been extracted from bone marrow to treat blood diseases for more than fifty years and is considered an ‘established practice’.Footnote 6 HPC transplantation is regulated by the Transplant Act 1993, and its regulatory authority is the National Institute for Transplantation (INCUCAI), which adopted regulations governing certain technical and procedural aspects of this practice in 1993 and 2007.Footnote 7 This explains the rationale by which many countries – including Argentina – started regulating cell therapies under a transplantation legal framework. However, Argentina’s active pursuit of regenerative medicine research aimed at developing stem cell solutions to health problems required something more, and despite its efforts to promote this research, there were no regulations or studies related to ethics and the law in this field.Footnote 8

In 2007, the Advisory Commission on Regenerative Medicine and Cellular Therapies (Commission) was created under the National Agency of Promotion of Science and Technology (ANPCYT) and the Office of the Secretary of Science and Technology was transformed into the Ministry of Science, Technology and Productive Innovation (MOST) in 2008.Footnote 9 The Commission comprised Argentinian experts in policy, regulation, science and ethics, and was set up initially with the objective of advising the ANPCYT in granting funds for research projects in regenerative medicine.Footnote 10 However, faced with a legal gap and the increasing offer of unproven stem cell treatments to patients, this new body became the primary conduit for identifying policy needs around stem cell research and its regulation, including how existing regulatory institutions in Argentina, such as INCUCAI and the National Administration of Drugs, Food and Medical Technology (ANMAT), would be implicated.

The Commission promoted interactions with a wide range of stakeholders from the public and private sectors, the aim being to raise awareness and interest regarding the necessity of forging a governance framework for research and product approval in the field of regenerative medicine. In pursuing this ambitious objective, the Commission wanted to benefit from lessons from other regions or countries.Footnote 11 In 2007, a Collaborative Agreement was signed between the Argentine Secretary of Science and Technology and the University of Edinburgh’s AHRC SCRIPT Centre (the Arts and Humanities Research Council Research Centre for Studies in Intellectual Property & Technology Law).Footnote 12 This collaboration, addressed in greater detail below, extended to 2019 and was a key factor in the construction of the regulatory framework for ATMPs in Argentina.

33.4 From Transplants to Medicines

In 2007, in an attempt to halt the delivery of untested stem cell–based treatments that were not captured by the current regulatory regime applicable to HPCs, the Ministry of Health issued Resolution MS 610/2007, under the Transplant Act 1993. The 610/2007 Resolution states that ‘activities related to the use of human cells for subsequent implantation in humans fall within the purview of the Transplant Authority (INCUCAI)’.Footnote 13 This Resolution formally recognises INCUCAI’s competence to deal with activities involving the implantation of cellular material into humans. However, it is very brief and does not specify which types of cell it applies to, nor the specific procedures (kinds of manipulation) to which such cells may be subject, an issue that is, in any event, beyond the scope of the Act.Footnote 14 This Resolution is supplemented by Regulatory Decree 512/95, which, in Article 2, states that ‘any practice that involves implanting of human cells that does not fall within HPC transplantation is radically new and therefore is considered as experimental practice until it is demonstrated that it is safe and effective’.

To start a new experimental practice, researchers or medical practitioners must seek prior authorisation from INCUCAI by submitting a research protocol signed by the medical professional or team leader who will conduct the investigation, complying with all requirements of the regulations, including the provision of written informed consent signed by the research subjects, who must not be charged any monies to participate in the procedure. In May 2012, INCUCAI issued Resolution 119/2012, a technical standard to establish requirements and procedures for the preparation of cellular products. Substantively, it is in harmony with international standards of good laboratory and manufacturing practices governing this matter. However, very few protocols have been filed with INCUCAI since 2007, and the delivery of unproven stem cell treatments continued to grow, a situation that exposed INCUCAI’s difficulties in policing the field and reversing the growth of health scams.Footnote 15

Another regulatory attempt was the imposition of an obligation to register certain cellular-based products as biological medicaments. The ANMAT issued two regulations under the Medicines Act 1964:Footnote 16 Dispositions 7075/2011 and 7729/2011. These define ‘biological medicinal products’ as ‘products derived from living organisms like cells or tissues’, a definition that captures stem cell preparations, and they are categorised in both Dispositions as ATMPs. Cellular-based or biological medicaments must be registered with the National Drugs Registry (REM), and approval for marketing, use and application in humans falls within the scope of the Medicines Act and its implementing regulations. Cellular medicine manufacturers must register at the ANMAT as manufacturing establishments, and they must request product registration before marketing or commercialising their products.

Importantly, the ANMAT regulations do not apply in cases where ATMPs are manufactured entirely by an authorised medical centre, to be used exclusively in that centre. In that case, the local health authority retains the power of approval. Like all regulations issued by the national Ministry of Health under the Medicines Act, the provisions of Dispositions 7075/2011 and 7729/2011 apply only in areas of national jurisdiction, in cases where interprovincial transit is implicated, or where ATMPs are imported or exported. In short, the Medicines Act is not applicable so long as the product does not leave the geographic jurisdiction of the province. And within the provinces, regulatory solutions were inconsistent; for example, in one province these products were regulated as transplants and in another as medicines.

As alluded to above, while imperfect regulatory attempts were pursued, the offer of unproven cell-based treatments continued to grow. As in many countries, it was usual to find publications in the media reporting the – almost magical – healing power of stem cells, with little or no supporting evidence, and such claims have great impact on public opinion and on the decisions of individual patients. Moreover, the professionals offering these ‘treatments’ took refuge in the independence of medical practice and the autonomy that it offers, but it seems clear that some of the practices reported were directly contrary to established professional ethics, and they threatened the safety of patients receiving the treatments.Footnote 17 In addition to the safety issues, given that these were experimental therapies (which had not been proven safe and effective), health insurers refused to cover them (and one can anticipate the same antipathy towards indemnifying patients who choose to accept them and are injured by them). Indeed, patients filed judicial actions demanding payment for such treatments by both health insurance institutions and the national and provincial state (as guarantors of public health).Footnote 18

The regulatory regime – by virtue of its silence, its imperfect application to regenerative medicine and concomitant practices, and its shared authority between national and provincial bodies – permitted unethical practices to continue, and decisions of some courts have mandated the transfer of funds from the state (i.e. the social welfare system) to the medical centres offering these experimental cellular therapies. In short, the regime established a poorly coordinated regulatory patchwork that was proving to be insufficient to uniformly regulate regenerative medicine – and stem cell – research and its subsequent translation into clinical practice and treatments as ATMPs. Moreover, attempts by regulatory authorities to stop these practices, though valiant, also proved ineffective.

33.5 Key Drivers for the Construction of the Governance Framework

The landscape described above endured until 2017, when the Interministerial Commission for Research and Medicaments of Advanced Therapies (Interministerial Commission) was created. This new body, jointly founded by the Ministry of Science and Technology (MOST) and the Ministry of Health (MOH), which also oversaw INCUCAI and ANMAT, was set up to:

  1. Advise the MOST and MOH on subjects within their competence.

  2. Review current regulations on research, development and approval of products in order to propose, and raise for the approval of the competent authority, a comprehensive and updated regulatory framework for advanced therapies.

  3. Promote dissemination, within the scientific community and the population more broadly, of the state of the art relating to ATMPs.

Led by a coordinator appointed by the MOST, the Interministerial Commission focused its efforts first on adopting a new regulatory framework that was harmonised with the EMA and FDA, and that recognised the strengths of local institutions in fulfilling its objectives. The strategy to create the governance framework was centred on three levels of norms: federal law, regulation and soft law. The proposal was accepted by both Ministries, and efforts were made to put into force, first, the regulatory framework and soft law in order to stop the delivery of unproven treatments. These elements would then remain in force while a bill was sent to the National Parliament.

In September 2018, the new regulatory framework was issued through ANMAT Disposition 179/2018 and an amendment to the Transplant Law giving INCUCAI competence over hematopoietic progenitor cells (HPCs) in their different collection modalities; over the cells, tissues and/or starting materials that originate, compose or form part of devices, medical products and medicines; and over cells of human origin for autologous use that are used, with minimal manipulation, in the same therapeutic procedure and to perform the same function of origin.

The Interministerial Commission benefitted immensely from the work of the original Commission, which was formed in 2007 and which collaborated across technical fields and jurisdictional borders for a decade, moving Argentina from a position of no regulation for ATMPs, to one of imperfect regulation (limited by the conditions of the time). The original Commission undertook the following:

  1. Undertaking studies on the legislation of Argentina and other countries to better understand how these technical developments might be shaped by law (i.e. through transplant, medicines or a sui generis regime).

  2. Proposing a governance framework adapted to the Argentine legal and cultural context, harmonised with European and US normative frameworks.

  3. Communicating this initiative to all interested sectors and managing complex relationships to promote debate in society, and then translating learnings from that debate into a normative/governance plan.Footnote 19

The work of the Commission was advanced through key collaborations, first and foremost with the University of Edinburgh (2007–2019). This collaboration had several strands and involved an active institutional relationship.Footnote 20

Other collaborations involved the Spanish Agency for Medicaments, the Argentine judiciaryFootnote 21 and the creation of the Patient Network for Advanced Therapies (APTA Network) to provide patients with accurate information about advances in science and their translation into healthcare applications. All this was accompanied by interactions with a range of medical societies in order to establish, across different areas of medicine, a scientific position against the offer of unproven treatments.Footnote 22

33.6 Current Legal/Regulatory Framework

The current legal framework in force, proposed by the Interministerial Commission, is the result of collaborative work focused on identifying the different processes involved in the research and approval of ATMPs and on establishing effective articulation between its parts. It consists of laws and regulations and establishes a coordinated intervention of both authorities, ANMAT and INCUCAI, in the process of approval of research and products. The system operates as follows:

  1. The Medicaments Law gives ANMAT competence to regulate the scientific and technical requirements, applicable at the national level, governing clinical pharmacology studies, the authorisation of manufacturing establishments, production, registration and authorisation of commercialisation, and the surveillance of Advanced Therapy Medicaments.Footnote 23

  2. The Transplants Law gives INCUCAI competence to regulate the stages of donation, obtaining and control of cells and/or tissues from human beings when they are used as starting material in the production of an ATMP.Footnote 24

  3. Manufacturing establishments that produce ATMPs must be authorised by ANMAT.

  4. When an ATMP is developed and used within the same facility, the donation, procurement, production and control stages are governed by the INCUCAI regulations. INCUCAI must request the intervention of ANMAT for evaluation and technical assistance in the stages of the manufacturing process, in order to guarantee that they meet the same standards as the rest of the Advanced Therapy Medicaments.

  5. Cell preparations containing cells of human origin with minimal manipulation are not considered medications and fall under the INCUCAI regulations.

Finally, the newly amended Argentine Civil and Commercial Code 2015 establishes the ethico-legal requirements for clinical trials. Specifically, Article 58 states that investigations in human beings through interventions, such as treatments, preventative methods, and diagnostic or predictive tests, whose efficacy or safety are not scientifically proven, can only be carried out if specific requirements are met relating to, among other things, consent, privacy and a protocol that has received ethical approval.

The laws and regulations described above combine to form a reasonably comprehensive normative system applicable to research, market access approval and pharmacovigilance for ATMPs, harmonised with international standards.

Importantly, and interestingly, though many stakeholders in the period 2011–2017 reported a preference for command-and-control models of regulation (i.e. state-led, top-down approaches)Footnote 25 and many elements of the prevailing regime do now reflect this, the framework itself emerged through a bottom-up, iterative process, which sought to connect abstract concepts and models of governance with actual experience and the national social and legal normative culture. While the Commission, together with a key circle of actors, shaped the process, a wide variety of stakeholders from academia, regulatory bodies, medical societies, researchers, patients and social media cooperated to advance the field. Their efforts were very much an example, imperfectly realised, of legal foresighting.Footnote 26

To complete the normative framework currently in force, it would be advisable to maintain a soft law layer that supports regulatory bodies in keeping procedures up to date and provides the flexibility to accompany the advances of science. Finally, it would be prudent to enact a federal law regulating clinical research and, fundamentally, to give the regulatory authority robust policing powers to stop the spread of unproven treatments across the country, as a legal guarantee of protection for patients and human research subjects.

33.7 Conclusion

The design and adoption of a governance framework for regenerative medicine research and ATMPs in the Argentine Republic has been a decade-long undertaking that has relied on the strengths and commitment of key institutions like MOST, MOH, ANMAT and INCUCAI and on the ongoing engagement with a range of stakeholders.

To achieve the current normative framework, it was necessary to amend existing legal instruments and issue new laws and regulations.

The new framework exemplifies a more joined-up regime that is harmonised with other important regulatory agencies like EMA and the FDA. This is important because the development of ATMPs is increasingly global in nature, and it is expected that Argentine regulators will work closely with international partners in multiple ways to support safe and effective innovation that will benefit a wider segment of the population, including, importantly, traditionally marginalised groups.

Section IIC Towards Responsive Regulation Introduction

McMillan Catriona

This section of the volume offers a contemporary selection of examples where existing models of law and regulation are pushed to their limits. Novel challenges are arising that require reflection on appropriate and adaptive regulatory responses, especially where ethical concerns raise questions about the acceptability of the research itself. The focus in this section is on how these examples create disturbances within regulatory approaches and paradigms, and how these remain a stubborn problem if extant approaches are left untouched. The reference to ‘responsive’ here highlights the temporally limited nature of law and regulation, and the reflexivity and adaptability that are required by these novel challenges to health research regulation. The choice of examples is illustrative of existing and novel research contexts where the concepts, tools, and mechanisms discussed in Part I come into play.

The first part of this section speaks to nascent challenges in the field of reproductive technologies, an area of health research often characterised by its disruptiveness to particular legal and social norms. The first two contributions to this theme focus on human gene editing, a field of research that erupted in global public ethical and policy debates when the live birth of twin girls, Lulu and Nana, whose genes had been edited in vitro, was announced by biophysicist He Jiankui in 2018. For Isasi (Chapter 34), recent crises such as this provide an opportunity to transform not only global policy on human germline gene editing, but collective behaviours in this field. In this chapter, she analyses the commonalities and divergences in international normative systems that regulate gene editing. For Isasi, a policy system that meaningfully engages global stakeholders can only be completely effective if we achieve both societal consensus and governance at local and global levels. For Chan (Chapter 35), the existence of multiple parallel discourses highlighted by Isasi can be used to facilitate broader representation of views within any policy solution. Chan considers the wider lessons that the regulatory challenges of human germline gene editing pose for the future of health research regulation. She posits that human germline gene editing is a ‘contemporary global regulatory experiment-in-progress’, which we can use to revisit current regulatory frameworks governing contentious science and innovation.

For the authors of the next two chapters, the order upon which existing regulatory approaches were built is being upended by new, dynamic sociotechnical developments that call into question the boundaries that law and regulation have traditionally relied upon. First, Hinterberger and Bea (Chapter 36) challenge us to consider how we might reconsider normative regulatory boundaries in their chapter on human–animal chimeras – an area of biomedical research where our normative distinctions between human and animal are becoming more blurred as research advances. Here, the authors highlight the potential of interspecies research to perturb lasting, traditional regulatory models in the field of biomedical research. Next, McMillan (Chapter 37) examines the fourteen-day limit on embryo research as a current example of an existing regulatory tool – here a legal ‘line in the sand’ – that is being pushed to its scientific limits. She argues that recent advancements in in vitro embryo research challenge us to disrupt our existing legal framework governing the processual entity that is the embryo in vitro. For McMillan, disrupting our existing regulatory paradigms in embryo research enables essential policy discussion surrounding how we can, and whether we should, implement enduring regulatory frameworks in such a rapidly changing field.

For the second part of this section, the final two chapters examine the downstream effects of health research regulation in two distinct contexts. For these authors, it is clear that innovation in research practice and its applications requires us not only to disrupt our normative regulatory frameworks and systems, but to do so in a way that meaningfully engages stakeholders (see Laurie, Introduction). Jackson (Chapter 38) challenges the sufficiency of giving patients information about the limited evidence base behind ‘add-on’ treatments that are available in fertility clinics, as a mechanism for safely controlling their use. For Jackson, regulation of these add-ons needs to go further; she argues that these treatments should be deemed by the Human Fertilisation and Embryology Authority – the regulator of fertility clinics and research centres in the UK – as ‘unsuitable practices’. She highlights the combination of a poor evidence base for the success of these ‘add-on’ treatments and patients’ understandable enthusiasm that these might improve fertility treatment outcomes. Her contribution confronts this ‘perfect storm’ of the uncertain yet potentially harmful nature of these add-ons, which are routinely ‘oversold’ in these clinics, yet under-researched. Jackson’s offering gives us an example of an ongoing and increasing practice that requires us to disrupt prevailing regulatory norms. In the final chapter of this section, Harmon (Chapter 39) offers human enhancement as an example of how a regulatory regime, catalysed by disruptive research and innovation, has failed to capture key concepts. For Harmon, greater integration of humans and technology requires our regulatory frameworks to engage with ‘identity’ and ‘integrity’ more deeply, yet the current regulatory regime’s failure to do so leaves human wellbeing without adequate support and protection.

Together, these chapters provide detailed analyses of carefully chosen examples and/or contexts that instantiate the necessity for reflexivity in a field where paradigms are (and should be) disturbed by health research and innovation. It is clear that particular regulatory feedback loops within and across particular regulatory spaces need to be closed in order to deliver authentic learning back to the system and to its users (see Laurie, Introduction and Afterword). A key theme in this section is the call to approach health research regulation as a dynamic endeavour, continually constituted by scientific processes and engaged with stakeholders and beneficiaries. In doing so, this section provides grounded assessments of HRR, showing the positive potential of responsive regulation as new approaches to health research attempt to meet the demands of an ever-changing world.

34 Human Gene Editing: Traversing Normative Systems

Rosario Isasi
34.1 Introduction

Gene editing technologies consist of a set of engineering tools, such as CRISPR/Cas9, that seek to deliberately target and modify specific DNA sequences of living cells.Footnote 1 They can enable both ex vivo and in vivo deletions and additions to DNA sequences at both somatic and germline cell levels. While technical and safety challenges prevail, particularly regarding germline applications, these technologies are touted as transformational for the promotion and improvement of health and well-being. Furthermore, their enhanced simplicity, efficiency, precision, and affordability have spurred their development. This, in turn, has brought to the fore scientific and socio-political debates concerning their wide range of actual and potential applications, together with their inexorable ethical implications.

The term ‘inevitable’ refers to the certainty or unavoidability of an occurrence. Such was the worldwide response after the 2018 announcement – and later confirmationFootnote 2 – of the live birth of twin girls whose genomes were edited during in vitro fertilisation procedures. While foreseeable, the development provoked shock and ignited intense national and international debates. China was placed at the epicentre of controversy, as the ubiquitous example of inadequate governance and moral failure. Yet, as the facts of the case unfolded, it became clear that the global community shared a critical level of responsibility.Footnote 3 Crisis can provoke substantial changes in governance and fundamentally alter the direction of a given policy system. While the impact of the shock is still being felt, the subsequent phase of readjustment has yet to take place. A ‘window of opportunity’ is thereby present for collective assessment of its impact, for ascertaining accountability, and for enacting resulting responses. Reactionary approaches can be predicted, as demonstrated by the wave of policies in the 1990s and 2000s following the derivation of the first human embryonic stem cell line or the birth of ‘Dolly’ the cloned mammal. Indeed, the ‘embryo-centric’ approach that characterised these past debates is still present.Footnote 4 Additionally, the globalisation phenomenon has permeated the genomics field, reshuffling the domain of debate and action from the national to the international. Cases in point are the past International Gene Editing Summits aimed at fostering global dialogue.Footnote 5

So far, human gene editing (HGE) has stimulated a new wave of policy by an extensive range of national and international actors (e.g. governments, professional organisations, funding agencies, etc.). This chapter outlines some of the socio-ethical issues raised by HGE technologies, with a focus on human germline interventions (HGI), and addresses a variety of policy frameworks. It further analyses commonalities as well as divergences in approaches traversing a continuum of normative models.

34.2 Navigating Normative Systems for HGE

Across jurisdictions, the regulation of genomics research has generally followed a linear path combining ‘soft’ and ‘hard’ approaches that widely consider governance as a ‘domestic matter’.Footnote 6 Driven by scientific advances and changes in societal attitudes that resulted in greater technological uptake, the regulation of genomics has increasingly become streamlined. This is reflected in the departure from the exceptionalist regulation of somatic gene therapy, now ruled by the general biomedical research framework, or in the increasing acceptance of reproductive technologies, where pre-implantation genetic diagnosis is no longer considered an experimental treatment.

Normative systems cluster a broad range of rules or principles governing and evaluating human behaviour, thereby establishing boundaries between what should be considered acceptable or indefensible actions. They are influenced by local historical, socio-cultural, political and economic factors. Yet, international factors are not without effect. These systems are enacted by a recognised legitimate authority and unified by their purpose, such as the protection of a common good. Often, they encompass set criteria for imposing punitive consequences in the form of civil and criminal sanctions, or moral ones, in the form of social condemnation for deviations. The boundaries normative systems impose are sometimes set arbitrarily, while in other cases these divisions are systematically designed. Thus, they create ethical thresholds that are either invisible or discernible, the latter by making explicit the principles and values underpinning them.

At the same time, normative systems are often classified by their coercive or binding nature, as exemplified in the binary distinction between ‘soft’ and ‘hard’ law. While this categorisation is somewhat useful, it is important to note that ‘hard’ and ‘soft’ laws are not necessarily binary; rather, they often act as mutually reinforcing or complementary instruments. The term ‘soft law’ refers to policies that are not legally binding or are of voluntary compliance, such as those emanating from self-regulatory bodies (e.g. professional guidelines, codes of conduct) or from international agencies (e.g. declarations) without formal empowered mechanisms to enforce compliance, including sanctions. In turn, ‘hard law’ denotes policies that encompass legally enforceable obligations, such as regulations. They are binding on the parties involved and can be coercively enforced by an appropriate authority (e.g. courts).

In the context of HGI, normative systems have opted for either a public ordering model consisting of state-led, top-down legislative approaches, or a private ordering one, which adopts a bottom-up, self-regulatory approach. In between them, there is also a mix of complex public–private models. Normative systems sit on a continuum from permissive, to intermediate, to restrictive, reflecting attitudes towards scientific innovation, risk tolerance and considerations of proportional protections for cherished societal values (e.g. dignity, identity, integrity, equality and other fundamental freedoms). The application of HGE technologies in general, and HGI in particular, is regulated in over forty countries by a complex set of legislation, professional guidelines, international declarations, funding policies and other instruments.Footnote 7 Given their diverse nature, these norms vary in their binding capacity (e.g. legislation vs self-regulation), their breadth and their scope (e.g. biomedical research vs clinical applications vs medical innovation). Notwithstanding all the previously stated heterogeneity in normative models, harmonised core elements are still present across them.

Resistance towards applying HGE in the early stages of development commonly rests on beliefs regarding the moral – and a fortiori legal – status of the embryo, social justice and welfare concerns. Their inheritable capacity, in turn, brings to the fore issues such as intergenerational responsibility and the best interests of the future child, together with concerns regarding their population (e.g. genetic diversity), societal (e.g. discrimination, disability) and political impacts (e.g. public engagement, democracy).Footnote 8 Remaining safety and efficacy challenges are also of chief importance and often cited to invoke the application of the ‘precautionary principle’. Lastly, fears over ‘slippery slopes’ leading to problematic (e.g. non-medical or enhancement) uses and eugenic applications are at the centre of calls for restrictive normative responses.Footnote 9 However, across these systems the foundational principles underpinning a given norm and reflecting a society’s or an institution’s common vision and moral values are not always sufficiently substantiated, if at all articulated. As such, calls for caution to protect life, dignity and integrity, or against eugenic scenarios, appear as mere blanket or rhetorical arguments used for political expediency. As a consequence, the thresholds separating what is deemed an acceptable or indefensible practice remain obscure and leave an ambiguous pathway for resolving the grey areas, mostly present in the transition towards clinical applications.

An unprecedented level of policy activity followed the rapid development of HGE. National and international scientific organisations, funding and regulatory agencies, as well advocacy groups have responded to these advances by enacting ‘soft laws’ appealing for caution, while others have opted for assessing the effectiveness of extant ‘hard’ and ‘soft’ policies.

34.2.1 National Policy Frameworks

Normative systems are often conceptualised using a hierarchy that differentiates between restrictive, intermediate and permissive approaches. Under this model, restrictive policies set up ethical and political boundaries by employing upstream limits – blanket bans or moratoria – to interventions irrespective of their purpose. Pertaining to the application of HGI, restrictive approaches essentially outlaw or tightly regulate most embryo and gamete research. Supported by concerns over degrading dignity and fostering the commodification of potential life, these approaches are based on attributing a moral – personhood or special – status to embryos, and thus advocate for robust governmental controls. Stipulations forbidding ‘genetic engineering on human germ cells, human zygotes or human embryos’Footnote 10 or stating that no ‘gene therapy shall be applied to an embryo, ovum or fetus’Footnote 11 exemplify this model.

While apparently wide-ranging, restrictive policies contain several potential loopholes. Among their major shortcomings is their reliance on research exceptions for therapeutic interventions that are deemed beneficial or life preserving to the embryo, or which are necessary in order to achieve a pregnancy. Terminological imprecision will render a norm inapplicable once a particular intervention can be considered medical innovation or standard medical practice. Similar gaps are present in norms referencing specific technologies and in legal definitions of what constitutes an embryo or a gamete, as all of these could later be outpaced by scientific advances, such as those brought by developments in the understanding of embryogenesis, organoids, and pluripotent stem cells. Indeed, the growth of HGE technologies has brought back to centre stage reflections over what is a reproductive cell. Evocative of the debates that took place during the peak of the stem cell era, the scientific, legal, and moral status of these entities continues to be tested, while at the same time remaining the most prevalent policy benchmark. Whether silent or overtly present in distinct conceptualisations (e.g. developmental capacity or precise time period), criteria defining these early stages of human development are at the core of policies directing the permissibility of certain interventions.

The most favoured policy position is, however, an intermediate one, in which restrictions are applied downstream by banning research with reproductive purposes. Yet, this position considers permissible those practices that are directed at fundamental scientific research activities, such as investigating basic biology or aspects of the methodology itself. Policies adopted in countries such as the NetherlandsFootnote 12 reflect this moderate perspective by outlawing any intervention directed at initiating – including attempts to initiate – a pregnancy with an embryo – or a reproductive cell – that has been subject to research or whose germline has been intentionally altered. Balancing social and scientific concerns, this approach calls for modest governance structures, yet close oversight. Nevertheless, it is at risk of internal inconsistencies and ambiguities, given that norms are often the result of political compromises, which seem necessary in order to achieve policy adoption. Cases in point are those research policies that confer moral and legal status on the human embryo while – at the same time – mandating their destruction after a certain period of time, or norms that are ambiguous regarding the permissibility of clinical translation.

Largely misinterpreted, liberal models do not necessarily postulate a laissez-faire or a blanket unregulated approach. Rather, they provide significant scientific freedom predicated on the strength of their governance frameworks. They seek to promote scientific advances as a tool for social progress. In the context of HGI, liberal policiesFootnote 13 allow for basic and reproductive research while banning clinical implementation. Given that these approaches depend on the effectiveness of their governance structures (e.g. licensing, oversight), with decisions often made on a case-by-case or de facto basis, they are at risk of arbitrary application and system failure. Moreover, when the model rests on self-regulatory approaches devoid of effective enforcement mechanisms, it risks being – or being perceived to be – self-serving and following a market consumer model.

Throughout policy models, the progression from research to clinical purposes is at times blurred in the peculiarities of such approaches. In fact, uncertainty regarding the scope of requirements is particularly present when there are permissible exceptions to norms forbidding HGE in reproductive cells. This is the case of Israel, which outlaws ‘using reproductive cells that have undergone a permanent intentional genetic modification (germline gene therapy) in order to cause the creation of a person’,Footnote 14 yet permits applications for a research licence ‘for certain types of genetic intervention’ provided that ‘human dignity will not be prejudiced’.Footnote 15 Similarly, in France, ‘eugenic practice aimed at organizing the selection of persons’ and ‘alteration(s) made to genetic characteristics in order to modify the offspring of a person’ are banned,Footnote 16 yet at the same time the law exempts interventions aiming ‘for the prevention and treatment of genetic diseases’,Footnote 17 without providing further guidance.

Notwithstanding heterogeneous normative approaches, these models share a common objective: fostering scientific innovation and freedoms while protecting their vision of a common good, mostly expressed in safeguarding human dignity. In order to do so, sanctions and other coercive mechanisms are often adopted as deterrents. Indeed, the global HGE policy landscape is frequently accompanied by some form of sanctions, ranging from criminal to pecuniary and other social penalties. In particular, when such systems are based on legislative models, criminal penalties – substantial imprisonment and fines – are the standard. Upholding criminal law in biomedical research is an exceptional approach, and societies around the world use this tool to send the strongest condemnatory message. Here, as in other fields, criminal law serves as a tool for moral education and for achieving retribution, denunciation, and/or deterrence. But other types of penalties, such as moral sanctions, could be equally powerful. A radical example of the latter is China’s ‘social credit system’,Footnote 18 where research misconduct is sanctioned by a wide umbrella of actors, which can impose an equally wide set of penalties that can reach far beyond the traditional academic setting – from employment to funding, insurance, and banking eligibility. However, employing criminal law can be problematic because it often requires intentionality (mens rea). In the context of HGI, criminal law could create loopholes for downstream interventions when restrictions are limited to certain applications. For instance, German law bans the ‘artificial’ alteration of ‘the genetic information of a human germ line cell’Footnote 19 and the use of such a cell for fertilisation. Yet, such a prohibition would not be applicable ‘if any use of it for fertilisation has not been ruled out.’Footnote 20 Under Canadian legislation, meanwhile, it is an offence to ‘knowingly’ ‘alter the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants.’Footnote 21

Comparably, an issue of shared concern across normative systems is the eugenic potential of HGI. Fears that the ability to alter the germline would infringe dignity and integrity have been widely articulated in policies. These concerns are best illustrated in France, where a new crime against the integrity of the human species has been created, which forbids ‘carrying out a eugenic practice aimed at organizing the selection of persons.’Footnote 22 Similarly, Indian guidelines restrict ‘eugenic genetic engineering for selection against personality, character, formation of body organs, fertility, intelligence and physical, mental and emotional characteristics.’Footnote 23 In the same vein, Belgium outlaws carrying out ‘research or treatments of eugenic nature that is to say, focused on the selection or amplification of non-pathological genetic characteristics of the human species.’Footnote 24 However, these policies provide little guidance for interpretation: when should interventions seeking to repair deleterious gene mutations or confer disease immunity – at the individual or population level – be considered eugenic interventions? Or a non-medical or enhancement practice? Selecting or de-selecting traits, while not an ethically neutral intervention, is not per se eugenics. Therefore, contextualising thresholds and defining the parameters of scientific and ethical acceptability for such interventions are required not only to provide much-needed legal clarity, but also to avoid the perception that these are simply rhetorical calls made for political expediency.

34.2.2 International Policy Frameworks

Significant policy activity followed the refinement of HGE. A wide range of professional organisations, funding and regulatory agencies quickly reacted to these developments with statements reflecting an equally varied range of positions.Footnote 25 A common theme among them is a circumspect attitude, with appeals for the protection of dignity and integrity. While these positions endorse different normative approaches, they all pay particular attention to intergenerational responsibilities in their calls for principled restrictions on reproductive HGI.

Among the earliest international instruments addressing HGI are several non-binding Declarations adopted under the United Nations’ framework. First are UNESCO’s Universal Declaration on the Human Genome and Human Rights and the ensuing report on HGE by its International Bioethics Committee, which conceptualise the genome as the ‘heritage of humanity’ and, in that vein, plead for a moratorium on HGI based on prevailing ‘concerns about the safety of the procedure and its ethical implications.’Footnote 26 Following UNESCO’s efforts, and after a failed attempt to adopt legally binding policy, the United Nations passed the UN Declaration on Human Cloning, calling on states ‘to adopt the measures necessary to prohibit the application of genetic engineering techniques that may become contrary to human dignity.’Footnote 27 The pleas raised by these UN bodies remain a contemporary mandate, calling for concrete measures to implement moral commitments in national legislation, backed by the necessary enforcement measures.

Following the human rights approach enshrined in the abovementioned instruments, two important regional policies were enacted: the Council of Europe’s Oviedo ConventionFootnote 28 and the European Union Clinical Trials Regulation.Footnote 29 These remain, to date, the only legally binding international instruments governing HGI. The Oviedo Convention – as a general rule – explicitly forbids research and clinical interventions seeking to modify the genome, yet it exempts interventions that are ‘undertaken for preventive, diagnostic or therapeutic purposes’ when the aim is ‘not to introduce any modification in the genome of any descendants’.Footnote 30 The EU Regulation, in turn, focuses on gene therapy, banning clinical trials resulting ‘in modifications to the subject’s germ line genetic identity’.Footnote 31 Yet no guidance has been provided to define or interpret the notion of ‘genetic identity’, making it difficult to grasp the full scope and breadth of such provisions.

Actors from different fields and parts of the worldFootnote 32 have been quite prolific in articulating their positions with regard to HGE and in conveying how they envisage – or not – a path forward to reproductive HGE.Footnote 33 Even in China, after the birth of the HGE twins, funding and professional organisations swiftly publicised their positions,Footnote 34 aligning with mainstream ones. Indeed, all of these statements share several common threads. First, they all endorse a guarded approach to HGI, calling for temporary halts or moratoria rather than advocating permanent bans. The scope and breadth of such restrictions vary, from positions that seek to prevent clinical applications but allow reproductive research, to those that condemn any use. Second, a prospective approach also characterises them: while recent developments might render prevention a futile goal, precautionary measures fostering scientific integrity are still relevant. Third, they are largely grounded in scientific concerns, given the current inability to fully assess HGE’s safety and efficacy; notably, societal considerations focusing on protecting human rights are also prevalent. Lastly, appeals for public engagement are widespread, including calls for participatory, inclusive and transparent dialogue in order to empower stakeholders, inform policy-making efforts, and foster trustworthiness.Footnote 35

34.3 The Road to Harmonisation

Reactionary responses often follow the advent of scientific developments deemed disruptive to notions of integrity and dignity, as with HGI. A concomitant result of the debates over genetic engineering techniques that started decades ago is an overall fraught policy landscape that generally seeks to condemn such interventions but is devoid of global governance. Nevertheless, these debates have steered a degree of policy convergence.

The plethora of social debates and policies emerging in the context of HGE demonstrates that, across the globe, policy harmonisation remains a laudable objective. These efforts seek convergence in fundamental ethical safeguards for research participants – and future patients – coupled with criteria for regulating the application of these technologies. Throughout the world, and with diverse levels of success, governance mechanisms have been established empowering authorities to grant licences, conduct ethical oversight and enforce compliance. However, for these requirements to be effective, consistent implementation is needed in a manner that respects scientific integrity and freedoms.

Harmonisation is therefore apparent in convergent criteria that bar or condemn HGI. Yet, in some cases, these positions are only transitory by virtue of established moratoria or other temporary precautionary measures; they apply only so long as extant safety and other technical concerns remain. In fact, some responses seem to be based solely on our current state of knowledge, as exemplified below:

Although our report identifies circumstances in which genome interventions of this sort should not be permitted, we do not believe that there are absolute ethical objections that would rule them out in all circumstances, for all time. If this is the case, there are moral reasons to continue with the present lines of research and to secure the conditions under which heritable genome editing interventions would be permissible.Footnote 36

Additional examples of the latter are found in Singapore policy forbidding HGI due to ‘insufficient knowledge of potential long-term consequences’Footnote 37 and pending ‘scientific evidence that techniques to prevent or eliminate serious genetic disorders have been proven effective’.Footnote 38 The same rationale underpins Indian policy restricting ‘gene therapy for enhancement of genetic characteristics (so called designer babies)’ based on ‘insufficient information at present to understand the effects of attempts to alter/enhance the genetic machinery of humans’.Footnote 39

Despite diverse normative systems and societal contexts, the world seems to be disposed towards harmonisation.Footnote 40 Which factors help explain this phenomenon? Policy transfer and emulationFootnote 41 might be factors supporting policy growth and the emergence of global convergence. However, such consensus is still quite precarious, as best exemplified by the level of international involvement and the strength of the response to recent developments.Footnote 42 Scepticism over the stability of an emerging or actual consensus is based on the fact that policy responses thus far are grounded in distinct rationales. While they all call for ‘action’ and ‘caution’, they legitimately differ in the significance they attach to, and their understanding of, such terms. As we have seen, in some instances a cautious approach has been translated into voluntary moratoria – the temporary halting of certain types of clinical intervention – or into promoting public engagementFootnote 43 so as to allow policy to reflect changes in scientific knowledge or societal values. In other instances, precautionary responses – under vigilant oversight – purposely do not deter or outlaw research, given the need for evidence to quantify risks and benefits. Finally, in other circumstances, caution has signified enacting blanket legal prohibitions.Footnote 44

Conceptual misunderstandings between the notions of harmonisationFootnote 45 and standardisation are often present.Footnote 46 Appeals for standardisation frequently fail to recognise that it entails the creation of uniform legal and ethical standards, which are not only largely unachievable but also undesirable, particularly with respect to HGE, where sovereignty and moral diversity must be respected. HarmonisationFootnote 47 processes do not seek uniformity as the end result; rather, they entail substantial correspondence between the fundamental ethical principles present across the continuum of normative responses. They aim to foster cross-jurisdictional collaboration and thus governance. Still, harmonisation is not without challenges, particularly with regard to criteria for evaluating policy convergence and for assessing variations in the regulation of fundamental ethical requirements, where thresholds for determining the significance of a given policy can vary. The latter is of great importance, as variations could potentially undermine the integrity of ethical safeguards or societal values.

34.4 Conclusion

For the sceptics, attempts to meaningfully engage a global community of stakeholders to adopt binding policy and governance will inevitably end in ‘pyrrhic’ victoriesFootnote 48 – as in the past. History seems to be full of examples to support this position.Footnote 49 Indeed, thus far the inability to form a representative community to reconcile conflicting interests – economic and otherwise – and to prevent egregious actions has taught us that condemnation of a particular intervention is, on its own, futile for preventing abuses in the absence of morally binding obligations and ‘actionable’ regulatory frameworks. For the optimists, the level of societal engagement, emergent policy convergence and swift condemnatory responses following the most recent and appalling gross violations of human rights and scientific standardsFootnote 50 are grounds to believe that a level of policy harmonisation remains a realistic endeavour. Crisis provides the opportunity to significantly alter the direction and strength of a given policy system, including reshaping governance mechanisms and reconfiguring the power of stakeholders. It therefore has the ability to transform more than policy; it can spur real change in collective behaviour. In the aftermath of this crisis, the central lesson must be that without defining and achieving societal consensus and governance at both the local and global levels, no policy system will ever be completely effective.

35 Towards a Global Germline Ethics? Human Heritable Genetic Modification and the Future of Health Research Regulation

Sarah Chan
35.1 Introduction

Human germline genetic modification (HGGM) has been the subject of bioethical attention for over four decades. Recently, however, two areas of biomedical technology have revived debates over HGGM. First, the development of ‘mitochondrial replacement therapy’ (MRT) represents, some have argued, a form of HGGM, since it affects the genetic makeup of the resulting children in a way that may be passed on to future generations. Second, the advent of genome editingFootnote 1 technologies has, for the first time, made heritable genetic modification of humans a genuinely practicable possibility: one that was dramatically and prematurely realised when, in November 2018, it was announced that two genome-edited babies had already been born.Footnote 2 Amid renewed scrutiny of human genome editing, emerging clinical uses of MRTs, and the increasing globalisation of science and of health technology markets, the question of how HGGM can and should be regulated has gained new salience. Moreover, having been contested for so long, and in relation to such fundamental concepts as ‘human dignity’ and ‘human nature’, the issue of germline modification has assumed a significance beyond its likely direct consequences for human health.

The current ‘regulatory moment’ with respect to HGGM thus perhaps represents something of a watershed for the global governance of science more generally. Further, both the potential impacts of the technology, and the moral and political power of the human genome as a metaphor through which to negotiate competing visions of human nature and society, require us to consider these issues at a global scale. This also creates an opportunity for critical exploration of novel approaches to regulation.

Following on from the previous chapter’s analysis, this chapter considers broader lessons we might learn from examining the challenges of HGGM for the future of health research regulation. HGGM, I suggest, is a contemporary global regulatory experiment-in-progress through which we can re-imagine the regulation of (in particular, ethically contentious) science and innovation: what it should address, what its purposes might be, and how, therefore, we should go about shaping global scientific regulation. Through examining this, I argue that such regulation should focus on processes and practices, rather than objects; and that its utility lies more in mediating these processes than in establishing absolute prohibitions or bright lines. Especially in the case of emerging and controversial technologies, regulation plays an important role in negotiating ideas of responsibility within the science–society discourse. In so doing, it also affects, and should be shaped by attention to, the global dynamics of science and the consequences for global scientific justice.

35.2 Germline Technologies: A Brief Overview

Earlier techniques used for genetic modification were too inefficient and impractical to allow the creation of genetically engineered humans.Footnote 3 The ‘game-changing’ aspect of genome editing technologiesFootnote 4 is their ability to achieve more precise gene targeting, with much higher efficiency, in a wide range of cell types including human embryos. The best-known genome editing technology is the CRISPR/Cas9 system, the use of which was first described in 2012.Footnote 5 Following the publication of the CRISPR/Cas9 method, it rapidly became clear that HGGM needed urgently to be reconsidered as a real possibility. In 2015, reports of the first genome editing of human embryosFootnote 6 spurred scientists to call for restrictions on the technology,Footnote 7 and prompted further investigation of the associated ethical and policy issues by various national and international groups.Footnote 8

Notably, many reports, including those of the US National AcademiesFootnote 9 and the UK’s Nuffield Council on Bioethics,Footnote 10 concluded that heritable human genome editing could be acceptable, provided certain conditions were met. These conditions included further research to ensure safety before proceeding to clinical application, and sufficient time for broad and inclusive engagement on governance. Neither of these conditions had been fulfilled, however, when He Jiankui announced to the supposedly unsuspecting worldFootnote 11 that he had already attempted the procedure.

Somewhat before genome editing technologies came onto the scene, MRTs were already being developed as a treatment for certain forms of mitochondrial disease.Footnote 12 In line with the possibility foreseen in the 2008 amendments to the Human Fertilisation and Embryology Act, and following an extensive consultation process, in 2015 it became legal for MRT to be licensed in the UK.Footnote 13 In the USA, the Institute of Medicine likewise concluded that MRT could be acceptable,Footnote 14 though it is not currently legal there. Pre-empting the regulatory process, however, in September 2016 John Zhang, an American scientist, announced that he had already performed the first successful clinical use of MRT,Footnote 15 using embryos created in the USA and shipped to Mexico for intra-uterine transfer.

While reams have been written on the ethics of HGGM, the most pressing questions with respect to HGGM no longer concern whether we ought to do it at all, but how; where; by and for whom; and with (or without) what authority it will be done. This is not a claim about the inadequacy of regulation in the face of technological inevitability but a statement of where things currently stand ethically and legally, as well as scientifically. MRT is legal and being carried out in a number of countries; heritable genome editing, while not yet legalised, has been deemed ethically acceptable in principle. One way or another, HGGM is becoming a reality; regulation can guide this process. To do so effectively will require careful consideration of what is regulated and how, with what justifications, and with whose participation.

35.3 What Are We Regulating? What Should We Regulate?

As pointed out in the previous chapter (see Isasi, Chapter 34 in this volume) regulation can serve to articulate normative concerns but does not always do so coherently or consistently. In setting out to regulate ‘germline modification’ or ‘the human genome’, what concerns might be entangled?

The term ‘germline modification’ is itself subject to interpretation. Technically speaking, ‘the germline’ can encompass any cell that is part of the germ lineage, including gametes and embryos; thus a prohibition on modifying the germline might be taken to preclude any use of genome editing in human embryos, including for basic research. Early calls for a moratorium favoured this highly restrictive approach: some argued that because ‘genome editing in human embryos … could have unpredictable effects on future generations … scientists should agree not to modify the DNA of human reproductive cells’.Footnote 16 This, however, ignores that genome editing of human embryos is only likely to have direct effects on future generations if those embryos become people! Context, in other words, is key.

Moreover, novel technologies may potentially render ‘the germline’ an impossibly broad category. It is now possible to reprogramme somatic cells to pluripotent cells,Footnote 17 and to turn pluripotent cells into gametes.Footnote 18 Any cell could therefore in theory become part of the germline, meaning any genetic modification of a somatic cell could potentially be a ‘germline’ modification. It is not, however, ‘the germline’ in the abstract, but the continuity or otherwise of particular, modified or unmodified germlines, that should be our concern.

The ‘human genome’ is likewise a nebulous concept: does it refer to an individual’s genome, or the combined gene pool of humanity? References to the human genome as the basis of ‘the fundamental unity of all members of the human family’ and ‘the heritage of humanity’Footnote 19 seem to suggest a collective account, but it is hard to see how the ‘collective genome’ could be regulated. Indeed, one might argue that the human genome, in the sense of the collective gene pool of humanity, would not be altered were genome editing to be used to introduce a sequence variant into an individual human genome that already exists within the gene pool.Footnote 20

Even the term ‘modification’ raises questions. MRT involves no change to DNA sequence, only a new combination of nuclear and mitochondrial DNA; the Institute of Medicine report, however, recommended that its use be limited to the creation of male children, to avoid this ‘modification’ being transmitted. Yet this combination of nuclear and mitochondrial DNA might also have arisen by chance rather than design, naturally rather than via MRT. The same can be said about genome editing to introduce existing genetic sequence variants. In regulating technology, we should consider whether the focus should be on outcomes, or the actions (or inactions) leading to them – and why.

The difficulty of regulating the ‘germline’ or the ‘human genome’ highlights the problem of regulating static objects rather than the dynamic relations and practices through which these objects move. In regulating a ‘thing’ in itself, the law tends to fix and define it, thereby rendering it inflexible and unable to evolve to match developments in technology (see McMillan, Chapter 37 in this volume). Especially in the area of biomedicine, both the pace of research and the propensity of science to discover new and often unexpected means to its ends can result in overly specific legislative provisions becoming rapidly obsolete or inapplicable.

Examples of this can be seen in previous legislative attempts to define ‘the human embryo’ and ‘cloning’. The original Human Fertilisation and Embryology Act (1990) s1(1)(a) defined an embryo as ‘a live human embryo where fertilisation is complete’. The development of somatic cell nuclear transfer technology, the process by which Dolly the sheep was cloned, immediately rendered this definition problematic, since embryos produced via this technique do not undergo fertilisation at all. Addressing this legal lacuna necessitated the hurried passage of the Human Reproductive Cloning Act 2001,Footnote 21 before the eventual decision of the House of Lords brought ‘Dolly’-style embryos back within the Act’s purview.

A similar situation occurred in Australia, before the passage of uniform federal laws: in the state of Victoria, the embryo was defined as ‘any stage of human embryonic development at and from syngamy’,Footnote 22 leaving unclear the status of embryos produced via nuclear transfer, in which syngamy never takes place. In South Australia, meanwhile, the law prohibited cloning, but defined ‘cloning’ specifically as referring to embryo splitting, again leaving nuclear transfer embryos unregulated.

These examples illustrate the pitfalls of over-determining the objects of regulation. Attempts to regulate HGGM, though, may suffer not only from being too specific in defining their objects, but also from being too vague. For example, references to ‘eugenic practices’ in national and international legal and policy instruments leave open the question of what actually constitutes a eugenic practice (see Isasi, Chapter 34 in this volume). Without any processes in place to determine how such terms should be interpreted, their inclusion tends to obfuscate rather than clarify the scope of regulation.

Similar examples abound: the EU Clinical Trials Directive declared: ‘No gene therapy trials may be carried out which result in modifications to the subject’s germ line genetic identity’,Footnote 23 a position further affirmed by the replacement Clinical Trials Regulation.Footnote 24 But what exactly is ‘germ line genetic identity’? UNESCO’s Declaration opposes ‘practices that could be contrary to human dignity, such as germ-line interventions’,Footnote 25 but does not indicate how or why germline interventions ‘could be contrary to dignity’ – making it difficult to determine whether they actually are.

The requirements of being neither too specific nor too vague may seem a Goldilocks-style demand with respect to defining the appropriate targets of regulation. What this illustrates, however, is that regulation is important for the processes and practices it establishes, as much as the definitions of objects to which these pertain.

35.4 Research or Reproduction? The Importance of Context

In regulating HGGM, our concern should be not whether a modification is in principle heritable, but whether it is in fact inherited. The context, both social and scientific, in which the modification procedure is carried out therefore matters a great deal. Attempting to regulate this solely in terms of permitting or prohibiting particular technologies would be extremely limiting.

Instead, we should consider how our concerns can be addressed by regulating practices with respect to assisted reproduction; and relations between healthcare practitioners, healthcare systems, patients, research participants and the market. Such practices and relations are key to the processes by which future generations, and our relationships with them, are created. Regulation of this sort can be effective at transnational as well as intra-national level: cross-border surrogacy is another situation where particular practices and relations, not just the technology itself, create ethical concerns – India’s regulatory response represents an example of correlative attempts to address them.

Focusing our regulatory attention on processes and relations also allows us better to distinguish desirable versus undesirable contexts for the application of technology. Basic research on embryos never destined for implantation is very different to the creation of genetically modified human beings; regulation ought accordingly to enable us clearly to separate these possibilities. This might be done in various ways, as can be seen by considering UK and USA examples.

The UK’s Human Fertilisation and Embryology Act to some extent regulates embryos relationally and in terms of practices: what may be done to or with an embryo depends on the relationships among actors connected to the embryo, their relationships with the embryo itself, and the embryo’s own relational context, in terms of its origin, ontology, and history. By creating the category of ‘permitted embryos’ as the only embryos that may be implanted, the Act effectively separates reproductive use from other applications.Footnote 26

In comparison, US federal regulation affecting HGGM incorporates aspects of both object-focused and contextual regulation. Laws prohibiting federal funding of human embryo research apply to research with any and all embryos, regardless of origin, context, or intended destination.Footnote 27 When it comes to genetically modifying those embryos, however, context matters: via the FDA’s jurisdiction, current laws effectively prevent any clinical applications involving modified embryos,Footnote 28 while basic research falls outside this domain.

Looking ahead to the possible futures of HGGM, what sorts of purposes and processes might we be concerned to regulate? Many of the worries that have been expressed over HGGM can be addressed via regulation (in the broad sense) of processes across different contexts. For example, the dystopian vision of a society in which parents visit a ‘baby supermarket’ to choose their perfect designer child is quite different to one in which the healthcare system permits parents to access reproductive interventions that have been accepted as safe (enough) for particular purposes within defined contexts. This being the case, it is far from evident that our response to these possible futures should be to forgo exploring the potential benefits of gene therapy for fear of ‘designer babies’: the two possibilities may be mediated via the same technologies but involve very different contexts, relationships, roles and practices. These can be differently regulated; and regulation in turn can shape which practices emerge and how they evolve.

One possible regulatory position, often motivated by a ‘slippery slope’ argument, is that we should prohibit all embryo genome editing research, in order to avoid the extreme dystopian futures it might one day enable. As argued above, however, context is key, and focusing on technology alone fails to take account of this. Restricting research today in order to prevent one of the distant possible futures it might enable also forecloses any beneficial outcomes it might produce. To prohibit something that is prima facie acceptable merely because it may make possible the unacceptable is to draw the line in the wrong place.

Although concerns over technological development often invoke the ‘slippery slope’, this metaphor ignores the fact that science is not a single, uni-directional process with a defined endpoint. Instead, research and the applications it might enable are more like a ‘garden of forking paths’,Footnote 29 a labyrinth of infinite possibilities. Regulatory slippery-slopeism, for fear of one of those possibilities, would foreclose the remainder.

That said, it might be true that what seems unacceptable from our present perspective will, from halfway down the slope, be less so. Studies of public opinion show greater acceptance of novel genetic technologies among younger demographics who have grown up in the age of IVF and genetic screening; and it was suggested with respect to MRT that this might be a slippery slope to other forms of HGGM, such as genome editingFootnote 30 – though this would be difficult to prove with certainty.

The response to slippery-slope fears is often to try to draw a ‘bright line’. Any lines we might draw, however, are liable to suffer from the above-mentioned problem of either over-determination or vagueness. Some distinctions are themselves less clear than they might appear. For example, a commonly held position in relation to genome editing is that it should be used only for therapy, not for enhancement; but as much bioethical scholarship has revealed, the line between therapy and enhancement is not so easily defined. In a similar way, however, the definition of ‘serious’ disability, illness, or medical condition, for which the HFEA permits pre-implantation embryo testing, is subject to interpretation; yet its provisions have nonetheless been effective, because there is a regulatory process for legitimate decision-making in cases of ambiguity.

Moreover, as the scientific and ethical landscape shifts, bright lines may eventually become grey areas: for example, the fourteen-day rule on embryo research was a regulatory (if not ethical) bright line for decades, yet is now once again the subject of discussion (see McMillan, Chapter 37 in this volume). We should not assume that we are currently at the pinnacle of ethical understanding such that the only way is down: what we now perceive as ‘slipping’ might in future generations be understood as moral progress. Regulation on the slippery slope might sometimes involve drawing lines, but these should be seen as pragmatic necessities, not moral absolutes.

One supposed ‘bright line’ in HGGM that may not prove so clear is the somatic / germline distinction: is this really as legible or significant as it has been made out to be? Publics might not think so: recent engagement initiatives have shown widespread acceptance for therapeutically oriented genome editing, including heritable HGGM,Footnote 31 while in the wake of He’s attempt at creating genetically modified babies, crossing this supposed ethical Rubicon, the projected public backlash does not seem to have manifested. Moreover, in considering the possible consequences and balance of risks involved in somatic versus germline modification, we might argue that the two are not as dissimilar as might be assumed.Footnote 32 Neither then in regulation should the germline be assumed to be a bright line in perpetuity: as for the fourteen-day rule, its importance as a line lies in the processes invoked when considering whether to cross it.Footnote 33

35.5 Regulation, Responsibility and Cooperative Practice

Given the above, what is the justification for regulating HGGM? Clearly, it is not absolute protection of the germline or genome itself: nothing stops someone visiting a plutonium refinery and exposing their germ cells to ionising radiation, or wearing too-tight underwear, and then subsequently engaging in reproduction via natural means, even though both of these processes are likely to result in heritable genetic changes. Nor would we consider it appropriate to attempt to regulate such activities.Footnote 34 What, then, is regulation doing here?

It may seem peculiar to allow reckless, random genetic modification by individuals while prohibiting the much more controlled and deliberate use of directed technology. But the aim of regulation is not simply, or not only, to prevent certain factual outcomes. A shift in language points to what is at stake here: before the era of genome editing, HGGM was considered ‘too risky’; now, instead, ‘it would be irresponsible’.

The question then becomes what ‘responsibility’ requires and how it should be enacted. This highlights an important role of regulation in relation to risk, specifically in determining how we understand risk and responsibility when something goes wrong. Regulatory responsibility is not about assigning blame to individual actors, be they scientists or clinicians, but instead about deciding as a society how much and what kind of risk we are collectively willing to take responsibility for. That is, in regulating to permit something, we are implicitly accepting a certain degree of accountability for the practice and for its consequences. Even as scientific responsibility has been theorised in ways that go beyond the individual scientist to the collective community,Footnote 35 wider social responsibility for science requires a consideration of the interplay between social norms, regulation, and scientific practice.

Regulation therefore can, and should, also facilitate cooperative practices among different actors. At the Second International Summit on Human Genome Editing, David Baltimore described He Jiankui’s attempts to create genome-edited children despite all scientific and ethical advice to the contrary as representing ‘a failure of self-regulation’.Footnote 36 But this is only necessarily true if we understand the primary purpose of regulation as being the absolute prevention of particular outcomes. Moreover, for ‘the scientific community’ to assume all of the blame for failing adequately to police its members ignores the function of states and the existence of state regulation, while arrogating what is arguably a disproportionate level of self-governance to scientists.

In fact, as Chinese bioethicists and legal scholars were quick to assert,Footnote 37 there were various existing regulatory instruments that were breached by He’s work. Genome-edited babies may have been created, but the real test of regulation is what happens next: how regulators and policy-makers (broadly categorised) respond, to this case specifically and in terms of regulating HGGM more generally, will determine whether regulation can be judged to have succeeded or failed. Notably, the imposition of a prison sentence for HeFootnote 38 signals that, although scientific convention and existing oversight mechanisms may not have been sufficient deterrent beforehand, the criminal law was, nonetheless, capable of administering appropriate post hoc judgment. While the criminal law in regulating science may serve a partly symbolic function in assuring social licence for morally contentious research,Footnote 39 its value in this role must be backed by a willingness to invoke its ‘teeth’ when needed: He’s punishment aptly demonstrates this.

As this case illustrates, regulation is not just about absolute prevention, but involves mediating a complex set of relationships. Rather than viewing scientific self-regulation as a law unto itself and a separate domain, we should consider how scientists can effectively contribute to the broad project of regulation, understood as a combination of law and policy, process and practice, at multiple levels from individual to community, local to global.

35.6 Global Regulation and Scientific Justice

A common theme in discussions of the regulation of HGGM is that decisions about these technologies need to involve global participation. In order to determine what ‘global participation’ ought to consist of, it is worth considering why a global approach is appropriate.

Some have suggested a global approach is required because in affecting the human genome, HGGM affects all of us. George Annas and colleagues, for example, write that ‘a decision to alter a fundamental characteristic in the definition of human should not be made … without wide discussion among all members of the affected population … Altering the human species is an issue that directly concerns all of us’.Footnote 40 Yet the sum of the reproductive choices being made by millions of individual humans in relation to ‘natural reproduction’ is vastly greater than the potential effect of what will be, in the short term at least, a tiny proportion of parents seeking to use genome editing or MRTs.

In fact, many areas of science and policy will have far-reaching consequences for humanity, some probably much more so and more immediately than HGGM. Environmental policy, for example, and the development of renewable energy sources are likely to have far greater impact on the survival and future of our species, and affect far more people now and in the future, than HGGM at the scale it is likely initially to be introduced.

Doom-laden predictions overstating the possible consequences of HGGM for ‘the human race’ or ‘our species as a whole’ tend to demand precaution, in the sense of a presumption against action, as a global approach. Such calls have rhetorical force and appeal to emotion, but rest on shaky premises. Overblown claims that in altering the genome we are somehow interfering with the fundamental nature of humanity are a form of genetic essentialism in themselves, implying that what makes us worthy of respect as persons and what should unite us as a moral community is nothing but base (literally!) biology.

Nevertheless, the political history of genetics has reified the moral and metaphorical power of the ‘germline’ concept, as something quintessential and common to all humans. This history has seen heredity, ‘the germline’, and ‘the genome’ used as a tool both for division and for unification, from eugenics to the Human Genome Project,Footnote 41 imbuing genetics with a significance well beyond the mere scientific. While the biological genome as the ‘heritage of humanity’ and the basis of human dignityFootnote 42 does not stand up well to analysis, the political genome, as object of multiple successive sociotechnical imaginaries, has acquired tremendous power as a regulatory fulcrum.

It is therefore genome alteration as a social and political practice, not its direct biological consequences, that we should be concerned to regulate. Beyond just requiring a global approach, this creates an opportunity to develop one. HGGM represents a socio-techno-regulatory ‘event horizon’, the significance of which owes much to the long historical association of genetics with politics, and which aligns with a broader trend towards engaging publics in discourse over science with the aim of democratising its governance. The immediate consequences of HGGM for human reproduction are likely to be fairly small-scale, and while opening up the possibility of clinical applications of genome editing will no doubt influence the direction of the field, as MRTs are already doing for related technologies, human genome editing is just one area of the vast landscape of scientific endeavour. Yet, in providing both opportunity and momentum to produce new approaches to global regulation, HGGM may have much wider implications for the broader enterprise of science as a whole.

An important feature of any attempt to develop a global approach to regulation is that it should account for and be responsive to transnational dynamics. This requires attention to equity in terms of scientific and regulatory capacity, as well as the ability to participate in and develop ethical and social discourses over science. When it comes to emerging and contested technologies such as embryo research, cell and gene therapies, and now genome editing, countries with more advanced scientific capacity have tended also to lead in developing regulation, and to dominate ethical discussion. The resulting global regulatory ‘patchwork’ creates the possibility of scientific tourism, which in turn combines with uneven power over regulatory and ethical discourse to reproduce and increase global scientific inequities. This can be seen, for example, in the consequences of Zhang’s Mexican MRT tourism and its effects on global scientific justice.Footnote 43

Another feature of the variegated regulatory landscape for controversial technologies is concern, among countries with high scientific capacity but restrictive regulation, about remaining internationally competitive, when researchers in other countries may take advantage of lower regulatory thresholds to forge ahead. This was a prominent factor in the embryo research debates of the early 2000s. Comparing the expressions of concern over human embryo genome editing research in China – about ‘the science … going forward before there’s been the general consensus after deliberation that such an approach is medically warranted’Footnote 44 – with such research in the UK being described as ‘an important first … a strong precedent for allowing this type of research’,Footnote 45 it seems that international dynamics and ‘keeping pace’ may also be a consideration here. Dominant actors may seek to control this pace to their advantage by re-asserting ethical and regulatory superiority, in the process reinforcing existing hegemonies.

The significance of the present ‘regulatory moment’ with respect to HGGM is that it offers opportunities to disrupt and re-evaluate these hegemonies, across geographic, cultural, disciplinary, political, and epistemic boundaries. This should include critical attention to the internalised narratives of science: in particular, the problem of characterising science as a competitive activity. The race to be first, the scientific ‘cult of personality’ and narratives of scientific heroism (or in the case of He Jiankui, anti-heroism) may serve to valorise and promote scientific achievement, but also drive secrecy and create incentives for ‘rogue science’. What alternative narratives might we develop, to chart a better course?

35.7 Conclusion: Where Next for Global Germline Regulation?

Seeking a new paradigm for global health research regulation will require a conscious effort to be more inclusive. We need to examine what constitutes effective engagement in a global context and how to achieve this, across a plurality of cultural and political backgrounds, varying levels of scientific capacity and science capital, and different existing regulations.

Consider, for example, the contrasts that might emerge between the UK and China, where expectations over discourse, governance and participation differ from those within which UK public engagement has been theorised and developed.Footnote 46 Distinct challenges will arise for engagement in Latin America, where the politics of gender and reproduction overtly drive regulation, and where embryo research and reproductive technologies are heavily contested. Nor is the discourse uniform across countries: discussing embryo genome editing is more challenging where IVF itself is still controversial. In approaching these issues, we also need to be aware of the potential negative impacts the discussion may have, for example on women’s access to reproductive health services.

Furthermore, we need to engage not just with publics and not just ‘the scientific community’ but with scientific communities. As we recognise in the field of engagement that there is not just a single unitary Public but a wide range of publics with different perspectives, values and beliefs, we need also to acknowledge pluralism of values, practices and motivations among scientists. In thinking about the governance of science, we must consider what factors might influence these and how, as an indirect form of regulating research. Some attention has already been given to the potential of actors such as journal publishers and funders to shape scientific culture and influence behaviour; further research might more clearly delineate these evolving regulatory roles, their limitations and how they work in tandem with ‘harder’ forms of regulation such as criminal law.

At the time of writing, the various proposed approaches to global governance of human genome editing have yet to coalesce into a single solution. The He incident triggered renewed calls for a moratorium.Footnote 47 While the scientific academies behind the International Summits were already considering aspects of regulation, the process was probably likewise hastened by these events, resulting in the formation of an International Commission; the WHO has launched its own inquiry;Footnote 48 and numerous statements have been published over the past five years,Footnote 49 with many initiatives ongoing and proposals issued.

It seems clear that a moratorium is unlikely to emerge as the answer, despite reactions to He’s transgression. In the first place, it is far from clear that a moratorium would have prevented He’s experiment: the publicly expressed consensus of scientific communities was already that it should not be done. A moratorium without enforcement mechanisms would have been no more effective than the existing guidelines; and any symbolic value a moratorium might have would be rapidly eroded if it were not respected. Other proposals include, as per the WHO, a registry to promote greater information and transparency and to facilitate the involvement of wider scientific players, including funders and publishers; a global observatoryFootnote 50 is another proposed mechanism to enable governance. With any of these, we will still need to attend to the dynamics of discourse: which interests are represented, and how.

With that in mind, perhaps a single solution is not what we should be seeking. The proliferation of initiatives aimed at determining principles and frameworks for acceptable governance of HGGM may lead some to wonder whether we really need so many cooks for this broth, and to raise objections regarding potential inconsistencies when multiple bodies are charged with a similar task. Yet, even among the number of bodies that currently exist, the full range of diverse views has not been represented. The existence of multiple parallel discourses is not necessarily a bad thing: more can be better if it allows for broader representation. The meta-solution of integrating these is the challenge; approaching regulation as dynamic, constituted by practices and concerned with processes and relations, may be a way to meet it.

36 Cells, Animals and Human Subjects Regulating Interspecies Biomedical Research

Amy Hinterberger and Sara Bea
36.1 Introduction

The availability of new cellular technologies, such as human induced pluripotent stem cells (iPSCs), has opened possibilities to significantly ‘humanise’ the biology of experimental and model organisms in laboratory settings. With greater quantities of genetic sequences being manipulated and advances in embryo and stem cell technologies, it is increasingly possible to replace animal tissues and cells with human tissues and cells. The resulting chimeric embryos and organisms are used to support basic research into human biology. According to some researchers, such chimeras might be used to grow functional human organs for transplant inside an animal like a pig. These types of interspecies biomedical research confound long-established regulatory and legal orders that have traditionally structured biomedicine. In contexts where human cells are inserted into animal embryos, or in the very early stages of animal development, regulators face a conundrum: they need to continue to uphold the differences in treatment and protections between humans and animals, but they also want to support research that is producing ever-more intimate entanglements between human and animal species.

In research terms, human beings fall into the regulatory order of human subjects protection, a field of law and regulation that combines elements of professional care with efforts to preserve individual autonomy.Footnote 1 Animals, however, belong to a very different regulatory order and set of provisions relating to animal welfare.Footnote 2 To that end, animals have been used, and continue to be used, for understanding and researching human physiology and disease where such experiments would be unethical in humans. Researchers can do things to animals, and use animal cells, tissues and embryos, in ways that are very different from those permitted for human subjects and human cells, tissues and embryos. Traditionally, ethical concerns and political protection have focused on the human subject in biomedical research, with subsequent provisions addressing animal welfare and human embryos. However, such divisions are now under immense strain and are undergoing substantial revision. This chapter investigates these transformations in the area of interspecies mammalian chimeras. We ask: what forms of regulation and law are drawn on to maintain boundaries between human research subjects and experimental animals in interspecies research? What kinds of reasoning are explicitly and implicitly used? What kinds of expertise are invoked and legitimised?

We will begin with a brief overview of chimeric organisms in the context of new cellular technologies. We will then explore significant national moments and debates in the UK and USA that highlight the tacit presumptions of regulatory institutions, examining where disagreement and contestation have arisen and how resolutions were reached to accommodate interspecies chimeras within the existing regulatory landscape. Through these national snapshots, the chapter will explore how human–animal chimeras become objects of regulatory controversy and agreement depending on the concepts, tools and materials used to make them. The final sections of the chapter offer some reflections on the future of chimera-based research for human health which, as we argue, calls for a reassessment of regulatory boundaries between human subjects and experimental animals. We argue that interspecies research poses pressing questions for the regulatory structures of biomedicine, especially the capacity of health research regulation systems to simultaneously care for and realign the human and animal vulnerabilities at stake within interspecies chimera research and therapeutic applications.

36.2 From Dish to Animal Host

Chimeric organisms, containing both human and non-human animal cells, sit at the interface between different regulatory orders. The ‘ethical choreography’ that characterises health research regulation on interspecies mixtures is densely populated with human and animal embryos, pluripotent stem cells, human subjects and experimental animals.Footnote 3 Much depends on the types of human cells being used and the species of the host animal that will receive them, along with the age of the animal and the region of the body to which the human cells are delivered. Regulation thus includes institutional review board approval for using human cells from living human subjects. There also needs to be approval from animal care and use committees that assess animal welfare issues. Depending on the country, there might also be review from a stem cell oversight committee, which must deliberate on whether the insertion of human cells into an animal may give it ‘human contributions’.Footnote 4 There are significant national differences in regulatory regimes, making for diverse legal and regulatory environments at both national and international levels, because countries regulate human and animal embryos, stem cells and animal welfare very differently.Footnote 5

In the biological sciences, ‘chimera’ is a technical term, but it does not necessarily refer to one specific entity or process. Generally speaking, chimeras are formed by mixing together whole cells originating from different organisms.Footnote 6 It is a polyvalent term and can refer to entities resulting from both natural and engineered processes.Footnote 7 Historians of science have explored how species divides, especially between humans and other animals, are culturally produced and historically situated both inside and outside the laboratory.Footnote 8 The regulatory practices we explore in this chapter are not separate, but rather embedded in these larger structures of cultural norms about differences – and similarities – between humans and animals. As the life sciences continue to create new types of organisms, there are currently many groups and regulatory actors in different countries involved in producing definitions and forms of regulation for new human–animal mixtures. As we will see below, it is precisely in the debates over the naming and classification of these new entities that the regulatory boundary work between the human and animal categories is illuminated.

36.3 Animals Containing Human Material

In the following two sections, we will explore national snapshots from advisory and regulatory bodies in the UK and USA. We will examine how they are confronting issues of responsibility and jurisdiction for boundary-crossing entities that cannot easily be siphoned into the traditional legal and regulatory orders of either human or animal. We will show that while each country’s response, via report or guidelines, treats the human–animal division as the primary one to maintain in research practices, each provides different solutions to the problems raised by interspecies mammalian chimeras. These two sections of the chapter thus illuminate how interspecies chimeras confound long-standing regulatory divisions in health research and challenge the law’s capacity simply to encompass new entities.

In 2011, the UK’s Academy of Medical Sciences released what is regarded as the first comprehensive recommendations to regulate the creation and use of chimeric organisms, called Animals Containing Human Material. The central conclusion of the Academy’s report is that research that uses animals containing human material is likely to advance basic biology and medicine without compromising ethical boundaries. The report itself was part of a much longer history of deliberation around the status of the human embryo in the UK where specific forms of human–animal mixtures have been proposed, debated and, in the end, legislated. The UK regulatory landscape is significant in this respect as no other nation has written into law human–animal mixtures – which in UK law are called ‘human admixed embryos’. The term human admixed embryo was introduced in 2008 amendments to the Human Fertilisation and Embryology Act 1990 (HFE Act). While it was the ‘cybrid embryo’ debate that became the most controversial and well-known related to these new legal entities, the legislation outlines a number of different kinds of human and animal mixtures that fall under its remit, including chimeric embryos. According to the Act, a human admixed embryo is any embryo that ‘contains both nuclear or mitochondrial DNA of a human and nuclear or mitochondrial DNA of an animal but in which the animal DNA does not predominate’.Footnote 9

A 2008 debate in the House of Lords over the revised HFE Act and the term ‘human admixed’ highlights the classification conundrums of how boundaries between human and animal are drawn. The Parliamentary Under-Secretary of State for the Department of Health explained, regarding the term ‘human admixed embryo’: ‘It was felt that the word “human” should be used to indicate that these entities are at the human end of the spectrum of this research’.Footnote 10 Responding to this notion of the spectrum, the Archbishop of Canterbury responded that:

‘the human end of the spectrum’ seems to introduce a very unhelpful element of uncertainty. Given that some of the major moral reservations around this Bill … pivot upon the concern that this legislation is gradually but inexorably moving towards a more instrumental view of how we may treat human organisms, any lack of clarity in this area seems fatally compromising and ambiguous.Footnote 11

The lack of clarity referred to by the Archbishop, which may be ‘fatally compromising’, was what the Animals Containing Human Material report sought to address.Footnote 12 Clarity, in this case, is provided by carefully considered boundaries and robust regulation, to remove elements of uncertainty. In the UK, the Human Fertilisation and Embryology Authority (HFEA) is the central body responsible for addressing proposals for embryo research. It is the body that licenses human embryonic stem cell research and oversees IVF treatment and the use of human embryos.

Violations of the licensing requirements of the HFEA are punishable under criminal law, which is both a literal and symbolic marker of respect for the conflicting and contested views on embryo research in the UK.Footnote 13 However, the HFEA only has jurisdiction over human embryos (not animal embryos). Research on animal embryos is governed and regulated by an entirely different body, the Home Office, which regulates the use of animals in scientific procedures through the Animals (Scientific Procedures) Act 1986 (ASPA).

Assessing whether the human or the animal DNA predominates may be harder with chimeric research embryos, since their cellular make-up may change over time. Thus, it can become unclear whether their regulation should fall within the remit of the HFEA or the Home Office. Any mixed embryo judged to be ‘predominantly human’ is regulated by the HFEA and cannot be kept beyond the fourteen-day stage, whereas currently in the UK an animal embryo, or one judged to be predominantly animal, is unregulated until the mid-point of gestation and can in principle be kept indefinitely. Whether or not an admixed embryo is predominantly ‘human’ is, according to the Academy’s report, an expert judgement. The report recommended, however, that the Home Office and the HFEA, two government bodies that had not previously been connected, work together to create an operational interface at the boundaries of their new areas of responsibility.

The Academy report purifies, through both language and regulatory approach, the ambiguities raised by chimeric organisms by trying, as far as possible, to compartmentalise research into human or animal regulatory orders. The term ‘animals containing human material’ itself highlights this goal. According to the report, animals containing human material are animals first and foremost. In this respect, the report places the regulatory responsibility for these new chimeric entities squarely in the already regulated domain of animal research. As a result, the UK remains a highly regulated but permissive research environment for different types of chimera-based research, and is the only country to formally write into law the protection of biological chimeras containing human and animal cells.

36.4 Assessing ‘Human Contributions’ to Experimental Animals

Unlike the UK, the USA has no formal legal regulation of interspecies chimera research. The 2005 National Academy of Sciences (NAS) Guidelines continue to be the cornerstone for the governance of research involving embryos, stem cell biology and mammalian development. The Academy is not a governmental agency, nor does it have enforcement power, but the guidelines are viewed as binding by governmental and institutional authorities. The NAS guidance acts as the principal reference on the recommendations applicable to research using interspecies chimeras involving human embryonic stem cells and other stem cell types.

Stem Cell Research Oversight (SCRO) committees are the localised bodies that put the NAS Guidelines into action. During the stem cell controversies in the USA, the Academy recommended that all research involving the combination of human stem cells with non-human embryos, fetuses or adult vertebrate animals must be submitted not only to the local Institutional Animal Care and Use Committee (IACUC) for review of animal welfare issues, but also to an SCRO committee for consideration of the consequences of the ‘human contributions’ to any non-human animal.Footnote 14 Thus, SCRO committees need to meet to discuss any experiment where there is a possibility that human cells could contribute in a major organised way to the brain or reproductive capacities in particular.

In late September 2015, the National Institutes of Health (NIH) in the USA declared a moratorium on funding chimeric research in which human stem cells are inserted into very early embryos from other animals. However, as in other instances where federal research monies were withdrawn from controversial research – e.g. human embryonic stem cell lines – such research continued, but with private monies. The moratorium was met with scepticism and criticism from researchers working in this domain, who, in a letter to Science, argued that such a moratorium impeded the progress of regenerative medicine.Footnote 15 Following a consultation period in 2016, the NIH announced that it would replace the moratorium with a new kind of review for specific types of chimera research, including experiments where human stem cells are mixed with non-human vertebrate embryos and studies that introduce human cells into the brains of mammals – except rodents, which would be exempt from extra review.

As the UK’s predicament over ‘predominance’ demonstrated, it is currently difficult to predict how and where human cells will populate another species when cells are added at the embryonic or very early fetal stages of life. This predicament was recently characterised as the problem of ‘off-target’ humanised tissues in non-human animals.Footnote 16 Currently, in the USA, animal embryos containing human cells are only allowed to develop for a period of twenty-eight days – four weeks. As we explained above, animal embryos fall under a separate legal and regulatory structure from human embryos, which traditionally have been allowed to develop for fourteen days, though this number is subject to increasing debate (see McMillan, Chapter 37 in this volume).Footnote 17 In practice, this means that assessments of human contributions to an animal embryo are restricted to counting human cells in an animal embryo. Current published research puts the human contribution to the host animal embryo at 0.01–0.1.Footnote 18 This is primarily because a chimeric embryo is only allowed to gestate for twenty-eight days.

In 2017, privately funded researchers in the USA published findings from the first human–pig embryos. While labs had previously created human–animal chimeras, such as mice transplanted with human cancer cells, immune systems or even brain cells, this new experiment was unique because the researchers placed human stem cells – which can grow to become any of the different types of cells in the human body – into animal embryos at the earliest stages of life. Broadly, the making of these human–pig chimeras involved collecting pig eggs, which were fertilised in vitro and cultured to the blastocyst stage – an early phase of embryonic development. Human induced pluripotent stem cells were then pipetted into the developing pig embryo, which had been genetically modified. That embryo was then transferred into a female pig and left to develop for twenty-eight days. After twenty-eight days the animal was sacrificed, and its entire reproductive tract was removed and studied to see where the human cells had developed and grown in the embryo.

This study, and others like it, raised ethical concerns relating to ‘off-target’ humanised tissue, particularly with respect to an animal’s central nervous system (brain) and reproductive capacities. In the excerpt below, the study leader explains how these concerns about ‘off-target’ humanisation can be handled in the development of human–pig chimeric embryos:

… we must pay special attention to three types – nerves, sperm and eggs – because humanizing these tissues in animals could give rise to creatures that no one wants to create … We can forestall that problem by deleting the genetic program for neural development from all human iPSCs before we inject them. Then, even if human stem cells managed to migrate to the embryonic niche responsible for growing the brain, they would be unable to develop further. The only neurons that could grow would be 100 percent pig.Footnote 19

Scientists are developing a variety of techniques to ensure ‘on-target’ organ complementation, so that a fully human organ can be grown inside an animal, and to avoid any ‘off-target’ problems that could potentially confer human qualities on the non-human experimental animal.

Possible ethical breaches relating to human research subjects and chimeras have been intensely discussed by scholars; however, until recently, concerns over animal welfare have largely taken a back seat in the regulatory and ethical debates over interspecies chimeras. When we turn from the regulation of stem cells to the regulation of the organism – or animal – a new set of concerns opens up. The overwhelming emphasis on avoiding risky humanisation by measuring and counting the number of human cells in a non-human animal can obfuscate the crucial discussion about how animal welfare staff might monitor changes in the behaviour and attributes of experimental chimeric animals. For example, bioethicist Insoo HyunFootnote 20 has argued that people tend to assume that the presence of human cells in an animal’s brain might enhance it above its typical species functioning. This ‘anthropocentric arrogance’ is, he points out, completely unfounded.Footnote 21 Why, he asks, ‘should we assume that the presence of human neural matter in an otherwise nonhuman brain will end up improving the animal’s moral and cognitive status?’Footnote 22 The much more likely outcome of neurological chimerism, he suggests, is not a cognitive humanisation of the animal but ‘rather an increased chance of animal suffering and acute biological dysfunction and disequilibrium, if our experience with transgenic animals can be a guide’.Footnote 23

Animal care and use committees are less interested in cell counts and more interested in whether potential ‘human contributions’ may cause unnecessary pain and distress in an animal. Further, the question of how ‘human contributions’ might be measured in the behaviour of non-human animals is difficult and requires expert knowledge related to the species in question. If highly integrated chimeras are allowed to develop, the role of animal husbandry staff will be crucial in assessing and monitoring the behaviours and states of experimental animals – thus, animal behaviour and animal welfare knowledge may be a significant emerging component of measuring ‘humanisation’ in health research regulation.

36.5 Shifting Regulatory Boundaries between Cells, Human Subjects and Experimental Animals

Stem cell science is a domain that is continually reinventing and reconceiving the human body and its potentials, and the futures of the science and its regulation are not easy to predict or assess. However, it is in this context of ambiguity and change that we situate our discussion. First, theoretically and conceptually, chimera-based research has given rise to new living entities, from ‘animals containing human material’ to ‘human contributions to other animals’, that challenge assumed regulatory boundaries, rights and protections provided for human subjects in contemporary societies. By tracing out how the categories human and animal are enacted in health research regulation, we have shown that interspecies chimeras require a double move on the part of regulators and researchers: animals must be kept animals, and humans must be kept humans. From this vantage point, we can see that interspecies chimeras are not so marginal to health research regulation. The regulatory deliberations they elicit require re-examining the most basic and foundational structures of contemporary biomedical research – both human subjects research regulation, and animal care and welfare.

In health research regulation, animals are often defined in law; however, what constitutes or defines a human subject is generally not written down in law or legislation. What constitutes the human is, almost always, taken for granted or tacit regulatory knowledge. The national snapshots we examined here encompass Euro-American political and cultural contexts where regulatory containers, such as the human research subject, are shown to be potentially variable – or at least, drawn into question. For example, deliberations in the USA over what constitutes a ‘human contribution’ to another animal bring to light how the human subject is not a universal given, but a legal and regulatory designation that has the potential to be made and remade. Scientists, policy-makers and regulators approach the categories human and animal differently across cellular and organismal levels, showing that these categories do not precede health research regulation but are actively co-produced within it.

Second, on the technical front, our review of current scientific practice shows how life scientists increasingly work according to the consensus that life is a continuum in which species differences do not travel all the way down to the level of cells and tissues, thus destabilising assumed boundaries between species and raising new questions about cell integration and containment across species. Third, politically, we are witnessing increasing agitation around both human and animal rights, in a context where bioscience is taking a significant role in the public sphere by informing debates not only about what life is, but also about what life should be for.Footnote 24

The stem cell techniques we have discussed above were first developed not with human materials but with animal materials. While dilemmas over the humanisation of other animals may appear to be new, these technical possibilities only exist because of previous animal research, such as the creation of rat–mouse chimeras. For example, rat embryonic stem cells were injected into a mouse blastocyst carrying a mutation that blocked the development of the mouse pancreas, resulting in mice with a pancreas entirely composed of rat cells. These rat–mouse chimeras developed into adult animals with a normal, functional pancreas, demonstrating that xenogeneic organ complementation is achievable.Footnote 25 Recent media coverage of the first human–pig interspecies chimera can conceal from view these longer and less discussed histories of biological research. To come to grips with the regulatory dilemmas elicited by interspecies chimeras, then, we must be attentive to biomedical research itself and the many kinds of living organisms used to advance scientific knowledge and to develop therapeutic applications for human health problems.

As we have shown, the USA established new private committees whose members must assess whether an experiment might give ‘human contributions’ to experimental animals, whereas governance in the UK clearly defines and names new legal and regulatory categories such as ‘human admixed embryo’ or ‘animals containing human material’. In contrast to the UK’s new legal and regulatory containers for boundary-crossing biological objects, the US phrasing ‘human contributions’ is suggestive of a spectrum. Chimeric organisms embody new articulations of the plasticity of biology and the recognition that assumed species differences do not travel all the way down to the molecular level. Consequently, explicit deliberations over regulation and governing procedures are also pushed and pulled in new directions. This remodelling of boundaries in biological practice and state governance has consequences for humans and animals alike.

36.6 Conclusion: Realigning Human and Animal Vulnerabilities

With the advent of new and sophisticated forms of human and animal integration for the study of disease, drug development and the generation of human organs for transplant, keeping the human separate from the animal, in regulation, becomes increasingly difficult. The disruptions posed by interspecies chimeras give rise to growing conundrums as disparate regulatory actors try to accommodate chimeric entities within existing health research regulation structures that enact a clear human/animal division.

In Europe and North America, the regulation of therapeutically oriented biomedicine has historically been split into two vast and abstracted categories: human and animal. Numerous legal and regulatory processes work to disentangle human material, bodies and donors from organisms and parts categorised as animal. Regulators and policy-makers thus find themselves in a tricky situation, needing to sustain the regulatory and legal estrangement between humans and other animals while facilitating basic and applied research on human health – such as the kind described above – that relies on the incorporation of human and animal material in new biological entities.

Our explorations above suggest that health research regulation will need to be sufficiently reflexive about the limits of boundaries that reify the foundational human/animal division, and flexible enough to allow a re-consideration of classificatory tools and instruments to measure the extent and consequences of prospective interspecies chimera research. If human/animal chimeras prove to be an efficient route to engineering human organs, as opposed to genetically modified pigs or organoids,Footnote 26 then the humanisation of experimental animals will likely develop further. An ethical and effective health research regulation system will need to be simultaneously responsive to, and protective of, the human and animal vulnerabilities at stake.

In practice, this implies that regulatory efforts could be directed at fostering and maintaining dynamic collaborative relationships between regulatory actors that often work separately, such as stem cell research oversight committees, human subjects committees, and animal care and use committees. Establishing efficient communicative pathways across disparate regulatory authorities and institutional bodies will demand a mutual disposition to consider and incorporate divergent and emerging concerns. These collaborative relationships should also be invested in the development of novel regulatory tools capable of addressing the present and coming challenges raised by interspecies research, both at the level of the cell and at the level of the organism. This means going beyond existing instruments that measure ‘human contributions’ at the cellular level, in order to monitor ‘on-target’ human organ generation as well as ‘off-target’ proliferation of human tissue in experimental animals. Collaboration between regulatory actors that have traditionally operated separately would also need to integrate the knowledge and expertise of animal behaviour and welfare professionals, such as animal husbandry staff.

A learning health research regulation system that operationalises these multi-level collaborative relationships across regulatory actors, complemented by the involvement of animal care experts, would be better prepared to engage with the disruptions that interspecies chimera research poses to existing regulatory mechanisms, actors, relations and tools. The direction and increasing traction of stem cell biotechnologies clearly signpost that the development and growth of human health applications of interspecies chimera research requires a gradual intensification of entanglements between animals and humans. Health research regulation will thus need to reflect on the ethical and practical consequences for the vulnerabilities of experimental animals and human research subjects, and to address the shifting boundary between experimental animals and human subjects in biomedicine, to make room for the new life forms in the making.

37 When Is Human? Rethinking the Fourteen-Day Rule

Catriona McMillan
37.1 Introduction

The processual, rapidly changing nature of the early stages of human life has provided recurring challenges for the way in which we legally justify the use of embryos in vitro for reproduction and research. When the latter was regulated under the Human Fertilisation and Embryology Act 1990 (as amended) (‘the HFE Act’), not only did regulators attempt to navigate what we should or should not do at the margins of human life, but they also tried to navigate the various thresholds that occur in embryonic and research processes. In doing so, the response of law-makers was to provide clear-cut boundaries, the most well-known of these being the fourteen-day rule.Footnote 1

This chapter offers an examination of this rule as a contemporary example of an existing mechanism in health research that is being pushed to its scientific limits. This steadfast legal boundary, faced with a relatively novel challenge,Footnote 2 requires reflection on appropriate regulatory responses to embryo research, including the revisitation of ethical concerns and an examination of the acceptability of carrying out research on embryos for longer than fourteen days. The discussion below does not challenge the fourteen-day rule, or research and reproductive practices in vitro more generally per se, but rather explores the ways in which law could engage with embryonic (and legal) processes through attention to thresholds (as a key facet of these processes). This framing has the potential to justify extension, but not without proper public deliberation and a sound scientific and ethical basis. The deliberation and revisitation – not necessarily the revision – of the law is the key part of this liminal analysis.

To begin, this chapter gives an overview of how the fourteen-day rule came into being, before going on to summarise the research, published in 2016, that has given rise to new discussions about the appropriateness of the rule, twenty-seven years after it first came into force. Thereafter, the rest of the chapter builds on the theme of ‘processes’ from Part I of this volume, and asks, briefly: what might we gain from thinking beyond boundaries in this context? Moreover, what might doing so add to contemporary ethical, legal and scientific discourse about research on human embryos? I argue that recognising the inherent link between processes and the regulation of the margins of human life enables us to ask more nuanced questions about what we want from future frameworks, for example, ‘when is human?’,Footnote 3 a question that legal discussion often shies away from. Instead I will argue that viewing regulation of embryo research as an instance of both processual regulation and regulating for process has the potential to disrupt existing regulatory paradigms in embryo research, and enable us to think about how we can, or perhaps whether we should, implement lasting frameworks in this field.

37.2 Behind the Fourteen-Day Rule: The Warnock Report, and a ‘Special Status’

The fourteen-day time limit on embryo research is of global significance. It is one of the most internationally agreed rules in reproductive science thus far,Footnote 4 with countries such as the UK, the USA, Australia, Japan, Canada, the Netherlands and India all upholding the rule in their own frameworks for embryo research.Footnote 5 The catalysts for the implementation of the rule into many of these public policies are often identified as two key reports:Footnote 6 the US report on embryo research of the Ethics Advisory Board to the Department of Health, Education and Welfare,Footnote 7 and the UK report of the Warnock Committee of Inquiry into Human Fertilisation and Embryology.Footnote 8 This chapter will focus on the latter.

In 1984, the Warnock Committee published the Report of the Committee of Inquiry into Human Fertilisation and Embryology, also known as ‘the Warnock Report’. This deliberative, interdisciplinary process was a keystone to law-making in this area in the UK. As a direct result of these deliberations, the use and production of embryos in vitro is governed by the HFE Act. This Act, which stands fast over thirty years later, brought legal and scientific practice out of uncertainty – due to the lack of a statutory framework for IVF and research pre-1990 – to a new state of being where embryos can be used, legally, for reproductive and research purposes under certain specified circumstances.

The Warnock Report was quite explicit that it was not going to tackle questions of the meaning of human ‘life’ or of ‘personhood’. Instead, it articulated its remit as ‘how it is right to treat the human embryo’.Footnote 9 The Report examined the arguments for and against the use of human embryos for research. Here, the Committee noted the plethora of views on the embryo’s status, evidenced by the submissions received prior to the Report. They discussed each position in turn, before concluding that while the embryo deserves some protection in law, this protection should not be absolute. Notably, the source of this protection is not entirely clear from the Report. It cited the state of the law at the time, which afforded some protection to the embryo, but not absolute protection.Footnote 10 Nonetheless, one can glean from their recommendations that this protection derives – at least in part – from embryos’ membership of the human species.

It is important to note that the Warnock Report did not explicitly answer the question of ‘when does life begin to matter morally?’, but rather considered the viewpoints submitted and ‘provide[d] the human embryo with a special status without actually defining that moral status’.Footnote 11 Thus, in the HFE Act’s first iteration,Footnote 12 not only did regulators attempt to navigate what we should or should not do at the margins of human life, but also the rapidly changing nature of those margins. The regulatory response to this has been to provide clear-cut boundaries surrounding what researchers can and cannot do, in reference to embryos in vitro, the most well known of these being the subject of this chapter, the fourteen-day rule, as contained in s3(4) of the HFE Act. The rule reads as follows:

  3. Prohibitions in connection with embryos.

    (1) No person shall bring about the creation of an embryo except in pursuance of a licence.

    (3) A licence cannot authorise—

      (a) keeping or using an embryo after the appearance of the primitive streak,

      (b) placing an embryo in any animal,

      (c) keeping or using an embryo in any circumstances in which regulations prohibit its keeping or use,

    (4) For the purposes of subsection (3)(a) above, the primitive streak is to be taken to have appeared in an embryo not later than the end of the period of 14 days beginning with the day on which the process of creating the embryo began, not counting any time during which the embryo is stored. [emphasis added]

This section of the HFE Act also introduced the subsection that famously embodies the Warnock Report’s abovementioned ‘compromise position’, which affords human embryos some ‘respect’. It placed a clear boundary on the process of research: it is illegal to carry out research on an embryo beyond fourteen days, or after the primitive streak has formed, whichever occurs sooner. After that, the embryo cannot be used for any other purpose, and must be disposed of. In other words, as discussed further below, once an embryo is decidedly created and/or used for research purposes, it may only ever be destroyed at the end of the research process.

Why fourteen days? The rule is based upon the evidence given to the Warnock Committee that it is around this stage that the ‘primitive streak’ tends to develop. It is also the approximate stage after which the embryonic cells can no longer split to produce twins or triplets.Footnote 13 It was thus felt that this stage was morally significant, reinforced by the belief that this was the earliest point at which the central nervous system could begin to form. This stage also marks the beginning of gastrulation, the process by which cell differentiation occurs. At the time, the rule was seen as a way to avoid, with absolute certainty, anyone carrying out research on those in the early stages of human life with any level of sentience or ability to experience pain.Footnote 14, Footnote 15 In this way, as a reflection of the Committee’s recommendations, embryos in vitro are often described as having a ‘special status’ in law; not that of one with personhood – attained at birth – but still protected in some sense. This in and of itself may be described as recognising the processual; it is implicit in the Committee’s efforts to replicate a somewhat gradualist approach that recognises embryonic development – and any ‘significant’ markers within it.

While many would agree that a ‘special status’ in law results from this rule, the word ‘status’ – or any other word of similar meaning – does not appear at all in the HFE Act (as amended) in reference to the embryo. It is clear, however, that the recommendations of the Warnock Report, made in light of its proposal for a ‘special status’, are reflected in this steadfast piece of legislation to this day, operationalised through provisions such as the fourteen-day rule.

Despite their contentions (see above), the Committee can arguably be understood as implicitly having answered the question of ‘when life begins to matter’, by allowing research up to a certain stage in development.Footnote 16 In other words, they prescribed that ‘as the embryo develops, it should receive greater legal protection due to its increasing moral value and potential’.Footnote 17 This policy, known as the gradualist approach, is somewhat in line with the Abortion Act 1967, which affords more protection to the fetus as it reaches later stages in developmentFootnote 18 (although in other ways these laws do not align at all). In doing so, while not explicit, the Act captures the processual aspect of embryonic/fetal development.

It seems that the human embryo hovers between several normative legal categories, such as ‘subject’ and ‘object’.Footnote 19 While it clearly does not have a legally articulated ‘status’ under the HFE Act, it occupies a legal – and for some people, moral – threshold between these categories, which we can see in the special status it has been given in law. Thus, while there is no explicit legal status of the embryo, what we have, legally, is still something. By virtue of giving the embryo in vitro legal recognition, with attached allowances and limits, it arguably has a status of sorts. Bearing in mind that the law adopted most of the Warnock Report’s recommendations, its status may indeed be described as ‘special’, as the Report prescribed. It is ‘not nothing’,Footnote 20 yet not a ‘person’: it is the quintessential liminal entity, betwixt and between. From what we have seen, its status remains ‘special’, the meaning of which is unclear except that it is afforded ‘respect’ of sorts. Beyond that, we can glean little from domestic law regarding the extent or nature of this status. It does not have an explicit legal status, but, as some argue, it may have one implicitly.Footnote 21 This raises the question: what does it mean to have ‘legal status’? Is it enough to be protected by law? Recognised by law? Entitled to something through law? These are the types of questions we may want to consider for any amendments, or new frameworks, going forward.

37.3 Beyond Fourteen Days?

As we have seen, the fourteen-day rule is the key legal embodiment of the embryo’s decidedly ‘special status’ and the application of a legal and moral boundary at the earliest stages of human life. Yet throughout the incremental amendments to the HFE Act (e.g. the HFE Act 2008), there has been little enthusiasm among policy-makers for revisiting, let alone revising, the rule. For some, the latter did not necessarily matter, as, for twenty-seven years, this limit was ‘largely theoretical’;Footnote 22 up until very recently, no researcher had been able to culture an embryo up to this limit.

In early 2016, for the first time, research published in NatureFootnote 23 and Nature Cell BiologyFootnote 24 reported the successful culturing of embryos in vitro for thirteen days. With the possibility of finding out more about the early stages of human life beyond this two-week stage, calls have been made to revisit the fourteen-day rule.Footnote 25 Why? It appears that some valuable scientific knowledge may lie beyond this bright legal line in the sand, within this relatively unknown ‘black box of development’. For example, it would enable the study of gastrulation, which begins when the primitive streak forms (around fourteen days).Footnote 26

Yet what might all of this mean for compromise, respect, and the resulting ‘special’ legal status of the embryo? If this rule were to change, would the embryo still be ‘special’? Moreover, do we believe this matters? These questions should be considered if we revisit the rule; it seems that discussions surrounding the fourteen-day rule are part of a broader issue that needs to be addressed. There is a very strong case for public and legal discourse on the meaning and ‘special’ moral status of the embryo in UK law. One question that we may want to revisit is: if we value the recommendations of the Warnock Committee (‘special’, ‘respect’, etc.), do they still have resonance with us today? For example, one might ask: even if the ‘special’ status has a justifiable source, how can we ‘value’ it in practice except by avoiding harm? It is arguable that the ‘special respect’ apparently afforded in law seems meaningless in practical terms.Footnote 27

It is difficult to enable a ‘middle position’ between protection and destruction in practice; we either allow embryos to be destroyed, or we do not. For some, the embryo’s ‘special status’ is thus, arguably, purely rhetorical; it does not oblige us to ‘act or refrain in any way’.Footnote 28 However, compromise is arguably more nuanced than allowing or disallowing the destruction of embryos. Time is an essential component of legal boundaries within the 1990 Act (as amended). Either one can research the embryo for less than fourteen days, or one cannot. This means that we cannot research the embryo for any longer period of time, for example thirty days or sixty days. Rhetoric aside, the concept of the ‘special status’ is still very powerful and has acted as a tool to ‘stop us in our tracks’ with regard to research on embryos. It is arguably a precautionary position, which reflects that we as a society afford a degree of moral and legal value to embryos, and thus the special status caveat requires us to proceed cautiously, to reflect, to justify fully, to revisit, to revise and to continue to monitor as we progress scientifically. If we did not value the embryo at all, then we would have carte blanche to treat it however we wished. If that were the case, research at 30 or 60 or 180 days would not present a problem. Therefore, the embryo’s special status need not be an all-or-nothing brake on research, nor a green-light position. It thus means something in that sense, however (admittedly) indeterminate that meaning may be. The ‘special status’, then, is – in a way – not a ‘compromise’, but what I would term a legal and ethical comfort blanket.Footnote 29

This is not to criticise the language used by the Committee, however. The Warnock Committee’s emphasis on ‘compromise’ was made in the name of moral pluralism. In other words, it emerged as the Warnock Committee’s way of navigating the uncertainty and ambivalence surrounding how to treat embryos in vitro, legally. This is not to say that the poles of opinion between which this compromise was set have changed. The rule, a reflection of this ‘compromise’, was, in many ways, a new boundary and threshold akin to its historical counterparts (such as quickening). Yet if we decide that it is worth reconsidering this boundary and whether we want to change it, how can – or should – we rethink it? If we believe that the process of embryonic and scientific development is a relevant factor in determining an appropriate regulatory response, what might further focus on these key points in transition bring to contemporary debates?

The rest of this chapter argues that if we want to think beyond the boundary of the fourteen-day rule, one way of framing the discussion is by recognising the inherent link between processes and the regulation of the margins of human life. When considering frameworks, the latter enables us to ask questions surrounding not only ‘what is human?’, but ‘when is human?’ Asking ‘when?’ – used here as an example – allows us to re-focus not only on embryonic development as process, but also on questions surrounding whether and where we need to place different boundaries within that process.

37.4 Revisiting the Rule

Throughout the regulation of the early stages of human life, law has changed to reflect the changing boundaries of what is ‘certain’ and ‘uncertain’.Footnote 30 Where new uncertainties arise,Footnote 31 some old ones will always remain.Footnote 32 We have thus moved, in some ways, from one type of uncertainty to another when it comes to embryo regulation, and this is because what we are dealing with is an inherently processual entity that in and of itself has not changed. In other words, the complex and relatively uncertain nature of the embryo, the stage of human life at which development occurs at its fastest pace, continues to cause widespread ambivalenceFootnote 33 about how it is right to treat it.

When considering whether to alter the rule, multiple thresholds – such as the threshold for humanity – within embryonic and research processes come into consideration.Footnote 34 As we have seen, there was a strong nod to the gradualist approach in the thinking behind the fourteen-day rule – an approach that recognises that human development is a process. The Report did not set out to answer ‘when is human?’, but pointed to an important stage in the process of becoming human when limiting research to fourteen days. As we have seen, the Warnock Committee used ethical deliberation and the evidence available at the time to suggest this boundary beyond which research could not pass. A key part of this deliberation, although not referred to in terms of ‘thresholds’ per se, was particular (perceived) biological thresholds, such as the threshold for experiencing pain – which they associated with the start of the primitive streak – and thresholds for being able to cause harm therein. One might say that if the fourteen-day rule is a limit or a boundary, then something that we may want to consider – if we deem it appropriate to revisit this rule – is the presence of thresholds therein, and the importance that we want to attribute to those thresholds.Footnote 35 While being in a liminal state or space connotes occupying a threshold, a key part of the liminal process is moving out of the liminal state, i.e. over or beyond that threshold. Thinking about liminal beings such as embryos in such terms highlights the presence of these boundaries and their potential for impermanence, especially in a legal context. For example, if we decide it is appropriate to consider extending the rule, these types of moral boundaries (i.e. harm, or sentience, etc.) may very well come into play again, for example if a ‘twenty-eight-day rule’ is proposed. Further, talk of extending the rule has already given rise to discussion surrounding another kind of boundary: would extending the rule be of adequate benefit to science? Some argue that there is much more that we can learn from extending the limit.Footnote 36 Yet what amount of benefit is enough to justify extension? Therein lies the threshold: a threshold of the reasonable prospect of sufficient scientific ‘benefit’.

If the crossing of thresholds within biological and research processes has been implicitly important for us thus far, what might we learn from this? With regard to the legal processes that we already have, attention to process – and therefore thresholds – highlights the following:Footnote 37

  • Once an embryo created in vitro passes the threshold of being designated a ‘research’ embryo, it cannot (legally) be led back past that threshold, and it can only come out of this process as something to be disposed of after being utilised.

  • In contrast, there are many thresholds that embryos are led through on a ‘reproductive’ path, for example: (non)selection after PGD, freezing and unfreezing, implantation, gestation, etc. – indeed, this includes the possibility of crossing the threshold from ‘reproduction’ to ‘research’ if, say, PGD tests suggest non-suitability for reproduction.

  • When the progenitors of embryos are making decisions regarding what to do with their surplus embryos, they may cross various thresholds themselves, e.g. whether to donate or not, either for research or to others seeking to reproduce.

Regarding the last point, persons/actors are, of course, an essential part of these processes and this should not be lost in any renewed discussions surrounding the rule. Considerations for the actors around embryos can be different at each threshold, i.e. we may consider different sets of factors depending on which threshold any particular embryo is at. For example, at the third threshold above, many factors come into consideration for donors, including their attitudes towards research and their feelings towards their surplus embryos and their future (non)uses.Footnote 38 Moreover, each threshold is coupled with clear boundaries, be it the fourteen-day rule for ‘research’ embryos, or rules around what may or may not be implanted for ‘reproductive’ embryos.

Thresholds – or indeed boundaries – are not necessarily ‘bad’ here, per this work’s analysis. Indeed, both moral and legal thresholds are of crucial importance. Rather, I suggest that we should be alive to their presence and their place within the broader network – of actors, silos, etc. – so that we can ask questions about the conditions under which we want those thresholds to be crossed.Footnote 39 As I have argued elsewhere: ‘Considering the multiplicity, variability, and in many ways, subjectivity of these thresholds might enable us to regulate in a more flexible and context-specific way that allows us to recognise the multiplicity of processes occurring within the framework of the [HFE Act].’Footnote 40 Attention to process cannot necessarily tell us how any revisitation might turn out.

It is important to revisit the intellectual basis for any law, but especially so in a field where technology and science advance so rapidly. If we do not, we cannot ask important questions in light of new information, for example: what or when is the threshold for ‘humanity’? Or whether indeed we want ‘when is human?’ to factor into how we regulate embryos, as it has done in the past. Discussing questions such as these would be a great disturbance to the policy norm of the past twenty-seven years or so, which has stayed away from these types of questions. But I argue that within disturbance we can find resolution, through proper legal and ethical deliberation and public dialogue. In other words, it would not be beneficial for us to shy away from disruptions for fear of practice being shut down, as these disturbances present us with a chance to feed the experiences and lessons of those involved in – and who benefit from – research back into regulatory, research and – eventually – treatment practice.

37.5 Conclusion

While the time limit on embryo research has undoubtedly been a success on many fronts, if it is to remain ‘effective and relevant’,Footnote 41 we must be open to revisiting it, with proper deliberation and public involvement, with openness and transparency.Footnote 42 Not only that, but when doing so, we must not shy away from asking difficult questions if law is to adapt to contemporary research.

The latter part of this chapter has argued that a focus on process has the potential to disrupt existing regulatory paradigms in embryo research and enable us to think about how we can, or perhaps whether we should, implement lasting frameworks in this field. The discussion above did not weigh the pros and cons of the fourteen-day rule, or of research and reproductive practices in vitro more generally, but rather briefly explored one of the ways in which law could engage with embryonic (and legal) processes through attention to thresholds (as a key facet of these processes).

Overall, while the framing offered here has the potential to justify an extension of the fourteen-day rule, this cannot be done without proper public deliberation. This deliberation would need to be informed by sound scientific evidence and, perhaps most importantly, subject to scrutiny regarding prevalent moral concerns over pain and sentience.Footnote 43 This analysis challenges us to deliberate, and to revisit – not necessarily to revise – the law surrounding this longstanding rule. Responsive regulation, per the title of this section of the volume, need not respond to every ‘shiny new thing’ (e.g. advances in research, such as those discussed in this chapter), but should be reflexive (and reflective) so that HRR does not become stagnant.

38 A Perfect Storm: Non-evidence-Based Medicine in the Fertility Clinic

Emily Jackson
38.1 Introduction

In vitro fertilisation (IVF) did not start with the birth of Louise Brown on 25 July 1978. Nine years earlier, Robert Edwards and others had reported the first in vitro fertilisation of human eggs,Footnote 1 and before Lesley Brown’s treatment worked, 282 other women had undergone 457 unsuccessful IVF cycles.Footnote 2 None of these cycles was part of a randomised controlled trial (RCT), however. After decades of clinical use, it is now widely accepted that IVF is a safe and effective fertility treatment, but it is worth noting that some studies have suggested that the live birth rate among couples who use IVF after a year of failing to conceive naturally is not, in fact, any higher than the live birth rate among those who simply carry on having unprotected sexual intercourse for another year.Footnote 3

Reproductive medicine is not limited to the relatively simple practice of fertilising an egg in vitro, and then transferring one or two embryos to the woman’s uterus. Rather, there are now multiple additional interventions that are intended to improve the success rates of IVF. Culturing embryos to the blastocyst stage before transfer, for example, appears to have increased success rates because by the five-day stage, it is easier to tell whether the embryo is developing normally.Footnote 4

Whenever a new practice or technique is introduced in the fertility clinic, in an ideal world, it would have been preceded by a sufficiently statistically powered RCT that demonstrated its safety and efficacy. In practice, large-scale RCTs are the exception rather than the norm in reproductive medicine.Footnote 5 There have been some large trials, and meta-analyses of smaller trials, but it would not be unreasonable to describe treatment for infertility as one of the least evidence-based branches of medicine.Footnote 6

In addition to an absence of evidence, another important feature of reproductive medicine is patients’ willingness to ‘try anything’. Inadequate NHS funding means that most fertility treatment is provided in the private sector, with patients paying ‘out of pocket’ for every aspect of their IVF cycle, from the initial consultation to scans, drugs and an ever-increasing list of ‘add-on’ services, such as assisted hatching; preimplantation genetic screening; endometrial scratch; time-lapse imaging; embryo glue and reproductive immunology. The combination of a poor evidence base, commercialisation and patients’ enthusiasm for anything that might improve their chance of success, results in a ‘perfect storm’ in which dubious and sometimes positively harmful treatments are routinely both under-researched and oversold.

Added to this, although clinics must have a licence from the Human Fertilisation and Embryology Authority (HFEA) before they can offer IVF, the HFEA does not have the power to license, or to refuse to license, the use of add-on treatments. Its powers are limited to ensuring that, before patients receive treatment in a licensed centre, they are provided with ‘such relevant information as is proper’, and that ‘the individual under whose supervision the activities authorised by a licence are carried on’ (referred to as the Person Responsible) ensures that ‘suitable practices’ are used in the clinic.Footnote 7 In this chapter, I will argue that, although giving patients information about the inadequacy of the evidence base behind add-on treatments is important and necessary, this should not be regarded as a mechanism through which their inappropriate use can be controlled. Instead, it may be necessary for the HFEA to categorise non-evidence-based and potentially harmful treatments as ‘unsuitable’ practices, which should not be provided at all, rather than as treatments that simply need to be accompanied by a health warning.

38.2 A Perfect Storm?

It has been estimated that, in order to be appropriately statistically powered, a trial of a new fertility intervention should recruit at least 2610 women.Footnote 8 Trials of this size are exceptional, however, and it is much more common for smaller, statistically underpowered trials to be carried out. Nor does meta-analysis of these smaller trials necessarily offer a solution, in part because their outcomes are not always reported consistently, and the meta-analyses themselves may not be sufficiently large to overcome the limitations of the smaller studies.Footnote 9

Fertility patients are often keen to ‘try something new’, even if it has not been proven to be safe and effective in a large-scale RCT.Footnote 10 IVF patients are often in a hurry. Most people take their fertility for granted, and after years of trying to prevent conception, they assume that conception will happen soon after they stop using contraception. By the time a woman realises that she may need medical assistance in order to conceive, her plan to start a family will already have been delayed for a year or more. At the same time, women’s age-related fertility decline means that they are often acutely aware of their need to start treatment as soon as possible.

Although, in theory, fertility treatment is available within the NHS, it is certainly not available to everyone who needs it. The National Institute for Health and Care Excellence’s (NICE) 2013 clinical guideline recommended that the NHS should fund three full cycles of IVF (i.e. a fresh cycle followed by further cycles using the frozen embryos) for women under 40 years old, and one full cycle for women aged 40–42, who must additionally not have received IVF treatment before and not have low ovarian reserve.Footnote 11 Implementation of this NICE guideline is not mandatory, however, and in 2018 it was reported that only 13 per cent of Clinical Commissioning Groups (CCGs) provide three full cycles of IVF to eligible women; 60 per cent offer one NHS-funded cycle – most of which fund only one fresh cycle – and 4 per cent provide no cycles at all.Footnote 12

The majority of IVF cycles in the UK are self-funded,Footnote 13 and although the average cycle costs around £3350,Footnote 14 costs of more than £5000 per cycle are not uncommon. As well as simply wanting to have a baby, IVF patients are therefore commonly also under considerable financial pressure to ensure that each IVF cycle has the best possible chance of success. In these circumstances, it is not surprising that patients are keen to do whatever they can to increase the odds that a single cycle of IVF will lead to a pregnancy and birth.

One of the principal obstacles to making single embryo transfer the norm was that, for many patients, the birth of twins was regarded as an ideal outcome.Footnote 15 Most patients want to have more than one child, so if one cycle of treatment could create a two-child family, this appeared to be a ‘buy one, get one free’ bargain. In order to persuade women of the merits of the ‘one at a time’ approach, it was not enough to tell them about the risks of multiple pregnancy and multiple birth, both for them and their offspring. Many women are prepared to undergo considerable risks in pursuit of a much-wanted family. Instead, the ‘one at a time’ campaign emphasised the fact that a properly implemented ‘elective single embryo transfer’ policy did not reduce birth rates, and tried to persuade NHS funders that a full cycle of IVF was not just one embryo transfer, but that it should include the subsequent frozen embryo transfers.Footnote 16

Not only are patients understandably keen to try anything that might improve their chance of success, they are also paying for these extra services out of pocket. As consumers, we are used to paying more to upgrade to a better service, so this additional expense can appear to be a ‘sign of quality’.Footnote 17 Rather than putting patients off, charging them several hundred pounds for endometrial scratch and assisted hatching may make these additional services appear even more desirable. New techniques often generate extensive media coverage, leading patients actively to seek out clinics that offer the new treatment.Footnote 18 Clinics that offer the non-evidence-based new intervention are therefore able to say that they are simply responding to patient demand.

As well as the appeal of expensive high-tech interventions, patients are also attracted to simple and apparently plausible explanations for IVF failure. If an IVF cycle does not lead to a pregnancy because the embryo fails to attach to the lining of the woman’s uterus, it is easy to understand why patients might be persuaded of the benefits of ‘embryo glue’, in order to increase adhesion rates. Alternative therapists have also flourished in this market: acupuncturists are said to be able to ‘remove blocks to conception’; and hypnotherapists treat women ‘with a subconscious fear of pregnancy’.Footnote 19 In practice, however, the evidence indicates not only that complementary and alternative medicine (CAM) does not work, but that live birth rates are lower for patients who use CAM services.Footnote 20

Perhaps the most egregious example of an apparently simple and plausible explanation for IVF failure being used to market a non-evidence-based and potentially harmful intervention is reproductive immunology. The existence of the unfortunately named ‘natural killer’ (NK) cells in the uterus has helped to persuade patients that these cells might – unless identified by expensive tests and suppressed by expensive medications – ‘attack’ the embryo and prevent it from implanting. News stories with headlines like ‘The Killer Cells That Robbed Me of Four Babies’Footnote 21 and ‘My Body Tried to Kill My Baby’Footnote 22 suggest a very direct link between NK cells and IVF failure. The idea that the embryo is a genetically ‘foreign’ body that the woman’s uterine cells will attack, unless their immune response is suppressed, sounds plausible,Footnote 23 and as Datta et al. point out, ‘couples seeking a reason for IVF failure find the rationale of immune rejection very appealing’. It has no basis in fact, however.Footnote 24

There is no evidence that natural killer cells have any role in causing miscarriage; rather, despite their name, they may simply help to regulate the formation of the placenta. As Moffett and Shreeve explain, regardless of this lack of evidence, ‘a large industry has grown up to treat women deemed to have excessively potent uterine “killers”’.Footnote 25 In addition to the absence of RCTs establishing that reproductive immunology increases success rates, the medicines used – which include intravenous immunoglobulins, TNF-α inhibitors, granulocyte-colony stimulating factor, lymphocyte immune therapy, leukaemia inhibitory factor, peripheral blood mononuclear cells, intralipids, glucocorticoids, vitamin D supplementation and steroids – may pose a risk of significant harm to women.Footnote 26

The lack of evidence for reproductive immunology, and the existence of significant risks, has been known for some time. In 2005, Rai and others described reproductive immunology as ‘pseudo-science’, pointing out that ‘Not only is there no evidence base for these interventions, which are potentially associated with significant morbidity, the rationale for their use may be false’.Footnote 27 The HFEA’s most recent advice to patients is also clear and unequivocal:

There is no convincing evidence that a woman’s immune system will fail to accept an embryo due to differences in their genetic codes. In fact, scientists now know that during pregnancy the mother’s immune system works with the embryo to support its development. Not only will reproductive immunology treatments not improve your chances of getting pregnant, there are risks attached to these treatments, some of which are very serious.Footnote 28

Despite this, patients continue to be persuaded by a simple, albeit false, explanation for IVF failure, and by ‘evidence’ from fertility clinics that is better described as anecdote. The Zita West fertility clinic blog, for example, contains accounts from satisfied ex-patients with headlines like ‘I Was Born to Be a Mum and Couldn’t Have Done It Without Reproductive Immunology’.Footnote 29 It is not uncommon for clinics’ websites to ‘speak of “dreams” and “miracles”’, rather than RCTs.Footnote 30 Spencer and others analysed 74 fertility centre websites, and found 276 claims of benefit relating to 41 different fertility interventions, but with only 16 published references to support these, of which only five were high-level systematic reviews.Footnote 31

From the point of view of a for-profit company selling fertility services, why bother to conduct expensive large-scale RCTs when it is possible to sell a new therapy to patients in the absence of such trials? If patients do not care about the lack of evidence, and are happy to rely upon a clinician’s anecdotal report that X therapy has had some success in their clinic, the clinic has no incentive to carry out trials that might indicate that X therapy does not increase live birth rates.

A free market in goods and services relies upon consumers choosing not to buy useless products. If a mobile phone company were to produce a high-tech new phone that does not work, then after an initial flurry of interest in a shiny new product, its failings would become apparent and the market for it would disappear. Because there can be no guarantee that any cycle of IVF will lead to the birth of a baby, and almost every cycle is more likely to fail than it is to work, it is much harder for consumers of fertility services to tell whether an add-on service is worth purchasing. Rather than relying on individual patients ‘voting with their feet’ in order to crowd out useless interventions, it may be important instead for an expert regulator to choose for them.

38.3 Regulating Add-On Services

There are three mechanisms through which the provision of add-on services in the fertility clinic is regulated. First, if an add-on involves the use of a medicinal product, that product must have a product licence from the European Medicines Agency or the Medicines and Healthcare products Regulatory Agency. The Human Medicines Regulations 2012 specify that, before a new medicine can receive a product licence, the licensing authority must be satisfied that ‘the applicant has established the therapeutic efficacy of the product to which the application relates’, and ‘the positive therapeutic effects of the product outweigh the risks to the health of patients or of the public associated with the product’.Footnote 32 In short, it must be established that the product works for the indication for which the product licence is sought, and that its benefits outweigh its risks.

In practice, however, the use of medicines as add-ons to fertility treatment generally involves their ‘off-label’ use. Reproductive immunology, for example, may involve the use of steroids, anticoagulants and monoclonal antibodies. Although efficacy and a positive risk–benefit profile may exist for these medicines’ licensed use, this is not the same as establishing that they work or are safe for their off-label use in the fertility clinic. There are comparatively few controls over doctors’ freedom to prescribe drugs off-label, even though, when there has not been any assessment of the safety or efficacy of a drug’s off-label use, it may pose an unknown and unjustifiable risk of harm to patients.

The General Medical Council (GMC) has issued guidance to doctors on the off-label prescription of medicines which states that:

You should usually prescribe licensed medicines in accordance with the terms of their licence. However, you may prescribe unlicensed medicines where, on the basis of an assessment of the individual patient, you conclude, for medical reasons, that it is necessary to do so to meet the specific needs of the patient (my emphasis).Footnote 33

The guidance goes on to set out when prescribing unlicensed medicines could be said to be ‘necessary’:

  a. There is no suitably licensed medicine that will meet the patient’s need …

  b. Or where a suitably licensed medicine that would meet the patient’s need is not available. This may arise where, for example, there is a temporary shortage in supply; or

  c. The prescribing forms part of a properly approved research project.Footnote 34

Doctors must also be satisfied that ‘there is sufficient evidence or experience of using the medicine to demonstrate its safety and efficacy’, and patients must be given sufficient information to allow them to make an informed decision.Footnote 35 It is possible that a doctor who prescribed medications off-label could have his fitness to practise called into question, although, in practice, it seems likely that clinicians will simply maintain that these medicines ‘meet the patient’s need’, and that they have sufficient experience within their own clinic to ‘demonstrate safety and efficacy’.

Second, section 13(6) of the Human Fertilisation and Embryology Act 1990 specifies that, before receiving treatment services in a licensed centre, patients must be provided with ‘such relevant information as is proper’, and that they must give consent in writing.Footnote 36 Although add-ons are not licensable treatments, it could be said that the clinician’s statutory duty to give patients clear and accurate information extends to the whole course of treatment they receive in the clinic, not just to the treatment for which an HFEA licence is necessary. Indeed, the HFEA’s Code of Practice specifies that:

Before treatment is offered, the centre should give the woman seeking treatment and her partner, if applicable, information about … fertility treatments available, including any treatment add ons which may be offered and the evidence supporting their use; any information should explain that treatment add ons refers to the technologies and treatments listed on the treatment add ons page of the HFEA website.Footnote 37

The Code of Practice also requires centres to give patients ‘a personalised costed treatment plan’, which should ‘detail the main elements of the treatment proposed – including investigations and tests – the cost of that treatment and any possible changes to the plan, including their cost implications’.Footnote 38 Before offering patients an add-on treatment, clinics should therefore be open and honest with patients about the risks, benefits and costs of the intervention.

In practice, however, patients will not necessarily be put off by underpowered trial data, especially when more optimistic anecdotal accounts of success are readily available online. In order to try to counter the circulation of misinformation about treatment add-ons, the HFEA has recently instituted a ‘traffic light’ system that is intended to provide clear and unambiguous advice to patients. At the time of writing, no add-on is green. Most are either amber (that is, ‘there is a small or conflicting body of evidence, which means further research is still required and the technique cannot be recommended for routine use’), or red (that is, ‘there is no evidence to show that it is effective and safe’). The HFEA further recommends that patients who want more detailed information ‘may want to contact a clinic to discuss this further with a specialist’.Footnote 39

It is, however, unsatisfactory to rely upon informed patient choice as a mechanism to control the over-selling of unproven add-on treatments. The fertility industry has ‘a pronounced predilection for over-diagnosis, over-use and over-treatment’, and the widespread adoption of a ‘right to try’ philosophy in practice translates into clinics profiting from the sale of unproven treatments.Footnote 40 For example, the HFEA gives intrauterine culture – in which newly fertilised eggs are placed in a device inside the woman’s womb – an amber rating, and informs prospective patients:

There’s currently no evidence to show that intrauterine culture improves birth rates and is safe. This is something you may wish to consider if you are offered intrauterine culture at an additional cost.

It could instead be argued that the fact that a treatment is expensive and is not known to be either safe or effective is not merely something that patients should ‘consider’ when deciding whether to purchase it, but rather is a reason not to make that treatment available outside of a clinical trial.

Third, while the HFEA does not license add-on services, Persons Responsible are under a duty to ensure that only ‘suitable practices are used in the course of the activities’.Footnote 41 If the HFEA were to decide that those add-on services that it ranks as red are not suitable practices, then clinicians should not use them in the clinic. It has not (yet) done this.

38.4 Won’t Patients Go Elsewhere?

Given patients’ interest in add-on services, many of the 70 per cent of UK clinics that offer at least one of these treatments claim to be responding to patient choice.Footnote 42 Reputable clinicians maintain that if they cease to offer add-on services, patients are likely to go instead to clinics that do provide these treatments, either within the UK – where a clinic does not need a licence from the HFEA if it is only providing add-on services – or overseas.Footnote 43 If patients are going to pay for these treatments elsewhere anyway, then, so the argument goes, it is better to provide them in safe, hygienic, regulated clinics, rather than abandoning patients to the wild west of unregulated fertility services.

The easiest way to see why this argument should be dismissed is to imagine that it is being made about a different sort of non-evidence based treatment, such as stem cell therapies for the treatment of spinal injury. Although stem cell therapies hold very great promise for the treatment of a wide range of conditions, most are still at the experimental stage. That does not stop unregulated clinics overseas from marketing stem cell therapies for the treatment of a wide range of conditions, and as a miraculous cure for ageing.Footnote 44

If a UK doctor were to justify injecting stem cells into a patient’s spinal column on the grounds that, if he did not do so, the patient would be likely to travel to China for unproven stem cell treatment, the GMC would be likely to investigate his fitness to practise. The argument that, if he did not offer unproven and unsafe treatment in the UK, patients might choose to undergo the same unsafe treatment in a foreign clinic, would be likely to be given short shrift.

38.5 Conclusion

It is important to remember that add-on treatments are not simply a waste of patients’ money, though they are often that as well. Many add-on treatments are also risky. Despite this, patients are enthusiastic purchasers of additional services for which there is little or no good evidence. In such circumstances, where the lack of robust clinical trial data does not appear to dent patients’ willingness to buy add-on treatments, there is little ‘bottom-up’ incentive to carry out large-scale RCTs.

The HFEA’s information for patients is clear and authoritative, but it is not the only information that patients will see before deciding whether to pay for additional treatment services. Patients embarking upon fertility treatment also seek out information from other patients and from a wide variety of online sources. It is increasingly common for ill-informed ‘discourses of hope’ about unproven treatments to circulate in blogs and in Facebook groups, coexisting and competing with evidence-based information from scientists and regulators.Footnote 45 Fertility patients often report doing their own ‘research’ before embarking on treatment, and this generally means gathering material online, from sources where the quality and accuracy of information may be distinctly variable.Footnote 46

In this perfect storm, it is unreasonable to expect patients to be able to protect themselves from exploitation through the application of the principle of caveat emptor.Footnote 47 On the contrary, what is needed instead is a clear message from the regulator that the routine selling of unproven treatments should not just prompt patients to ask additional questions, but that these treatments should not be sold in the first place. Of course, it is important that reproductive medicine does not stand still, and that new interventions to improve the chance of success are developed. But these should first be tried in the clinic as part of an adequately powered clinical trial. Trial participants must be properly informed that the treatment is still at the experimental stage, and they should not be charged to participate. The GMC also has a role to play in investigating the fitness to practise of doctors who routinely sell, for profit, treatments that are known to be risky and ineffective. As Moffett and Shreeve put it: ‘it is surely no longer acceptable for licensed medical practitioners to continue to administer and profit from potentially unsafe and unproven treatments, based on belief and not scientific rationale’.Footnote 48

39 Medical Devices Regulation: New Concepts and Perspectives Needed

Shawn H. E. Harmon
39.1 Introduction

This section of the Handbook explores how technological innovations and/or social changes create disturbances within regulatory approaches. This chapter considers how innovations represent disturbances with which regulatory frameworks must cope, focusing on innovations that can be characterised as ‘enhancing’. Human enhancement can no longer be dismissed as something with which serious regulatory frameworks need not engage. Enhancing pursuits increasingly occupy the very centre of human experience and ‘being’; one can observe widespread student use of cognitively enhancing stimulants, the increasing prevalence of implanted technologies, and great swathes of people absently navigating the physical while engrossed in the digital.

Given the diversity of activities and technologies implicated, the rise and mores of the ‘maker movement’,Footnote 1 and the capacity of traditional – commercial – health research entities to relocate innovation activities to jurisdictions with desirable regulation, it is impossible to point to a single regulatory framework implicated by enhancement research and innovation. Candidates include those governing human tissue use and pharmaceuticals, but could also include those governing intellectual property, data use, or consumer product liability. The medical devices framework, one might think, should offer a good example of a regime that engages directly and usefully with the concepts implicated by enhancement and the socio-technical changes wrought by enhancing technologies. As such, this chapter focuses on the recently reformed European medical devices regime.

After identifying some enhancements that are available and highlighting what they mean for the person, the chapter introduces two concepts that are deeply implicated by enhancing technologies: ‘identity’ and ‘integrity’. If regulation fails to engage with them, it will remain blind to matters that are profoundly important to those people who are using or relying on these technologies. The treatment of these concepts in EU Regulation 2017/745 on Medical Devices (MDR),Footnote 2 and EU Regulation 2017/746 on In Vitro Diagnostic Medical Devices (IVDR),Footnote 3 is examined. It is concluded that the Regulations are, unfortunately, too narrowly framed and too innovator-driven, and are therefore largely indifferent to these concepts.

39.2 Enhancing Innovations

Since the first use of walking canes, false teeth and spectacles, we have been ‘enhancing’ ourselves for both medical and social purposes, but the so-called technological ‘revolutions’ of late modernity – which have relied on and facilitated innovations in computing, biosciences, materials sciences and more – have prompted changes in the nature and prevalence of the enhancements that we adopt. We now redesign and extend our physical scaffolding, we alter its physiological functioning, we extend the will, and we push the potential capacities of the mind and body by linking the biological with the technological or by embedding the latter into the former.

In the 1960s, Foucault anticipated the erasure of the human being.Footnote 4 We might now understand this erasure to be the rise of the enhanced human, which includes the techno-human hybrid (cyborg).Footnote 5 This ‘posthuman’ thinks of the body as the original prosthesis, so extending or replacing it with other prostheses becomes a continuation of a process that began pre-natally.Footnote 6 Even if one does not subscribe to the posthuman perspective, ‘enhancing’ technologies are commonly applied,Footnote 7 and are becoming more complex and more intrusive, nestling within the body, and performing not only for us, but also on and within us.Footnote 8 Examples include a wide range of smart physiological sensors, cochlear implants, implanted cardiac defibrillators, deep brain stimulators, complex prostheses like retinal and myoelectric prosthetics, mind stimulating/expanding interventions like nootropic drugs, neuro-prosthetics, and consciousness-insinuating constructs like digital avatars, which allow us to build and explore wholly new cyber-environments.

These technologies have many labels, but they all become a part of the person through processes of bodily ‘incorporation’, ‘extension’ or ‘integration’.Footnote 9 Depending on the technology, they allow the individual to generate, store, access and transmit data about the physiological self, or the physical or digital realms they occupy/access, making the individual an integral element of the ‘internet of things’.Footnote 10 The resultant ‘enhanced human’ not only has new material characteristics, but also new sentient and sapient capacities (i.e. to experience sensation or to reason and cultivate insight). In all cases, the results are new forms of co-dependent human-technology embodiment. Even more radical high-conscious beings can be envisioned. Examples include genetically designed humans, synthetically constructed biological beings, and artificially intelligent constructs with consciousness and self-awareness.Footnote 11 The possibility of more radical high-conscious beings raises questions about status that are beyond the scope of this chapter.Footnote 12

39.3 Core Concepts Implicated by the New Human Assemblage

This increasingly complex and commonplace integration of bodies and technologies has given rise to theories of posthumanism and new materialism of which the law remains largely ignorant.Footnote 13 For example, there is a growing understanding of the person as an ‘assemblage’, a variably integrated collection of physiological, technological and virtual elements that are in fluid relation to one another, with some elements becoming prominent in some contexts and others in other contexts, with no one element being definitive of the ‘person’.Footnote 14 The person has become protean, with personhood-defining/shaping characteristics that are always shifting, often at the instigation of enhancing technologies. This conditional state – or variable assemblage – with its integration and embodiment of the technological, makes concepts such as autonomy, privacy, integrity, and identity more socially and legally significant than ever before. For reasons of space, I consider just integrity and identity.

Integrity often refers to wholeness or completeness, which has both physical and emotional elements, both of them health-influencing. Having physical integrity is often equated with conformity to the ‘normal’ body. The normativity of this concept has resulted in prosthetic users being viewed as lacking physical integrity.Footnote 15 However, there is a growing body of literature suggesting that physical integrity need not impose compliance with the ‘normal’ body.Footnote 16 Tied to the state of physical integrity – however we might define it – is the imperative to preserve physical integrity (i.e. to respect the individual and avoid impinging on bodily boundaries), and this implicates emotional/mental integrity. One study uncovered twelve conceptions of integrity, concluding that integrity is supported or undermined by one’s view of oneself, by others with whom one interacts, and by relationships.Footnote 17 Ultimately, integrity is a state of physical and emotional/existential wellness, both of which are influenced by internalities and externalities, including one’s relationship with oneself and others. Critical elements of integrity – feelings of wholeness, of being ‘oneself’, or of physical security,Footnote 18 notions of optimal functioning, interactions with others,Footnote 19 and so on – are agitated when technologies are introduced into the body, and there is scope for the law to modulate this agitation, and encourage wellness.

Identity has been described as a mix of ipse and idemFootnote 20 (see also Postan, Chapter 23 of this volume). Ipse refers to ‘self-identity’, the sense of self of the human person, which is reflexive and influenced by internalities such as values and self-perceptions. It is the point from which the individual sees the world and herself; there is nothing behind or above it, it is just there at the source of one’s will and energy, and it is persistent, continuous through time and space but by no means stable.Footnote 21 Idem refers to ‘sameness identity’, or the objectification of the individual that stems from categorisation. One might hold several idem identities depending on the social, cultural, religious or administrative groups to which one belongs (i.e. the range of public statuses that may be assigned at birth or throughout life, or imposed by others). It expresses the belonging of one to a category, facilitating social integration. Ultimately, identity is both internal and fluid, and external and equally fluid, but also potentially static.Footnote 22 It can be constructed, chosen or imposed. It can be fragmented and aggregated, and it can be commodified. Both ipse and idem elements will be shaped by enhancing technologies, both mechanical and biological, which have been described as ‘undoing the conventional limits of selfhood and identity’.Footnote 23 Empirical research has found that both elements of identity in prosthetic users, for example, are deeply entangled with their devices.Footnote 24

Of course, neither integrity nor identity are unknown to the law. Criminal law seeks to protect our physical integrity, and it punishes incursions against it. Human rights law erects rights to private and family life, which encompass moral and physical integrity and the preservation of liberty.Footnote 25 Health law erects rights to physical and mental integrity through mechanisms such as consent, best interests and least restrictive means.Footnote 26 Law is also a key external shaper of identity, creating groups based on factors such as developmental status (i.e. rights of fetuses to legal standing and protection),Footnote 27 sexual orientation (i.e. right to marriage or work benefits)Footnote 28 and gender (i.e. right to gender identity recognition).Footnote 29 It also defines ‘civil identity’, a common condition for access to basic services.Footnote 30 And notions of identity have been judicially noticed in relation to new technologies: in Rose v Secretary of State for Health,Footnote 31 which concerned disclosure of information about artificial insemination, the court found that information about biological identity went to the heart of identity and the make-up of the person, and that identity included details of origins and opportunity to understand them, physical and social identity, and also psychological integrity.

Unfortunately, these two increasingly important concepts have not been well-handled by the law. They are subject to very different interpretations depending on one’s view of human rights as negative or positive.Footnote 32 Severe limits have been placed on the law being used to enable or impose those conditions that facilitate individuals living lives of meaning and becoming who they are (or wish to be). Narrow views as to what counts as a life of worth have resulted in limitations being placed on what individuals can do to become who they wish to be, with decision-makers often blind to the choices actually available (e.g. consider discourses around a ‘good death’ and medical assistance in dying). Thus, at present, neither integrity nor identity are consistently articulated or enabled by law. This could be the result of their multifaceted nature, or of the negative approach adopted in protecting them,Footnote 33 or of the indirectness of the law’s interest in them.Footnote 34 The question of their treatment in health research regulation remains, and it is to this that we now turn.

39.4 Core Concepts and Medical Device Regulation

The market authorisation framework for medical devices is an example of health research regulation that shapes the nature, application and integrative characteristics of many enhancing technologies. Thus, it is profoundly linked to practices aimed at expanding and diversifying the human assemblage, and so it might be expected to appreciate, define and/or facilitate the concepts identified above as being critical to the person. In Europe, the development and market authorisation of medical devices is governed by the previously noted MDR and IVDR, both of which came into force in May 2017, but which will not be fully implemented until May 2020 and May 2022 respectively.Footnote 35

As will be clear from other chapters, the framing of regulatory frameworks is critical. Framing signals the regime’s subject and objective; it shapes how its instruments articulate problems, craft solutions and measure success. It has been observed that the identification, definition and control of ‘objects’ is a common aim of regulatory instruments; specific objects are chosen because they represent an opportunity for commerce, a hazard to human health, or a boon – or danger – to social architecture.Footnote 36 Certain fields focus on certain objects, with the result that silos of regulation emerge, each defined by its existence-justifying object, which might be data, devices, drugs, tissue and embryos, etc., and the activity in relation to that object around which we wish to create boundaries (i.e., production, storage, use).

The MDR and IVDR are shaped by EU imperatives to strengthen the common market and promote innovation and economic growth, and are thus framed as commercial instruments.Footnote 37 Their subject is objects (e.g. medical devices), not people, not health outcomes and not well-being. MDR Article 1 articulates this frame and subject, stating that it lays down rules concerning placing or making available on the market, or putting into service, medical devices for human use in the EU.Footnote 38 MDR Article 2(1) defines medical device as any instrument, apparatus, appliance, software, implant, reagent, material or other article intended to be used, alone or in combination, for human beings for a range of specified medical purposes (e.g. diagnosis, prevention, monitoring, prediction, prognosis, treatment, alleviation of disease, injury or disability, investigation, replacement or modification of the anatomy, providing information derived from the human body) that does not achieve its principal intended action by pharmacological, immunological, or metabolic means.

The MDR and IVDR construct their objects simultaneously as ‘risk objects’, ‘innovation objects’ and ‘market objects’, highlighting one status or another depending on the context and the authorisation stage reached. All three constructions can be seen in MDR Recital 2 (which is mirrored by IVDR Recitals 1 and 2):

This Regulation aims to ensure the smooth functioning of the internal market as regards medical devices, taking as a base a high level of protection of health for patients and users, and taking into account the … enterprises that are active in this sector. At the same time, this Regulation sets high standards of quality and safety for medical devices in order to meet common safety concerns as regards such products. Both objectives are being pursued simultaneously and are inseparably linked …

The Regulations then classify devices on a risk basis, and impose robust evidence requirements and post-market surveillance to protect users from malfunction. For example, MDR Recital 59 acknowledges the insufficiency of the old regime, stating that it is necessary to introduce specific classification rules sensitive to the level of invasiveness and potential toxicity of devices that are composed of substances that are absorbed by, or locally dispersed in, the human body; where the device performs its action, where it is introduced or applied, and whether a systemic absorption is involved are all factors going to risk that must be assessed. MDR Recital 63 states that safety and performance requirements must be complied with, and that, for class III and implantable devices, clinical investigations are expected.Footnote 39 These directions are operationalised in MDR Chapters V (Classification and Conformity Assessments),Footnote 40 and VI (Clinical Evaluation and Clinical Investigations).Footnote 41 IVDR Recitals 55 and 61, and Chapters V and VI are substantively similar.

The above framing imposes a substantial fetter on what these instruments are intended to do, or are capable of doing. It serves to largely erase the person and personal experience from their perspective and remit. The recipient of a device is constructed as little more than a consumer who must be protected from the harm of a malfunctioning device. An example of the impoverished position of the person is the Regulations’ treatment of risk. They rely on a narrow understanding of risk, framing it as commercial object safety at various stages of development and roll-out. Other types of risks and harms are marginalised or ignored.Footnote 42 Thus, there is no acknowledgement that their objects – medical devices – will not always be – and will really only briefly be – ‘market objects’. Many devices will become ‘physiological objects’ that are profoundly personal to, and intimate with, the recipient. Indeed, many will cease to be ‘objects’ altogether, becoming instead components of the human assemblage, undermining or facilitating integrity, and exerting pressures on identity. As such, the nature of the risks they pose changes relatively quickly, and more so over time.

Had broader human well-being or flourishing been foregrounded, then greater attention to public interest beyond device safety might have been expected. Had legislators given any consideration to the consequences of these technologies once integrated with the person and becoming a part of that human assemblage, then further conditions for approval might have been expected. Developers might have been asked to present social evidence about the actual need for the device, or the potential social acceptance of the device, or how the device is expected to interact with other major – or common – health or social technologies, systems, or practices. In short, the patient, or the non-patient user, may have featured in the market access assessment.

The one exception to the Regulations’ ignorance of social experience is that relating to post-market surveillance. MDR Recital 74 requires manufacturers to play an active role in the post-market phase by systematically gathering information on experiences with their devices via a comprehensive post-market surveillance system. This requirement is operationalised in Chapter VII.Footnote 43 However, while these provisions are useful, they fail to acknowledge the now embodied condition of the regulatory object, and the new personal, social, ethical and cultural significance that it holds. In other words, they evince an extremely ‘bounded’ perspective of their objects. This bounded perspective has been criticised:

The attention of law and regulation on ‘bounded objects’ … should be questioned on at least two counts: first, for the fallacy of attempting to ‘fix’ such regulatory objects, and to divorce them from their source and the potential impact on identity for the subjects themselves; and, second, for the failure to see such objects as also experiencing liminality.Footnote 44

This is pertinent to situations where technologies are integrated into the body, situations which exemplify van Gennep’s pattern of experience: separation from existing order; liminality; re-integration into a new world.Footnote 45 The features of this new world are that the regulatory object (device) becomes embodied and incorporated in multiple ways – physical, functional, psychological and phenomenological.Footnote 46 Both the object (device) and subject (host) are transformed as a result of this incorporation such that the typical subject–object dichotomy entrenched in the law is not appropriate;Footnote 47 the Regulations’ object-characterisations are no longer apropos and their indifference to the subject is potentially unjust given the ‘new world’ that now exists.

This cursory assessment suggests that the Regulations are insufficient and misdirected from the perspective of ensuring that the full public interest is met through the regulated activity. As previously observed, new and emerging technologies can be conceptually, normatively and practically disruptive.Footnote 48 Technologies applied to humans for purposes of integration – treatment or enhancement – are disruptive on all three fronts, particularly once they enter society. Conceptually, they disrupt existing definitions and understandings of the regulatory objects, which are transformed once they form part of the human assemblage. Normatively, they disrupt existing regulatory concepts like risk, which are exposed as being too narrow in light of how these objects might interact with and harm individuals. Practically, they disrupt existing medical practice – blurring the lines between treatment and enhancement – and regulatory practices – troubling the oft-relied-on human/non-human and subject/object dichotomies.

This assessment also suggests that the historical boundaries between, or categories of, ‘devices’ and ‘medicines’, are increasingly untenable because of the types of devices being designed (e.g. implanted mechanical devices and mixed material devices that interact with the physiological, sometimes through the release of medicines). This area of human health research therefore highlights both fault-lines within instruments and empty spaces between them. It might be that the devices and medicines regimes need to be brought together, with a realignment of the regulatory objects and a better understanding of where these objects are destined to operate.

39.5 Conclusions

As the enhanced human becomes more ubiquitous, and the radical posthuman comes into being, narrow or negative views of integrity and identity become ever more attenuated from the technologically shaped lived experience. Moreover, the greater the human/technology integration, the greater the engagement of integrity and identity. Insufficient attention to these concepts in regulatory frames, norms and decisions raises the likelihood that regulation will undermine rather than support or protect human well-being. Only with clear recognition will the self-creation – the being and becoming – that these concepts underwrite be facilitated through the positive shaping of social conditions.Footnote 49

The MDR and IVDR are directly implicated in encouraging, assessing and rolling out integrative technologies destined for social and clinical uses, but they do not match the technical innovation they manage with sufficient regulatory recognition of the integrity and identity that is engaged. Despite their recent reform, they do not evince a greater regulatory understanding of the common natures and consequences of tissue, organ and technological artefacts, and they therefore do not represent a significantly improved – more holistic and less silo-reliant – regulatory framework.

Had they adopted a broader perspective and value base, they would have taken notice of people as subjects, and crafted a framework that contributed to the development of innovations that are not only safe, but also supportive of – or at least not corrosive to – what people value, including integrity and identity. At base, they would have benefited from:

  • a clearer and broader value base;

  • an emphasis on decisional principles rather than narrow (technical) objects on which rules are imposed; and

  • greater notice of what the devices become once they are through the market-access pipeline.

Ultimately, medical device regulation is an example of health research regulation that operates in an area where innovation has created disturbances, and those disturbances have not been resolved. Though some have been acknowledged – leading to the new regime – the real disturbances have hardly been appreciated.

Afterword: What Could a Learning Health Research Regulation System Look Like?

Graeme Laurie
1 Introduction

This final chapter of the Cambridge Handbook of Health Research Regulation revisits the question posed in the Introduction to the volume: What could a Learning Health Research Regulation System look like? The discussion is set against the background of debates about the nature of an effective learning healthcare system,Footnote 1 building on the frequently expressed view that any distinction between systems of healthcare and health research should be collapsed or at the very least minimised as far as possible. The analysis draws on many of the contributions in this volume about how health research regulation can be improved, and makes an argument that a framework can be developed around a Learning Health Research Regulation System (LHRRS). Central to this argument is the view that successful implementation of an LHRRS requires full integration of insights from bioethics, law, social sciences and the humanities to complement and support the effective delivery of health and social value from advances in biomedicine, as well as full engagement with those who regulate, are regulated, and are affected by regulation.

2 Lessons from Learning Healthcare Systems and Regulatory Science

The US Institute of Medicine is widely credited with making seminal contributions to debates about the nature of learning healthcare systems, primarily through a series of expert workshops and reports examining the possible contours of such systems. A central feature of the normative frameworks proposed relies on the collapsing – or at least the blurring – of any distinction between objectives in the delivery of healthcare and the objectives of realising value from human health research. The normative ideal has been articulated as follows:

… a system in which advancing science and clinical research would be natural, seamless, and a real-time byproduct of each individual’s care experience; highlighted the need for a clinical data trust that fully, accurately, and seamlessly captures health experience and improves society’s knowledge resource; recognized the dynamic nature of clinical evidence; noted that standards should be tailored to the data sources and circumstances of the individual to whom they are applied; and articulated the need to develop a supporting research infrastructure.Footnote 2

It is the challenge of developing and delivering a ‘supporting research infrastructure’ that is the core concern of all contributions to this volume. We have stated at the outset that our approach is determinedly normative in tackling what we believe to be the central features of any ecosystem of health research regulation. The structure and content of the sections of this volume reflect our collective belief that the design and delivery of any effective and justifiable system of human health research must place the human at the centre of its endeavours. Also, when seeking to design systems from the bottom-up, so to speak, we contend that this human-centred approach to systems must go beyond patient-centredness and exercises in citizen engagement. In no way is this to suggest that these objectives are unimportant; rather, it is to recognise that these endeavours are only part of the picture and that a commitment to delivering a whole system approach must integrate both these and other elements into any system design.

There is, of course, the fundamental question of where one begins when attempting system design. Each discipline and field of enquiry will have its own answer. As an illustration, we can consider a further workshop held in 2011 under the auspices of the Institute of Medicine and other bodies; this was a Roundtable on value and science-driven healthcare that sought to ‘apply systems engineering principles in the design of a learning healthcare system, one that embeds real-time learning for continuous improvement in the quality, safety, and efficiency of care, while generating new knowledge and evidence about what works best’.Footnote 3 Once again, these are manifestly essential elements of any well-designed system, but it is striking that this report makes virtually no mention of the ethical issues at stake. To the extent that ethics are mentioned, this is presented as part of the problem of current fragmentation of systems,Footnote 4 rather than as any part of a systems solution: ‘[e]ach discipline has its own statement of its ethics, and this statement is nowhere unified with another. There is no common, shared description of the ethical center of healthcare that applies to everybody, from a physician to a radiology technician to a manager’.Footnote 5

Furthermore, while the Roundtable was styled as being about value- and science-driven healthcare, it is crucial to ask what is meant by ‘value’ in this context of systems design. Indeed, the Roundtable participants did call for greater enquiry into these terms but, as characterised in various presentations and discussions, ‘value’ was used variously to refer to:

  • Value to consumers;

  • Value from ‘substantially expanded use of clinical data’;Footnote 6

  • Value in accounting for costs in outcomes and innovation;

  • Value in ‘health returned for dollars invested’;Footnote 7

  • Value as something to be measured for inclusion in decision-making processes.Footnote 8

As extensively demonstrated by the chapters in this volume, there is a crucial distinction between ‘value’ seen in these terms and the ‘values’ that underpin any structure or system designed to deliver individual and social benefit through improved health and well-being. This distinction is, accordingly, the focus of the next section of this chapter.

Before this, a further important distinction between healthcare systems and health research systems must be highlighted. As an earlier Institute of Medicine report noted, patient-centred care is of paramount importance in identifying and respecting the preferences, needs and values of patients receiving healthcare.Footnote 9 This position has rightly been endorsed in subsequent learning systems reports.Footnote 10 However, for an LHRRS, and from a values perspective, there is arguably a wider range of interests and values at stake in conducting health research and delivering benefits to society.Footnote 11 This is the principal reason why this volume begins with an account of key concepts in play in human health research (see Section IA), because this provides a solid platform on which to conduct multidisciplinary, multisector discussions about what is important, what is at risk, and what accommodations should be made to take into account the range of interests that are engaged in health research. This is a further reason why, in Section IIA, our contributors engage critically and at length with the private and public dimensions of health research regulation.

Two important top-level lessons arise from this volume:

  • There is considerable value in taking a multi- and inter-disciplinary approach to systems design that places bioethics, social sciences and humanities at the centre of discussions, because these disciplinary perspectives are crucial to ensuring that the human remains at the focus of human health research; indeed, an aspiration to trans-disciplinary contributions would not go amiss here.

  • There is a need for further and fuller enquiry into ways in which the values underpinning healthcare and health research do, and do not, align, and how these can be mobilised to improve regulatory design.

On this last point, we can look to recent initiatives in Europe and the UK that have as their focus ‘regulatory science’ and we can ask further how the contributions in this volume can add to these debates.

A July 2020 report from the UK advocated for innovation in ‘regulatory science’ as it relates to healthcare in order to complement the nation’s industrial strategy; to enable accelerated routes to market; to increase benefits to public health; to assure greater levels of patient safety; to influence international practice; and to promote investment in the UK (Executive Summary).Footnote 12 ‘Regulatory science’ is defined therein as ‘[t]he application of the biological, medical and sociological sciences to enhance the development and regulation of medicines and devices in order to meet the appropriate standards of quality, safety and efficacy’.Footnote 13 The authors prefer this definition among others as a good starting point for further deliberation and action, both for its breadth and inclusiveness as to what should be considered to be in play. The report offers a very full account of the present regulatory landscape in the UK and offers a strong set of recommendations for improvement in four areas: (i) strategic leadership and coordinated support, (ii) enabling innovation, (iii) implementation and evaluation, and (iv) workforce development. However, a striking omission from the report is any direct and explicit mention of how ‘sociological sciences’, let alone bioethical inquiry, can contribute to the delivery of these objectives.

Similarly, in March 2020, the European Medicines Agency (EMA) published its strategy, ‘Regulatory Science to 2025’. The stated aim is ‘to build a more adaptive regulatory system that will encourage innovation in human and veterinary medicine’. For the EMA, regulatory science refers to

the range of scientific disciplines that are applied to the quality, safety and efficacy assessment of medicinal products and that inform regulatory decision-making throughout the lifecycle of a medicine. It encompasses basic and applied biomedical and social sciences and contributes to the development of regulatory standards and tools.Footnote 14

As with the UK report, however, there is no more than a cursory mention of the concrete ways in which social sciences and bioethics contribute to these objectives.Footnote 15

This returns us to the key questions that frame this Afterword: where is the human in human health research? Also, what would a whole-system approach look like when we begin with the human values at stake and design systems accordingly?

3 From Value to Values

Currently, there is extensive discussion and funding of data-driven innovation, and undoubtedly, there is considerable value in the raw, aggregate, and Big Data themselves. However, given that in biomedicine the data in question predominantly come from citizens in the guise of their personal data emanating from a growing number of areas of their private lives, it is the contention of this Afterword, and indeed the tenor of this entire volume, that it is ethics and values that must drive the regulation that accompanies the data science, and not a science paradigm. As the conclusion to the Introduction of this volume makes clear, public trust is vital to the success of the biomedical endeavour, and any system of regulation of biomedical research must prove itself to be trustworthy. As further demonstrated by various contributions to this volume,Footnote 16 a failure to address underlying public values and concerns in health research and wider uses of citizens’ data can result in a net failure to secure social licence and doom the initiatives themselves.Footnote 17 This has been reinforced most recently in February 2020 by an independent report commissioned by Understanding Patient Data and National Health Service (NHS) England that found, among other things, that NHS data sharing should be undertaken by partnerships that are transparent and accountable, and that are governed by a set of shared principles (principles being a main way in which values are captured and translated into starting points for further deliberation and action).Footnote 18

Thus, we posit that any learning system for human health research must be values-driven. To reiterate, this explains and justifies the contributions in Section IA of the volume that seek to identify and examine the key values and core concepts that are at stake. Normatively, it would not be helpful or appropriate for this chapter to attempt to suggest or prescribe any particular configuration of values to deliver a justifiable learning system. This depends on myriad social, cultural, economic, institutional and ethical factors within a given country or jurisdiction seeking to implement an effective system for itself. Rather, we suggest that values engagement is required amongst all stakeholders implicated in, and affected by, such a system in its given context, and this is the work done by Section IB of the volume in identifying key actors, including publics, and demonstrating through examples how regulatory tools and concepts have been used to date to regulate human health research. Many of these remain valid and appropriate after years of experience, albeit that the analysis herein also reveals limitations and caveats to existing approaches, for example with consentFootnote 19 and proportionality,Footnote 20 while also demonstrating the means by which institutions can show trustworthinessFootnote 21 and/or conduct meaningful engagement with publics and other stakeholders.Footnote 22

But to understand what it means for a system to be truly effective in self-reflection and learning, we can borrow once again from discussion in the learning healthcare context. As Foley and Fairmichael have pointed out: ‘Learning Healthcare Systems can take many forms, but each follows a similar cycle of assembling, analysing and interpreting data, followed by feeding it back into practice and creating a change’.Footnote 23 The same is true for an LHRRS. Thus, a learning system is one that consists not only of processes designed to deliver particular outcomes, but also one that has feedback loopsFootnote 24 and processes of capturing evidence of what has worked less well.Footnote 25 Self-evidently from the above discussion, ‘data’ in this context will include data and information about values failureFootnote 26 or incidents or points in the regulatory processes where sight has been lost of the original values that underpin the entire enterprise.

From the perspective of regulatory theory and practice, this relates to the ever-present issue of sequencing: when, and at what point in a series of processes, should certain actions or instruments be engaged to promote key regulatory objectives?Footnote 27 In regulatory theory, sequencing is often concerned with escalation of regulatory intervention, that is, invoking a particular regulatory response when (and only when) other regulatory responses fail. However, this need not be the case. Early and sequential feedback loops in the design and delivery of a system can help to prevent wider systemic failure at a later point in time. This is especially the case if ethical sensitivities to core values remain logically prior to techno-scientific considerations of risk management and are part of risk-benefit analysis.Footnote 28 Indeed, as pointed out by Swierstra and Rip, human agency can make a difference at an early stage of development/innovation, when issues and directions are still unclear, but much less so in later stages when ‘alignments have sedimented’.Footnote 29

Key among the ethical objectives of any health research system is the need to deliver social value (or at least that prospective research has a reasonable chance of doing so).Footnote 30 Some of us have argued elsewhere that there is at present an unmet need to appraise social value iteratively throughout the entire research lifecycle,Footnote 31 and this builds on existing arguments to see social value as a dynamic concept. The implications of this for a LHRRS are that the research ecosystem would extend from the research design stage through publication and dissemination of research results, to data storage and sharing of findings and new data for future research. This means that social value is not merely something promissory and illusive that is dangled before a research ethics committee as it pores over a research protocol,Footnote 32 but that it is potentially generated and transformed multiple times and by a range of actors throughout the entire process of research: from idea to impact. Seen in this way, social value itself becomes a potential metric of success (or failure) of a learning health research system, and opens the possibility that value might emerge at times and in spaces previously unforeseen. Indeed, Section IIB of this volume is replete with examples of the importance of time within good governance and regulation, whether this be about timely research interventions in the face of emergencies,Footnote 33 the appropriateness and timing of effective oversight of clinical innovation,Footnote 34 or the challenge of ‘evidence’ when attempting to regulate traditional and non-conventional medicines.Footnote 35

As a final crucial point about how an ethical ‘system’ might be constructed with legitimacy and with a view to justice for all, we cannot overlook what Kipnis has called infrastructural vulnerability:

At the structural level, essential political, legal, regulative, institutional, and economic resources may be missing, leaving the subject open to heightened risk. The question for the researcher is, ‘Does the political, organizational, economic, and social context of the research setting possess the integrity and resources needed to manage the study?’

… [c]learly the possibility of infrastructural vulnerability calls for attention to the contexts within which the research will be done.Footnote 36

Questions of the meanings and implications of vulnerability are addressed early in this volume as a crucial framing for the entire volume.Footnote 37 It is also clear that this concern is not one for researchers alone. This brings us to the important question: who is implicated in the design and delivery of an LHRRS ecosystem?

4 Who is Implicated in this Ecosystem, and With Which Consequences?

In 2019, Wellcome published its Blueprint for Dynamic Oversight of emerging science and technologies.Footnote 38 This is aimed determinedly at the UK government, and it is founded on four principles to which few could take exception.

Dynamic oversight can be delivered by reforms underpinned by the following principles:

Inclusive: Public groups need to be involved from an early stage to improve the quality of oversight while making it more relevant and trustworthy. The Government should support regulators to involve public groups from an early stage and to maintain engagement as innovation and its oversight is developed.

Anticipatory: Identifying risks and opportunities early makes it easier to develop a suitable approach to oversight. Emerging technology often develops quickly and oversight must develop with it. UK regulators must be equipped by government to anticipate and monitor emerging science and technologies to develop and iterate an appropriate, proportionate approach.

Innovative. Testing experimental oversight approaches provides government and regulators with evidence of real-world impacts to make oversight better. Achieving this needs good collaboration between regulators, industry, academia and public groups. The UK is beginning to support innovative approaches, but the Government needs to create new incentives for the testing of new oversight approaches.

Proportionate. Oversight should foster the potential benefits of emerging science and technologies at the same time as protecting against harms, by being proportionate to predicted risk. The UK should keep up its strong track record in delivering proportionate oversight. These changes will only be delivered effectively if there is clear leadership and accountability for oversight. This requires the Government to be flexible and decisive in responding to regulatory gaps.

Wellcome, A Blueprint for Dynamic Oversight (2019)

We can contrast this top-down framework with a bottom-up study conducted by the members of the Liminal Spaces team as part of the project funding this volume. The team undertook a Delphi policyFootnote 39 study to generate empirical data and a cross-cutting analysis of health research regulation as experienced by stakeholders in the research environment in the United Kingdom. In short, the project found that:

[t]he evidence supports the normative claim that health research regulation should continue to move away from strict, prescriptive rules-based approaches, and towards flexible principle-based regimesFootnote 40 that allow researchers, regulators and publics to coproduce regulatory systems serving core principles.Footnote 41

As a concrete illustration of why this is important, we can consider the last criterion listed as part of the Wellcome Dynamic Oversight framing: proportionality. The Delphi study revealed novel insights about how proportionality as a regulatory tool is seen and operationalised in practice. In contrast to the up-front risk management framing offered above, the Delphi findings suggest that proportionality is often treated as an ethical assessment of the values and risks at stake at multiple junctures in the research trajectory. That is, while it can be easy to reduce proportionality to a techno-bureaucratic risk/benefit assessment, this is to miss the point that the search for proportionality is a moral assessment of whether, when, and how to proceed in the face of uncertainty. Furthermore, the realisation that a role for proportionality can arise at multiple junctures in the research ecosystem, including the phases concerning data accessFootnote 42 and potential feedback of results to research participants, highlights that the actors involved in these processes are diverse and often unconnected. For example, Delphi participants frequently stated that reporting of adverse events was a downstream disproportionate activity:

the definitions of adverse events result in vast numbers of daily events being classed as reportable with result that trials gets bogged down in documenting the utter unrelated trivia that are common in patients with some disorders and unrelated to the drug to the neglect of collecting complete and high quality baseline and outcome data on which the reliability of the results depend (25, researcher).Footnote 43

However, this should be contrasted with the possible identity interests of patients and citizens, which can be impacted by (non)access to biomedical information about them, as argued elsewhere in this volume.Footnote 44 The ethics of what is in play are by no means clear-cut. The implication, then, is that a regulatory tool such as proportionality might have far wider reach and significance than has previously been thought; as part of an LHRRS, this not only has consequences for a wider range of actors, but also means that the ethical dimensions and sensitivities that surround their (in)action must be duly accounted for.

Regarding possible means to navigate growing complexity within a research regulation ecosystem, some further valuable ideas emerged from the Delphi study. For example, one participant supported the notion of ‘regulators etc. becoming helpers and guiding processes to make approval more feasible. Whilst having a proportionate outlook’ (27, clinician). Other survey respondents called for ‘networked governance’ whereby, among other things, ‘regulatory agencies in health (broadly understood) would need to engage more with academics and charities, and to look to utilise a broader range of expertise in designing and implementing governance strategies and mechanisms’ (5, researcher).Footnote 45

Manifestly, all of this suggests that a robustly designed LHRRS is a complex beast. In the final part of this section, we offer regulatory stewardship as a means of better navigating this complexity for researchers, sponsors, funders and publics, and of closing feedback loops for all stakeholders.

Regulatory stewardshipFootnote 46 has no unitary meaning, but our previous research has demonstrated that examples from the literature nevertheless point to a commonality of views that cast stewardship as being about ‘guiding others with prudence and care across one or more endeavours – without which there is risk of impairment or harm – and with a view to collective betterment’.Footnote 47 More work needs to be done on whether and when this role is already undertaken within research ecosystems by certain key actors who may neither see themselves as performing such a task nor receive credit for it. One of the Liminal Spaces team has argued that ethics review bodies take on this role to a certain extent – empirical evidence from NHS Research Ethics Committees (RECs) in the UK suggests a far more supportive and less combative relationship with researchers than is anecdotally reported.Footnote 48 However, by definition, ethics review bodies can only operate largely at the beginning of the research lifecycle – who is there to assess whether social value was ever actually realised, let alone maximised for the range of potential beneficiaries, including the redressing of social injustices relating to health and even health/wealth generation?

Further empirical research has shown that productive regulation is often only ‘instantiated’ through practice;Footnote 49 that is, it is generated as a by-product of genuine cooperation between regulators and a range of other actors, including researchers, as they attempt to give effect to regulatory rules or statutory diktats. We suggest, therefore, that there might be a role for regulatory stewardship as part of an LHRRS as a means of giving effect to the multiple dimensions that must interact if such a system is to operate in a genuinely responsive, self-reflexive and institutionallyFootnote 50 auto-didactic way.

5 What Could a Learning Health Research Regulation System Look Like?

In light of the above, we suggest that the following key features are examples of what we might expect to find in an LHRRS:

  • A system that is values-driven, wherein the foundational values of the system reflect those of the range of stakeholders involved;

  • A demonstrable commitment to inclusivity and meaningful participation in regulatory design, assessment and reform, particularly from patients and publics;

  • Robust mechanisms for evidence gathering to support assessment and review of the workings of regulatory processes and relevant laws;

  • Systems-level interconnectivity to learn lessons across regulatory siloes, perhaps supported by a robust system of regulatory stewardship;

  • Clear lines of responsibility and accountability of actors across the entire trajectory of the research enterprise;

  • Coordinated efforts to ensure ethical and regulatory reflexivity, that is, processes of self-referential examination and action that require institutions and actors to look back at their own regulatory practices, successes and failures;

  • Existence of, and where appropriate closing of, regulatory feedback loops to deliver authentic learning back to the system and to its users;

  • Appropriate incentives for actors to contribute to the whole-system approach, whether through recognition, reward or other means, eschewing a compliance culture driven by fear of sanction and supplanting it with a system that seeks out and celebrates best practice while not overlooking errors and the lessons of failure.Footnote 51

  • Transparency and demonstrated trustworthiness in the integrity of the regulatory system as a whole;

  • Regulatory responsiveness to unanticipated events (particularly those that are high risk as to both probability and magnitude of impact). The COVID-19 pandemic is one such example – the clamour for a vaccine puts existing systems of regulation and protection under considerable strain, not least because of the truncated timeframe for results that is now expected. Values failure in the system itself is something to be avoided at all costs when such events beset our regulatory systems.

6 Conclusion

As indicated at the outset of this volume, the golden thread that runs through the contributions is the challenge of examining the possible contours of a Learning Health Research Regulation System. This, admittedly ambitious, task cannot be done justice in a single Afterword, and each chapter in this volume should be read on its own individual merits. Notwithstanding, an attempt has been made here to draw elements together that reinforce – and at times challenge – other work in the field that is concerned with how systems learn, and to suggest possible ways forward for human health research. And, even if the ambition of a fully-integrated learning system is too vaulting, we suggest nonetheless that adopting a Whole System Approach to health research regulation can promote more joined-up, reflective and responsive systems of regulation. By Whole System Approach we mean that regulatory attention should be paid to capturing and sharing evidence across the entire breadth and complexity of health research, not just of what works well and what does not, but principally of identifying where, when, and how human values are engaged across the entire research lifespan. This approach, we contend, holds the strongest prospect of delivering on the twin ambitions of protecting research participants as robustly as possible while promoting the social value of human health research as widely as possible.

Footnotes

23 Changing Identities in Disclosure of Research Findings

1 This chapter will not discuss responsibilities actively to pursue findings, or disclosures to family members in genetic research, nor is it concerned with feedback of aggregate findings. For discussion of researchers’ experiences of encountering and disclosing incidental findings in neuroscience research see Pickersgill, Chapter 31 in this volume.

2 S. M. Wolf et al., ‘Managing Incidental Findings in Human Subjects Research: Analysis and Recommendations’, (2008) The Journal of Law, Medicine & Ethics, 36(2), 219–248.

3 L. Eckstein et al., ‘A Framework for Analyzing the Ethics of Disclosing Genetic Research Findings’, (2014) The Journal of Law, Medicine & Ethics, 42(2), 190–207.

4 B. E. Berkman et al., ‘The Unintended Implications of Blurring the Line between Research and Clinical Care in a Genomic Age’, (2014) Personalized Medicine, 11(3), 285–295.

5 E. Parens et al., ‘Incidental Findings in the Era of Whole Genome Sequencing?’, (2013) Hastings Center Report, 43(4), 16–19.

6 For example, in addition to sources cited elsewhere in this chapter, see R. R. Fabsitz et al., ‘Ethical and Practical Guidelines for Reporting Genetic Research Results to Study Participants’, (2010) Circulation: Cardiovascular Genetics, 3(6), 574–580; G. P. Jarvik et al., ‘Return of Genomic Results to Research Participants: The Floor, the Ceiling, and the Choices in Between’, (2014) The American Journal of Human Genetics, 94(6), 818–826.

7 C. Weiner, ‘Anticipate and Communicate: Ethical Management of Incidental and Secondary Findings in the Clinical, Research, and Direct-to-Consumer Contexts’, (2014) American Journal of Epidemiology, 180(6), 562–564.

8 Medical Research Council and Wellcome Trust, ‘Framework on the Feedback of Health-Related Findings in Research’, (Medical Research Council and Wellcome Trust, 2014).

9 Berkman et al., ‘The Unintended Implications’.

10 Eckstein et al., ‘A Framework for Analyzing’.

11 Wolf et al., ‘Managing Incidental Findings’.

13 Eckstein et al., ‘A Framework for Analyzing’.

14 Medical Research Council and Wellcome Trust, ‘Framework on the Feedback’.

16 Berkman et al., ‘The Unintended Implications’.

17 S. M. Wolf et al., ‘Mapping the Ethics of Translational Genomics: Situating Return of Results and Navigating the Research-Clinical Divide’, (2015) Journal of Law, Medicine & Ethics, 43(3), 486–501.

18 G. Laurie and N. Sethi, ‘Towards Principles–Based Approaches to Governance of Health–Related Research Using Personal Data’, (2013) European Journal of Risk Regulation, 4(1), 43–57. Genomics England, ‘The 100,000 Genomes Project’, (Genomics England), www.genomicsengland.co.uk/about-genomics-england/the-100000-genomes-project/.

19 Eckstein et al., ‘A Framework for Analyzing’.

20 A. L. Bredenoord et al., ‘Disclosure of Individual Genetic Data to Research Participants: The Debate Reconsidered’, (2011) Trends in Genetics, 27(2), 41–47.

21 Wolf et al., ‘Mapping the Ethics’.

22 In the UK, the expected standard of the duty of care is assessed by reference to what reasonable members of the profession would do, as well as what recipients want to know (see C. Johnston and J. Kaye, ‘Does the UK Biobank Have a Legal Obligation to Feedback Individual Findings to Participants?’, (2004) Medical Law Review, 12(3), 239–267).

23 D. I. Shalowitz et al., ‘Disclosing Individual Results of Clinical Research: Implications of Respect for Participants’, (2005) JAMA, 294(6), 737–740.

24 Laurie and Sethi, ‘Towards Principles–Based Approaches’.

25 G. Laurie and E. Postan, ‘Rhetoric or Reality: What Is the Legal Status of the Consent Form in Health-Related Research?’, (2013) Medical Law Review, 21(3), 371–414.

26 Odièvre v. France (App. no. 42326/98) [2003] 38 EHRR 871; ABC v. St George’s Healthcare NHS Trust & Others [2017] EWCA Civ 336.

27 J. Marshall, Personal Freedom through Human Rights Law?: Autonomy, Identity and Integrity Under the European Convention on Human Rights (Leiden: Brill, 2008).

28 A. M. Farrell and M. Brazier, ‘Not So New Directions in the Law of Consent? Examining Montgomery v Lanarkshire Health Board’, (2016) Journal of Medical Ethics, 42(2), 85–88.

29 G. Laurie, ‘Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces In-Between?’, (2016) Medical Law Review, 25(1), 47–72.

30 J. Kaye et al., ‘From Patients to Partners: Participant-Centric Initiatives in Biomedical Research’, (2012) Nature Reviews Genetics, 13(5), 371.

31 J. Harris, ‘Scientific Research Is a Moral Duty’, (2005) Journal of Medical Ethics, 31(4), 242–248.

32 S. C. Davies, ‘Chief Medical Officer Annual Report 2016: Generation Genome’, (Department of Health and Social Care, 2017), p. 4.

33 Wolf et al., ‘Mapping the Ethics’.

34 F. G. Miller et al., ‘Incidental Findings in Human Subjects Research: What Do Investigators Owe Research Participants?’, (2008) The Journal of Law, Medicine & Ethics, 36(2), 271–279.

36 In Chapter 39 of this volume, Shawn Harmon presents a parallel argument that medical device regulations are similarly premised on a narrow conception of harm that fails to account for identity impacts.

37 Eckstein et al., ‘A Framework for Analyzing’.

38 E. Postan, ‘Defining Ourselves: Personal Bioinformation as a Tool of Narrative Self-Conception’, (2016) Journal of Bioethical Inquiry, 13(1), 133–151.

39 M. Schechtman, The Constitution of Selves (New York: Cornell University Press, 1996).

40 Postan, ‘Defining Ourselves’.

41 C. Mackenzie, ‘Introduction: Practical Identity and Narrative Agency’ in K. Atkins and C. Mackenzie (eds), Practical Identity and Narrative Agency (Abingdon: Routledge, 2013), pp. 1–28.

42 L. d’Agincourt-Canning, ‘Genetic Testing for Hereditary Breast and Ovarian Cancer: Responsibility and Choice’, (2006) Qualitative Health Research, 16(1), 97–118.

43 P. Carter et al., ‘The Social Licence for Research: Why care.data Ran into Trouble’, (2015) Journal of Medical Ethics, 41(5), 404–409.

44 E. M. Bunnik et al., ‘Personal Utility in Genomic Testing: Is There Such a Thing?’, (2014) Journal of Medical Ethics, 41(4), 322–326.

45 S. M. Wolf et al., ‘Managing Incidental Findings and Research Results in Genomic Research Involving Biobanks and Archived Data Sets’, (2012) Genetics in Medicine, 14(4), 361–384.

24 Health Research and Privacy through the Lens of Public Interest A Monocle for the Myopic?

1 The idea that both privacy and health research may be described as ‘public interest causes’ is also compellingly developed in W. W. Lowrance, Privacy, Confidentiality, and Health Research (Cambridge University Press, 2012), and the relationship between privacy and the public interest in C. D. Raab, ‘Privacy, Social Values and the Public Interest’ in A. Busch and J. Hofmann (eds), Politik und die Regulierung von Information [Politics and the Regulation of Information] (Baden-Baden, Germany: Politische Vierteljahresschrift, Sonderheft 46, 2012), pp. 129–151.

2 V. P. Held, The Public Interest and Individual Interests (New York: Basic Books, 1970).

3 F. J. Sorauf, ‘The Public Interest Reconsidered’, (1957) The Journal of Politics, 19(4), 616–639.

4 M. J. Taylor, ‘Health Research, Data Protection, and the Public Interest in Notification’, (2011) Medical Law Review, 19(2), 267–303; M. J. Taylor and T. Whitton, ‘Public Interest, Health Research and Data Protection Law: Establishing a Legitimate Trade-Off between Individual Control and Research Access to Health Data’, (2020) Laws, 9(1), 6.

5 M. Meyerson and E. C. Banfield, cited by Sorauf ‘The Public Interest Reconsidered’, 619.

6 J. Bell, ‘Public Interest: Policy or Principle?’ in R. Brownsword (ed.), Law and the Public Interest: Proceedings of the 1992 ALSP Conference (Stuttgart: Franz Steiner, 1993) cited in M. Feintuck, Public Interest in Regulation (Oxford University Press, 2004), p. 186.

7 There is a connection here with what has been described by Rawls as ‘public reasons’: limited to premises and modes of reasoning that are accessible to the public at large. L. B. Solum, ‘Public Legal Reason’, (2006) Virginia Law Review, 92(7), 1449–1501, 1468.

8 ‘The virtue of public reasoning is the cultivation of clear and explicit reasoning orientated towards the discovery of common grounds rather than in the service of sectional interests, and the impartial interpretation of all relevant available evidence.’ Nuffield Council on Bioethics, ‘Public Ethics and the Governance of Emerging Biotechnologies’, (Nuffield Council on Bioethics, 2012), 69.

9 G. Gaus, The Order of Public Reason: A Theory of Freedom and Morality in a Diverse and Bounded World (Cambridge University Press, 2011), p. 19. Note the distinction Gaus draws here between the Restricted and the Expansive view of Freedom and Equality.

10 We here associate legitimacy with ‘the capacity of the system to engender and maintain the belief that the existing political institutions are the most appropriate ones for the society’, S. M. Lipset, Political Man: The Social Bases of Politics (Baltimore, MD: Johns Hopkins University Press, 1981 [1959]), p. 64. This is consistent with recognition that the ‘liberal principle of legitimacy states that the exercise of political power is justifiable only when it is exercised in accordance with constitutional essentials that all citizens may reasonably be expected to endorse in the light of principles and ideals acceptable to them as reasonable and rational’, Solum, ‘Public Legal Reason’, 1472. See also D. Curtin and A. J. Meijer, ‘Does Transparency Strengthen Legitimacy?’, (2006) Information Polity, 11(2), 109–122, 112, and M. J. Taylor, ‘Health Research, Data Protection, and the Public Interest in Notification’, (2011) Medical Law Review, 19(2), 267–303.

11 The argument offered is a development of one originally presented in M. J. Taylor, Genetic Data and the Law (Cambridge University Press, 2012), see esp. pp. 29–34.

12 The term ‘accept’ is chosen over ‘prefer’ for good reason. M. J. Taylor and N. C. Taylor, ‘Health Research Access to Personal Confidential Data in England and Wales: Assessing Any Gap in Public Attitude between Preferable and Acceptable Models of Consent’, (2014) Life Sciences, Society and Policy, 10(1), 1–24.

13 A ‘real definition’ is to be contrasted with a nominal definition. A real definition may associate a word or term with elements that must necessarily be associated with the referent (a priori). A nominal definition may be discovered by investigating word usage (a posteriori). For more, see Stanford Encyclopedia of Philosophy, ‘Definitions’, (Stanford Encyclopedia of Philosophy, 2015), www.plato.stanford.edu/entries/definitions/.

14 G. Laurie recognises privacy to be a state of non-access. G. Laurie, Genetic Privacy: A Challenge to Medico-Legal Norms (Cambridge University Press, 2002), p. 6. We prefer the term ‘exclusivity’ rather than ‘separation’ as it recognises that a lack of separation in one aspect does not deny a privacy claim in another. E.g. one’s normative expectations regarding use and disclosure are not necessarily weakened by sharing information with health professionals. For more see M. J. Taylor, Genetic Data and the Law: A Critical Perspective on Privacy Protection (Cambridge University Press, 2012), pp. 13–40.

15 See, C. J. Bennett and C. D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Ashgate, 2003), p. 13.

17 S. T. Margulis, ‘Conceptions of Privacy: Current Steps and Next Steps’, (1977) Journal of Social Issues, 33(3), 5–21, 10.

18 S. T. Margulis, ‘Privacy as a Social Issue and a Behavioural Concept’, (2003) Journal of Social Issues, 9(2), 243–261, 245.

19 Department of Health, ‘Summary of Responses to the Consultation on the Additional Uses of Patient Data’, (Department of Health, 2008).

20 Our argument has no application to aggregate data that does not relate to a group until or unless that association is made.

21 A number are described for example by V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor (New York: St Martin’s Press, 2018).

22 See e.g. Foster v. Mountford (1976) 14 ALR 71. (Australia)

23 An example of the kind of common purpose that privacy may serve relates to the protection of culturally significant information. A well-known example of this is the harm associated with the research conducted with the Havasupai Tribe in North America. R. Dalton, ‘When Two Tribes Go to War’, (2004) Nature, 430(6999), 500–502; A. Harmon, ‘Indian Tribe Wins Fight to Limit Research of Its DNA’, The New York Times (21 April 2010). Similar concerns had been expressed by the Nuu-chahnulth of Vancouver Island, Canada, when genetic samples provided for one purpose (to discover the cause of rheumatoid arthritis) were used for other purposes. J. L. McGregor, ‘Population Genomics and Research Ethics with Socially Identifiable Groups’, (2007) Journal of Law and Medicine, 35(3), 356–370, 362. Proposals to establish a genetic database on Tongans floundered when the ethics policy focused on the notion of individual informed consent and failed to take account of the traditional role played by the extended family in decision-making. B. Burton, ‘Proposed Genetic Database on Tongans Opposed’, (2002) BMJ, 324(7335), 443.

24 P. M. Regan, Legislating Privacy: Technology, Social Values, and Public Policy (University of North Carolina Press, 1995) p. 221.

25 L. Taylor et al. (eds), Group Privacy: New Challenges of Data Technologies (New York: Springer, 2017), p. 5.

26 Taylor et al., ‘Group Privacy’, p. 7.

27 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, Strasbourg, 28 January 1981, in force 1 October 1985, ETS No. 108, Protocol CETS No. 223.

28 To protect every individual, whatever his or her nationality or residence, with regard to the processing of their personal data, thereby contributing to respect for his or her human rights and fundamental freedoms, and in particular the right to privacy.

29 ‘Convention for the Protection of Individuals’, Article 2(a).

31 M. J. Taylor and T. Whitton, ‘Public Interest, Health Research and Data Protection Law: Establishing a Legitimate Trade-Off between Individual Control and Research Access of Health Data,’ (2020) Laws, 9(6), 1–24, 17–19; J. Rawls, The Law of Peoples (Harvard University Press, 1999), pp. 129–180.

32 E.g. The conception of public interest proposed in this chapter would allow concerns associated with processing in a third country, or an international organisation, to be taken into consideration where associated with issues of group privacy. Article 49(5) of the General Data Protection Regulation, Regulation (EU) 2016/679, OJ L 119, 4 May 2016.

33 Although data protection law seeks to protect fundamental rights and freedoms, in particular the right to respect for a private life, it does not collapse the concepts of data protection and privacy.

34 Earl Spencer v. United Kingdom [1998] 25 EHRR CD 105.

35 W v. Egdell [1993] 1 All ER 835. Ch. 359.

36 R (W, X, Y and Z) v. Secretary of State for Health [2015] EWCA Civ 1034, [48].

37 Campbell v. MGN Ltd [2004] UKHL 22, [2004] All ER (D) 67 (May), per Lord Nicholls [11].

38 Ibid. [14].

39 Ibid. [50].

40 Niemietz v. Germany [1992] 13710/88 [29].

41 Regan, ‘Legislating Privacy’, p. 8.

42 [1999] EWCA Civ 3011.

43 M. J. Taylor, ‘R (ex p. Source Informatics) v. Department of Health [1999]’ in J. Herring and J. Wall (eds), Landmark Cases in Medical Law (Oxford: Hart, 2015), pp. 175–192; D. Beyleveld, ‘Conceptualising Privacy in Relation to Medical Research Values’ in S. A. M. MacLean (ed.), First Do No Harm (Farnham, UK: Ashgate, 2006), p. 151. It is interesting to consider how English Law may have something to learn in this respect from the Australian courts e.g. Foster v. Mountford (1976) 14 ALR 71.

44 M. Richardson, The Right to Privacy (Cambridge University Press, 2017), p. 120.

45 Ibid., p. 122.

46 Ibid., p. 119.

25 Mobilising Public Expertise in Health Research Regulation

1 NHS, ‘News: NHS England sets out the next steps of public awareness about care.data’, (NHS, 2013), www.england.nhs.uk/2013/10/care-data/.

2 F. Godlee, ‘What Can We Salvage From care.data?’, (2016) BMJ, 354(i3907).

3 F. Caldicott et al., ‘Information: To Share Or Not to Share? The Information Governance Review’, (UK Government Publishing Service, 2013).

4 A. Irwin and B. Wynne, Misunderstanding Science. The Public Reconstruction of Science and Technology (Abingdon: Routledge, 1996); S. Locke, ‘The Public Understanding of Science – A Rhetorical Invention’, (2002) Science Technology & Human Values, 27(1), 87–111.

5 K. C. O’Doherty and M. M. Burgess, ‘Developing Psychologically Compelling Understanding of the Involvement of Humans in Research’, (2019) Human Arenas 2(6), 118.

6 J. F. Caron-Flinterman et al., ‘The Experiential Knowledge of Patients: A New Resource for Biomedical Research?’, (2005) Social Science and Medicine, 60(11), 2575–2584; M. De Wit et al., ‘Involving Patient Research Partners has a Significant Impact on Outcomes Research: A Responsive Evaluation of the International OMERACT Conferences’, (2013) BMJ Open, 3(5); S. Petit-Zeman et al., ‘The James Lind Alliance: Tackling Research Mismatches’, (2010) Lancet, 376(9742), 667–669; J. A. Sacristan et al., ‘Patient Involvement in Clinical Research: Why, When, and How’, (2016) Patient Preference and Adherence, 2016(10), 631–640.

7 C. Mitton et al., ‘Health Technology Assessment as Part of a Broader Process for Priority Setting and Resource Allocation’, (2019) Applied Health Economics and Health Policy, 17(5), 573–576.

8 M. Aitken et al., ‘Consensus Statement on Public Involvement and Engagement with Data-Intensive Health Research’, (2018) International Journal of Population Data Science, 4(1), 16; C. Bentley et al., ‘Trade-Offs, Fairness, and Funding for Cancer Drugs: Key Findings from a Public Deliberation Event in British Columbia, Canada’, (2018) BMC Health Services Research, 18(1), 339–362; S. M. Dry et al., ‘Community Recommendations on Biobank Governance: Results from a Deliberative Community Engagement in California’, (2017) PLoS ONE 12(2), 114; R. E. McWhirter et al., ‘Community Engagement for Big Epidemiology: Deliberative Democracy as a Tool’, (2014) Journal of Personalized Medicine, 4(4), 459–474.

9 J. Brett et al., ‘Mapping the Impact of Patient and Public Involvement on Health and Social Care Research: A Systematic Review’, (2012) Health Expectations, 17(5), 637–650; R. Gooberman-Hill et al., ‘Citizens’ Juries in Planning Research Priorities: Process, Engagement and Outcome’, (2008) Health Expectations, 11(3), 272–281; S. Oliver et al., ‘Public Involvement in Setting a National Research Agenda: A Mixed Methods Evaluation’, (2009) Patient, 2(3), 179–190.

10 S. Sherwin, ‘Toward Setting an Adequate Ethical Framework for Evaluating Biotechnology Policy’, (Canadian Biotechnology Advisory Committee, 2001). As cited in M. M. Burgess and J. Tansey, ‘Democratic Deficit and the Politics of “Informed and Inclusive” Consultation’ in E. Einsiedel (ed.), From Hindsight to Foresight (Vancouver: UBC Press, 2008), pp. 275288.

11 A. Irwin et al., ‘The Good, the Bad and the Perfect: Criticizing Engagement Practice’, (2013) Social Studies of Science, 43(1), 118–135; S. Jasanoff, The Ethics of Invention: Technology and the Human Future (Manhattan, NY: Norton Publishers, 2016); B. Wynne, ‘Public Engagement as a Means of Restoring Public Trust in Science: Hitting the Notes, but Missing the Music?’, (2006) Community Genetics 9(3), 211–220.

12 J. Gastil and P. Levine, The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century (Plano, TX: Jossey-Bass Publishing, 2005).

13 I. M. Young, Inclusion and Democracy (Oxford University Press, 2000), p. 136.

14 M. Berger and B. De Cleen, ‘Interpellated Citizens: Suggested Subject Positions in a Deliberation Process on Health Care Reimbursement’, (2018) Comunicazioni Sociali, 1, 91–103; L. Althusser, ‘Ideology and Ideological State Apparatuses: Notes Towards an Investigation’ in L. Althusser (ed.) Lenin and Philosophy and Other Essays (Monthly Review Press, 1971), pp. 173–174.

15 H. L. Walmsley, ‘Mad Scientists Bend the Frame of Biobank Governance in British Columbia’, (2009) Journal of Public Deliberation, 5(1), Article 6.

16 M. E. Warren, ‘Governance-Driven Democratization’, (2009) Critical Policy Studies, 3(1), 3–13, 10.

17 G. Smith and C. Wales, ‘Citizens’ Juries and Deliberative Democracy’, (2000) Political Studies, 48(1), 51–65.

18 S. Chambers, ‘Deliberative Democratic Theory’, (2003) Annual Review of Political Science, 6, 307–326.

19 M. M. Burgess et al., ‘Assessing Deliberative Design of Public Input on Biobanks’ in S. Dodds and R. A. Ankeny (eds) Big Picture Bioethics: Developing Democratic Policy in Contested Domains (Switzerland: Springer, 2016), pp. 243–276.

20 R. E. Goodin and J. S. Dryzek, ‘Deliberative Impacts: The Macro-Political Uptake of Mini-Publics’, (2006) Politics & Society, 34(2), 219–244.

21 H. Longstaff and M. M. Burgess, ‘Recruiting for Representation in Public Deliberation on the Ethics of Biobanks’, (2010) Public Understanding of Science, 19(2), 212–24.

22 D. Steel et al., ‘Multiple Diversity Concepts and Their Ethical-Epistemic Implications’, (2018) The British Journal for the Philosophy of Science, 8(3), 761–780.

23 K. Beier et al., ‘Understanding Collective Agency in Bioethics’, (2016) Medicine, Health Care and Philosophy, 19(3), 411–422.

24 Longstaff and Burgess, ‘Recruiting for Representation’.

25 S. M. Dry et al., ‘Community Recommendations on Biobank Governance’.

26 Burgess et al., ‘Assessing Deliberative Design’, pp. 270–271.

27 A. Kadlec and W. Friedman, ‘Beyond Debate: Impacts of Deliberative Issue Framing on Group Dialogue and Problem Solving’, (Center for Advances in Public Engagement, 2009); H. L. Walmsley, ‘Mad Scientists Bend the Frame of Biobank Governance in British Columbia’, (2009) Journal of Public Deliberation, 5(1), Article 6.

28 M. M. Burgess, ‘Deriving Policy and Governance from Deliberative Events and Mini-Publics’ in M. Howlett and D. Laycock (eds), Regulating Next Generation Agri-Food Biotechnologies: Lessons from European, North American and Asian Experiences (Abingdon: Routledge, 2012), pp. 220–236; D. Nicol et al., ‘Understanding Public Reactions to Commercialization of Biobanks and Use of Biobank Resources’, (2016) Social Sciences and Medicine, 162, 79–87.

29 J. Abelson et al., ‘Bringing ‘The Public’ into Health Technology Assessment and Coverage Policy Decisions: From Principles to Practice’, (2007) Health Policy, 82(1), 37–50; T. Nabatchi et al., Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement (Oxford University Press, 2012).

30 D. Caluwaerts and M. Reuchamps, The Legitimacy of Citizen-led Deliberative Democracy: The G1000 in Belgium (Abingdon: Routledge, 2018).

31 G. Gigerenzer and P. M. Todd, ‘Ecological Rationality: The Normative Study of Heuristics’, in P. M. Todd and G. Gigerenzer (eds), Ecological Rationality: Intelligence in the World (Oxford University Press, 2012).

32 E. Christofides et al., ‘Heuristic Decision-Making About Research Participation in Children with Cystic Fibrosis’, (2016) Social Science & Medicine, 162, 32–40; O’Doherty and Burgess, ‘Developing Psychologically Compelling Understanding’; M. M. Burgess and K. C. O’Doherty, ‘Moving from Understanding of Consent Conditions to Heuristics of Trust’, (2019) American Journal of Bioethics, 19(5), 24–26.

26 Towards Adaptive Governance in Big Data Health Research Implementing Regulatory Principles

1 fitbit Inc., ‘National Institutes of Health Launches Fitbit Project as First Digital Health Technology Initiative in Landmark All of Us Research Program (Press Release)’, (fitbit, 2019).

2 D. C. Collins et al., ‘Towards Precision Medicine in the Clinic: From Biomarker Discovery to Novel Therapeutics’, (2017) Trends in Pharmacological Sciences, 38(1), 25–40.

3 A. Giddens, The Third Way: The Renewal of Social Democracy (New York: John Wiley & Sons, 2013), p. 69.

4 E. Vayena and A. Blasimme, ‘Health Research with Big Data: Time for Systemic Oversight’, (2018) The Journal of Law, Medicine & Ethics, 46(1), 119–129.

5 C. Folke et al., ‘Adaptive Governance of Social-Ecological Systems’, (2005) Annual Review of Environment and Resources, 30, 441–473.

6 T. Dietz et al., ‘The Struggle to Govern the Commons’, (2003) Science, 302(5652), 1907–1912.

7 C. Ansell and A. Gash, ‘Collaborative Governance in Theory and Practice’, (2008) Journal of Public Administration Research and Theory, 18(4), 543–571.

8 J. J. Warmink et al., ‘Coping with Uncertainty in River Management: Challenges and Ways Forward’, (2017) Water Resources Management, 31(14), 4587–4600.

9 R. J. McWaters et al., ‘The Future of Financial Services-How Disruptive Innovations Are Reshaping the Way Financial Services Are Structured, Provisioned and Consumed’, (World Economic Forum, 2015).

10 R. A. W. Rhodes, ‘The New Governance: Governing without Government’, (1996) Political Studies, 44(4), 652–667.

11 J. Black, ‘The Rise, Fall and Fate of Principles Based Regulation’, (2010) LSE Legal Studies Working Paper, 17.

13 Vayena and Blasimme, ‘Health Research’.

14 B. Walker et al., ‘Resilience, Adaptability and Transformability in Social–Ecological Systems’, (2004) Ecology and Society, 9 (2), 4.

15 E. Vayena and A. Blasimme, ‘Biomedical Big Data: New Models of Control over Access, Use and Governance’, (2017) Journal of Bioethical Inquiry, 14(4), 501–513.

16 See, for example, S. Arjoon, ‘Striking a Balance between Rules and Principles-Based Approaches for Effective Governance: A Risks-Based Approach’, (2006) Journal of Business Ethics, 68(1), 53–82; A. Kezar, ‘What Is More Important to Effective Governance: Relationships, Trust, and Leadership, or Structures and Formal Processes?’, (2004) New Directions for Higher Education, 127, 35–46.

17 J. Rijke et al., ‘Fit-for-Purpose Governance: A Framework to Make Adaptive Governance Operational’, (2012) Environmental Science & Policy, 22, 73–84.

18 R. A. W. Rhodes, Understanding Governance: Policy Networks, Governance, Reflexivity, and Accountability (Buckingham: Open University Press, 1997); R. A. W. Rhodes, ‘Understanding Governance: Ten Years On’, (2007) Organization Studies, 28(8), 1243–1264.

19 F. Gille et al. ‘Future-proofing biobanks’ governance’,  (2020) European Journal of Human Genetics, 28, 989–996.

20 E. Ostrom, ‘A Diagnostic Approach for Going beyond Panaceas’, (2007) Proceedings of the National Academy of Sciences, 104(39), 15181–15187.

21 D. A. DeCaro et al., ‘Legal and Institutional Foundations of Adaptive Environmental Governance’, (2017) Ecology and Society: A Journal of Integrative Science for Resilience and Sustainability, 22 (1), 1.

22 B. Chaffin et al., ‘A Decade of Adaptive Governance Scholarship: Synthesis and Future Directions’, (2014) Ecology and Society, 19(3), 56.

23 Rijke et al., ‘Fit-for-Purpose Governance’.

24 A. Blasimme et al., ‘Democratizing Health Research Through Data Cooperatives’, (2018) Philosophy & Technology, 31(3), 473–479.

25 A. Bandura and R. H. Walters, Social Learning Theory, vol. 1 (Englewood Cliffs, NJ: Prentice-Hall, 1977).

26 Ostrom, ‘A Diagnostic Approach’.

27 D. Swanson et al., ‘Seven Tools for Creating Adaptive Policies’, (2010) Technological Forecasting and Social Change, 77(6), 924–939, 925.

28 D. Berthiau, ‘Law, Bioethics and Practice in France: Forging a New Legislative Pact’, (2013) Medicine, Health Care and Philosophy, 16(1), 105–113.

29 G. Silberman and K. L. Kahn, ‘Burdens on Research Imposed by Institutional Review Boards: The State of the Evidence and Its Implications for Regulatory Reform’, (2011) The Milbank Quarterly, 89(4), 599–627.

30 G. T. Laurie et al., ‘Charting Regulatory Stewardship in Health Research: Making the Invisible Visible’, (2018) Cambridge Quarterly of Healthcare Ethics, 27(2), 333–347.

31 O. O’Neill, ‘Trust with Accountability?’, (2003) Journal of Health Services Research & Policy, 8(1), 3–4.

27 Regulating Automated Healthcare and Research Technologies First Do No Harm (to the Commons)

1 See, further, R. Brownsword, Law, Technology and Society: Re-imagining the Regulatory Environment (Abingdon: Routledge, 2019), Ch. 4.

2 Nuffield Council on Bioethics, ‘Non-invasive Prenatal Testing: Ethical Issues’, (March 2017); for discussion, see R. Brownsword and J. Wale, ‘Testing Times Ahead: Non-Invasive Prenatal Testing and the Kind of Community that We Want to Be’, (2018) Modern Law Review, 81(4), 646–672.

3 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction: Social and Ethical Issues’, (July 2018).

4 Nuffield Council on Bioethics, ‘Non-Invasive Prenatal Testing’, para 5.20.

5 Compare N. J. Wald et al., ‘Response to Walker’, (2018) Genetics in Medicine, 20(10), 1295; and in Canada, see the second phase of the Pegasus project, Pegasus, ‘About the Project’, www.pegasus-pegase.ca/pegasus/about-the-project/.

6 See, e.g., J. Harris and D. R. Lawrence, ‘New Technologies, Old Attitudes, and Legislative Rigidity’ in R. Brownsword et al. (eds) Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017), pp. 915–928.

7 Nuffield Council on Bioethics, ‘Genome Editing: An Ethical Review’, (September 2016).

8 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction’, paras 3.72–3.78.

9 Compare, J. Rockström et al., ‘Planetary Boundaries: Exploring the Safe Operating Space for Humanity’ (2009) Ecology and Society, 14(2); K. Raworth, Doughnut Economics (Random House Business Books, 2017), pp. 43–53.

10 P. Aldrick, ‘Make No Mistake, One Way or Another NHS Data Is on the Table in America Trade Talks’, The Times, (8 June 2019), 51.

11 See R. Brownsword, ‘Human Dignity from a Legal Perspective’ in M. Duwell et al. (eds), Cambridge Handbook of Human Dignity (Cambridge University Press, 2014), pp. 1–22.

12 For such a view, see R. Brownsword, ‘Human Dignity, Human Rights, and Simply Trying to Do the Right Thing’ in C. McCrudden (ed), Understanding Human Dignity – Proceedings of the British Academy 192 (The British Academy and Oxford University Press, 2013), pp. 345–358.

13 See R. Brownsword, ‘From Erewhon to Alpha Go: For the Sake of Human Dignity Should We Destroy the Machines?’, (2017) Law, Innovation and Technology, 9(1), 117–153.

14 See D. Beyleveld and R. Brownsword, Human Dignity in Bioethics and Biolaw (Oxford University Press, 2001); R. Brownsword, Rights, Regulation and the Technological Revolution (Oxford University Press, 2008).

15 NHS, ‘NHS Long Term Plan’, (January 2019), www.longtermplan.nhs.uk.

16 Ibid., 91.

18 Department of Health and Social Care, ‘NHSX: New Joint Organisation for Digital, Data and Technology’, (19 February 2019), www.gov.uk/government/news/nhsx-new-joint-organisation-for-digital-data-and-technology.

19 Generally, see R. Brownsword, ‘Law, Technology and Society’, Ch. 12; D. Schönberger, ‘Artificial Intelligence in Healthcare: A Critical Analysis of the Legal and Ethical Implications’, (2019) International Journal of Law and Information Technology, 27(2), 171–203.

For the much-debated collaboration between the Royal Free London NHS Foundation Trust and Google DeepMind, see J. Powles, ‘Google DeepMind and healthcare in an age of algorithms’, (2017) Health and Technology, 7(4), 351–367.

20 European Commission, ‘Ethics Guidelines for Trustworthy AI’, (8 April 2019).

22 Ibid., emphasis added.

23 R. Dworkin, Taking Rights Seriously, revised edition (London: Duckworth, 1978).

24 S. Hawking, Brief Answers to the Big Questions (London: John Murray, 2018) p. 188.

25 Ibid., p. 189.

26 Ibid., p. 194.

27 See, N. Bostrom, Superintelligence (Oxford University Press, 2014), p. 281 (note 1); M. Ford, The Rise of the Robots (London: Oneworld, 2015), Ch. 9.

28 For an indication of the range and breadth of this concern, see e.g. ‘Resources on Existential Risk’, (2015), www.futureoflife.org/data/documents/Existential%20Risk%20Resources%20(2015-08-24).pdf.

29 See, for example, D. J. Solove, Understanding Privacy (Cambridge, MA: Harvard University Press, 2008); H. Nissenbaum, Privacy in Context (Palo Alto, CA: Stanford University Press, 2010).

30 B. Koops, ‘Privacy Spaces’, (2018) West Virginia Law Review, 121(2), 611–665, 621.

31 Compare, too, M. Brincker, ‘Privacy in Public and the Contextual Conditions of Agency’ in T. Timan et al. (eds), Privacy in Public Space (Cheltenham: Edward Elgar, 2017), pp. 64–90; M. Hu, ‘Orwell’s 1984 and a Fourth Amendment Cybersurveillance Nonintrusion Test’, (2017) Washington Law Review, 92(4), 1819–1904, 1903–1904.

32 Compare K. Yeung and M. Dixon-Woods, ‘Design-Based Regulation and Patient Safety: A Regulatory Studies Perspective’, (2010) Social Science and Medicine, 71(3), 502–509.

33 Compare R. Brownsword, ‘Regulating Patient Safety: Is It Time for a Technological Response?’, (2014) Law, Innovation and Technology, 6(1), 1–29.

34 See M. Cook, ‘Bedside Manner 101: How to Deliver Very Bad News’, Bioedge (17 March 2019), www.bioedge.org/bioethics/bedside-manner-101-how-to-deliver-very-bad-news/12998.

28 When Learning Is Continuous Bridging the Research–Therapy Divide in the Regulatory Governance of Artificial Intelligence as Medical Devices

1 C. Reed, ‘How Should We Regulate Artificial Intelligence?’, (2018) Philosophical Transactions of the Royal Society, Series A 376(2128), 20170360.

2 G. Laurie, ‘Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces In-Between?’, (2016) Medical Law Review, 25(1), 47–72; G. Laurie, ‘What Does It Mean to Take an Ethics+ Approach to Global Biobank Governance?’, (2017) Asian Bioethics Review, 9(4), 285–300; S. Taylor-Alexander et al., ‘Beyond Regulatory Compression: Confronting the Liminal Spaces of Health Research Regulation’, (2016) Law Innovation and Technology, 8(2), 149–176.

3 Laurie, ‘Liminality and the Limits of Law’, 69.

4 Taylor-Alexander et al., ‘Beyond Regulatory Compression’, 172.

5 R. Brownsword et al., ‘Law, Regulation and Technology: The Field, Frame, and Focal Questions’ in R. Brownsword et al. (eds), The Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017), pp. 1–36.

6 D. G. Bates and F. Plog, Cultural Anthropology (New York: McGraw-Hill, 1990), p. 7.

7 R. Brownsword, Law, Technology and Society: Re-imagining the Regulatory Environment (Abingdon: Routledge, 2019), p. 45.

8 B. Babic et al., ‘Algorithms on Regulatory Lockdown in Medicine: Prioritizing Risk Monitoring to Address the ‘Update Problem’’, (2019) Science, 366(6470), 1202–1204.

9 Food and Drug Administration, ‘FDA Permits Marketing of Artificial Intelligence-Based Device to Detect Certain Diabetes-related Eye Problems’, (FDA New Release, 11 April 2018).

10 Food and Drug Administration, ‘Classification of Products as Drugs and Devices & Additional Product Classification Issues: Guidance for Industry and FDA Staff’, (FDA, 2017).

11 Federal Food, Drug, and Cosmetic Act (25 June 1938), 21 USC §321(h).

12 The regulatory approaches adopted in the European Union and the United Kingdom are broadly similar to that of the FDA. See: J. Ordish et al., Algorithms as Medical Devices (Cambridge: PHG Foundation, 2019).

13 Food and Drug Administration, ‘Regulatory Controls’, (FDA, 27 March 2018), www.fda.gov/medical-devices/overview-device-regulation/regulatory-controls.

14 G. A. Van Norman, ‘Drugs and Devices: Comparison of European and US Approval Processes’, (2016) JACC Basic to Translational Science, 1(5), 399–412.

15 Details in Table 28.2 are adapted from the following sources: International Organization of Standards, ‘Clinical Investigation of Medical Devices for Human Subjects – Good Clinical Practice’, (ISO, 2019), ISO/FDIS 14155 (3rd edition); Genesis Research Services, ‘Clinical Trials – Medical Device Trials’, (Genesis Research Services, 5 September 2018), www.genesisresearchservices.com/clinical-trials-medical-device-trials/; B. Chittester, ‘Medical Device Clinical Trials – How Do They Compare with Drug Trials?’, (Master Control, 7 May 2020), www.mastercontrol.com/gxp-lifeline/medical-device-clinical-trials-how-do-they-compare-with-drug-trials-/.

16 J. P. Jarow and J. H. Baxley, ‘Medical Devices: US Medical Device Regulation’, (2015) Urologic Oncology: Seminars and Original Investigations, 33(3), 128–132.

17 A. Bowser et al., Artificial Intelligence: A Policy-Oriented Introduction (Washington, DC: Woodrow Wilson International Center for Scholars, 2017).

18 M. D. Abràmoff et al., ‘Pivotal Trial of an Autonomous AI-Based Diagnostic System for Detection of Diabetic Retinopathy in Primary Care Offices’, (2018) NPJ Digital Medicine, 39(1), 1.

19 P. A. Keane and E. J. Topol, ‘With an Eye to AI and Autonomous Diagnosis’, (2018) NPJ Digital Medicine, 1, 40.

20 K. P. Murphy, Machine Learning: A Probabilistic Perspective (Cambridge, MA: MIT Press, 2012).

21 M. L. Giger, ‘Machine Learning in Medical Imaging’, (2018) Journal of the American College of Radiology, 15(3), 512–520.

22 E. Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (New York: Basic Books, 2019); A. Tang et al., ‘Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology’, (2018) Canadian Association of Radiologists Journal, 69(2), 120–135.

23 Babic et al., ‘Algorithms’; M. U. Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies’, (2016) Harvard Journal of Law & Technology, 29(2), 354–400.

24 Executive Office of the President, Artificial Intelligence, Automation and the Economy (Washington, DC: US Government, 2016); House of Commons, Science and Technology Committee (2016), Robotics and Artificial Intelligence: Fifth Report of Session 2016–17, HC 145 (London, 12 October 2016).

25 C. W. L. Ho et al., ‘Governance of Automated Image Analysis and Artificial Intelligence Analytics in Healthcare’, (2019) Clinical Radiology, 74(5), 329–337.

26 IMDRF Software as a Medical Device (SaMD) Working Group, ‘Software as a Medical Device: Possible Framework for Risk Categorization and Corresponding Considerations’, (International Medical Device Regulators Forum, 2014), para. 4.

27 Ibid., p. 14, para. 7.2.

28 Food and Drug Administration, ‘Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback’, (US Department of Health and Human Services, 2019).

29 IMDRF SaMD Working Group, ‘Software as a Medical Device (SaMD): Key Definitions’, (International Medical Device Regulators Forum, 2013).

30 International Organization of Standards, ‘ISO/IEC 14764:2006 Software Engineering – Software Life Cycle Processes – Maintenance (2nd Edition)’, (International Organization of Standards, 2006).

31 IMDRF SaMD, ‘Software as a Medical Device (SaMD): Application of Quality Management System. IMDRF/SaMD WG/N23 FINAL 2015’, (International Medical Device Regulators Forum, 2015), para. 7.5.

32 Food and Drug Administration, ‘Software as a Medical Device (SAMD): Clinical Evaluation’, (US Department of Health and Human Services, 2017).

33 A. Riles, The Network Inside Out (Ann Arbor, MI: University of Michigan Press, 2001), pp. 58–59 and p. 68.

34 C. W. L. Ho, Juridification in Bioethics (London: Imperial College Press, 2016).

35 Riles, The Network, p. 69.

36 M. Valverde et al., ‘Legal Knowledges of Risk’ in Law Commission of Canada, Law and Risk (Vancouver, BC: University of British Columbia Press, 2005), pp. 86–120, p. 103 and p. 106.

37 Ibid., p. 106.

39 N. Nelson et al., ‘Introduction: The Anticipatory State: Making Policy-relevant Knowledge About the Future’, (2008) Science and Public Policy, 35(8), 546–550.

40 H. Gusterson, ‘Nuclear Futures: Anticipatory Knowledge, Expert Judgment, and the Lack that Cannot Be Filled’, (2008) Science and Public Policy, 35(8), 551–560.

42 G. Laurie et al., ‘Foresighting Futures: Law, New Technologies, and the Challenges of Regulating for Uncertainty’, (2012) Law, Innovation and Technology, 4(1), 1–33.

43 Laurie, ‘Liminality and the Limits of Law’, 68–69; Taylor-Alexander et al., ‘Beyond Regulatory Compression’, 158.

44 Laurie, ‘Liminality and the Limits of Law’, 71.

29 The Oversight of Clinical Innovation in a Medical Marketplace

1 W. Lipworth et al., ‘The Need for Beneficence and Prudence in Clinical Innovation with Autologous Stem Cells’, (2018) Perspectives in Biology and Medicine, 61(1), 90–105.

2 P. L. Taylor, ‘Overseeing Innovative Therapy without Mistaking It for Research: A Function-Based Model Based on Old Truths, New Capacities, and Lessons from Stem Cells’, (2010) The Journal of Law, Medicine & Ethics, 38(2), 286–302.

3 B. Salter et al., ‘Hegemony in the Marketplace of Biomedical Innovation: Consumer Demand and Stem Cell Science’, (2015) Social Science & Medicine, 131, 156–163.

4 N. Ghinea et al., ‘Ethics & Evidence in Medical Debates: The Case of Recombinant Activated Factor VII’, (2014) Hastings Center Report, 44(2), 38–45.

5 C. Davis, ‘Drugs, Cancer and End-of-Life Care: A Case Study of Pharmaceuticalization?’, (2015) Social Science & Medicine, 131, 207–214; D. W. Light and J. Lexchin, ‘Pharmaceutical Research and Development: What Do We Get for All That Money?’, (2012) BMJ, 345, e4348; C. Y. Roh and S. H. Kim, ‘Medical Innovation and Social Externality’, (2017) Journal of Open Innovation: Technology, Market, and Complexity, 3(1), 3; S. Salas-Vega et al., ‘Assessment of Overall Survival, Quality of Life, and Safety Benefits Associated with New Cancer Medicines’, (2017) JAMA Oncology, 3(3), 382–390.

6 K. Hutchinson and W. Rogers, ‘Hips, Knees, and Hernia Mesh: When Does Gender Matter in Surgery?’, (2017) International Journal of Feminist Approaches to Bioethics, 10(1), 26.

7 Davis, ‘Drugs, Cancer’; T. Fojo et al., ‘Unintended Consequences of Expensive Cancer Therapeutics – The Pursuit of Marginal Indications and a Me-Too Mentality that Stifles Innovation and Creativity: The John Conley Lecture’, (2014) JAMA Otolaryngology – Head and Neck Surgery, 140(12), 1225–1236; S. C. Overley et al., ‘Navigation and Robotics in Spinal Surgery: Where Are We Now?’, (2017) Neurosurgery, 80(3S), S86.

8 D. Cohen, ‘Devices and Desires: Industry Fights Toughening of Medical Device Regulation in Europe’, (2013) BMJ, 347, f6204; C. Di Mario et al., ‘Commentary: The Risk of Over-regulation’, (2011) BMJ, 342, d3021; O. Dyer, ‘Trump Signs Bill to Give Patients Right to Try Drugs’, (2018) BMJ, 361, k2429; S. F. Halabi, ‘Off-label Marketing’s Audiences: The 21st Century Cures Act and the Relaxation of Standards for Evidence-based Therapeutic and Cost-comparative Claims’, (2018) American Journal of Law & Medicine, 44(2–3), 181–196; M. D. Rawlins, ‘The “Saatchi Bill” will Allow Responsible Innovation in Treatment’, (2014) BMJ, 348, g2771; Salter et al., ‘Hegemony in the Marketplace’.

9 Salter et al., ‘Hegemony in the Marketplace’.

10 Rawlins, ‘The “Saatchi Bill”’.

11 Dyer, ‘Trump Signs Bill’.

12 Salter et al., ‘Hegemony in the Marketplace’.

13 Ibid., 159.

15 T. Cockburn and M. Fay, ‘Consent to Innovative Treatment’, (2019) Law, Innovation and Technology, 11(1), 34–54; T. Hendl, ‘Vulnerabilities and the Use of Autologous Stem Cells for Medical Conditions in Australia’, (2018) Perspectives in Biology and Medicine, 61(1), 76–89.

16 Medical Professionalism Project, ‘Medical Professionalism in the New Millennium: A Physicians’ Charter’, (2002) Lancet, 359(9305), 520–522.

17 H. Iijima et al., ‘Effectiveness of Mesenchymal Stem Cells for Treating Patients with Knee Osteoarthritis: A Meta-analysis Toward the Establishment of Effective Regenerative Rehabilitation’, (2018) NPJ Regenerative Medicine, 3(1), 15.

18 D. Sipp et al., ‘Clear Up this Stem-cell Mess’, (2018) Nature, 561, 455–457.

19 M. Munsie et al., ‘Open for Business: A Comparative Study of Websites Selling Autologous Stem Cells in Australia and Japan’, (2017) Regenerative Medicine, 12(7); L. Turner and P. Knoepfler, ‘Selling Stem Cells in the USA: Assessing the Direct-to-Consumer Industry’, (2016) Cell Stem Cell, 19(2), 154–157.

20 I. Berger et al., ‘Global Distribution of Businesses Marketing Stem Cell-based Interventions’, (2016) Cell Stem Cell, 19(2), 158–162; D. Sipp et al., ‘Marketing of Unproven Stem Cell–Based Interventions: A Call to Action’, (2017) Science Translational Medicine, 9(397); M. Sleeboom-Faulkner and P. K. Patra, ‘Experimental Stem Cell Therapy: Biohierarchies and Bionetworking in Japan and India’, (2011) Social Studies of Science, 41(5), 645–666.

21 G. Bauer et al., ‘Concise Review: A Comprehensive Analysis of Reported Adverse Events in Patients Receiving Unproven Stem Cell-based Interventions’, (2018) Stem Cells Translational Medicine, 7(9), 676–685; T. Lysaght et al., ‘The Deadly Business of an Unregulated Global Stem Cell Industry’, (2017) Journal of Medical Ethics, 43, 744–746.

22 Sipp et al., ‘Clear Up’.

23 T. Caulfield et al., ‘Confronting Stem Cell Hype’, (2016) Science, 352(6287), 776–777; A. K. McLean et al., ‘The Emergence and Popularisation of Autologous Somatic Cellular Therapies in Australia: Therapeutic Innovation or Regulatory Failure?’, (2014) Journal of Law and Medicine, 22(1), 65–89; Sipp et al., ‘Clear Up’.

24 Munsie et al., ‘Open for Business’; Sipp et al., ‘Marketing’.

25 A. Petersen et al., ‘Therapeutic Journeys: The Hopeful Travails of Stem Cell Tourists’, (2014) Sociology of Health and Illness, 36(5), 670–685.

26 Worldhealth.net, ‘Why Is Stem Cell Therapy So Expensive?’, (WorldHealth.Net, 2018), www.worldhealth.net/news/why-stem-cell-therapy-so-expensive/.

27 D. Sipp, ‘Pay-to-Participate Funding Schemes in Human Cell and Tissue Clinical Studies’, (2012) Regenerative Medicine, 7(6s), 105–111.

28 Sipp et al., ‘Clear Up’.

29 Sipp et al., ‘Marketing’.

30 R. T. Bright, ‘Submission to the TGA Public Consultation: Regulation of Autologous Stem Cell Therapies: Discussion Paper for Consultation’, (Macquarie Stem Cell Centres of Excellence, 2015), 4, www.tga.gov.au/sites/default/files/submissions-received-regulation-autologous-stem-cell-therapies-msc.pdf.

31 Adult Stem Cell Foundation, ‘Adult Stem Cell Foundation’, www.adultstemcellfoundation.org; M. Berman and E. Lander, ‘A Prospective Safety Study of Autologous Adipose-Derived Stromal Vascular Fraction Using a Specialized Surgical Processing System’, (2017) The American Journal of Cosmetic Surgery, 34(3), 129–142; International Cellular Medicine Society, ‘Open Treatment Registry’, (ICMS, 2010), www.cellmedicinesociety.org/attachments/184_ICMS%20Open%20Treatment%20Registry%20-%20Overview.pdf.

32 Sipp et al., ‘Marketing’.

33 P. F. Stahel, ‘Why Do Surgeons Continue to Perform Unnecessary Surgery?’, (2017) Patient Safety in Surgery, 11(1), 1.

34 J. Wise, ‘Show Patients Evidence for Treatment “Add-ons”, Fertility Clinics are Told’, (2019) BMJ, 364, I226.

35 P. Sugarman et al., ‘Off-Licence Prescribing and Regulation in Psychiatry: Current Challenges Require a New Model of Governance’, (2013) Therapeutic Advances in Psychopharmacology, 3(4), 233–243.

36 T. E. Chan, ‘Legal and Regulatory Responses to Innovative Treatment’, (2012) Medical Law Review, 21(1), 92–130; T. Keren-Paz and A. J. El Haj, ‘Liability versus Innovation: The Legal Case for Regenerative Medicine’, (2014) Tissue Engineering Part A, 20(19–20), 2555–2560; J. Montgomery, ‘The “Tragedy” of Charlie Gard: A Case Study for Regulation of Innovation?’, (2019) Law, Innovation and Technology, 11(1), 155–174; K. Raus, ‘An Analysis of Common Ethical Justifications for Compassionate Use Programs for Experimental Drugs’, (2016) BMC Medical Ethics, 17(1), 60; P. L. Taylor, ‘Innovation Incentives or Corrupt Conflicts of Interest? Moving Beyond Jekyll and Hyde in Regulating Biomedical Academic-Industry Relationships’, (2013) Yale Journal of Health Policy, Law, and Ethics, 13(1), 135–197.

37 Chan, ‘Legal and Regulatory Responses’; Taylor, ‘Innovation Incentives’.

38 Chan, ‘Legal and Regulatory Responses’.

39 T. Lysaght et al., ‘A Roundtable on Responsible Innovation with Autologous Stem Cells in Australia, Japan and Singapore’, (2018) Cytotherapy, 20(9), 1103–1109.

40 Cockburn and Fay, ‘Consent’; Keren-Paz and El Haj, ‘Liability versus Innovation’.

41 J. Pace et al., ‘Demands for Access to New Therapies: Are There Alternatives to Accelerated Access?’, (2017) BMJ, 359, j4494.

42 S. Devaney, ‘Enhancing the International Regulation of Science Innovators: Reputation to the Rescue?’, (2019) Law, Innovation and Technology, 11(1), 134–154.

30 The Challenge of ‘Evidence’ Research and Regulation of Traditional and Non-Conventional Medicines

1 P. Lannoye, ‘Report on the Status of Non-Conventional Medicine’, (Committee on the Environment, Public Health and Consumer Protection, 6 March 1997).

2 WHO, ‘WHO Global Report on Traditional and Complementary Medicine 2019’, (WHO, 2019).

3 E. Ernst, ‘Commentary on: Close et al. (2014) A Systematic Review Investigating the Effectiveness of Complementary and Alternative Medicine (CAM) for the Management of Low Back and/or Pelvic Pain (LBPP) in Pregnancy’, (2014) Journal of Advanced Nursing, 70(8), 1702–1716; WHO, ‘General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine’, (WHO, 2000).

4 WHO, ‘Global Report on Traditional and Complementary Medicine 2019’, (WHO, 2019).

5 House of Lords, Select Committee on Science and Technology: Sixth Report (2000, HL).

6 M. K. Sheppard, ‘The Paradox of Non-evidence Based, Publicly Funded Complementary Alternative Medicine in the English National Health Service: An Explanation’, (2015) Health Policy, 119(10), 1375–1381.

7 The International Bioethics Committee (IBC) of the United Nations Educational, Scientific and Cultural Organization (UNESCO), the World Intellectual Property Organisation (WIPO), the World Trade Organisation (WTO) and WHO have stated support for the protection of traditional knowledges, including traditional medicines.

8 Such as the European Red List of Medicinal Plants, which documents species endangered by human economic activities and loss of biodiversity.

9 K. Hansen and K. Kappel, ‘Complementary/Alternative Medicine and the Evidence Requirement’ in M. Solomon et al. (eds), The Routledge Companion to Philosophy of Medicine (New York and Abingdon: Routledge, 2016).

10 M. Zhan, Other-Worldly: Making Chinese Medicine through Transnational Frames (London: Duke University Press, 2009); C. Schurr and K. Abdo, ‘Rethinking the Place of Emotions in the Field through Social Laboratories’, (2016) Gender, Place and Culture, 23(1), 120–133.

11 D. L. Sackett et al., ‘Evidence Based Medicine: What It Is and What It Isn’t’, (1996) British Medical Journal, 312(7023), 71–72.

12 R. Porter, The Greatest Benefit to Mankind: A Medical History of Humanity from Antiquity to the Present (New York: Fontana Press, 1999).

13 A. Wahlberg, ‘Above and Beyond Superstition – Western Herbal Medicine and the Decriminalizing of Placebo’, (2008) History of the Human Sciences, 21(1), 77–101; A. Harrington, ‘The Many Meanings of the Placebo Effect: Where They Came From, Why They Matter’, (2006) BioSocieties, 1(2), 181–193; P. Friesen, ‘Mesmer, the Placebo Effect, and the Efficacy Paradox: Lessons for Evidence Based Medicine and Complementary and Alternative Medicine’, (2019) Critical Public Health, 29(4), 435–447.

14 Friesen, ‘Mesmer’, 436.

15 B. Goldacre, ‘The Benefits and Risks of Homeopathy’, (2007) Lancet, 370(9600), 1672–1673.

16 E. Cloatre, ‘Regulating Alternative Healing in France, and the Problem of “Non-Medicine”’, (2018) Medical Law Review, 27(2), 189–214.

17 WHO, ‘Declaration of Alma-Ata, International Conference on Primary Health Care, Alma-Ata, USSR, 6–12 September 1978’, (WHO, 1978).

18 S. Langwick, ‘From Non-aligned Medicines to Market-Based Herbals: China’s Relationship to the Shifting Politics of Traditional Medicine in Tanzania’, (2010) Medical Anthropology, 29(1), 15–43.

19 WHO, ‘General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine’, (WHO, 2000).

21 Ibid., 42.

22 O. Akerele et al. (eds), Conservation of Medicinal Plants (Cambridge University Press, 1991).

23 M. Saxer, Manufacturing Tibetan Medicine: The Creation of an Industry and the Moral Economy of Tibetanness (New York: Berghahn Books, 2013).

24 Directive 2004/24/EC of the European Parliament and of the Council of 31 March 2004 amending, as regards traditional herbal medicinal products, Directive 2001/83/EC on the Community code relating to medicinal products for human use, OJ 2004 No. L136, 30 April 2004.

25 T. P. Fan et al., ‘Future Development of Global Regulations of Chinese Herbal Products’, (2012) Journal of Ethnopharmacology, 140(3), 568–586.

26 V. Fønnebø et al., ‘Legal Status and Regulation of CAM in Europe Part II – Herbal and Homeopathic Medicinal Products’, (CAMbrella, 2012).

27 Charity Commission for England and Wales, ‘Operational Guidance (OG) 304 Complementary and Alternative Medicine’, (Charity Commission for England and Wales, 2018).

28 S. Harrison and K. Checkland, ‘Evidence-Based Practice in UK Health Policy’ in J. Gabe and M. Calnan (eds), The New Sociology of the Health Service (Abingdon: Routledge, 2009).

29 Ibid., p. 126.

30 R. McDonald and S. Harrison, ‘The Micropolitics of Clinical Guidelines: An Empirical Study’, (2004) Policy and Politics, 32(2), 223–239.

31 The Good Thinking Society, ‘NHS Homeopathy Spending’, (The Good Thinking Society, 2018), www.goodthinkingsociety.org/projects/nhs-homeopathy-legal-challenge/nhs-homeopathy-spending/.

32 UK Government and Parliament, ‘Stop NHS England from Removing Herbal and Homeopathic Medicines’, (UK Government and Parliament, 2017), www.petition.parliament.uk/petitions/200154.

33 Professional Standards Authority, ‘Untapped Resources: Accredited Registers in the Wider Workforce’, (Professional Standards Authority, 2017).

34 M. Jacob, ‘The Relationship between the Advancement of CAM Knowledge and the Regulation of Biomedical Research’ in J. McHale and N. Gale (eds), The Routledge Handbook on Complementary and Alternative Medicine: Perspectives from Social Science and Law (Abingdon: Routledge, 2015), p. 359.

35 L. Richert, Strange Trips: Science, Culture, and the Regulation of Drugs (Montreal: McGill University Press, 2018), p. 174.

36 J. Barnes, ‘Pharmacovigilance of Herbal Medicines: A UK Perspective’, (2003) Drug Safety, 26(12), 829–851.

37 E. Cloatre, ‘Law and Biomedicine and the Making of “Genuine” Traditional Medicines in Global Health’, (2019) Critical Public Health, 29(4), 424–434.

38 Richert, Strange Trips, pp. 56–76.

39 J. Kim, ‘Alternative Medicine’s Encounter with Laboratory Science: The Scientific Construction of Korean Medicine in a Global Age’, (2007) Social Studies of Science, 37(6), 855–880.

40 Zhan, Other-Worldly, p. 72.

41 Ibid., p. 18.

42 Richert, Strange Trips, p. 172.

43 S. A. Langwick, Bodies, Politics and African Healing: The Matter of Maladies in Tanzania (Indiana University Press, 2011), p. 233.

44 Ibid., p. 223.

45 Jacob, ‘CAM Knowledge’, p. 358.

31 Experiences of Ethics, Governance and Scientific Practice in Neuroscience Research

1 This chapter revisits and reworks a paper previously published as: M. Pickersgill, ‘The Co-production of Science, Ethics and Emotion’, (2012) Science, Technology & Human Values, 37(6), 579–603. Data are reproduced by kind permission of the journal and content used by permission of the publisher, SAGE Publications, Inc.

2 M. M. Easter et al., ‘The Many Meanings of Care in Clinical Research’, (2006) Sociology of Health & Illness, 28(6), 695–712; U. Felt et al., ‘Unruly Ethics: On the Difficulties of a Bottom-up Approach to Ethics in the Field of Genomics’, (2009) Public Understanding of Science, 18(3), 354–371; A. Hedgecoe, ‘Context, Ethics and Pharmacogenetics’, (2006) Studies in History and Philosophy of Biological and Biomedical Sciences, 37(3), 566–582; A. Hedgecoe and P. Martin, ‘The Drugs Don’t Work: Expectations and the Shaping of Pharmacogenetics’, (2003) Social Studies of Science, 33(3), 327–364; B. Salter, ‘Bioethics, Politics and the Moral Economy of Human Embryonic Stem Cell Science: The Case of the European Union’s Sixth Framework Programme’, (2007) New Genetics & Society, 26(3), 269–288; S. Sperling, ‘Managing Potential Selves: Stem Cells, Immigrants, and German Identity’, (2004) Science & Public Policy, 31(2), 139–149; M. N. Svendsen and L. Koch, ‘Between Neutrality and Engagement: A Case Study of Recruitment to Pharmacogenomic Research in Denmark’, (2008) BioSocieties, 3(4), 399–418; S. P. Wainwright et al., ‘Ethical Boundary-Work in the Embryonic Stem Cell Laboratory’, (2006) Sociology of Health & Illness, 28(6), 732–748.

3 M. Pickersgill, ‘From “Implications” to “Dimensions”: Science, Medicine and Ethics in Society’, (2013) Health Care Analysis, 21(1), 31–42.

4 C. Waterton and B. Wynne, ‘Can Focus Groups Access Community Views?’ in R. S. Barbour and J. Kitzinger (eds), Developing Focus Group Research: Politics, Theory and Practice (London: Sage, 1999), pp. 127–143, 142. The methodology of these focus groups is more fully described in the following: M. Pickersgill et al., ‘Constituting Neurologic Subjects: Neuroscience, Subjectivity and the Mundane Significance of the Brain’, (2011) Subjectivity, 4(3), 346–365; M. Pickersgill et al., ‘The Changing Brain: Neuroscience and the Enduring Import of Everyday Experience’, (2015) Public Understanding of Science, 24(7), 878–892; Pickersgill, ‘The Co-production of Science’.

5 S. Jasanoff (ed.), States of Knowledge: The Co-Production of Science and Social Order (Oxford: Routledge, 2004), pp. 1–12; P. Brodwin, ‘The Coproduction of Moral Discourse in US Community Psychiatry’, (2008) Medical Anthropology Quarterly, 22(2), 127–147.

6 See Introduction of this volume; A. Ganguli-Mitra et al., ‘Reconfiguring Social Value in Health Research through the Lens of Liminality’, (2017) Bioethics, 31(2), 87–96.

7 M. J. Farah, ‘Emerging Ethical Issues in Neuroscience’, (2002) Nature Neuroscience, 5(11), 1123–1129; T. Fuchs, ‘Ethical Issues in Neuroscience’, (2006) Current Opinion in Psychiatry, 19(6), 600–607; J. Illes and É. Racine, ‘Imaging or Imagining? A Neuroethics Challenge Informed by Genetics’, (2005) American Journal of Bioethics, 5(2), 5–18.

8 E. Postan, ‘Defining Ourselves: Personal Bioinformation as a Tool of Narrative Self-conception’, (2016) Journal of Bioethical Inquiry, 13(1), 133–151. See also Postan, Chapter 23 in this volume.

9 Farah, ‘Emerging Ethical Issues’; Illes and Racine, ‘Imaging or Imagining?’; M. Gazzaniga, The Ethical Brain (Chicago: Dana Press, 2005).

10 Hedgecoe and Martin, ‘The Drugs Don’t Work’, 8.

11 T. C. Booth et al., ‘Incidental Findings in “Healthy” Volunteers during Imaging Performed for Research: Current Legal and Ethical Implications’, (2010) British Journal of Radiology, 83(990), 456–465; N. A. Scott et al., ‘Incidental Findings in Neuroimaging Research: A Framework for Anticipating the Next Frontier’, (2012) Journal of Empirical Research on Human Research Ethics, 7(1), 53–57; S. A. Tovino, ‘Incidental Findings: A Common Law Approach’, (2008) Accountability in Research, 15(4), 242–261.

12 J. Illes et al., ‘Incidental Findings in Brain Imaging Research’, (2006) Science, 311(5762), 783–784, 783.

13 S. Cohn, ‘Making Objective Facts from Intimate Relations: The Case of Neuroscience and Its Entanglements with Volunteers’, (2008) History of the Human Sciences, 21(4), 86–103; S. Shostak and M. Waggoner, ‘Narration and Neuroscience: Encountering the Social on the “Last Frontier of Medicine”’, in M. D. Pickersgill and I. van Keulen (eds), Sociological Reflections on the Neurosciences (Bingley: Emerald, 2011), pp. 51–74.

14 Wainwright et al., ‘Ethical Boundary-Work’.

15 M. Pickersgill et al., ‘Biomedicine, Self and Society: An Agenda for Collaboration and Engagement’, (2019) Wellcome Open Research, 4(9).

32 Humanitarian Research: Ethical Considerations in Conducting Research during Global Health Emergencies

1 M. Hunt et al., ‘Ethical Implications of Diversity in Disaster Research’, (2012) American Journal of Disaster Medicine, 7(3), 211–221.

2 N. M. Thielman et al., ‘Ebola Clinical Trials: Five Lessons Learned and a Way Forward’, (2016) Clinical Trials, 13(1), 83–86.

3 Council for International Organizations of Medical Sciences, ‘International Ethical Guidelines for Health-related Research Involving Humans’, (CIOMS, 2016), Guideline 20.

4 A. Levine, ‘Academics Are from Mars, Humanitarians Are from Venus: Finding Common Ground to Improve Research during Humanitarian Emergencies’, (2016) Clinical Trials, 13(1), 79–82.

5 Nuffield Council on Bioethics, ‘Research in Global Health Emergencies: Ethical Issues,’ (Nuffield Council on Bioethics, 2020).

6 C. Tansey et al., ‘Familiar Ethical Issues Amplified’, (2017) BMC Medical Ethics, 18(1), 1–12.

7 Thielman et al., ‘Ebola Clinical Trials’.

8 WHO, ‘Guidance For Managing Ethical Issues In Infectious Disease Outbreaks’, (WHO, 2016), 30.

10 CIOMS, ‘International Ethical Guidelines’, Commentary to Guideline 20.

11 A. Sumathipala et al., ‘Ethical Issues in Post-disaster Clinical Interventions and Research: A Developing World Perspective. Key Findings from a Drafting and Consensus Generating Meeting of the Working Group on Disaster Research Ethics (WGDRE) 2007’, (2010) Asian Bioethics Review, 2(2), 124–142.

12 Thielman et al. ‘Ebola Clinical Trials’, 85.

13 Tansey et al., ‘Familiar Ethical Issues’.

14 Sumathipala et al., ‘Ethical Issues’.

15 Nuffield Council on Bioethics, ‘Briefing Note: Zika – Ethical Considerations’, (Nuffield Council on Bioethics, 2016).

16 S. Qari et al., ‘Preparedness and Emergency Response Research Centers: Early Returns on Investment in Evidence-based Public Health Systems Research’, (2014) Public Health Reports, 129(4), 1–4.

17 A. Rid and F. Miller, ‘Ethical Rationale for the Ebola “Ring Vaccination” Trial Design’, (2016) American Journal of Public Health, 106(3), 432–435.

18 A. J. London and J. Kimmelman, ‘Against Pandemic Research Exceptionalism’, (2020) Science, 368(6490), 476–477.

19 E. Jamrozik and M. J. Selgelid, ‘COVID-19 Human Challenge Studies: Ethical Issues’, (2020) Lancet Infectious Diseases, www.thelancet.com/journals/laninf/article/PIIS1473-3099(20)30438-2/fulltext.

20 S. Holm, ‘Controlled Human Infection with SARS-CoV-2 to Study COVID-19 Vaccine and Treatments: Bioethics in Utopia’, (2020) Journal of Medical Ethics, 0, 1–5.

21 WHO, ‘Ethical Issues Related to Study Design for Trials on Therapeutics for Ebola Virus Disease’, (WHO, 2014), 2.

22 E. C. Hayden, ‘Experimental Drugs Poised for Use in Ebola Outbreak,’ Nature (18 May 2018), www.nature.com/articles/d41586-018-05205-x.

23 WHO, ‘Ebola Virus Disease – Democratic Republic of Congo’, WHO (31 August 2018), www.who.int/csr/don/31-august-2018-ebola-drc/en/.

24 Tansey et al., ‘Familiar Ethical Issues’, 24.

25 A. Saxena and M. Gomes, ‘Ethical Challenges to Responding to the Ebola Epidemic: The World Health Organization Experience’, (2016) Clinical Trials, 13(1), 96–100.

26 P. Vinck et al., ‘Institutional Trust and Misinformation in the Response to the 2018–2019 Ebola Outbreak in North Kivu, DR Congo: A Population-based Survey’, (2019) Lancet Infectious Diseases, 19(5), 529–536.

27 Nuffield Council on Bioethics, ‘Research in Global Health Emergencies’, 41.

28 Ibid., 32–36.

29 CIOMS, ‘International Ethical Guidelines’, Guideline 20.

30 Nuffield Council on Bioethics, ‘Research in Global Health Emergencies’.

31 Ibid., xvi–xvii.

32 Ibid., 29.

33 M. Hunt et al., ‘The Challenge of Timely, Responsive and Rigorous Ethics Review of Disaster Research: Views of Research Ethics Committee Members’, (2016) PLoS ONE, 11(6), e0157142.

35 E. Alirol et al., ‘Ethics Review of Studies during Public Health Emergencies – The Experience of the WHO Ethics Review Committee During the Ebola Virus Disease Epidemic’, (2017) BMC Medical Ethics, 18(1), 8.

36 L. Eckenwiler et al., ‘Real-Time Responsiveness for Ethics Oversight During Disaster Research’, (2015) Bioethics, 29(9), 653–661.

37 A. Ganguli-Mitra et al., ‘Reconfiguring Social Value in Health Research Through the Lens of Liminality’, (2017) Bioethics, 31(2), 87–96.

38 N. Pal et al., ‘Ethical Considerations for Closing Humanitarian Projects: A Scoping Review’, (2019) Journal of International Humanitarian Action, 4(1), 19.

39 D. O’Mathúna, ‘Research Ethics in the Context of Humanitarian Emergencies’, (2015) Journal of Evidence-Based Medicine, 8(1), 31–35, 31.

40 D. Schopper et al., ‘Innovations in Research Ethics Governance in Humanitarian Settings’, (2015) BMC Medical Ethics, 16(1), 78.

33 A Governance Framework for Advanced Therapies in Argentina: Regenerative Medicine, Advanced Therapies, Foresight, Regulation and Governance

1 E. Da Silva, ‘Biotechnology: Developing Countries and Globalization’, (1998) World Journal of Microbiology and Biotechnology, 14(3), 463–486.

2 The country had a seed industry in which national firms, subsidiaries of multinational companies and public institutions actively participated, and which had a long tradition of germplasm renewal.

3 In 2014, the Food and Agriculture Organization (FAO) recognised CONABIA as a centre of reference for biosecurity of genetically modified organisms worldwide.

4 E. Trigo et al., ‘Los transgénicos en la agricultura argentina’, (2002) Libros del Zorzal, I, 165–178.

5 G. Laurie et al., ‘Law, New Technologies, and the Challenges of Regulating for Uncertainty’, (2012) Law, Innovation & Technology, 4(1), 1–33.

6 F. Arzuaga, ‘Stem Cell Research and Therapies in Argentina: The Legal and Regulatory Approach’, (2013) Stem Cells and Development, 22(S1), 443.

7 Organs and Anatomic Human Material Transplantation, Act No. 24.193, of 24 March 1993 and amendments. INCUCAI Resolution No. 307/2007 establishes the classification of medical indications for autologous, allogeneic and unrelated transplantation of HPC. It also regulates procedures for tissue banking, including the banking of stem cells from umbilical cord blood (UCB), which is an alternative source of HPC used in transplants in replacement of bone marrow.

8 S. Harmon, ‘Emerging Technologies and Developing Countries: Stem Cell Research (and Cloning) Regulation and Argentina’, (2008) Developing World Bioethics, 8(2), 138–150.

9 National Agency of Promotion of Science and Technology, which in 2008 became the Ministry of Science, Technology and Productive Innovation (MOST).

10 Resolution ANPCYT No 214/06 creates the Advisory Commission in Cellular Therapies and Regenerative Medicine with the objective to advise the National Agency of Promotion of Science and Technology in the evaluation of research projects in regenerative medicine (RM) that request funding for research as well as to study regulatory frameworks on RM in other jurisdictions.

11 S. Harmon and G. Laurie, The Regulation of Human Tissue and Regenerative Medicine in Argentina: Making Experience Work. SCRIPT Opinions, No. 4 (AHRC Research Centre for Studies in Intellectual Property and Technology Law, 2008).

12 AHRC/SCRIPT was directed by Professor Graeme Laurie.

13 The direct antecedent of the use of stem cells for therapeutic purposes is the hematopoietic progenitor cells (HPC) transplantation from bone marrow to treat blood diseases. This practice has been performed for more than fifty years and is considered an ‘established practice’. HPC transplantation is regulated by the Transplant Act 1993, and its regulatory authority is INCUCAI, which has issued regulations governing certain technical and procedural aspects of this practice. INCUCAI Resolution 307/2007 establishes the classification of medical indications for autologous, allogeneic and unrelated transplantation of HPC. It also covers procedures for tissue banking, including the banking of stem cells from umbilical cord blood (UCB), which is an alternative source of HPC used in transplants in replacement of bone marrow.

14 Arzuaga, ‘Stem Cell Research and Therapies in Argentina’.

15 In eleven years, INCUCAI has approved four research protocols using autologous cells. Details of protocols can be accessed on: ‘Tratamientos existentes’, (Ministerio de Ciencia, Tecnología e Innovación Productiva, Presidencia de la Nación), www.celulasmadre.mincyt.gob.ar/tratamientos.php.

16 Commercialization Regime of Medicinal Products Act, Act 16.463, of 8 August 1964, and Decree 9763/1964 and amendments.

17 C. Krmpotic, ‘Creer en la cura. Eficacia simbólica y control social en las prácticas del Dr. M.,’ (2011) Scripta Ethnológica, (XXXIII), 97–116.

18 The National Constitution of Argentina establishes a right to health, and stipulates that the private or public health system of the provinces or federal authorities is guarantor of the right. The following are examples of judicial cases that were reported by the Legal Department of OSDE (Social Security Organization for Company Managers): ‘Jasminoy, María Cristina c / Osde Binario s /Sumarísimo’ (Expte. 4008 / 03), Court of First Instance in Civil and Commercial Matters No. 11, Secretariat No. 22. The treatment was covered by OSDE. Diagnosis: Multiple Sclerosis; ‘Silenzi de Stagni de Orfila Estela c / Osde Binario S. A s / Amparo’ (Expte. 4475 / 05), National Civil Court No. 11. The treatment was covered by OSDE. Diagnosis: Multiple Sclerosis; ‘Ferrreira Mariana c / Osde Binario y otros / Sumarísimo’ (Expte. 8342 / 06), Civil and Commercial Federal Court No. 9, Secretariat No. 17. The court decision ordered the coverage of the treatment but it could not be implemented because the plaintiff died. Diagnosis: Leukaemia.

19 V. Mendizabal et al., ‘Between Caution and Hope: The Role of Argentine Scientists and Experts in Communicating the Risks Associated with Stem Cell Tourism’, (2013) Perspectivas Bioéticas, 35–36, 145–155.

20 An ESRC-funded research project, Governing Emerging Technologies: Social Values in Stem Cell Research Regulation in Argentina, explored various stakeholders’ regulatory values, ambitions and tolerances. The institutional relationship resulted in the training of researchers and members of the Commission, the hosting of eight international seminars at which experts from various countries – mainly the UK – shared their experiences, and the holding of fellowships which facilitated research visits to academic and regulatory institutions in the UK.

21 Which resulted in engagement activities with judicial associations so as to raise awareness among judges about the problem of experimental treatments, and the need to avoid ordering the transfer of resources from the health system to unscrupulous medical doctors.

22 See more at: ‘Red argentina de pacientes’, (Argentina.gob.ar), www.argentina.gob.ar/ciencia/celulasmadre/red-argentina-de-pacientes.

23 ANMAT Disposition 179/2018.

24 Law No. 27.447/2018 y su Decreto Reglamentario No. 16/2019.

25 S. Harmon, ‘Argentina Unbound: Governing Emerging Technologies: Social Values in Stem Cell Regulation in Argentina’, (2008) Presented at European Association of Health Law, ‘The Future of Health Law in Europe’ (Conference, 10–11 April 2008, Edinburgh).

26 G. Laurie et al., ‘Foresighting Futures: Law, New Technologies, and the Challenges of Regulating for Uncertainty’, (2012) Law, Innovation and Technology, 4(1), 1–33.

34 Human Gene Editing: Traversing Normative Systems

1 K. E. Ormond et al., ‘Human Germline Genome Editing’, (2017) The American Journal of Human Genetics, 101(2), 167–176.

2 D. Normile, ‘Government Report Blasts Creator of CRISPR Twins’, (2019) Science, 363(6425), 328.

3 J. Qiu, ‘American Scientist Played More Active Role in “CRISPR Babies” Project than Previously Known’, (Stat News, 31 January 2019), www.statnews.com/2019/01/31/crispr-babies-michael-deem-rice-he-jiankui/.

4 B. M. Knoppers et al., ‘Genetics and Stem Cell Research: Models of International Policy-Making’ in J. M. Elliot et al. (eds), Bioethics in Singapore: The Ethical Microcosm (Singapore: World Scientific Publishing, 2010), pp. 133–163.

5 Human Genome Editing Initiative, ‘New International Commission on Clinical Use of Heritable Human Genome Editing’, (National Academies of Science Engineering Medicine, 2019), www.nationalacademies.org/gene-editing/index.htm.

6 R. Isasi et al., ‘Genetic Technology Regulation: Editing Policy to Fit the Genome?’, (2016) Science, 351(6271), 337–339.

7 Isasi et al. ‘Genetic Technology Regulation’.

8 Ormond et al., ‘Human Germline Genome Editing’.

9 D. Baltimore et al., ‘Biotechnology: A Prudent Path Forward for Genomic Engineering and Germline Gene Modification’, (2015) Science, 348(6230), 36–38.

10 Biosafety Law, Law No. 11, 2005 (Brazil).

11 Bioethics and Safety Act 2013 (South Korea).

12 Act Containing Rules Relating to the Use of Gametes and Embryos [The Embryos Act] 2002 (The Netherlands).

13 Isasi et al. ‘Genetic Technology Regulation’; S. Lingqiao and R. Isasi, ‘The Regulation of Human Germline Genome Modification in China’ in Human Germline Genome Modification and the Right to Science: A Comparative Study of National Laws and Policies (Cambridge: Cambridge University Press, 2019).

14 Prohibition of Genetic Intervention (Human Cloning and Genetic Manipulation of Reproductive Cells) Law 1999 last renewed, 2009 (Israel).

15 Prohibition of Genetic Intervention.

16 Bioethics Law/Loi No. 2004-800 du aout 6 2004 relative à la bioethique and Code Civil (1804) 2004 last amendment 2015 (France).

17 Bioethics Law.

18 D. Cyranoski, ‘China Introduces “Social” Punishments for Scientific Misconduct’, (Nature, 14 December 2018).

19 Embryo Protection Act. 1990 (Germany).

20 Embryo Protection Act.

21 An Act respecting human assisted reproduction and related research (Assisted Human Reproduction Act) 2004 (Canada).

22 Bioethics Law.

23 Indian Council of Medical Research, ‘Ethical Guidelines for Biomedical Research on Human Participants’, (Indian Council of Medical Research, 2000 last amendment 2006).

24 Act on Research on Embryos In Vitro – Loi relative à la recherché sur les embryons in vitro 2003 (Belgium).

25 National Academies of Sciences, Engineering, and Medicine, Human Genome Editing: Science, Ethics, and Governance, (The National Academies Press, 2017); HUGO Ethics Committee, ‘Statement on Gene Therapy Research’, (Human Genome Organisation, 2001).

26 UNESCO, ‘Universal Declaration on the Human Genome and Human Rights’, (United Nations Educational, Scientific, and Cultural Organization, 1997).

27 United Nations, ‘United Nations Declaration on Human Cloning’, (United Nations, 2005).

28 Council of Europe, ‘Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine’, (Council of Europe, 1997).

29 European Union Clinical Trials Regulation 536/2014, OJ No. L 158/1, 2014.

30 Council of Europe, ‘Convention for the Protection of Human Rights’.

31 European Union Clinical Trials Regulation.

32 Ormond et al., ‘Human Germline Genome Editing’; National Academies, ‘Human Genome Editing’; Genetic Alliance Germline Gene Editing, ‘A Call for Moratorium on Germline Gene Editing, Commentary by Genetic Alliance’, (Genetic Alliance, 2019), www.geneticalliance.org/advocacy/policyissues/germline_gene_editing; National Academies of Sciences, Engineering, and Medicine, International Summit on Human Gene Editing: A Global Discussion, (The National Academies Press, 2015); National Academies of Sciences, Engineering, and Medicine, Second International Summit on Human Genome Editing: Continuing the Global Discussion: Proceedings of a Workshop—in Brief, (The National Academies Press, 2019); International Society for Stem Cell Research, ‘The ISSCR Statement on Human Germline Genome Modification’, (ISSCR: International Society for Stem Cell Research, 2015).

33 C. Brokowski, ‘Do CRISPR Germline Ethics Statements Cut It?’, (2018) The CRISPR Journal, 1(2), 115–125.

34 Enforcement of Scientific Ethics Committee, Academic Division of the Chinese Academy of Sciences (CASAD), ‘Statement About CCR5 Gene-edited Babies’, (CASAD, 2018) www.english.casad.cas.cn/bb/201811/t20181130_201704.html; Chinese Society for Stem Cell Research & Genetics Society of China, ‘Condemning the Reproductive Application of Gene Editing on Human Germline’, (Chinese Society for Cell Biology, 2018), www.cscb.org.cn/news/20181127/2988.html.

35 M. Allyse et al., ‘What Do We Do Now?: Responding to Claims of Germline Gene Editing in Humans’, (2019) Genetics in Medicine, 21(10), 2181–2183.

36 Nuffield Council on Bioethics. ‘Genome Editing and Human Reproduction: Social and Ethical Issues’, (Nuffield Council on Bioethics, 2018), 154.

37 Bioethics Advisory Committee Singapore, ‘Ethics Guidelines for Human Biomedical Research’, (Bioethics Advisory Committee Singapore, 2015), 50.

39 Indian Council of Medical Research, ‘Ethical Guidelines for Biomedical Research’.

40 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction’; The Hinxton Group, ‘Statement on Genome Editing Technologies and Human Germline Genetic Modification’, (The Hinxton Group: An International Consortium on Stem Cells, Ethics, & Law, 2015).

41 European Academies’ Science Advisory Council, ‘Genome Editing: Scientific Opportunities, Public Interests and Policy Options in the European Union’, (EASAC: European Academies’ Science Advisory Council, 2017).

42 R. Isasi, ‘Human Genome Editing: Reflections on Policy Convergence and Global Governance’ in ZfMER (eds), Genomeditierung – Ethische, rechtliche und kommunikationswissenschaftliche Aspekte im Bereich der molekularen Medizin und Nutzpflanzenzüchtung, Zeitschrift für Medizin-Ethik-Recht, (Nomos, 2017), pp. 287–298.

43 E. S. Lander et al., ‘Adopt a Moratorium on Heritable Genome Editing’, (2019) Nature, 567(7747), 165–168; Allyse et al., ‘What Do We Do Now?’.

44 Allyse et al. ‘What Do We Do Now?’.

45 M. Boodman, ‘The Myth of Harmonization of Laws’, (1991) The American Journal of Comparative Law, 39(4), 699–724.

46 R. Isasi, ‘Policy Interoperability in Stem Cell Research: Demystifying Harmonization’, (2009) Stem Cell Reviews and Reports, 5(2), 108–115.

47 Oxford English Dictionary, ‘Harmonization’, (2019) Lexico, https://en.oxforddictionaries.com/definition/harmonization.

48 R. Isasi and G. J. Annas, ‘To Clone Alone: The United Nations Human Cloning Declaration’, (2006) Revista de Derecho y Genoma Humano, 49(24), 13–26.

49 United Nations, ‘United Nations Declaration on Human Cloning’; D. Lodi et al., ‘Stem Cells in Clinical Practice: Applications and Warnings’, (2011) Journal of Experimental & Clinical Cancer Research, 30(1), 9.

50 Normile, ‘Government Report’, 328.

35 Towards a Global Germline Ethics? Human Heritable Genetic Modification and the Future of Health Research Regulation

1 Early discussions of these technologies often referred to ‘gene editing’; in this chapter I employ the term ‘genome editing’, a usage that has since become more standard.

2 D. Cyranoski and H. Ledford, ‘Genome-Edited Baby Claim Provokes International Outcry’, (2018) Nature, 563(7733), 607–608.

3 The methods developed in the 1980s for producing transgenic mice, for example (see B. H. Koller and O. Smithies, ‘Altering Genes in Animals by Gene Targeting’, [1992] Annual Review of Immunology, 10, 705–730), required extensive manipulation of embryonic stem cells (ESC) in vitro, followed by injecting these cells to form chimeric embryos, genetically screening a large number of progeny, and then selectively cross-breeding them to produce the desired genetic makeup – all steps ethically unthinkable to perform in humans.

4 H. Ledford, ‘CRISPR, the Disruptor’, (2015) Nature, 522(7554), 20–24.

5 M. Jinek et al., ‘A Programmable Dual-RNA-Guided DNA Endonuclease in Adaptive Bacterial Immunity’, (2012) Science, 337(6096), 816–821.

6 P. Liang et al., ‘CRISPR/Cas9-Mediated Gene Editing in Human Tripronuclear Zygotes’, (2015) Protein Cell, 6(5), 363–372.

7 D. Baltimore et al., ‘Biotechnology. A Prudent Path Forward for Genomic Engineering and Germline Gene Modification’, (2015) Science, 348(6230), 36–38; E. Lanphier et al., ‘Don’t Edit the Human Germ Line’, (2015) Nature, 519(7544), 410–411.

8 Reviewed in C. Brokowski, ‘Do CRISPR Germline Ethics Statements Cut It?’, (2018) The CRISPR Journal, 1(2), 115.

9 Committee on Human Gene Editing, Human Genome Editing: Science, Ethics and Governance (Washington, DC: The National Academies Press, 2017).

10 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction’, (Nuffield Council on Bioethics, 2018).

11 Although, it would later transpire, more than a few international academics knew of He’s work prior to the announcement, provoking questions as to why the work was not flagged earlier (N. Kofler, ‘Why Were Scientists Silent over Gene-Edited Babies?’, (2019) Nature, 566(7745), 427).

12 Mitochondria are numerous organelles within each cell that produce energy via chemical reactions and carry their own genome, separate to nuclear DNA. Some of the genes required for mitochondrial function are encoded within the nuclear DNA, while others are in the mitochondrial genome (mtDNA) itself. Since most of the cytoplasm of a developing embryo comes from the egg, mitochondria are transmitted almost exclusively from the oocyte to offspring, with little if any contribution from the sperm. Diseases caused by mtDNA mutations are thus ‘maternally inherited’, that is, passed on from mother to child.

13 The Human Fertilisation and Embryology (Mitochondrial Donation) Regulations 2015.

14 National Academies of Sciences, Engineering, and Medicine, Mitochondrial Replacement Techniques: Ethical, Social, and Policy Considerations, (Washington, DC: The National Academies Press, 2016).

15 J. Hamzelou, ‘Exclusive: World’s First Baby Born with New “3 Parent” Technique’, (New Scientist, 27 September 2016).

16 Lanphier et al., ‘Don’t Edit the Human Germ Line’, 410.

17 K. Takahashi et al., ‘Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors’, (2007) Cell, 131(5), 861–872; K. Takahashi and S. Yamanaka, ‘Induction of Pluripotent Stem Cells from Mouse Embryonic and Adult Fibroblast Cultures by Defined Factors’, (2006) Cell, 126(4), 663–676.

18 S. Hendriks et al., ‘Artificial Gametes: A Systematic Review of Biological Progress towards Clinical Application’, (2015) Human Reproduction Update, 21(3), 285–296.

19 UNESCO, ‘Universal Declaration on the Human Genome and Human Rights’, (1998).

20 See I. de Miguel Beriain, ‘Should Human Germ Line Editing Be Allowed? Some Suggestions on the Basis of the Existing Regulatory Framework’, (2019) Bioethics, 33(1), 105–111.

21 D. Morgan and M. Ford, ‘Cell Phoney: Human Cloning after Quintavalle’, (2004) Journal of Medical Ethics, 30(6), 524–526.

22 Infertility Treatment Act 1995 (Vic), s3(1).

23 Clinical Trials Directive, 2001/20/EC, Article 9(6).

24 Clinical Trials Regulation, 536/2014.

25 UNESCO, ‘Universal Declaration on the Human Genome and Human Rights’, (1998).

26 The definition of ‘permitted embryo’ requires that ‘no nuclear or mitochondrial DNA of any cell of the embryo has been altered’ (see Human Fertilisation and Embryology Act 2008, S. 3ZA(4)(b)), which prima facie prevents implantation of genetically modified embryos. MRT is rendered legal via specific provision for regulations to include within the ‘permitted category’ embryos that have undergone ‘a prescribed process designed to prevent the transmission of serious mitochondrial disease’ (see S. 3ZA(5)). This provision was implemented in the Human Fertilisation and Embryology (Mitochondrial Donation) Regulations 2015.

27 Note, however, that this does not constitute a ban on embryo research across the board, only on federal funding.

28 I. G. Cohen and E. Y. Adashi, ‘The FDA Is Prohibited from Going Germline’, (2016) Science, 353(6299), 545546.

29 S. Chan and M.-d-J. Medina Arellano, ‘Genome Editing and International Regulatory Challenges: Lessons from Mexico’, (2016) Ethics, Medicine and Public Health, 2(3), 426–434; S. Chan, ‘Embryo Gene Editing: Ethics and Regulation’, in K. Appasani (ed.), Genome Editing and Engineering: From TALENs, ZFNs and CRISPRs to Molecular Surgery (Cambridge: Cambridge University Press, 2018), pp. 454–463.

30 T. Ishii, ‘Potential Impact of Human Mitochondrial Replacement on Global Policy Regarding Germline Gene Modification’, (2014) Reproductive BioMedicine Online, 29(2), 150–155; F. Baylis, ‘Human Nuclear Genome Transfer (So-Called Mitochondrial Replacement): Clearing the Underbrush’, (2017) Bioethics, 31(1), 7–19.

31 A. Van Mil et al., ‘Potential Uses for Genetic Technologies: Dialogue and Engagement Research Conducted on Behalf of the Royal Society’, (Hopkins van Mil, 2017).

32 S. Chan, ‘Playing It Safe? Precaution, Risk, and Responsibility in Human Genome Editing’, (2020) Perspectives in Biology and Medicine, 63(1), 111–125.

33 McMillan, Chapter 37 in this volume; G. Cavaliere, ‘A 14-Day Limit for Bioethics: The Debate over Human Embryo Research’, (2017) BMC Medical Ethics, 18(1), 38; S. Chan, ‘How to Rethink the Fourteen-Day Rule’, (2017) Hastings Center Report, 47(3), 5–6; S. Chan, ‘How and Why to Replace the 14-Day Rule’, (2018) Current Stem Cell Reports, 4(3), 228–234.

34 The principle of procreative autonomy, or reproductive liberty, is well-established ethically: see for example J. A. Robertson, Children of Choice: Freedom and the New Reproductive Technologies (Princeton University Press, 1994); R. Dworkin, Life’s Dominion: An Argument about Abortion, Euthanasia and Individual Freedom (New York: Vintage, 1993).

35 H.-J. Ehni, ‘Dual Use and the Ethical Responsibility of Scientists’, (2008) Archivum Immunologiae et Therapiae Experimentalis, 56(3), 147–152; H. Jonas, The Imperative of Responsibility (Chicago University Press, 1984). Further analysis is warranted of which collective responsibilities, particularly with respect to complicity, might have been at stake in the He case.

36 National Academies of Sciences, Engineering, and Medicine, ‘Second International Summit on Human Genome Editing: Continuing the Global Discussion: Proceedings of a Workshop – in Brief’, (Washington, DC: National Academies Press (US), 2019).

37 X. Zhai et al., ‘Chinese Bioethicists Respond to the Case of He Jiankui’, (Hastings Bioethics Forum, 7 February 2019), www.thehastingscenter.org/chinese-bioethicists-respond-case-jiankui/.

38 D. Cyranoski, ‘What CRISPR-Baby Prison Sentences Mean for Research’, (2020) Nature, 577(7789), 154155.

39 M. Brazier, ‘Regulating the Reproduction Business?’, (1999) Medical Law Review, 7(2), 166–193; A. Alghrani and S. Chan, ‘Scientists in the Dock: Criminal Law and the Regulation of Science’, in A. Alghrani et al. (eds), The Criminal Law and Bioethical Conflict: Walking the Tightrope (Cambridge: Cambridge University Press, 2013), pp. 121–139.

40 G. J. Annas et al., ‘Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations’, (2002) American Journal of Law and Medicine, 28(2–3), 151–178.

41 M. Meloni, Political Biology (London: Palgrave Macmillan, 2016).

42 UNESCO, ‘Universal Declaration on the Human Genome and Human Rights’, (1998), Art. 1.

43 S. Chan et al., ‘Mitochondrial Replacement Techniques, Scientific Tourism, and the Global Politics of Science’, (2017) Hastings Center Report, 47(5), 7–9.

44 E. Callaway, ‘Second Chinese Team Reports Gene Editing in Human Embryos’, (2016) Nature, doi:10.1038/nature.2016.19718.

45 E. Callaway, ‘Embryo Editing Gets Green Light’, (2016) Nature, 530(7588), 18.

46 J. Zhang, ‘Comment: Transparency Is a Growth Industry’, (2017) Nature, 545(7655), S65.

47 E. S. Lander et al., ‘Adopt a Moratorium on Heritable Genome Editing’, (2019) Nature, 567(7747), 165–168.

48 World Health Organisation, ‘Global Health Ethics: Human Genome Editing’, www.who.int/ethics/topics/human-genome-editing/en/.

49 Brokowski, ‘Do CRISPR Germline Ethics Statements Cut It?’.

50 J. Benjamin Hurlbut et al., ‘Building Capacity for a Global Genome Editing Observatory: Conceptual Challenges’, (2018) Trends in Biotechnology, 36(7), 639–641; S. Jasanoff et al., ‘Democratic Governance of Human Germline Genome Editing’, (2019) The CRISPR Journal, 2(5), 266–271; K. Saha et al., ‘Building Capacity for a Global Genome Editing Observatory: Institutional Design’, (2018) Trends in Biotechnology, 36(8), 741–743.

36 Cells, Animals and Human Subjects: Regulating Interspecies Biomedical Research

1 For a current discussion of human subject regulation see: I. G. Cohen and H. F. Lynch (eds), Human Subjects Research Regulation: Perspectives on the Future (Cambridge, MA: MIT Press, 2014).

2 For a current discussion of animal welfare regulation see: G. Davies et al., ‘Science, Culture, and Care in Laboratory Animal Research: Interdisciplinary Perspectives on the History and Future of the 3Rs’, (2018) Science, Technology, & Human Values, 43(4), 603–621.

3 C. Thompson, Good Science: The Ethical Choreography of Stem Cell Research (Cambridge, MA: MIT Press, 2013).

4 As discussed below, the term ‘human contributions’ is used in the NAS Guidelines for stem cell research oversight.

5 See, for example, I. Geesink et al., ‘Stem Cell Stories 1998–2008’, (2008) Science as Culture, 17(1), 1–11; L. F. Hogle, ‘Characterizing Human Embryonic Stem Cells: Biological and Social Markers of Identity’, (2010) Medical Anthropology Quarterly, 24(4), 433–450.

6 For a history of the use of the term chimera in developmental biology and stem cell science see: A. Hinterberger, ‘Marked “H” for Human: Chimeric Life and the Politics of the Human’, (2018) BioSocieties, 13(2), 453–469.

7 On natural chimerism see: A. Martin, ‘Ray Owen and the History of Naturally Acquired Chimerism’, (2015) Chimerism, 6(1–2), 2–7.

8 For a recent account see: N. C. Nelson, ‘Modeling Mouse, Human, and Discipline: Epistemic Scaffolds in Animal Behavior Genetics’, (2013) Social Studies of Science, 43(1), 3–29.

9 Human Fertilisation and Embryology Act 2008 (emphasis added).

10 UK House of Lords debate, 15 January 2008, Column 1183.

12 The Academy of Medical Sciences, ‘Animals Containing Human Material’, (2011).

13 S. Franklin, ‘Drawing the Line at Not-Fully-Human: What We Already Know’, (2003) The American Journal of Bioethics, 3(3), 25–27.

14 National Academy of Sciences ‘Final Report of The National Academies’ Human Embryonic Stem Cell Research Advisory Committee and 2010 Amendments to The National Academies’ Guidelines for Human Embryonic Stem Cell Research’, (National Academies Press, 2010).

15 A. Sharma et al., ‘Lift NIH Restrictions on Chimera Research’, (2015) Science, 350(6261), 640.

16 I. Hyun, ‘What’s Wrong with Human/Nonhuman Chimera Research?’ (2016) PLoS Biology, 14(8).

17 See: B. Hurlbut, Experiments in Democracy: Human Embryo Research and the Politics of Bioethics (Columbia University Press, 2017); G. Cavaliere, ‘A 14-day Limit for Bioethics: The Debate over Human Embryo Research’, (2017) BMC Medical Ethics, 18(1), 38.

18 T. Rashid et al., ‘Revisiting the Flight of Icarus: Making Human Organs from PSCs with Large Animal Chimeras’, (2014) Cell Stem Cell, 15(4), 406–409.

19 J. C. I. Belmonte, ‘Human Organs from Animal Bodies’, (2016) Scientific American, 315(5), 32–37, 36.

20 Hyun, ‘What’s Wrong with Human/Nonhuman Chimera Research?’

24 S. Jasanoff, Can Science Make Sense of Life? (Cambridge, UK: John Wiley & Sons, 2019).

25 T. Kobayashi et al., ‘Generation of Rat Pancreas in Mouse by Interspecific Blastocyst Injection of Pluripotent Stem Cells’, (2010) Cell, 142(5), 787–799.

26 S. Camporesi, ‘Crispr Pigs, Pigoons and the Future of Organ Transplantation: An Ethical Investigation of the Creation of Crispr-Engineered Humanised Organs in Pigs’, (2018) Etica & Politica/Ethics & Politics, 20(3), 35–52. Latest predictions are that a combination between genetically modified pigs and interspecies chimera organogenesis could deliver regenerative medicine solutions for transplantation, see F. Suchy and H. Nakauchi, ‘Interspecies Chimeras’, (2018) Current Opinion in Genetics & Development, 52, 36–41.

37 When Is Human? Rethinking the Fourteen-Day Rule

1 Human Fertilisation and Embryology Act 1990 (as amended), s3(4).

2 A. Deglincerti et al., ‘Self-organization of the In Vitro Attached Human Embryo’, (2016) Nature, 533(7602), 251; M. Shahbazi et al., ‘Self-organization of the Human Embryo in the Absence of Maternal Tissues’, (2016) Nature Cell Biology, 18(6), 700–708.

3 For some, embryos are inherently ‘human’, and this chapter does not intend to support or negate this case.

4 J. Appleby and A. Bredenoord, ‘Should the 14‐day Rule for Embryo Research Become the 28‐day Rule?’, (2018) EMBO Molecular Medicine, 10(9), e9437.

5 I. Hyun et al., ‘Embryology Policy: Revisit the 14 day Rule’, (2016) Nature, 533(7602), 169–171.

6 S. Chan, ‘How and Why to Replace the 14-Day Rule’, (2018) Current Stem Cell Reports, 4(3), 228–234.

7 Ethics Advisory Board, ‘Report and Conclusions: HEW Support of Research Involving Human In Vitro Fertilization and Embryo Transfer’, (Department of Health, Education and Welfare, 1979).

8 Committee of Inquiry into Human Fertilisation and Embryology, ‘Report of the Committee of Inquiry into Human Fertilisation and Embryology’, (Department of Health and Social Security, 1984), Cmnd 9314, 1984, (hereafter ‘Warnock Report’).

9 Ibid., 11.9.

10 Ibid., 11.16.

11 N. Hammond-Browning, ‘Ethics, Embryos and Evidence: A Look Back at Warnock’, (2015) Medical Law Review, 23(4), 588–619, 605.

12 Human Fertilisation and Embryology Act 1990.

13 P. Monahan, ‘Human Embryo Research Confronts Ethical “Rule”’, (2016) Science, 352(6286), 640.

14 Nuffield Council on Bioethics, ‘Human Embryo Culture’, (Nuffield Council on Bioethics, 2017).

15 It is worth noting that in 2017 Hulbert et al. found that there are no sensory systems or functional neural connections in embryos at the twenty-eight-day stage. For more discussion on this see Appleby and Bredenoord, ‘The 14‐day Rule’.

16 Hammond-Browning, ‘Ethics, Embryos and Evidence’, 604.

17 Ibid., 605.

18 See Abortion Act 1967, s1.

19 See C. McMillan et al., ‘Beyond Categorisation: Refining the Relationship between Subjects and Objects in Health Research Regulation’, (2021) Law, Innovation and Technology, doi: 10.1080/17579961.2021.1898314.

20 St George’s Healthcare NHS Trust v. S [1998] All ER 673, [1998] 3 WLR 936, 952.

21 Hammond-Browning, ‘Ethics, Embryos and Evidence’, 606.

22 S. Chan, ‘How to Rethink the Fourteen‐Day Rule’, (2017) Hastings Center Report, 47(3), 5–6.

23 Deglincerti et al., ‘Self-organization’, 533.

24 Shahbazi et al., ‘Self-organization of the Human Embryo’, 700.

25 Hyun et al., ‘Embryology Policy’, 169.

26 Chan, ‘How and Why’, 228.

27 See M. Ford, ‘Nothing and Not Nothing: Law’s Ambivalent Response to Transformation and Transgression at the Beginning of Life’ in S. Smith and R. Deazley (eds), The Legal, Medical and Cultural Regulation of the Body: Transformation and Transgression (London: Routledge, 2009), pp. 21–46.

28 Ibid., 43.

29 C. McMillan, The Human Embryo in Vitro: Breaking the Legal Stalemate (Cambridge University Press, 2021).

30 S. Taylor-Alexander et al., ‘Beyond Regulatory Compression: Confronting the Liminal Spaces of Health Research Regulation’, (2016) Law, Innovation and Technology, 8(2), 149–176; McMillan, ‘The Human Embryo’.

31 I.e. Should research and reproductive embryos be treated the same? Should the fourteen-day rule be extended? What can we find out about time between fourteen and twenty-eight days? Etc.

32 I.e. The question of how we should treat embryos is, of course, never certain because there is no objective answer; in recognition of moral pluralism it is very much a subjective matter.

33 See Ford, ‘Nothing and Not Nothing’, 31.

34 This is not to suggest that we could cross boundaries between research and reproduction, however.

35 Taylor-Alexander et al., ‘Beyond Regulatory Compression’.

36 E.g. S. Wong, ‘The Limits to Growth’, (2016) New Scientist, 232(3101), 18–19.

37 See McMillan, ‘The Human Embryo’.

38 See E. Jonlin, ‘The Voices of the Embryo Donors’, (2015) Trends in Molecular Medicine, 21(2), 55–57; S. Parry, ‘(Re) Constructing Embryos in Stem Cell Research: Exploring the Meaning of Embryos for People Involved in Fertility Treatments’, (2006) Social Science and Medicine, 62(10), 2349–2359.

39 See McMillan, ‘The Human Embryo’.

41 Appleby and Bredenoord, ‘The 14‐day Rule’.

42 G. Cavaliere, ‘A 14-day Limit for Bioethics: The Debate over Human Embryo Research’, (2017) BMC Medical Ethics, 18(38).

43 See McMillan, ‘The Human Embryo’.

38 A Perfect Storm: Non-evidence-Based Medicine in the Fertility Clinic

1 R. G. Edwards et al., ‘Early Stages of Fertilization In Vitro of Human Oocytes Matured In Vitro’, (1969) Nature, 221, 632–635.

2 S. Franklin, ‘Louise Brown: My Life as the World’s First Test-Tube Baby by Louise Brown and Martin Powell, Bristol Books (2015)’, (2016) Reproductive Biomedicine and Society Online, 3, 142–144.

3 H. K. Snick et al., ‘The Spontaneous Pregnancy Prognosis in Untreated Subfertile Couples: The Walcheren Primary Care Study’, (1997) Human Reproduction, 12(7), 1582–1588; E. R. te Velde et al., ‘Variation in Couple Fecundity and Time to Pregnancy: an Essential Concept in Human Reproduction’, (2000) Lancet, 355(9219), 1928–1929.

4 E. G. Papanikolaou et al., ‘Live Birth Rate Is Significantly Higher after Blastocyst Transfer than after Cleavage-Stage Embryo Transfer When at Least Four Embryos Are Available on Day 3 of Embryo Culture. A Randomized Prospective Study’, (2005) Human Reproduction, 20(11), 3198–3203.

5 K. Stocking et al., ‘Are Interventions in Reproductive Medicine Assessed for Plausible and Clinically Relevant Effects? A Systematic Review of Power and Precision in Trials and Meta-Analyses’, (2019) Human Reproduction, 34(4), 659–665.

6 Archie Cochrane famously said that obstetrics deserved ‘the wooden spoon’ for being the least scientific medical speciality. A. L. Cochrane, ‘1931–1971: A Critical Review with Particular Reference to the Medical Profession’ in G. Teeling-Smith and N. E. J. Wells (eds), Medicines for the Year 2000 (London: Office of Health Economics, 1979), pp. 2–12.

7 Human Fertilisation and Embryology Act 1990, sections 13(6), 17(1) and 17(1)(d).

8 J. Wilkinson et al., ‘Reproductive Medicine: Still More ART than Science?’, (2019) British Journal of Obstetrics and Gynaecology, 126(2), 138–141.

9 Stocking et al., ‘Interventions in Reproductive Medicine’; J. M. N. Duffy et al., ‘Core Outcome Sets in Women’s and Newborn Health: A Systematic Review’, (2017) British Journal of Obstetrics and Gynaecology, 124(10), 1481–1489.

10 J. Rayner et al., ‘Australian Women’s Use of Complementary and Alternative Medicines to Enhance Fertility: Exploring the Experiences of Women and Practitioners’, (2009) BMC Complementary and Alternative Medicine, 9(1), 52.

11 National Institute for Health and Care Excellence, ‘Fertility: Assessment and Treatment for People with Fertility Problems’, (NICE, 2013).

13 Human Fertilisation and Embryology Authority, ‘State of the Fertility Sector 2016–7’, (HFEA, 2017).

14 S. Howard, ‘The Hidden Costs of Infertility Treatment’, (2018) British Medical Journal, 361.

15 G. M. Hartshorne and R. J. Lilford, ‘Different Perspectives of Patients and Health Care Professionals on the Potential Benefits and Risks of Blastocyst Culture and Multiple Embryo Transfer’, (2002) Human Reproduction, 17(4), 1023–1030.

16 P. Braude, ‘One Child at a Time: Reducing Multiple Births through IVF, Report of the Expert Group on Multiple Births after IVF’, (Expert Group on Multiple Births after IVF, 2006).

17 Wilkinson et al., ‘Reproductive Medicine’.

18 A. K. Datta et al., ‘Add-Ons in IVF Programme – Hype or Hope?’, (2015) Facts, Views & Vision in ObGyn, 7(4), 241250.

19 C. N. M. Renckens, ‘Alternative Treatments in Reproductive Medicine: Much Ado About Nothing: “The Fact That Millions of People Do Not Master Arithmetic Does Not Prove That Two Times Two Is Anything Else than Four”: W. F. Hermans’, (2002) Human Reproduction, 17(3), 528–533.

20 J. Boivin and L. Schmidt, ‘Use of Complementary and Alternative Medicines Associated with a 30% Lower Ongoing Pregnancy/Live Birth Rate during 12 Months of Fertility Treatment’, (2009) Human Reproduction, 24(7), 1626–1631.

21 R. Barber, ‘The Killer Cells That Robbed Me of Four Babies’, Daily Mail (2 January 2011).

22 J. Fricker, ‘My Body Tried to Kill My Baby’, Daily Mail (2 July 2007).

23 See, for example, H. Shehata, quoted in BBC News, ‘Baby Born to Woman Who Suffered 20 Miscarriages’, BBC News (17 January 2014): ‘We found that some women’s natural killer cells are so aggressive they attack the pregnancy, thinking the foetus is a foreign body’.

24 Datta et al., ‘Add-Ons in IVF Programme’.

25 A. Moffett and N. Shreeve, ‘First Do No Harm: Uterine Natural Killer (NK) Cells in Assisted Reproduction’, (2015) Human Reproduction, 30(7), 1519–1525.

26 Datta et al., ‘Add-Ons in IVF Programme’.

27 R. Rai et al., ‘Natural Killer Cells and Reproductive Failure – Theory, Practice and Prejudice’, (2005) Human Reproduction, 20(5), 1123–1126.

28 HFEA, ‘Treatment Add-On’, (HFEA, 2019).

29 ‘I was Born to Be a Mum – And Couldn’t Have Done It without Reproductive Immunology’, (Zita West), www.zitawest.com/i-was-born-to-be-a-mum-and-couldnt-have-done-it-without-reproductive-immunology/.

30 J. Hawkins, ‘Selling ART: An Empirical Assessment of Advertising on Fertility Clinics’ Websites’, (2013) Indiana Law Journal, 88(4), 1147–1179.

31 E. A. Spencer et al., ‘Claims for Fertility Interventions: A Systematic Assessment of Statements on UK Fertility Centre Websites’, (2016) BMJ Open, 6(11).

32 Human Medicines Regulations 2012, 58(4)(a) and 58(4)(b).

33 General Medical Council, ‘Good Practice in Prescribing and Managing Medicines and Devices’, (GMC, 2013), para 68.

34 Ibid., para 69.

35 Ibid., paras 70(a) and 71.

36 Human Fertilisation and Embryology Act 1990, Schedule 3, para 1.

37 HFEA, ‘9th Code of Practice’, (HFEA, 2019), para 4.5.

38 Ibid., para 4(9).

39 HFEA, ‘Treatment Add-Ons’.

40 Wilkinson et al., ‘Reproductive Medicine’.

41 Human Fertilisation and Embryology Act 1990, section 17(1)(d).

42 A. J. Rutherford, ‘Should the HFEA Be Regulating the Add‐On treatments for IVF/ICSI in the UK? FOR: Regulation of the Fertility Add‐On Treatments for IVF’, (2017) British Journal of Obstetrics & Gynaecology, 124(12), 1848.

43 W. L. Ledger, ‘The HFEA Should Be Regulating Add‐On Treatments for IVF/ICSI’, (2017) British Journal of Obstetrics & Gynaecology, 124(12), 1850.

44 D. Archard, ‘Ethics of Regenerative Medicine and Innovative Treatments’, (Nuffield Council on Bioethics, 13 October 2017), www.nuffieldbioethics.org/blog/ethics-regenerative-medicine-innovative-stem-cell-treatment.

45 A. Petersen et al., ‘Stem Cell Miracles or Russian Roulette?: Patients’ Use of Digital Media to Campaign for Access to Clinically Unproven Treatments’, (2016) Health, Risk and Society, 17(7–8), 592–604.

46 E. Jackson et al., ‘Learning from Cross-Border Reproduction’, (2017) Medical Law Review, 25(1), 23–46.

47 Ledger, ‘HFEA Should Be Regulating Add‐On Treatments’.

48 Moffett and Shreeve, ‘First Do No Harm’.

39 Medical Devices Regulation: New Concepts and Perspectives Needed

1 C. Howard et al., ‘The Maker Movement: A New Avenue for Competition in the EU’, (2014) European View, 13(2), 333–340; M. Tan et al., ‘The Influence of the Maker Movement on Engineering and Technology Education’, (2016) World Transactions on Engineering and Technology Education, 14(1), 89–94.

2 Regulation (EU) 2017/745, 5 April 2017, on medical devices, amending Directive 2001/83/EC, Regulation (EC) No. 178/2002 and Regulation (EC) No. 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJ L 117, 5.5.2017.

3 Regulation (EU) 2017/746, 5 April 2017, on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU, OJ L 117, 5.5.2017.

4 M. Foucault, The Order of Things: An Archaeology of the Human Sciences (London: Routledge, 1966).

5 See D. Haraway, A Cyborg Manifesto: Science, Technology and Social Feminism in the Late Twentieth Century (London: Routledge, 1991).

6 N. Hayles, How We Become Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics (University of Chicago Press, 1999); S. Wilson, ‘The Composition of Posthuman Bodies’, (2017) International Journal of Performance Arts & Digital Media, 13(2), 137–152.

7 D. Serlin, Replaceable You: Engineering the Body in Postwar America (University of Chicago Press, 2004).

8 S. Harmon et al., ‘New Risks Inadequately Managed: The Case of Smart Implants and Medical Device Regulation’, (2015) Law, Innovation & Technology, 7(2), 231–252; G. Haddow et al., ‘Implantable Smart Technologies: Defining the “Sting” in Data and Device’, (2016) Health Care Analysis, 24(3), 210–227.

9 M. Donnarumma, ‘Beyond the Cyborg: Performance, Attunement and Autonomous Computation’, (2017) International Journal of Performance Arts & Digital Media, 13(2), 105–119; A. Brown et al., ‘Body Extension and the Law: Medical Devices, Intellectual Property, Prosthetics and Marginalisation (Again)’, (2018) Law, Innovation & Technology, 10(2), 161–184; M. Quigley and S. Ayihongbe, ‘Everyday Cyborgs: On Integrated Persons and Integrated Goods’, (2018) Medical Law Review, 26(2), 276–308.

10 The billions of objects linked in networks and exchanging information now include us, all melting into the fabric of our personal, social, and commercial environments: S. Gutwirth, ‘Beyond Identity?’, (2008) Identity in the Information Society, 1(1), 123–133.

11 The first practical technology for genetically designed humans – CRISPR Cas-9 – is being refined and applied: S. Harmon, ‘Gene-Edited Babies: A Cause for Concern’, (Impact Ethics, 2019), www.impactethics.ca/2019/03/08/genome-edited-babies-a-cause-for-concern. Synthetic beings would be the result of designed biological systems relying on existing and new DNA sequences and assembled to support natural evolution: J. Boeke et al., ‘The Genome Project—Write’, (2016) Science, 353(6295), 126–127. Multiple fields are working on artificial human-type cognitive function, which involves perception, processing, planning, retention, reasoning, and subjectivity: V. Müller (ed.), Fundamental Issues of Artificial Intelligence (Cham, Switzerland: Springer, 2016).

12 D. Lawrence and M. Brazier, ‘Legally Human? “Novel Beings” and English Law’, (2018) Medical Law Review, 26(2), 309–327.

13 R. Braidotti, The Posthuman (Polity Press, 2013); R. Dolphijn and I. van der Tuin, New Materialism: Interviews and Cartographies (Open Humanities Press, 2012).

14 G. Deleuze and F. Guattari, A Thousand Plateaus (London: Continuum, 1987); M. DeLanda, Assemblage Theory (Edinburgh University Press, 2016).

15 T. Tamari, ‘Body Image and Prosthetic Aesthetics: Disability, Technology and Paralympic Culture’, (2017) Body & Society, 23(2), 25–56.

16 S. Harmon et al., ‘Moving Toward a New Aesthetic’ in S. Whatley et al. (eds), Dance, Disability and Law: InVisible Difference (Bristol: Intellect, 2018), pp. 177–194.

17 I. Widäng and B. Fridlund, ‘Self‐Respect, Dignity and Confidence: Conceptions of Integrity among Male Patients’, (2003) Journal of Advanced Nursing, 42(1), 47–56.

18 G. Haddow et al., ‘Cyborgs in the Everyday: Masculinity and Biosensing Prostate Cancer’, (2015) Science as Culture, 24(4), 484–506.

19 How others perceive us is linked to how they look at us. Staring is a complex phenomenon of observation and internalisation with many facets: R. Garland-Thomson, Staring: How We Look (Oxford University Press, 2009). It is often defined as an oppressive act of disciplinary looking that subordinates the subject: L. Mulvey, ‘Visual Pleasure and Narrative Cinema’, (1975) Screen, 16(3), 6–18; M. Foucault, Foucault Live: Interviews, 1961–1984 (Semiotext(e), 1996); A. Clark, ‘Exploring Women’s Embodied Experiences of “The Gaze” in a Mix-Gendered UK Gym’, (2017) Societies, 8(1), 2.

20 M. Hildebrandt, ‘Profiling and the Identity of the European Citizen’ in M. Hildebrandt and S. Gutwirth (eds), Profiling the European Citizen: Cross-Disciplinary Perspectives (Berlin: Springer, 2008), pp. 303–326.

21 Gutwirth, ‘Beyond Identity?’

22 S. Lash and J. Friedman (eds), Modernity and Identity (Oxford: Blackwell, 1992); D. Polkinghorne, ‘Explorations of Narrative Identity’, (1996) Psychological Inquiry, 7(4), 363–367; A. Blasi and K. Glodis, ‘The Development of Identity: A Critical Analysis from the Perspective of the Self as Subject’, (1995) Developmental Review, 15(4), 404–433; L. Huddy, ‘From Social to Political Identity: A Critical Examination of Social Identity Theory’, (2001) Political Psychology, 22(1), 127–156.

23 M. Shildrick, ‘Individuality, Identity and Supplementarity in Transcorporeal Embodiment’ in K. Cahill et al. (eds), Finite but Unbounded: New Approaches in Philosophical Anthropology (Berlin: de Gruyter, 2017), pp. 153–172, p. 154.

24 S. Popat et al., ‘Bodily Extensions and Performance’, (2017) International Journal of Performance Arts & Digital Media, 13(2), 101–104.

25 Husayn v Poland (2015) 60 EHRR 16 (ECHR). See also Dickson v UK (2008) 46 EHRR 41 (Grand Chamber).

26 The Mental Capacity Act 2005 stipulates that third-party decision-makers must make decisions solely in the best interests of the person concerned, as understood from that person’s perspective. Where a decision interferes with the person’s physical integrity, the option that represents the least restrictive means must be adopted.

27 Vo v France (2005) 40 EHRR 12 (ECHR).

28 International Covenant on Civil and Political Rights (1966), Art. 23(2); International Covenant on Economic, Social and Cultural Rights (1966), Arts. 6(1) and 7.

29 European Convention on Human Rights and Fundamental Freedoms (1950), Art. 8 (right to private life); Goodwin v United Kingdom (28957/95) [2002] IRLR 664.

30 E. Mordini and C. Ottolini, ‘Body Identification, Biometrics and Medicine: Ethical and Social Considerations’, (2007) Annali dell’Istituto Superiore di Sanità, 43(1), 51–60.

31 [2002] EWHC 1593 (Admin).

32 J. Marshall, Personal Freedom through Human Rights Law? (Leiden: Martinus Nijhoff, 2009).

33 The existing right to privacy is extremely limited, and predominantly ‘negative’, not allowing the construction of positive claims related to identity: P. De Hert, A Right to Identity to Face the Internet of Things (Strasbourg: Council of Europe Publishing, 2007), www.cris.vub.be/files/43628821/pdh07_Unesco_identity_internet_of_things.pdf.

34 S. Harmon et al., ‘Struggling to be Fit: Identity, Integrity, and the Law’, (2017) SCRIPTed, 14(2), 326–344.

35 European Commission, ‘Medical Devices: Regulatory Framework’, (European Commission), www.ec.europa.eu/growth/sectors/medical-devices/regulatory-framework_en; CAMD Implementation Taskforce, ‘Medical Devices Regulation/In-vitro Diagnostics Regulation (MDR/IVDR) Roadmap’, (2018). During the transition, devices can be placed on the market under the new or old regime. It is unclear what impact these Regulations will have post-Brexit, but the UK, which implemented the old regime through the Medical Devices Regulations 2002, will have to comply with EU standards if it wishes to continue to trade within the EU. The Medicines and Healthcare products Regulatory Agency has highlighted its desire to retain a close working partnership with the EU: MHRA, ‘Medical Devices: EU Regulations for MDR and IVDR’, www.gov.uk/guidance/medical-devices-eu-regulations-for-mdr-and-ivdr; Medical Devices (Amendment etc.) (EU Exit) Regulations 2019, not yet approved.

36 G. Laurie, ‘Liminality and the Limits of Law in Health Research Regulation’, (2017) Medical Law Review, 25(1), 47–72; C. McMillan et al., ‘Beyond Categorisation: Refining the Relationship Between Subjects and Objects in Health Research Regulation’, (2021) Law, Innovation and Technology, doi: 10.1080/17579961.2021.1898314.

37 MDR Recital 2 cites the Treaty on the Functioning of the European Union as a foundation for its remit to harmonise the rules for market access and free movement of goods, and for setting high standards of device quality and safety.

38 IVDR Article 1 parallels this language for in vitro diagnostic medical devices, which are defined as any medical device that is a reagent, reagent product, calibrator, control material, kit, instrument, apparatus, piece of equipment, software or system, whether used alone or in combination, intended to be used in vitro for the examination of specimens, including blood and tissue donations, derived from the human body, for a number of purposes.

39 The Regulations’ insufficient transparency of clinical evidence for front-line actors has been noted: A. Fraser et al., ‘The Need for Transparency of Clinical Evidence for Medical Devices in Europe’, (2018) Lancet, 392(10146), 521–530.

40 MDR Arts. 51–60. Art. 51 creates the classes I, IIa, IIb and III, which are informed by the device’s intended purposes and inherent risks.

41 MDR Arts. 61–82. Art. 61 states that clinical data shall inform safety and performance requirements under normal conditions of intended use, the evaluation of undesirable side-effects and the risk/benefit ratio.

42 This narrowing has been recognised in the broader health technologies context: M. Flear, ‘Regulating New Technologies: EU Internal Market Law, Risk and Socio-Technical Order’ in M. Cremona (ed.), New Technologies and EU Law (Oxford University Press, 2016), pp. 74–122.

43 MDR Arts. 83–100. Art. 83 states that manufacturers shall plan, establish, document, implement, maintain and update a post-market surveillance system for each device proportionate to the risk class and appropriate for the device type. IVDR Recital 75 and Chapter VII are substantively similar.

44 Laurie, ‘Liminality and the Limits of Law’, 68. Also, McMillan et al., ‘Beyond Categorisation’.

45 A. van Gennep, The Rites of Passage (University of Chicago Press, 1960).

46 Quigley and Ayihongbe, ‘Everyday Cyborgs’, 305.

47 D. Dickenson, Property in the Body: Feminist Perspectives, 2nd Edition (Cambridge University Press, 2017).

48 R. Brownsword et al. (eds), ‘Introduction’, The Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017), pp. 3–38.

49 De Hert, note 33, argues that there ought to be a clear right to identity because people cannot function without it; it is like living, breathing, or being free to feel and think, all of which are minimal requirements for social justice in a rights-conscious society. Such recognition paves the way for identity to be protected as a legal right. He says that ‘states should undertake to respect the right of each person to preserve and develop his or her ipse and idem identity without unlawful interference’ (1). For more on identity as an emerging legal concept, see L. Downey, Emerging Legal Concepts at the Nexus of Law, Technology and Society: A Case Study in Identity, unpublished PhD thesis, University of Edinburgh (2017).

Afterword: What Could a Learning Health Research Regulation System Look Like?

1 As we go to press, we are heartened to read a blog by Natalie Banner, ‘A New Approach to Decisions about Data’, in which she advocates for the idea of ‘learning governance’ and with which we broadly agree. N. Banner, ‘A New Approach to Decisions about Data’ (Understanding Patient Data, 2020), www.understandingpatientdata.org.uk/news/new-approach-decisions-about-data.

2 Institute of Medicine, ‘Patients Charting the Course: Citizen Engagement and the Learning Health System’ (Institute of Medicine, 2011), 240.

3 National Academy of Engineering, ‘Engineering a Learning Healthcare System: A Look at the Future’ (National Academy of Engineering, 2011).

4 For an analysis of ethics as a (problematic?) negotiated regulatory tool in the neurosciences, see Pickersgill, Chapter 31, this volume.

5 National Academy of Engineering, ‘Engineering a Learning Healthcare System’, 5.

9 Institute of Medicine, ‘Crossing the Quality Chasm: A New Health System for the 21st Century’ (Institute of Medicine, 2001).

10 National Academy of Engineering, ‘Engineering a Learning Healthcare System’, 5.

11 For an account of ethical considerations in a learning healthcare system, see R. R. Faden et al., ‘An Ethics Framework for a Learning Health Care System: A Departure from Traditional Research Ethics and Clinical Ethics’ (2013) Hastings Center Report, 43(s1), S16–S27.

12 M. Calvert et al., ‘Advancing Regulatory Science and Innovation in Healthcare’ (Birmingham Health Partners, 2020).

13 Calvert et al., ‘Advancing Regulatory Science’, 6, citing S. Faulkner, ‘The Development of Regulatory Science in the UK: A Scoping Study’ (CASMI, 2018).

14 European Medicines Agency, ‘EMA Regulatory Science to 2025: Strategic Reflection’ (EMA, 2020), 5.

15 For a plea to recognise the value of upstream input from the social sciences and humanities, see M. Pickersgill et al., ‘Biomedicine, Self and Society: An Agenda for Collaboration and Engagement’ (2019) Wellcome Open Research, 4(9), https://doi.org/10.12688/wellcomeopenres.15043.1.

16 Kerasidou, Chapter 8, Aitken and Cunningham-Burley, Chapter 11, Chuong and O’Doherty, Chapter 12, and Burgess, Chapter 25, this volume.

17 P. Carter et al., ‘The Social Licence for Research: Why care.data Ran into Trouble’ (2015) Journal of Medical Ethics, 41(5), 404–409.

18 H. Hopkins et al., ‘Foundations of Fairness: Views on Uses of NHS Patients’ Data and NHS Operational Data’ (Understanding Patient Data, 2020), www.understandingpatientdata.org.uk/what-do-people-think-about-third-parties-using-nhs-data#download-the-research.

19 Kaye and Prictor, Chapter 10, this volume.

20 Schaefer, Chapter 3, this volume.

21 Kerasidou, Chapter 8, this volume.

22 Aitken and Cunningham-Burley, Chapter 11, Chuong and O’Doherty, Chapter 12, and Burgess, Chapter 25, this volume.

23 T. Foley and F. Fairmichael, ‘The Potential of Learning Healthcare Systems’ (The Learning Healthcare Project, 2015), 4.

24 Further on feedback loops in the health research context, see S. Taylor-Alexander et al., ‘Beyond Regulatory Compression: Confronting the Liminal Spaces of Health Research Regulation’ (2016) Law, Innovation and Technology, 8(2), 149–176.

25 For a discussion of a learning system in the context of AI and medical devices, see Ho, Chapter 28, this volume.

26 For a richer conceptualisation of regulatory failure than mere technological risk and safety concerns, see Flear, Chapter 16, this volume.

27 See, generally, P. Drahos (ed.), Regulatory Theory: Foundations and Applications (ANU Press, 2017), and more particularly R. Baldwin et al., Understanding Regulation: Theory, Strategy, and Practice (Oxford University Press, 2011), p. 158.

28 On such a systemic exercise for risk-benefit, see Coleman, Chapter 13, this volume. On the blinkered view of regulation that reduces assessment to purely techno-scientific evaluations of risk-benefit, see Haas and Cloatre, Chapter 30.

29 T. Swierstra and A. Rip, ‘Nano-ethics as NEST-ethics: Patterns of Moral Argumentation About New and Emerging Science and Technology’ (2007) Nanoethics, 1, 3–20, 8.

30 See further, van Delden and van der Graaf, Chapter 4, this volume.

31 A. Ganguli-Mitra et al., ‘Reconfiguring Social Value in Health Research Through the Lens of Liminality’ (2017) Bioethics, 31(2), 87–96.

32 For a more dynamic account of research ethics review, see Dove, Chapter 18, this volume.

33 Ganguli-Mitra and Hunt, Chapter 32, this volume.

34 Lipworth et al., Chapter 29, this volume.

35 Haas and Cloatre, Chapter 30, this volume.

36 K. Kipnis, ‘Vulnerability in Research Subjects: A Bioethical Taxonomy’ (2001) Commissioned Paper, www.aapcho.org/wp/wp-content/uploads/2012/02/Kipnis-VulnerabilityinResearchSubjects.pdf, 9.

37 See Rogers, Chapter 1, and Brassington, Chapter 9, this volume.

38 J. Clift, ‘A Blueprint for Dynamic Oversight: How the UK Can Take a Global Lead in Emerging Science and Technologies’ (Wellcome, 2019).

39 For further policy perspectives, see Meslin, Chapter 22, this volume.

40 On Rules, Principles, and Best Practices, see Sethi, Chapter 17, this volume.

41 I. Fletcher et al., ‘Co-production and Managing Uncertainty in Health Research Regulation: A Delphi Study’ (2020) Health Care Analysis, 28, 99–120, 99.

42 On Access Governance, see Shabani, Thorogood and Murtagh, Chapter 19, this volume.

43 Fletcher et al., ‘Co-production and Managing Uncertainty’, 109.

44 See Postan, Chapter 23, this volume.

45 Quotations in Fletcher et al., ‘Co-production and Managing Uncertainty’, 109.

46 See G. Laurie et al., ‘Charting Regulatory Stewardship in Health Research: Making the Invisible Visible’ (2018) Cambridge Quarterly of Healthcare Ethics, 27(2), 333–347; E. S. Dove, Regulatory Stewardship of Health Research: Navigating Participant Protection and Research Promotion (Cheltenham: Edward Elgar Publishing, 2020).

47 Laurie et al., ‘Charting Regulatory Stewardship’, 338.

48 Dove, Regulatory Stewardship of Health Research.

49 N. Stephens et al., ‘Documenting the Doable and Doing the Documented: Bridging Strategies at the UK Stem Cell Bank’ (2011) Social Studies of Science, 41(6), 791–813.

50 On institutional perspectives on regulation, see McMahon, Chapter 21, this volume.

51 Evidence of the need for such incentives is presented in A. Sorbie et al., ‘Examining the Power of the Social Imaginary through Competing Narratives of Data Ownership in Health Research’ (2021) Journal of Law and the Biosciences, https://doi.org/10.1093/jlb/lsaa068.

Table 28.1. FDA classification of medical devices by risks

Table 28.2. Comparing pharmaceutical trial phases and medical device trial stages

Table 28.3. Risk characterisation framework for software as a medical device
