It is a common trope in discussions of human health research, particularly as to its appropriate regulation, to frame the analysis in terms of the private and public interests that are at stake. Too often, in our view, these interests are presented as being in tension with each other, sometimes irreconcilably so. In this section, the authors grapple with this (false) dichotomy, both by providing deeper insights into the nature and range of the interests in play and by inviting us to rethink attendant regulatory responses and responsibilities. This is the common theme that unites the contributions.
The section opens with the chapter from Postan (Chapter 23) on the question of the return of individually relevant research findings to health research participants. Here an argument is made – adopting a narrative identity perspective – that greater attention should be paid to the informational interests of participants, beyond the possibility that findings might be of clinical utility. Set against the ever-changing nature of the researcher–participant relationship, Postan posits that there are good reasons to recognise these private identity interests, and, as a consequence, to reimagine the researcher as an interpretative partner in relation to research findings. At the same time, the implications of all of this for the wider research enterprise are recognised, not only in resource terms but also with respect to striking a defensible balance of responsibilities to participants while seeking to deliver the public value of research itself.
As to the concept of public interest per se, this has been tackled by Sorbie in Chapter 6, and various contributions in Section IB have addressed the role and importance of public engagement in the design and delivery of robust health research regulation. In this section, several authors build on these earlier chapters in multiple ways. For example, Taylor and Whitton (Chapter 24) directly challenge the putative tension between public and private interests, arguing that each is implicated in the other’s protection. They offer a reconceptualisation of privacy through a public interest lens, raising important questions for existing laws of confidentiality and data protection. Their perspective requires us to recognise the common interest at stake. Most distinctively, however, they extend their analysis to show how group privacy interests currently receive short shrift in health research regulation, and they suggest that this dangerous oversight must be addressed adequately because the failure to recognise group privacy interests might ultimately jeopardise the common public interest in health research.
Starkly, Burgess (Chapter 25) uses just such an example of threats to group privacy – the care.data debacle – to mount a case for mobilising public expertise in the design of health research regulation. Drawing on the notion of deliberative public engagement, he demonstrates not only how this process can counter asymmetries of power in the structural design of regulation, but also how the resulting advice about what is in the public interest can bring both legitimacy and trustworthiness to resultant models of governance. This is of crucial importance, because as he states: ‘[i]t is inadequate to assert or assume that research and its existing and emerging regulation is in the public interest’. His contribution allows us to challenge any such assertion and to move beyond it responsibly.
The last two contributions to this section continue this theme of structural reimagining of regulatory architectures, set against the interests and values in play. Vayena and Blasimme (Chapter 26) offer the example of Big Data to propose a model of adaptive governance that can adequately accommodate and respond to the diverse and dynamic interests at stake. Following principles-based regulation as previously discussed by Sethi in Chapter 17, they outline a model involving six principles and propose key factors for their implementation and operationalisation into effective governance structures and processes. This form of adaptive governance mirrors the discussions by Kaye and Prictor in Chapter 10. Importantly, the factors identified by the current authors – of social learning, complementarity and visibility – not only lend themselves to full and transparent engagement with the range of public and private interests, they require it. In the final chapter of this section, Brownsword (Chapter 27) invites us to address an overarching question that is pertinent to this entire volume: ‘how are the interests in pushing forward with research into potentially beneficial health technologies to be reconciled with the heterogeneous interests of the concerned who seek to push back against them?’ His contribution is to push back against the common regulatory response when discussing public and private interests: namely, to seek a ‘balance’. While not necessarily rejecting the balancing exercise as a helpful regulatory device at an appropriate point in the trajectory of regulatory responses to a novel technology, he implores us to place this in ‘a bigger picture of lexically ordered regulatory responsibilities’. For him, morally and logically prior questions are those that ask whether any new development – such as automated healthcare – poses threats to human existence and agency.
Only thereafter ought we to consider a role for the balancing exercise that is currently so prevalent in human health research regulation.
Collectively, these contributions significantly challenge the public/private trope in health research regulation, but they leave it largely intact as a framing device for engaging with the constantly changing nature of the research endeavour. This is helpful in ensuring that on-going conversations are not unduly disrupted. By the same token, individually these chapters provide a plethora of reasons to rethink how we frame public and private interests, and this in turn allows us to carve out new pathways in the future regulatory landscape. Thus:
Private interests have been expanded as to content (Postan) and extended as to their reach (Taylor and Whitton).
Moreover, the implications of recognising these reimagined private interests have been addressed, and not necessarily in ways resulting in inevitable tension with appeals to public interest.
The content of public interest has been aligned with deliberative engagement in ways that can increase the robustness of health research regulation as a participative exercise (Burgess).
Systemic oversight that is adaptive to the myriad of evolving interests has been offered as proof of principle (Vayena and Blasimme).
The default of seeking balance between public and private interests has been rightly questioned, at least as to its rightful place in the stack of ethical considerations that contribute to responsible research regulation (Brownsword).
This chapter offers a perspective on the long-running ethical debate about the nature and extent of responsibilities to return individually relevant research findings from health research to participants. It highlights the ways in which shifts in the research landscape are changing the roles of researchers and participants, the relationships between them, and what this might entail for the responsibilities owed towards those who contribute to research by taking part in it. It argues that a greater focus on the informational interests of participants is warranted and that, as a corollary to this, the potential value of findings beyond their clinical utility deserves greater attention. It proposes participants’ interests in using research findings in developing their own identities as a central example of this wider value and argues that these could provide grounds for disclosure.
23.2 Features of Existing Disclosure Guidance
This chapter is concerned with the questions of whether, why, when and how individually relevant findings, which arise in the course of health research, should be offered or fed back to the research participant to whom they directly pertain.Footnote 1 Unless otherwise specified, what will be said here applies to findings generated through observational and hands-on studies, as well as those using previously collected tissues and data.
Any discussion of ethical and legal responsibilities for disclosure of research findings must negotiate a number of category distinctions relating to the nature of the findings and the practices within which they are generated. However, as will become clear below, several lines of demarcation that have traditionally structured the debate are shifting. A distinction has historically been drawn between the intended (pertinent, or primary) findings from a study and those termed ‘incidental’ (ancillary, secondary, or unsolicited). ‘Incidental findings’ are commonly defined as individually relevant observations generated through research, but lying outwith the aims of the study.Footnote 2 Traditionally, feedback of incidental findings has been presented as more problematic than that of ‘intended findings’ (those the study set out to investigate). However, the cogency of this distinction is increasingly questioned, to the extent that many academic discussions and guidance documents have largely abandoned it.Footnote 3 There are several reasons for this, including difficulties in drawing a bright line between the categories in many kinds of studies, especially those that are open-ended rather than hypothesis-driven.Footnote 4 The relevance of researchers’ intentions to the ethics of disclosure is also questioned.Footnote 5 For these reasons, this chapter will address the ethical issues raised by the return of individually relevant research results, irrespective of whether they were intended.
The foundational question of whether findings should be fed back – or feedback offered as an option – is informed by the question of why they should. This may be approached by examining the extent of researchers’ legal and ethical responsibilities to participants – as shaped by their professional identities and legal obligations – the strength of participants’ legitimate interests in receiving feedback, or researchers’ responsibilities towards the research endeavour. The last of these includes consideration of how disclosure efforts might impact on wider public interests in the use of research resources and generation of valuable generalisable scientific knowledge, and public trust in research. These considerations then provide parameters for addressing questions of which kinds of findings may be fed back and under what circumstances. For example, which benefits to participants would justify the resources required for feedback? Finally, there are questions of how, including how researchers should plan and manage the pathway from anticipating the generation of such findings to decisions and practices around disclosure.
In the past two decades, a wealth of academic commentaries and consensus statements have been published, alongside guidance by research funding bodies and professional organisations, making recommendations about approaches to disclosure of research findings.Footnote 6 Some are prescriptive, specifying the characteristics of findings that ought to be disclosed, while others provide process-focused guidance on the key considerations for ethically, legally and practically robust disclosure policies. It is not possible here to give a comprehensive overview of all the permutations of responses to the four questions above. However, some prominent and common themes can be extracted.
Most strikingly, in contrast to the early days of this debate, it is rare now to encounter the bald question of whether research findings should ever be returned. Rather, the key concerns are what should be offered and how.Footnote 7 The resource implications of identifying, validating and communicating findings are still acknowledged, but these are seen as feeding into an overall risk/benefit analysis rather than automatically implying non-disclosure. In parallel with this shift, there is less scepticism about researchers’ general disclosure responsibilities. In the UK, researchers are not subject to a specific legal duty to return findings.Footnote 8 Nevertheless, there does appear to be a growing consensus that researchers do have ethical responsibilities to offer findings – albeit limited and conditional ones.Footnote 9 The justifications offered for these responsibilities vary widely, however, and indeed are not always made explicit. This chapter will propose grounds for such responsibilities.
When it comes to determining what kinds of findings should be offered, three jointly necessary criteria are evident across much published guidance. These are captured pithily by Lisa Eckstein et al. as ‘volition, validity and value’.Footnote 10 Requirements for analytic and clinical validity entail that the finding reliably measures and reports what it purports to. Value refers to usefulness or benefit to the (potential) recipient. In most guidance this is construed narrowly in terms of the information’s clinical utility – understood as actionability and sometimes further circumscribed by the seriousness of the condition indicated.Footnote 11 Utility for reproductive decision-making is sometimes included.Footnote 12 Although some commentators suggest that ‘value’ could extend to the non-clinical, subjectively determined ‘personal utility’ of findings, it is generally judged that this alone would be insufficient to justify disclosure costs.Footnote 13 The third necessary condition is that the participant should have agreed voluntarily to receive the finding, having been advised at the time of consenting to participate about the kinds of findings that could arise and having had the opportunity to assent to or decline feedback.Footnote 14
Accompanying this greater emphasis on the ‘which’ and ‘how’ questions is an increasing focus upon the need for researchers to establish clear policies for disclosing findings, explained in informed consent procedures, together with an accompanying strategy for anticipating, identifying, validating, interpreting, recording, flagging-up and feeding-back findings in ways that maximise benefits and minimise harms.Footnote 15 Broad agreement among scholars and professional bodies that – in the absence of strong countervailing reasons – there is an ethical responsibility to disclose clinically actionable findings is not, however, necessarily reflected in practice, where studies may still lack disclosure policies, or have policies of non-disclosure.Footnote 16
Below I shall advance the claim that, despite a greater emphasis upon, and normalisation of, feedback of findings, there are still gaps, which mean that feedback policies may not be as widely instituted or appropriately directed as they should be. Chief among these gaps are, first, a continued focus on researchers’ inherent responsibilities considered separately from participants’ interests in receiving findings and, second, a narrow conception of when these interests are engaged. These gaps become particularly apparent when we attend to the ways in which the roles of researchers and participants and relationships between them have shifted in a changing health research landscape. In the following sections, I will first highlight the nature of these changes, before proposing what these mean for participants’ experiences, expectations and informational interests and, thus, for ethically robust feedback policies and practices.
23.3 The Changing Health Research Landscape
The landscape of health research is changing. Here I identify three facets of these changes and consider how these could – and indeed should – have an effect on the practical and ethical basis of policies and practices relating to the return of research findings.
The first of these developments is a move towards ‘learning healthcare’ systems and translational science, in which the transitions between research and care are fluid and cyclical, and the lines between patient and participant are often blurred.Footnote 17 The second is greater technical capacities, and appetite, for data-driven research, including secondary research uses of data and tissues – sourced from patient records, prior studies, or biobanks – and linkage between different datasets. This is exemplified by the growth in large-scale, high-profile genomic studies such as the UK’s ‘100,000 Genomes’ project.Footnote 18 The third development is increasing research uses of technologies and methodologies, such as functional neuroimaging, genome-wide association studies, and machine-learning, which lend themselves to open-ended, exploratory inquiries rather than hypothesis-driven ones.Footnote 19 I wish to suggest that these three developments have a bearing on disclosure responsibilities in three key respects: erosion of the distinction between research and care; generation of findings with unpredictable or ambiguous validity and value; and a decreasing proximity between researchers and participants. I will consider each of these in turn.
Much of the debate about disclosure of findings has, until recently, been premised on there being a clear distinction between research and care, and what this entails in terms of divergent professional priorities and responsibilities, and the experiences and expectations of patients and participants. Whereas it has been assumed that clinicians’ professional duty of care requires disclosure of – at least – clinically actionable findings, researchers are often seen as being subject to a contrary duty to refrain from feedback if this would encourage ‘therapeutic misconceptions’, or divert focus and resources from the research endeavour.Footnote 20 However, as health research increasingly shades into ‘learning healthcare’, these distinctions become increasingly untenable.Footnote 21 It is harder to insist that responsibilities to protect information subjects’ interests do not extend to those engaged in research, or that participants’ expectations of receiving findings are misconceived. Furthermore, if professional norms shift towards more frequent disclosure, so the possibility that healthcare professionals may be found negligent for failing to disclose becomes greater.Footnote 22 These changes may well herald more open feedback policies in a wider range of studies. However, if these policies are premised solely on the duty of care owed in healthcare contexts to participants-as-patients, then the risk is that any expansion will fail to respond adequately to the very reasons why findings should be offered at all – to protect participants’ core interests.
Another consequence of the shifting research landscape, and the growth of data-driven research in particular, lies in the nature of findings generated. For example, many results from genomic analysis or neuroimaging studies are probabilistic rather than strongly predictive, and produce information of varying quality and utility.Footnote 23 And open-ended and exploratory studies pose challenges precisely because what they might find – and thus its significance to participants – is unpredictable and, especially in new fields of research, may be less readily validated. These characteristics are of ethical significance because they present obstacles to meeting the requirements (noted above) of securing validity and value, and of ascertaining what participants wish to receive. And where validity and value are uncertain, robust analysis of the relative risks and benefits of disclosure is not possible. Given these challenges, it is apparent that meeting participants’ informational interests will require more than just instituting clear disclosure policies. Instead, more flexible and discursive disclosure practices may be needed to manage unanticipated or ambiguous findings.
Increasingly, health research is conducted using data or tissues that were collected for earlier studies, or sourced from biobanks or patient records.Footnote 24 In these contexts, in contrast to the closer relationships entailed by translational studies, researchers may be geographically, temporally and personally far-removed from the participants. This poses a different set of challenges when determining responsibilities for disclosing research findings. First, it may be harder to argue that researchers working with pre-existing data collections hold a duty of care to participants, especially one analogous to that of a healthcare professional. Second, there is the question of who is responsible for disclosure: is it those who originally collected the materials, those who manage the resource, or those who generate the findings? Third, if consent is only sought when the data or tissues are originally collected, it is implausible that a one-off procedure could address in detail all future research uses, let alone the characteristics of all future findings.Footnote 25 And finally, in these circumstances, disclosure may be more resource-intensive where, for example, much time has elapsed or datasets have been anonymised. These observations underscore the problems of thinking of ‘health research’ as a homogenous category in which the respective roles and expectations of researchers and participants are uniform and easily characterised, and ethical responsibilities attach rigidly to professional identities.
Finally, it is also instructive to attend to shifts in wider cultural and legal norms surrounding our relationships to information about ourselves and the increasing emphasis on informational autonomy, particularly with respect to accessing and controlling information about our health or genetic relationships. There is increased legal protection of informational interests beyond clinical actionability, including the interest in developing one’s identity, and in reproductive decision-making.Footnote 26 For example, European human rights law has recognised the right of access to one’s health records and the right to know one’s genetic origins as aspects of the Article 8 right to respect for private life.Footnote 27 And in the UK, the legal standard for information provision by healthcare professionals has shifted from one determined by professional judgement, to that which a reasonable patient would wish to know.Footnote 28
When taken together, the factors considered in this section provide persuasive grounds for looking beyond professional identities, clinical utility and one-off consent and information transactions when seeking to achieve ethically defensible feedback of research findings. In the next section, I will present an argument for grounding ethical policies and practices upon the research participants’ informational interests.
23.4 Re-focusing on Participants’ Interests
What emerges from the picture above is that the respective identities and expectations of researchers and participants are changing, and with them the relationships and interdependencies between them. Some of these changes render research relationships more intimate, akin to clinical care, while others make them more remote. And the roles that each party fulfils, or is expected to fulfil, may be ambiguous. This lack of clarity presents obstacles to relying on prior distinctions and definitions and raises questions about the continued legitimacy of some existing guiding principles.Footnote 29 Specifically, it disrupts the foundations upon which disclosure of individually relevant results might be premised. In this landscape, it is no longer possible or appropriate – if indeed it ever was – simply to infer what ethical feedback practice would entail from whether or not an actor is categorised as ‘a researcher’. This is due not only to ambiguity about the scope of this role and associated responsibilities. It also looks increasingly unjustifiable to give only secondary attention to the nature and specificity of participants’ interests: to treat these as if they are a homogenous group of narrowly health-related priorities that may be honoured, provided doing so does not get in the way of the goal of generating generalisable scientific knowledge. There is a need to revisit the nature and balance of private and public interests at stake. My proposal here is that participants’ informational interests, and researchers’ particular capacities to protect these interests, should comprise the heart of ethical feedback practices.
There are several reasons why it seems appropriate – particularly now – to place participants’ interests at the centre of decision-making about disclosure. First, participants’ roles in research are no less in flux than researchers’. While it may be true that the inherent value of any findings to participants – whether they might wish to receive them and whether the information would be beneficial or detrimental to their health, well-being, or wider interests – may not be dramatically altered by emerging research practices, their motivations, experiences and expectations of taking part may well be different. In the landscape sketched above, it is increasingly appropriate to think of participants less as passive subjects of investigation, but rather as partners in the research relationship.Footnote 30 This is a partnership grounded in the contributions that participants make to a study and in the risks and vulnerabilities incurred when they agree to take part. The role of participant-as-partner is underscored by the rise of the idea that there is an ethical ‘duty to participate’.Footnote 31 This idea has escaped the confines of academic argument. Implications of such a duty are evident in public discourse concerning biobanks and projects such as 100,000 Genomes. For example, referring to that project, the (then) Chief Medical Officer for England has said that to achieve ‘the genomic dream’, we should ‘agree to use of data for our own benefit and others’.Footnote 32 A further compelling reason for placing the interests of participants at the centre of return policies is that doing so is essential to building confidence and demonstrating trustworthiness in research.Footnote 33 Without this trust there would be no participants and no research.
In light of each of these considerations, it is difficult to justify the informational benefits of research accruing solely to the project aims and the production of generalisable knowledge, without participants’ own core informational interests inviting corresponding respect – that is, respect that reflects the nature of the joint research endeavour and the particular kinds of exposure and vulnerabilities participants incur.
If demonstrating respect were simply a matter of reciprocal recognition of participants’ contributions to knowledge production, then it could perhaps be achieved by means other than feedback. However, research findings occupy a particular position in the vulnerabilities, dependencies and responsibilities of the researcher–participant relationship. Franklin Miller and others argue that researchers have responsibilities to disclose findings that arise from a particular pro tanto ethical responsibility to help others and protect their interests within certain kinds of professional relationships.Footnote 34 These authors hold that this responsibility arises because, in their professional roles, researchers have both privileged access to private aspects of participants’ lives, and particular opportunities and skills for generating information of potential significance and value to participants to which they would not otherwise have access.Footnote 35 I would add to this that being denied the opportunity to obtain otherwise inaccessible information about oneself not only fails to protect participants from avoidable harms, it also fails to respect and benefit them in ways that recognise the benefits they bring to the project, the vulnerabilities they may incur, and the trust they invest, when doing so.
None of what I have said seeks to suggest that research findings should be offered without restriction, or at any cost. The criteria of ‘validity, value and volition’ continue to provide vital filters in ensuring that information meets recipients’ interests at all. However, providing these three conditions are met, investment of research resources in identifying, validating, offering and communicating individually relevant findings may be ethically justified, even required, when receiving them could meet non-trivial informational interests. One question that this leaves unanswered, of course, is what counts as an interest of this kind.
If responsibilities for feedback are premised on the value of particular information to participants, it seems arbitrary to confine this value solely to clinical actionability, unless health-related interests are invariably more critical than all others. It is not at all obvious that this is so. This section provides a rationale for recognising at least one kind of value beyond clinical utility.Footnote 36
It is suggested here that where research findings support a participant’s abilities to develop and inhabit their own sense of who they are, significant interests in receiving these findings will be engaged. The kinds of findings that could perform this kind of function might include, for example, those that provide diagnoses that explain longstanding symptoms – even where there is no effective intervention – susceptibility estimates that instigate patient activism, or indications of carrier status or genetic relatedness that allow someone to (re)assess or understand their relationships and connections to others.
The claim to value posited here goes beyond appeals to ‘personal utility’, as commonly characterised in terms of curiosity, or some unspecified, subjective value. It is unsurprising that, thus construed, personal utility is rarely judged to engage sufficiently significant interests to warrant the effort and resources of disclosing findings.Footnote 37 However, the claim here – which I have more fully discussed elsewhereFootnote 38 – is that information about the states, dispositions and functions of our bodies and minds, and our relationships to others (and others’ bodies) – such as that conveyed by health research findings – is of value to us when, and to the extent that, it provides constitutive and interpretive tools that help us to develop our own narratives about who we are – narratives that constitute our identities.Footnote 39 Specifically, this value lies not in contributing to just any identity-narrative, but one that makes sense when confronted by our embodied and relational experiences and supports us in navigating and interpreting these experiences.Footnote 40 These experiences include those of research participation itself. A coherent, ‘inhabitable’ self-narrative is of ethical significance, because such a narrative is not just something we passively and inevitably acquire. Rather, it is something we develop and maintain, which provides the practical foundations for our self-understanding, interpretive perspective and values, and thus our autonomous agency, projects and relationships.Footnote 41 If we do indeed have a significant interest in developing and maintaining such a narrative, and some findings generated in health research can support us in doing so, then my claim is that these findings may be at least as valuable to us as those that are clinically actionable. As such, our critical interests in receiving them should be recognised in feedback policies and practices.
In response to concern that this proposal constitutes an unprecedented incursion of identity-related interests into the (public) values informing governance of health research, it is noted that the very act of participating in research is already intimately connected to participants’ conceptions of who they are and what they value, as illustrated by choices to participate motivated by family histories of illness,Footnote 42 or objections to tissues or data being used for commercial research.Footnote 43 Participation already impacts upon the self-understandings of those who choose to contribute. Indeed, it may often be seen as contributing to the narratives that comprise their identities. Seen in this light, it is not only appropriate, but vital, that the identity-constituting nature of research participation is reflected in the responsibilities that researchers – and the wider research endeavour – owe to participants.
What would refocusing ethical feedback for research findings to encompass the kinds of identity-related interests described above mean for the responsibilities of researchers and others? I submit that it entails responsibilities both to look beyond clinical utility to anticipate when findings could contribute to participants’ self-narratives and to act as an interpretive partner in discharging responsibilities for offering and communicating findings.
It must be granted that the question of when identity-related interests are engaged by particular findings is a more idiosyncratic matter than clinical utility. This serves to underscore the requirement that any disclosure of findings is voluntary. And while this widening of the conception of ‘value’ is in concert with increasing emphasis on individually determined informational value in healthcare – as noted above – it is not a defence of unfettered informational autonomy, requiring the disclosure of whatever participants might wish to see. In order for research findings to serve the wider interests described above, they must still constitute meaningful and reliable biomedical information. There is no value without validity.Footnote 44
These two factors signal that the ethical responsibilities of researchers will not be discharged simply by disclosing findings. There is a critical interpretive role to be fulfilled at several junctures, if participants’ interests are to be protected. These include: anticipating which findings could impact on participants’ health, self-conceptions or capacities to navigate their lives; equipping participants to understand at the outset whether findings of these kinds might arise; and, if participants choose to receive these findings, ensuring that these are communicated in a manner that is likely to minimise distress, and enhance understanding of the capacities and limitations of the information in providing reliable explanations, knowledge or predictions about their health and their embodied states and relationships. This places the researcher in the role of ‘interpretive partner’, supporting participants to make sense of the findings they receive and to accommodate – or disregard – them in conducting their lives and developing their identities.
This role of interpretive partner represents a significant extension of responsibilities from an earlier era in which a requirement to report even clinically significant findings was questioned. The question then arises as to who will be best placed to fulfil this role. As noted above, dilemmas about who should disclose arise most often in relation to secondary research uses of data.Footnote 45 These debates err, however, when they treat this as a question focused on professional and institutional duties abstracted from participants’ interests. When we attend to these interests, the answer that presents itself is that feedback should be provided by whoever is best placed to recognise and explain the potential significance of the findings to participants. And it may in some cases be that those best placed to do this are not researchers at all, but professionals performing a role analogous to genetic counsellors.
Even though the triple threshold conditions for disclosure – validity, value and volition – still apply, any widening of the definition of value implies a larger category of findings to be validated, offered and communicated. This will have resource implications. And – as with any approach to determining which findings should be fed back and how – the benefits of doing so must still be weighed against any resultant jeopardy to the socially valuable ends of research. However, if we are not simply paying lip-service to, but taking seriously, the ideas that participants are partners in, not merely passive objects of, research, then protecting their interests – particularly those incurred through participation – is not supererogatory, but an intrinsic part of recognising their contribution to biomedical science, their vulnerability, trust and experiences of contributing. Limiting these interests to receipt of clinically actionable findings is arbitrary and out of step with wider ethico-legal developments in the health sphere. Just because these findings arise in the context of health research is not on its own sufficient reason for interpreting ‘value’ solely in clinical terms.
In this chapter, I have argued that there are two shortcomings in current ethical debates and guidance regarding policies and practices for feeding back individually relevant findings from health research. These are, first, a focus on the responsibilities of actors for disclosure that remains insufficiently grounded in the essential questions of when and how disclosure would meet core interests of participants; and, second, a narrow interpretation of these interests in terms of clinical actionability. Specifically, I have argued that participants have critical interests in accessing research findings where these offer valuable tools of narrative self-constitution. These shortcomings have been particularly brought to light by changes in the nature of health research, and addressing them becomes ever more important as the role of participants evolves from that of objects of research to that of active members of shared endeavours. I have proposed that in this new health research landscape, there are not only strong grounds for widening feedback to include potentially identity-significant findings, but also to recognise the valuable role of researchers and others as interpretive partners in the relational processes of anticipating, offering and disclosing findings.
1 This chapter will not discuss responsibilities actively to pursue findings, or disclosures to family members in genetic research, nor is it concerned with feedback of aggregate findings. For discussion of researchers’ experiences of encountering and disclosing incidental findings in neuroscience research see Pickersgill, Chapter 31 in this volume.
2 S. M. Wolf et al., ‘Managing Incidental Findings in Human Subjects Research: Analysis and Recommendations’, (2008) The Journal of Law, Medicine & Ethics, 36(2), 219–248.
3 L. Eckstein et al., ‘A Framework for Analyzing the Ethics of Disclosing Genetic Research Findings’, (2014) The Journal of Law, Medicine & Ethics, 42(2), 190–207.
4 B. E. Berkman et al., ‘The Unintended Implications of Blurring the Line between Research and Clinical Care in a Genomic Age’, (2014) Personalized Medicine, 11(3), 285–295.
5 E. Parens et al., ‘Incidental Findings in the Era of Whole Genome Sequencing?’, (2013) Hastings Center Report, 43(4), 16–19.
6 For example, in addition to sources cited elsewhere in this chapter, see R. R. Fabsitz et al., ‘Ethical and Practical Guidelines for Reporting Genetic Research Results to Study Participants’, (2010) Circulation: Cardiovascular Genetics, 3(6), 574–580; G. P. Jarvik et al., ‘Return of Genomic Results to Research Participants: The Floor, the Ceiling, and the Choices in Between’, (2014) The American Journal of Human Genetics, 94(6), 818–826.
7 C. Weiner, ‘Anticipate and Communicate: Ethical Management of Incidental and Secondary Findings in the Clinical, Research, and Direct-to-Consumer Contexts’, (2014) American Journal of Epidemiology, 180(6), 562–564.
8 Medical Research Council and Wellcome Trust, ‘Framework on the Feedback of Health-Related Findings in Research’, (Medical Research Council and Wellcome Trust, 2014).
9 Berkman et al., ‘The Unintended Implications’.
10 Eckstein et al., ‘A Framework for Analyzing’.
11 Wolf et al., ‘Managing Incidental Findings’.
13 Eckstein et al., ‘A Framework for Analyzing’.
14 Medical Research Council and Wellcome Trust, ‘Framework on the Feedback’.
16 Berkman et al., ‘The Unintended Implications’.
17 S. M. Wolf et al., ‘Mapping the Ethics of Translational Genomics: Situating Return of Results and Navigating the Research-Clinical Divide’, (2015) Journal of Law, Medicine & Ethics, 43(3), 486–501.
18 G. Laurie and N. Sethi, ‘Towards Principles–Based Approaches to Governance of Health–Related Research Using Personal Data’, (2013) European Journal of Risk Regulation, 4(1), 43–57. Genomics England, ‘The 100,000 Genomes Project’, (Genomics England), www.genomicsengland.co.uk/about-genomics-england/the-100000-genomes-project/.
19 Eckstein et al., ‘A Framework for Analyzing’.
20 A. L. Bredenoord et al., ‘Disclosure of Individual Genetic Data to Research Participants: The Debate Reconsidered’, (2011) Trends in Genetics, 27(2), 41–47.
21 Wolf et al., ‘Mapping the Ethics’.
22 In the UK, the expected standard of duty of care is assessed by reference to what reasonable members of the profession would do, as well as what recipients want to know (see C. Johnston and J. Kaye, ‘Does the UK Biobank Have a Legal Obligation to Feedback Individual Findings to Participants?’, (2004) Medical Law Review, 12(3), 239–267).
23 D. I. Shalowitz et al., ‘Disclosing Individual Results of Clinical Research: Implications of Respect for Participants’, (2005) JAMA, 294(6), 737–740.
24 Laurie and Sethi, ‘Towards Principles–Based Approaches’.
25 G. Laurie and E. Postan, ‘Rhetoric or Reality: What Is the Legal Status of the Consent Form in Health-Related Research?’, (2013) Medical Law Review, 21(3), 371–414.
26 Odièvre v. France (App. no. 42326/98) 38 EHRR 871; ABC v. St George’s Healthcare NHS Trust & Others EWCA Civ 336.
27 J. Marshall, Personal Freedom through Human Rights Law?: Autonomy, Identity and Integrity Under the European Convention on Human Rights (Leiden: Brill, 2008).
28 A. M. Farrell and M. Brazier, ‘Not So New Directions in the Law of Consent? Examining Montgomery v Lanarkshire Health Board’, (2016) Journal of Medical Ethics, 42(2), 85–88.
29 G. Laurie, ‘Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces In-Between?’, (2016) Medical Law Review, 25(1), 47–72.
30 J. Kaye et al., ‘From Patients to Partners: Participant-Centric Initiatives in Biomedical Research’, (2012) Nature Reviews Genetics, 13(5), 371.
31 J. Harris, ‘Scientific Research Is a Moral Duty’, (2005) Journal of Medical Ethics, 31(4), 242–248.
32 S. C. Davies, ‘Chief Medical Officer Annual Report 2016: Generation Genome’, (Department of Health and Social Care, 2017), p. 4.
33 Wolf et al., ‘Mapping the Ethics’.
34 F. G. Miller et al., ‘Incidental Findings in Human Subjects Research: What Do Investigators Owe Research Participants?’, (2008) The Journal of Law, Medicine & Ethics, 36(2), 271–279.
36 In Chapter 39 of this volume, Shawn Harmon presents a parallel argument that medical device regulations are similarly premised on a narrow conception of harm that fails to account for identity impacts.
37 Eckstein et al., ‘A Framework for Analyzing’.
38 E. Postan, ‘Defining Ourselves: Personal Bioinformation as a Tool of Narrative Self-Conception’, (2016) Journal of Bioethical Inquiry, 13(1), 133–151.
39 M. Schechtman, The Constitution of Selves (Ithaca, NY: Cornell University Press, 1996).
40 Postan, ‘Defining Ourselves’.
41 C. Mackenzie, ‘Introduction: Practical Identity and Narrative Agency’ in K. Atkins and C. Mackenzie (eds), Practical Identity and Narrative Agency (Abingdon: Routledge, 2013), pp. 1–28.
42 L. d’Agincourt-Canning, ‘Genetic Testing for Hereditary Breast and Ovarian Cancer: Responsibility and Choice’, (2006) Qualitative Health Research, 16(1), 97–118.
43 P. Carter et al., ‘The Social Licence for Research: Why care.data Ran into Trouble’, (2015) Journal of Medical Ethics, 41(5), 404–409.
44 E. M. Bunnik et al., ‘Personal Utility in Genomic Testing: Is There Such a Thing?’, (2014) Journal of Medical Ethics, 41(4), 322–326.
45 S. M. Wolf et al., ‘Managing Incidental Findings and Research Results in Genomic Research Involving Biobanks and Archived Data Sets’, (2012) Genetics in Medicine, 14(4), 361–384.
Privacy and public interest are reciprocal concepts, mutually implicated in each other’s protection. This chapter considers how viewing the concept of privacy through a public interest lens can reveal the limitations of the narrow conception of privacy currently inherent to much health research regulation (HRR). Moreover, it reveals how the public interest test, applied in that same regulation, might mitigate risks associated with a narrow conception of privacy.
The central contention of this chapter is that viewing privacy through the lens of public interest allows the law to bring into focus more things of common interest than privacy law currently recognises. We are not the first to recognise that members of society share a common interest in both privacy and health research. Nor are we the first to suggest that public is not necessarily in opposition to private, with public interests capable of accommodating private and vice versa.Footnote 1 What is novel about our argument is the suggestion that we might invoke public interest requirements in current HRR to protect group privacy interests that might otherwise remain out of sight.
It is important that HRR takes this opportunity to correct its vision. A failure to do so will leave HRR unable to take into consideration research implications with profound consequences for future society, and will undermine the legitimacy of HRR itself. It is no exaggeration to say that the value of a confidential healthcare system may come to depend on whether HRR acknowledges the significance of group data to the public interest. It is group data that shapes health policies, measures their success, and determines the healthcare opportunities offered to members of particular groups. Individual opportunity and entitlement are dependent upon group classification.
The argument here is three-fold: (1) a failure to take common interests into account when making public interest decisions undermines the legitimacy of the decision-making process; (2) a common interest in privacy extends to include group interests; (3) the law’s current myopia regarding group privacy interests in data protection law and the law of confidence can be corrected, to varying extents, by bringing group privacy interests into view through the lens of public interest.
24.2 Common Interests, Public Interest and Legitimacy
In this section, we seek to demonstrate how a failure to take the full range of common (group) interests into account when making public interest decisions will undermine the legitimacy of those decisions.
When Held described broad categories into which different theories of public interest might be understood to fall, she listed three: preponderance or aggregative theories, unitary theories and common interest theories.Footnote 2 When Sorauf earlier composed his own list, he combined common interests with values and gave the category the title ‘commonly-held value’.Footnote 3 We have separately argued that a compelling conception of public interest may be formed by uniting elements of ‘common interest’ and ‘common value’ theories of public interest.Footnote 4 It is, we suggest, through combining facets of these two approaches that one can overcome the limitations inherent to each. Here we briefly recap this argument before seeking to build upon it.
Fundamental to common interest theories of the public interest is the idea that something may serve ‘the ends of the whole public rather than those of some sector of the public’.Footnote 5 If one accepts the idea that there may be a common interest in privacy protection, as well as in the products of health research, then ‘common interest theory’ brings both privacy and health research within the scope of public interest consideration. However, it cannot explain how – in the case of any conflict – they ought to be traded off against each other – or against other common interests – to determine the public interest in a specific scenario.
In contrast to common interest theories, commonly held value theories claim the ‘public interest emerges as a set of fundamental values in society’.Footnote 6 If one accepts that a modern liberal democracy places a fundamental value upon all members of society being respected as free and equal citizens, then any interference with individual rights should be defensible in terms that those affected can both access and have reason to endorseFootnote 7 – with discussion subject to the principles of public reasoning.Footnote 8 Such a commitment is enough to fashion a normative yardstick, capable of driving a public interest determination. However, the object of measurement remains underspecified.
It is through combining aspects of common interest and common value approaches that a practical conception of the public interest begins to emerge: any trade-off between common interests ought to be defensible in terms of common value: for reasons that those affected by a decision can both access and have reason to endorse.Footnote 9
An advantage of this hybrid conception of public interest is its connection with (social) legitimacy.Footnote 10 If a decision-maker fails to take into account the full range of interests at stake, then they undermine not only any public interest claim, but also the legitimacy of the decision-making process underpinning it.Footnote 11 Of course, this does not imply that the legitimacy of a system depends upon everyone perceiving the ‘public interest’ to align with their own contingent individual or common interests. Public-interest decision-making should, however, ensure that, when the interests of others displace any individual’s interests, including those held in common, it is (ideally) transparent why this has happened and (again, ideally) the reasons for displacement are acceptable as ‘good reasons’ to the individual.Footnote 12 If the displaced interest is more commonly held, it is even more important for a system practically concerned with maintaining legitimacy to account transparently for that interest within its decision-making process.
In this section, the key claim is that a common interest in privacy extends beyond a narrow atomistic conception of privacy to include group interests.
We are aware of no ‘real definition’ of privacy.Footnote 13 There are, however, many stipulative or descriptive definitions, contingent upon use of the term within particular cultural contexts. Here we operate with the idea that privacy might be conceived in the legal context as representing ‘norms of exclusivity’ within a society: the normative expectation that some states of information separation are, by default, to be maintained.Footnote 14 This is a broad conception of privacy, extending beyond the atomistic one that Bennett and Raab observe to be the prevailing privacy paradigm in many Western societies.Footnote 15 It is not necessary to defend a broad conception of privacy in order to recognise a common interest in privacy protection. It is, however, necessary to broaden the conception in order to bring all of the possible common interests in privacy into view. As Bennett and Raab note, the atomistic conception of privacy
fails to properly understand the construction, value and function of privacy within society.Footnote 16
Our ambition here is not to demonstrate an atomistic conception to be ‘wrong’ in any objective or absolute sense, but rather to recognise the possibility that a coherent conception of privacy may extend its reach and capture additional values and functions. In 1977, after a comprehensive survey of the literature available at the time, Margulis proposed the following consensus definition of privacy:
[P]rivacy, as a whole or in part, represents control over transactions between person(s) and other(s), the ultimate aim of which is to enhance autonomy and/or to minimize vulnerability.Footnote 17
Nearly thirty years after the definition was first offered, Margulis recognised that his early attempt at a consensus definition
failed to note that, in the privacy literature, control over transactions usually entailed limits on or regulation of access to self (Allen, 1998), sometimes to groups (e.g., Altman, 1975), and occasionally to larger collectives such as organisations (e.g., Westin, 1967).Footnote 18
The adjustment is important. It allows for a conception of privacy to recognise that there may be relevant norms, in relation to transactions involving data, that do not relate to identifiable individuals but are nonetheless associated with normative expectation of data flows and separation. Not only is there evidence that there are already such expectations in relation to non-identifiable data,Footnote 19 but data relating to groups – rather than just individuals – will be of increasing importance.Footnote 20
There are myriad examples of how aggregated data have led to differential treatment of individuals due to association with group characteristics.Footnote 21 Beyond the obvious examples of individual discrimination and stigmatisation due to inferences drawn from (perceived) group membership, there can be group harm(s) to collective interests including, for example, harm connected to things held to be of common cultural value and significance.Footnote 22 It is the fact that data relates to the group level that leaves cultural values vulnerable to misuse of the data.Footnote 23 This goes beyond a recognition that privacy may serve ‘not just individual interests but also common, public, and collective purposes’.Footnote 24 It is recognition that it is not only individual privacy but group privacy norms that may serve these common purposes. In fact, group data, and the norms of exclusivity associated with it, are likely to be of increasing significance for society. As Taylor, Floridi and van der Sloot note,
with big data analyses, the particular and the individual is no longer central. … Data is analysed on the basis of patterns and group profiles; the results are often used for general policies and applied on a large scale.Footnote 25
This challenges the adequacy of a narrow atomistic conception of privacy to account for what will increasingly matter to society. De-identification does not protect an individual against harms arising from their association with a group – including groups that may be created through the research itself and may not otherwise exist.Footnote 26 In the next part, we suggest not only that the concept of the public interest can be used to bring the full range of privacy interests into view, but also that a failure to do so will undermine the legitimacy of any public interest decision-making process.
The argument in this section is that, although HRR does not currently recognise the concept of group privacy interests, through the concept of public interest inherent to both the law of data protection and the duty of confidence, there is opportunity to bring group privacy interests into view.
The Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (hereafter, Treaty 108) (as amended)Footnote 27 cast the template for subsequent data protection law when it placed the individual at the centre of its object and purposeFootnote 28 and defined ‘personal data’ as:
any information relating to an identified or identifiable individual (‘data subject’)Footnote 29
This definition narrows the scope of data protection law even further than to data relating to individuals: data relating to unidentified or unidentifiable individuals fall outside its concern. This blinkered view is replicated in data protection instruments from the first through to the most recent: the EU General Data Protection Regulation (GDPR).
The GDPR is only concerned with personal data, defined in a substantively similar and narrow fashion to Treaty 108. In so far as its object is privacy protection, it is predicated upon a relatively narrow and atomistic conception of privacy. However, if the concerns associated with group privacy are viewed through the lens of public interest, then they may be given definition and traction even within the scope of a data protection instrument like the GDPR. The term ‘the public interest’ appears in the GDPR no fewer than seventy times. It has a particular significance in the context of health research. This is an area, like criminal investigation, in which the public interest has always been protected.
Our argument is that it is through the application of the public interest test to health research governance in data protection law that there is an opportunity to recognise, in part, common interests in group privacy. For example, any processing of personal data within the material and territorial scope of the GDPR requires a lawful basis. Among the legal bases most likely to be applicable to the processing of personal data for research purposes are that the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller (Article 6(1)(e)), or that it is necessary for the purposes of the legitimate interests pursued by the controller (Article 6(1)(f)). In the United Kingdom (UK), where universities are considered to be public authorities, they are unable to rely upon ‘legitimate interests’ as a basis for lawful processing. Much health research in the UK will thus be carried out on the basis that it is necessary for the performance of a task in the public interest. Official guidance issued in the UK is that organisations relying upon the necessity of processing to carry out a task ‘in the public interest’
should document their justification for this, by reference to their public research purpose as established by statute or University Charter.Footnote 30
Mere assertion that a particular processing operation is consistent with an organisation’s public research purpose will provide relatively scant assurance that the operation is necessary for the performance of a task in the public interest. More substantial assurance would come from documenting a justification relevant to the particular processing operations. Where research proposals are considered by institutional review boards, such as university or NHS ethics committees, then independent consideration by such bodies of the public interest in the processing operation would provide the rationale. We suggest this provides an opportunity for group privacy concerns to be drawn into consideration. They might also form part of any privacy impact assessment carried out by the organisation. What is more, for the sake of legitimacy, any interference with group interests, or risk of harm to members of a group or to the collective interests of the group as a whole, should be subject to the test that members of the group be offered reasons to accept the processing as appropriate.Footnote 31 Such a requirement might support good practice in consumer engagement prior to the roll-out of major data initiatives.
Admittedly, while this may provide opportunity to bring group privacy concerns into consideration where processing is carried out by a public authority (and the legal basis of processing is performance of a task carried out in the public interest), this only provides limited penetration of group privacy concerns into the regulatory framework. It would not, for example, apply where processing was in pursuit of legitimate interests or another lawful basis. There are other limited opportunities to bring group privacy concerns into the field of vision of data protection law through the lens of public interest.Footnote 32 However, for as long as the gravitational orbit of the law is around the concept of ‘personal data’, the chances to recognise group privacy interests are likely to be limited and peripheral. By contrast, more fundamental reform may be possible in the law of confidence.
As with data protection and privacy,Footnote 33 there is an important distinction to be made between privacy and confidentiality. However, the UK has successfully defended its ability to protect the right to respect for private and family life, as recognised by Article 8 of the European Convention on Human Rights (ECHR), by pointing to the possibility of an action for breach of confidence.Footnote 34 It has long been recognised that the law’s protection of confidence is grounded in the public interestFootnote 35 but, as Lord Justice Briggs noted in R (W,X,Y and Z) v. Secretary of State for Health (2015),
the common law right to privacy and confidentiality is not absolute. English common law recognises the need for a balancing between this right and other competing rights and interests.Footnote 36
The argument put forward here is consistent with the idea that the protection of privacy and other competing rights and interests, such as those associated with health research, are each in the public interest. The argument here is that when considering the appropriate balance or trade-off between different aspects of the public interest, then a broader view of privacy protection than has hitherto been taken by English law is necessary to protect the legitimacy of decision-making. Such judicial innovation is possible.
The law of confidence has already evolved considerably over the past twenty or so years. Since the Human Rights Act 1998Footnote 37 came into force in 2000, the development of the common law has been in harmony with Articles 8 and 10 of the ECHR.Footnote 38 As a result, as Lord Hoffmann put it,
What human rights law has done is to identify private information as something worth protecting as an aspect of human autonomy and dignity.Footnote 39
Protecting private information as an aspect of individual human autonomy and dignity might signal a shift toward the kind of narrow and atomistic conception of privacy associated with data protection law. This would be as unnecessary as it would be unfortunate. In relation to the idea of privacy, the European Court of Human Rights has itself said that
The Court does not consider it possible or necessary to attempt an exhaustive definition of the notion of ‘private life’ … Respect for private life must also comprise to a certain degree the right to establish and develop relationships with other human beings.Footnote 40
It remains open to the courts to recognise that the implications of group privacy concerns have a bearing on an individual’s ability to establish and develop relations with other human beings. Respect for human autonomy and dignity may yet serve as a springboard toward a recognition by the law of confidence that data processing impacts upon the conditions under which we live social (not atomistic) lives and our ability to establish and develop relationships as members of groups. After all, human rights are due to members of a group and their protection has always been motivated by group concerns.Footnote 41
One of us has argued elsewhere that English law took a wrong turn when R (Source Informatics) v. Department of HealthFootnote 42 was taken to be authority for the proposition that a duty of confidence cannot be breached through the disclosure of non-identifiable data. It is possible that the ratio in Source Informatics may yet be re-interpreted and recognised to be consistent with a claim that legal duties may be engaged through use and disclosure of non-identifiable data.Footnote 43 In some ways, this would simply be to return to the roots of the legal protection of privacy. In her book The Right to Privacy, Megan Richardson traces the origins and influence of the ideas underpinning the legal right to privacy. As she remarks, ‘the right from the beginning has been drawn on to serve the rights and interests of minority groups’.Footnote 44 Richardson recognises that, even in those cases where an individual was the putative focus of any action or argument,
Once we start to delve deeper, we often discover a subterranean network of families, friends and other associates whose interests and concerns were inexorably tied up with those of the main protagonist.Footnote 45
As a result, it has always been the case that the right to privacy has ‘broader social and cultural dimensions, serving the rights and interests of groups, communities and potentially even the public at large’.Footnote 46 It would be a shame if, at a time when we may need it most, the duty of confidence were to deny its own potential to protect reasonable expectations in the use and disclosure of information simply because misuse had the potential to impact more than one identifiable individual.
The argument has been founded on the claim that a commitment to the protection of common interests in privacy and the product of health research, if placed alongside the commonly held value in individuals as free and equal persons, may establish a platform upon which one can construct a substantive idea of the public interest. If correct, then it is important to a proper calculation of the public interest to understand the breadth of privacy interests that need to be accounted for if we are to avoid subjugating the public to governance, and a trade-off between competing interests, that they have no reason to accept.
Enabling access to the data necessary for health research is in the public interest. So is the protection of group privacy. Recognising this point of connection can help guide decision-making where there is some kind of conflict or tension. The public interest can provide a common, commensurate framing. When this framing has a normative dimension, then this grounds the claim that the full range of common interests ought to be brought into view and weighed in the balance. One must capture all interests valued by the affected public, whether individual or common in nature, to offer them a reason to accept a particular trade-off between privacy and the public interest in health research. To do otherwise is to get the balance of governance wrong and compromise its social legitimacy.
That full range of common interests must include interests in group data. An understanding of what the public interest requires in a particular situation is short-sighted if this is not brought into view. An implication is that group interests must be taken into account within an interpretation and application of public interest in data protection law. Data controllers should be accountable for addressing group privacy interests in any public interest claim. With respect to the law of confidence, there is scope for even more significant reform. If the legitimacy of the governance framework, applicable to health data, is to be assured into the future, then it needs to be able to see – so that it might protect – reasonable expectations in data relating to groups of persons and not just identifiable individuals. Anything else will be a myopic failure to protect some of the most sensitive data about people simply on the grounds that misuse does not affect a sole individual but multiple individuals simultaneously. That is not a governance model that we have any reason to accept and we have the concept of public interest at our disposal to correct our vision and bring the full range of relevant interests into view.
1 The idea that both privacy and health research may be described as ‘public interest causes’ is also compellingly developed in W. W. Lowrance, Privacy, Confidentiality, and Health Research (Cambridge University Press, 2012), and the relationship between privacy and the public interest in C. D. Raab, ‘Privacy, Social Values and the Public Interest’ in A. Busch and J. Hofmann (eds), Politik und die Regulierung von Information [Politics and the Regulation of Information] (Baden-Baden, Germany: Politische Vierteljahresschrift, Sonderheft 46, 2012), pp. 129–151.
2 V. P. Held, The Public Interest and Individual Interests (New York: Basic Books, 1970).
3 F. J. Sorauf, ‘The Public Interest Reconsidered’, (1957), The Journal of Politics, 19(4), 616–639.
4 M. J. Taylor ‘Health Research, Data Protection, and the Public Interest in Notification’, (2011) Medical Law Review, 19(2), 267–303; M. J. Taylor and T. Whitton, ‘Public Interest, Health Research and Data Protection Law: Establishing a Legitimate Trade-Off between Individual Control and Research Access to Health Data’, (2020) Laws, 9(1), 6.
5 M. Meyerson and E. C. Banfield, cited by Sorauf ‘The Public Interest Reconsidered’, 619.
6 J. Bell, ‘Public Interest: Policy or Principle?’ in R. Brownsword (ed.), Law and the Public Interest: Proceedings of the 1992 ALSP Conference (Stuttgart: Franz Steiner, 1993) cited in M. Feintuck, Public Interest in Regulation (Oxford University Press, 2004), p. 186.
7 There is a connection here with what has been described by Rawls as ‘public reasons’: limited to premises and modes of reasoning that are accessible to the public at large. L. B. Solum, ‘Public Legal Reason’, (2006) Virginia Law Review, 92(7), 1449–1501, 1468.
8 ‘The virtue of public reasoning is the cultivation of clear and explicit reasoning orientated towards the discovery of common grounds rather than in the service of sectional interests, and the impartial interpretation of all relevant available evidence.’ Nuffield Council on Bioethics, ‘Public Ethics and the Governance of Emerging Biotechnologies’, (Nuffield Council on Bioethics, 2012), 69.
9 G. Gaus, The Order of Public Reason: A Theory of Freedom and Morality in a Diverse and Bounded World (Cambridge University Press, 2011), p. 19. Note the distinction Gaus draws here between the Restricted and the Expansive view of Freedom and Equality.
10 We here associate legitimacy with ‘the capacity of the system to engender and maintain the belief that the existing political institutions are the most appropriate ones for the society’: S. M. Lipset, Political Man: The Social Bases of Politics (Baltimore, MD: Johns Hopkins University Press, 1981), p. 64. This is consistent with recognition that the ‘liberal principle of legitimacy states that the exercise of political power is justifiable only when it is exercised in accordance with constitutional essentials that all citizens may reasonably be expected to endorse in the light of principles and ideals acceptable to them as reasonable and rational’, Solum, ‘Public Legal Reason’, 1472. See also D. Curtin and A. J. Meijer, ‘Does Transparency Strengthen Legitimacy?’, (2006) Information Polity, 11(2), 109–122, 112 and M. J. Taylor, ‘Health Research, Data Protection, and the Public Interest in Notification’, (2011) Medical Law Review, 19(2), 267–303.
11 The argument offered is a development of one originally presented in M. J. Taylor, Genetic Data and the Law (Cambridge University Press, 2012), see esp. pp. 29–34.
12 The term ‘accept’ is chosen over ‘prefer’ for good reason. M. J. Taylor and N. C. Taylor ‘Health Research Access to Personal Confidential Data in England and Wales: Assessing Any Gap in Public Attitude between Preferable and Acceptable Models of Consent’, (2014) Life Sciences, Society and Policy, 10(1), 1–24.
13 A ‘real definition’ is to be contrasted with a nominal definition. A real definition may associate a word or term with elements that must necessarily be associated with the referent (a priori). A nominal definition may be discovered by investigating word usage (a posteriori). For more, see Stanford Encyclopedia of Philosophy, ‘Definitions’, (Stanford Encyclopedia of Philosophy, 2015), www.plato.stanford.edu/entries/definitions/.
14 G. Laurie recognises privacy to be a state of non-access. G. Laurie, Genetic Privacy: A Challenge to Medico-Legal Norms (Cambridge University Press, 2002), p. 6. We prefer the term ‘exclusivity’ rather than ‘separation’ as it recognises that a lack of separation in one aspect does not deny a privacy claim in another. E.g. one’s normative expectations regarding use and disclosure are not necessarily weakened by sharing information with health professionals. For more see M. J. Taylor, Genetic Data and the Law: A Critical Perspective on Privacy Protection (Cambridge University Press, 2012), pp. 13–40.
15 See C. J. Bennett and C. D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Ashgate, 2003), p. 13.
17 S. T. Margulis, ‘Conceptions of Privacy: Current Status and Next Steps’, (1977) Journal of Social Issues, 33(3), 5–21, 10.
18 S. T. Margulis, ‘Privacy as a Social Issue and a Behavioural Concept’, (2003) Journal of Social Issues, 59(2), 243–261, 245.
19 Department of Health, ‘Summary of Responses to the Consultation on the Additional Uses of Patient Data’, (Department of Health, 2008).
20 Our argument has no application to aggregate data that does not relate to a group until or unless that association is made.
21 A number are described for example by V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor (New York: St Martin’s Press, 2018).
22 See e.g. Foster v. Mountford (1976) 14 ALR 71 (Australia).
23 An example of the kind of common purpose that privacy may serve relates to the protection of culturally significant information. A well-known example of this is the harm associated with the research conducted with the Havasupai Tribe in North America. R. Dalton, ‘When Two Tribes Go to War’, (2004) Nature, 430(6999), 500–502; A. Harmon, ‘Indian Tribe Wins Fight to Limit Research of Its DNA’, The New York Times (21 April 2010). Similar concerns had been expressed by the Nuu-chah-nulth of Vancouver Island, Canada, when genetic samples provided for one purpose (to discover the cause of rheumatoid arthritis) were used for other purposes. J. L. McGregor, ‘Population Genomics and Research Ethics with Socially Identifiable Groups’, (2007) Journal of Law and Medicine, 35(3), 356–370, 362. Proposals to establish a genetic database on Tongans floundered when the ethics policy focused on the notion of individual informed consent and failed to take account of the traditional role played by the extended family in decision-making. B. Burton, ‘Proposed Genetic Database on Tongans Opposed’, (2002) BMJ, 324(7335), 443.
24 P. M. Regan, Legislating Privacy: Technology, Social Values, and Public Policy (University of North Carolina Press, 1995) p. 221.
25 L. Taylor et al. (eds), Group Privacy: New Challenges of Data Technologies (New York: Springer, 2017), p. 5.
26 Taylor et al., ‘Group Privacy’, p. 7.
27 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, Strasbourg, 28 January 1981, in force 1 October 1985, ETS No. 108, Protocol CETS No. 223.
28 ‘To protect every individual, whatever his or her nationality or residence, with regard to the processing of their personal data, thereby contributing to respect for his or her human rights and fundamental freedoms, and in particular the right to privacy.’
29 ‘Convention for the Protection of Individuals’, Article 2(a).
30 Health Research Authority, ‘Legal Basis for Processing Data’, (NHS Health Research Authority, 2018). www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/data-protection-and-information-governance/gdpr-detailed-guidance/legal-basis-processing-data/.
31 M. J. Taylor and T. Whitton, ‘Public Interest, Health Research and Data Protection Law: Establishing a Legitimate Trade-Off between Individual Control and Research Access of Health Data,’ (2020) Laws, 9(6), 1–24, 17–19; J. Rawls, The Law of Peoples (Harvard University Press, 1999), pp. 129–180.
32 E.g. The conception of public interest proposed in this chapter would allow concerns associated with processing in a third country, or an international organisation, to be taken into consideration where associated with issues of group privacy. Article 49(5) of the General Data Protection Regulation, Regulation (EU) 2016/679, OJ L 119, 4 May 2016.
33 Although data protection law seeks to protect fundamental rights and freedoms, in particular the right to respect for a private life, without collapsing the concepts of data protection and privacy.
34 Earl Spencer v. United Kingdom 25 EHRR CD 105.
35 W v. Egdell 1 All ER 835; Ch 359.
36 R (W, X, Y and Z) v. Secretary of State for Health EWCA Civ 1034.
37 Campbell v. MGN Ltd UKHL 22, All ER (D) 67 (May), per Lord Nicholls.
38 Ibid.
39 Ibid.
40 Niemietz v. Germany 13710/88.
41 Regan, ‘Legislating Privacy’, p. 8.
42 EWCA Civ 3011.
43 M. J. Taylor, ‘R (ex p. Source Informatics) v. Department of Health’ in J. Herring and J. Wall (eds), Landmark Cases in Medical Law (Oxford: Hart, 2015), pp. 175–192; D. Beyleveld, ‘Conceptualising Privacy in Relation to Medical Research Values’ in S. A. M. MacLean (ed.), First Do No Harm (Farnham, UK: Ashgate, 2006), p. 151. It is interesting to consider how English Law may have something to learn in this respect from the Australian courts, e.g. Foster v. Mountford (1976) 14 ALR 71.
44 M. Richardson, The Right to Privacy (Cambridge University Press, 2017), p. 120.
45 Ibid., p. 122.
46 Ibid., p. 119.
This chapter will develop the role that deliberative public engagement should have in health research regulation. The goal of public deliberation is to mobilise the expertise that members of the public have, to explore their values in relation to specific trade-offs, with the objective of producing recommendations that respect diverse interests. Public deliberation requires that a small group be invited to a structured event that supports informed, civic-minded consideration of diverse perspectives on the public interest. Ensuring that the perspectives considered include those that might otherwise be marginalised or silenced requires explicitly designing the small group in relation to the topic. Incorporating public expertise enhances the trustworthiness of policies and governance by explicitly acknowledging and negotiating diverse public interests. Trustworthiness is distinct from trust, so the chapter begins by exploring that distinction in the context of the example of care.data and the loss of trust in the English National Health Service’s (NHS) use of electronic health records for research. While better public engagement prior to the announcement might have avoided the loss of trust, subsequent deliberative public engagement may build trustworthiness into the governance of health research endeavours and contribute to re-establishing trust.
Some activities pull at the loose threads of social trust. These events threaten to undermine the presumption of legitimacy that underlies activities directed to the public interest. The English NHS care.data programme is one cautionary tale (NHS, 2013).Footnote 1 The trigger event was the distribution of a pamphlet to households. The pamphlet informed the public that a national database of patients’ medical records would be used for patient care and to monitor outcomes, and that research would take place on anonymous datasets. The announcement was entirely legitimate within the existing regulatory regime. The Editor-in-Chief of the British Medical Journal summarised: ‘But all did not go to plan. NHS England’s care.data programme failed to win the public’s trust and lost the battle for doctors’ support. Two reports have now condemned the scheme, and last week the government decided to scrap it.’Footnote 2
Public distrust is often stimulated by the political mobilisation of a segment of the public, but it may lead to a wider rejection of previously non-controversial trade-offs. In the case of care.data, the first response was to ensure better education about benefits and enhanced informed consent. The Caldicott Report on the care.data programme called for better technology standards, publication of disclosure procedures, an easy opt-out procedure and a ‘dynamic consent’ process.Footnote 3
There are good reasons to doubt that improved regulation and informed consent procedures alone will restore the loss, or sustain current levels, of public trust. It is unlikely that the negative reaction to care.data had to do with an assessment of the adequacy of the regulations for privacy and access to health data. Moreover, everything proposed under care.data was perfectly lawful. It is far more likely that the reaction reflected a rejection of what was presented as a clear case of justified use of patients’ medical records. The perception, at least among some of the public and practitioners, was that the trade-offs were not legitimate. The destabilisation of trust that patient information was being used in appropriate ways, even with what should have been an innocuous articulation of practice, suggests a shift in how the balance between access to information and privacy is perceived. Regulatory experts on privacy and informed consent may strengthen protection or recalibrate what is protected. But such measures do not develop an understanding of, and response to, how a wider public might assign proportionate weight to privacy and access in issues related to research regulation. Social controversy about the relative weight of important public interests demonstrates the loss of legitimacy of previous decisions and processes. It is the legitimacy of the programmes that requires public input.
The literature on public understanding of science also suggests that merely providing more detailed information and technical protections is unlikely to increase public trust.Footnote 4 Although alternative models of informed consent are beyond the scope of this chapter, it seems more likely that informed consent depends on relationships of trust, and that trust, or its absence, is more of a heuristic approach that serves as a context in which people make decisions under conditions of limited time and understanding.Footnote 5 Trust is often extended without assessment of whether the conditions justify trust, or are trustworthy. It also follows that trust may not be extended even when the conditions seem to merit trust. The complicated relationship between trust and trustworthiness has been discussed in another chapter (see Chuong and O’Doherty, Chapter 12) and in the introduction to this volume, citing Onora O’Neill, who encourages us to focus on demonstrating trustworthiness in order to earn trust.
The care.data experience illustrates how careful regulation within the scope of law and current research ethics, and communication of those results to a wide public, were not sufficient for the plan to be perceived as legitimate and to be trusted. Regulation of health research needs to be trustworthy, yet distrust can be stimulated despite considerable efforts and on-going vigilance. If neither trust nor distrust is based on the soundness of regulation of health research, then the sources of distrust need to be explicitly addressed.
25.3 Patients or Public? Conceptualising What Interests Are Important
It is possible to turn to ‘patients’ or ‘the public’ to understand what may stabilise or destabilise trust and legitimacy in health research. There is a considerable literature, and there are funding opportunities, related to involving patients in research projects and to the associated improvements in outcomes.Footnote 6 The distinction between public and patients is largely conceptual, but it is important to clarify what aspects of participants’ lives we are drawing on to inform research and regulation, and then to structure recruitment and the events to emphasise that focus.Footnote 7 In their role as patients, or as caregivers and advocates for family and friends in healthcare, participants can draw on their experiences to inform clinical care, research and policy. In contrast, decisions that allocate across healthcare needs, or broader public interests, require consideration of a wider range of experiences, as well as the values and practical knowledge that participants hold as members of the public. Examples of where it is important to achieve a wider ‘citizen’ perspective include funding decisions on drug expenditures and disinvestment, and balancing privacy concerns against benefits from access to health data or biospecimens.Footnote 8 Consideration of how to involve the public in research priorities is not adequately addressed by involving community representatives on research ethics review committees.Footnote 9
Challenges to trust and legitimacy often arise when there are groups who hold different interpretations of what is in the public interest. Vocal participants often divide into polarised groups. But there is often also a multiplicity of public interests, so there is no single ‘public interest’ to be discovered or determined. Each configuration of a balance of interests also has resource implications, and the consequences are borne unevenly across the lines of inequity in society. There is a democratic deficit when decisions are made without input from members of the public who will be affected by the policy but have not been motivated to engage. This deficit is best addressed by ‘actively seek(ing) out moral perspectives that help to identify and explore as many moral dimensions of the problem as possible’.Footnote 10 This rejects the notion that bureaucracies and elected representatives are adequately informed by experts and stakeholders to determine what is in the interests of all who will be affected by important decisions. These decisions are, in fact, about a collective future, often funded by public funds with opportunity costs. Research regulation, like biotechnology development and policy, must explicitly consider how, and by whom, the relative importance of benefits and risks is decided.
The distinction between trust and trustworthiness, between bureaucratic legitimacy and perceived social licence, gives rise to the concern that much patient and public engagement may be superficial and even manipulative.Footnote 11 Careful consideration must be given to how the group is convened, informed and facilitated, and to how conclusions or recommendations are formulated. An earlier chapter considered the range of approaches to public and patient engagement, and how different approaches are relevant for different purposes (see Aitken and Cunningham-Burley, Chapter 11).Footnote 12 To successfully stimulate trust and legitimacy, the process of public engagement requires working through these dimensions.
The use of the term ‘public’ is normally intended to be as inclusive as possible, but it is also used to distinguish the call to public deliberation from other descriptions of members of society or stakeholders. There is a specific expertise called upon when people participate as members of the public as opposed to patients, caregivers, stakeholders or experts. Participants are sought for their broad life perspective. As perspective bearers coming from a particular structural location in society, with ‘experience, history and social knowledge’,Footnote 13 participants draw on their own social knowledge and capacity in a deliberative context that supports this articulation without presuming that their experiences are adequate to understand those of others, or that there is necessarily a common value or interest.
‘Public expertise’ is what we all develop as we live in our particular situatedness, and in structured deliberative events it is blended with an understanding of other perspectives, and directed to develop collective advice related to the controversial choices that are the focus of the deliberation. Adopting Althusser’s notion of hailing or ‘interpellation’ as ideological construction of people’s role and identity, Berger and De Cleen suggest that calling people to deliberate ‘offers people the opportunity to speak (thus empowering them) and a central aspect of how their talk is constrained and given direction (the exercise of power on people)’.Footnote 14 In deliberation, the manifestations of public expertise are interwoven with the overall framing, together co-creating the capacity to consider the issues deliberated from a collective point of view.Footnote 15 Political scientist Mark Warren suggests that ‘(r)epresentation can be designed to include marginalized people and unorganized interests, as well as latent public interests’.Footnote 16 Citizen juries are one form of deliberation: as their name and process suggest, the courts have long drawn on the public to constitute a group of peers who must make sense of, and form collective judgments from, conflicting and diverse information and alternative normative weightings.Footnote 17
Simone Chambers, in a classic review of deliberative democracy, emphasised two critiques from diversity theory and suggested that these would be central concerns for the next generation of deliberative theorists: (1) reasonableness and reason-giving; and (2) conditions of equality among participants in deliberative activities.Footnote 18 The facilitation of deliberative events is discussed below, but participants can be encouraged and given the opportunity to understand each other’s perspectives in a manner that may be less restrictive than theoretical discussions suggest. For example, the use of narrative accounts to explain how participants come to hold particular beliefs or positions provides important perspectives that might not be volunteered or considered if there were a strong emphasis on justifying one’s views with reasons in order for them to be considered.Footnote 19
The definition and operationalisation of inclusiveness is important because deliberative processes are rarely large scale, focussing instead on the way that small groups can demonstrate how a wider public would respond if they were informed and civic-minded.Footnote 20 Representation or inclusiveness is often the starting place for consideration of an engagement process.Footnote 21 Steel and colleagues have described three different types of inclusiveness that provide conceptual clarity about the constitution of a group for engagement: representative, egalitarian and normic diversity.Footnote 22
Representative diversity requires that the distribution of the relevant sub-groups in the sample reflects the same distribution as in the reference population. Egalitarian inclusiveness requires equal representation of people from each relevant sub-group so that each perspective is given equal weight; in contrast to representative diversity, it ignores the size of each sub-group in the population. Normic diversity requires the over-representation of sub-groups who are marginalised or overwhelmed by the larger, more influential or mainstream groups in the population. Each of these concepts aims at a form of symmetry, but the representative approach presumes that symmetry is the replication of the population, while the egalitarian and normic concepts directly consider asymmetry of power and voice in society.
Attempts to enhance the range of perspectives considered in determining the public interest(s) are likely to draw on the normic and egalitarian concepts of diversity, and to de-emphasise the representative notion. The goal of deliberative public engagement is to address a democratic deficit whereby the perspectives of some groups have dominated consideration of the issues, even if none has prevailed over the others. It seeks to include a wider range of input from diverse citizens about how to live together given the different perspectives on what is ‘in the public interest’. Normic diversity suggests that dominant groups should be less present in the deliberating group, and egalitarian diversity suggests that it is important to have similar representation across the anticipated diverse perspectives. The deliberation must be informed about, but not subjugated by, dominant perspectives. One approach is to exclude dominant perspectives, including those of substance experts, from participating in the deliberation, but to introduce their perspectives and related information through materials and presentations intended to inform participants. Deliberative participants must exercise their judgement and critically consider a wide range of perspectives, while stakeholders are agents for a collective identity that asserts the importance of one perspective over others.Footnote 23 It is also challenging to identify the range of relevant perspectives that give particular form to the public expertise for an issue, although demographics may be used to ensure that participants reflect a range of life experiences.Footnote 24 Specific questions may also suggest that particular public perspectives are important to include in the deliberating group.
For example, in Californian deliberations on biobanks it was important to include Spanish-only speakers because, despite accounting for the majority of births, they were often excluded from research regulation issues (normic diversity), and they were an identifiable group who likely had unique perspectives compared to other demographic segments of the California population (egalitarian diversity).Footnote 25
As previously discussed, mobilising public expertise requires considerable support. To be credible and legitimate, a deliberative process must demonstrate that the participants are adequately informed and have considered diverse perspectives. Participants must respectfully engage each other in the development of recommendations that are reasoned and inclusive but that fully confront the trade-offs required in policy decisions.
It seems obvious that participants in an engagement convened to advise research regulation must be informed about the activities to be regulated. This is far from a simple task. An engagement can easily be undermined if the information provided is incomplete or biased. It is important not only to provide complete technical details, but also to ensure that social controversies and stakeholder perspectives are fairly represented. This can be managed by having an advisory group of experts, stakeholders and potential knowledge users. Advisors can provide input into the questions and the range of relevant information that participants must consider to be adequately informed. It is also important to consider how best to provide information to support comprehension across participants with different backgrounds. One approach is to utilise a combination of a background booklet and a panel of four to six speakers.Footnote 26 The speakers, a combination of experts and stakeholders, are asked to be impassioned, explaining how or why they come to their particular view. This will establish that there are controversies, help draw participants into the issues and stimulate interest in the textual information.
Facilitation is another critical element of deliberative engagement. Deliberative engagement is distinguished by collective decisions supported by reasons from the participants – the recommendations and conclusions are the result of a consideration of the diverse perspectives reflected in the process and among participants. The approach to facilitation openly accepts that participants may re-orient the discussion and focus, and that the role of facilitation is to frame the discussion in a transparent manner.Footnote 27 Small groups of six to eight participants can be facilitated to develop fuller participation and articulation of different perspectives and interests than is possible in a larger group. Large group facilitation can be oriented to giving the participants as much control over topic and approach as they will assume, while supporting exploration of issues and suggesting statements around which the group may be converging. The facilitator may also bring a topic to a close, enabling participants to move on to other issues, by suggesting that there is a disagreement that can be captured. Identifying places of deep social disagreement shows where setting policy will need to resolve controversy about what is genuinely in the public’s interest, and where there may be a need for more nuanced decisions on a case-by-case basis. The involvement of industry and commercialisation in biobanks is a general area that has frequently defied convergence in deliberation.Footnote 28
Even if recruitment succeeds in convening a diverse group of participants, sustaining diversity and participation across participants requires careful facilitation. The deliberative nature of the activity is dynamic. Participants increase their level of knowledge and understanding of diverse perspectives as facilitation encourages them to shift from an individual to a collective focus. Premature insistence on justifications can stifle understanding of diverse perspectives, but later in the event justifications are crucial to produce reasons in support of conclusions. Discussion and conclusions can be inappropriately influenced by participants’ personalities, as well as by the tendency for some participants to position themselves as having authoritative expertise. It is well within the expertise of the public to consider whether positions claiming special knowledge, or advanced by force of personality, lack substantive support. But self-reflective and respectful communication is not naturally occurring, and deliberation requires skilled facilitation to avoid dominance by some participants and to encourage critical reflection and the participation of quieter participants. The framing of the issues and information, as well as the facilitation, inevitably shapes the conclusions, and participants may not recognise that issues and concerns important to them have been ruled out of scope.
Assessing the quality of deliberative public engagement is fraught with challenges. Abelson and Nabatchi have provided good overviews of the state of deliberative civic engagement, its impacts and its evaluation.Footnote 29
Recent work has considered whether and under what conditions deliberative public engagement is useful and effective.Footnote 30 Because deliberative engagement is expensive and resource-intensive, it needs to be directed to controversies where regulatory bodies want, and are willing, to have their decisions and policies shaped by public input. Such authorities do not thereby give up their legitimate duties and freedom to act in the public interest or to consult with experts and stakeholders. Rather, activities such as deliberative public engagement are supplemental to other sources of advice, and not determinative of the outcomes. This point is important for knowledge users, sponsors and participants to understand.
How, then, might deliberative public engagement have helped avoid the negative reaction to care.data? It is first important to distinguish trust from trustworthiness. Trust, sometimes considered as social licence, is usually presumed in the first instance. As a psychological phenomenon, trust is often a heuristic form of reasoning that supports economical use of intellectual and social capital.Footnote 31 There is some evidence that trust is particularly important with regard to research participation.Footnote 32 Based on previous experiences, we develop trust – or distrust – in people and institutions. There is a good chance that many people whose records are in the NHS would approach the use of their records for other purposes with a general sense of trust. Loss of trust often flows from the abrupt discovery that things are not as we presumed, which is what appears to have happened in care.data. Governance, on the other hand, is trustworthy when the system has characteristics that would, if scrutinised, show it to be worthy of trust.
Given this understanding, it might have been possible to demonstrate the trustworthiness of governance of NHS data by holding deliberative public engagement and considering its recommendations for data management. Public trust might also not have been as widely undermined if the announcement extending access to include commercial partners had provided a basis for finding the governance trustworthy. Of course, distrust among critical stakeholders and members of the public will still require direct responses to their concerns.
It is important to note that trustworthiness that can stand up to scrutiny is the goal, rather than directing efforts at increasing trust. Since trust is given in many cases without reflection, it can often be manipulated. By aiming at trustworthiness, arrived at through considerations that include deliberative public input, authorities demonstrate that their approach is worthy of trust. Articulating how controversies have been considered with input from informed and deliberating members of the public would have demonstrated that the trust presumed at the outset was, in an important sense, justified. Now, after trust has been lost and education and reinforced individual consent have not addressed the concerns, deliberation to achieve legitimate and trustworthy governance may have a more difficult time stimulating wide public trust, but it may remain the best available option.
Deliberative public engagement has an important role in the regulation of health research. Determining what trade-offs are in the public interest requires a weighing of alternatives and of the relative weights of different interests. Experts and stakeholders are legitimate advocates for the interests they represent, but those interests manifest an asymmetry of power. Incorporating a well-designed process for diverse public input can increase the legitimacy and trustworthiness of the resulting policies. Deliberative engagement mobilises a wider public to direct their collective experience and expertise. The resulting advice about what is in the public interest reflects the diversity explicitly built into the recruitment of participants and into the design of the deliberation.
Deliberative public engagement is helpful for issues where there is genuine controversy about what is in the public interest, but it is far from a panacea. It is an important complement to stakeholder and expert input. The deliberative approach starts with careful consideration of the issues to be deliberated and of how diversity is to be structured into the recruitment of a deliberating small group. Expert and stakeholder advisors, as well as the decision-makers who are the likely recipients of the conclusions of the deliberation, can help develop the range of information necessary for informed deliberation on the issues. Participants need to be supported by exercises and facilitation that help them develop a well-informed and respectful understanding of diverse perspectives. Facilitation then shifts to support the development of a collective focus and of conclusions with justifications. Diversity and asymmetry of power are respected through the conceptualisation and implementation of inclusiveness, the development of information, and through facilitation and respect for different kinds of warranting. There must be a recognition that the role of event structure and facilitation means that the knowledge is co-produced with the participants, and that it is very challenging to overcome asymmetries, even in the deliberation itself. Another important feature is the ability to identify persistent disagreements and not force premature consensus on what is in the public interest. In this quality, it mirrors the need for, and nature of, regulation of health research to struggle with the issue of when research is in ‘the public interest’.
It is inadequate to assert or assume that research and its existing and emerging regulation are in the public interest. It is vital to ensure wide, inclusive consideration that is not overwhelmed by economic or other strongly vested interests. This is best accomplished by developing, assessing and refining ways to better include diverse citizens in informed reflection about what is in our collective interests, and about how best to live together when those interests appear incommensurable.
1 NHS, ‘News: NHS England sets out the next steps of public awareness about care.data’, (NHS, 2013), www.england.nhs.uk/2013/10/care-data/.
2 F. Godlee, ‘What Can We Salvage From care.data?’, (2016) BMJ, 354(i3907).
3 F. Caldicott et al., ‘Information: To Share Or Not to Share? The Information Governance Review’, (UK Government Publishing Service, 2013).
4 A. Irwin and B. Wynne, Misunderstanding Science. The Public Reconstruction of Science and Technology (Abingdon: Routledge, 1996); S. Locke, ‘The Public Understanding of Science – A Rhetorical Invention’, (2002) Science Technology & Human Values, 27(1), 87–111.
5 K. C. O’Doherty and M. M. Burgess, ‘Developing Psychologically Compelling Understanding of the Involvement of Humans in Research’, (2019) Human Arenas, 2(6), 1–18.
6 J. F. Caron-Flinterman et al., ‘The Experiential Knowledge of Patients: A New Resource for Biomedical Research?’, (2005) Social Science and Medicine, 60(11), 2575–2584; M. De Wit et al., ‘Involving Patient Research Partners has a Significant Impact on Outcomes Research: A Responsive Evaluation of the International OMERACT Conferences’, (2013) BMJ Open, 3(5); S. Petit-Zeman et al., ‘The James Lind Alliance: Tackling Research Mismatches’, (2010) Lancet, 376(9742), 667–669; J. A. Sacristan et al., ‘Patient Involvement in Clinical Research: Why, When, and How’, (2016) Patient Preference and Adherence, 2016(10), 631–640.
7 C. Mitton et al., ‘Health Technology Assessment as Part of a Broader Process for Priority Setting and Resource Allocation’, (2019) Applied Health Economics and Health Policy, 17(5), 573–576.
8 M. Aitken et al., ‘Consensus Statement on Public Involvement and Engagement with Data-Intensive Health Research’, (2018) International Journal of Population Data Science, 4(1), 1–6; C. Bentley et al., ‘Trade-Offs, Fairness, and Funding for Cancer Drugs: Key Findings from a Public Deliberation Event in British Columbia, Canada’, (2018) BMC Health Services Research, 18(1), 339–362; S. M. Dry et al., ‘Community Recommendations on Biobank Governance: Results from a Deliberative Community Engagement in California’, (2017) PLoS ONE 12(2), 1–14; R. E. McWhirter et al., ‘Community Engagement for Big Epidemiology: Deliberative Democracy as a Tool’, (2014) Journal of Personalized Medicine, 4(4), 459–474.
9 J. Brett et al., ‘Mapping the Impact of Patient and Public Involvement on Health and Social Care Research: A Systematic Review’, (2012) Health Expectations, 17(5), 637–650; R. Gooberman-Hill et al., ‘Citizens’ Juries in Planning Research Priorities: Process, Engagement and Outcome’, (2008) Health Expectations, 11(3), 272–281; S. Oliver et al., ‘Public Involvement in Setting a National Research Agenda: A Mixed Methods Evaluation’, (2009) Patient, 2(3), 179–190.
10 S. Sherwin, ‘Toward Setting an Adequate Ethical Framework for Evaluating Biotechnology Policy’, (Canadian Biotechnology Advisory Committee, 2001). As cited in M. M. Burgess and J. Tansey, ‘Democratic Deficit and the Politics of “Informed and Inclusive” Consultation’ in E. Einsiedel (ed.), From Hindsight to Foresight (Vancouver: UBC Press, 2008), pp. 275–288.
11 A. Irwin et al., ‘The Good, the Bad and the Perfect: Criticizing Engagement Practice’, (2013) Social Studies of Science, 43(1), 118–135; S. Jasanoff, The Ethics of Invention: Technology and the Human Future (Manhattan, NY: Norton Publishers, 2016); B. Wynne, ‘Public Engagement as a Means of Restoring Public Trust in Science: Hitting the Notes, but Missing the Music?’, (2006) Community Genetics, 9(3), 211–220.
12 J. Gastil and P. Levine, The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century (Plano, TX: Jossey-Bass Publishing, 2005).
13 I. M. Young, Inclusion and Democracy (Oxford University Press, 2000), p. 136.
14 M. Berger and B. De Cleen, ‘Interpellated Citizens: Suggested Subject Positions in a Deliberation Process on Health Care Reimbursement’, (2018) Comunicazioni Sociali, 1, 91–103; L. Althusser, ‘Ideology and Ideological State Apparatuses: Notes Towards an Investigation’ in L. Althusser (ed.) Lenin and Philosophy and Other Essays (Monthly Review Press, 1971), pp. 173–174.
15 H. L. Walmsley, ‘Mad Scientists Bend the Frame of Biobank Governance in British Columbia’, (2009) Journal of Public Deliberation, 5(1), Article 6.
16 M. E. Warren, ‘Governance-Driven Democratization’, (2009) Critical Policy Studies, 3(1), 3–13, 10.
17 G. Smith and C. Wales, ‘Citizens’ Juries and Deliberative Democracy’, (2000) Political Studies, 48(1), 51–65.
18 S. Chambers, ‘Deliberative Democratic Theory’, (2003) Annual Review of Political Science, 6, 307–326.
19 M. M. Burgess et al., ‘Assessing Deliberative Design of Public Input on Biobanks’ in S. Dodds and R. A. Ankeny (eds) Big Picture Bioethics: Developing Democratic Policy in Contested Domains (Switzerland: Springer, 2016), pp. 243–276.
20 R. E. Goodin and J. S. Dryzek, ‘Deliberative Impacts: The Macro-Political Uptake of Mini-Publics’, (2006) Politics & Society, 34(2), 219–244.
21 H. Longstaff and M. M. Burgess, ‘Recruiting for Representation in Public Deliberation on the Ethics of Biobanks’, (2010) Public Understanding of Science, 19(2), 212–24.
22 D. Steel et al., ‘Multiple Diversity Concepts and Their Ethical-Epistemic Implications’, (2018) The British Journal for the Philosophy of Science, 8(3), 761–780.
23 K. Beier et al., ‘Understanding Collective Agency in Bioethics’, (2016) Medicine, Health Care and Philosophy, 19(3), 411–422.
24 Longstaff and Burgess, ‘Recruiting for Representation’.
25 Dry et al., ‘Community Recommendations on Biobank Governance’.
26 Burgess et al., ‘Assessing Deliberative Design’, pp. 270–271.
27 A. Kadlec and W. Friedman, ‘Beyond Debate: Impacts of Deliberative Issue Framing on Group Dialogue and Problem Solving’, (Center for Advances in Public Engagement, 2009); H. L. Walmsley, ‘Mad Scientists Bend the Frame of Biobank Governance in British Columbia’, (2009) Journal of Public Deliberation, 5(1), Article 6.
28 M. M. Burgess, ‘Deriving Policy and Governance from Deliberative Events and Mini-Publics’ in M. Howlett and D. Laycock (eds), Regulating Next Generation Agri-Food Biotechnologies: Lessons from European, North American and Asian Experiences (Abingdon: Routledge, 2012), pp. 220–236; D. Nicol, et al., ‘Understanding Public Reactions to Commercialization of Biobanks and Use of Biobank Resources’, (2016) Social Sciences and Medicine, 162, 79–87.
29 J. Abelson et al., ‘Bringing “The Public” into Health Technology Assessment and Coverage Policy Decisions: From Principles to Practice’, (2007) Health Policy, 82(1), 37–50; T. Nabatchi et al., Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement (Oxford University Press, 2012).
30 D. Caluwaerts and M. Reuchamps, The Legitimacy of Citizen-led Deliberative Democracy: The G1000 in Belgium (Abingdon: Routledge, 2018).
31 G. Gigerenzer and P. M. Todd, ‘Ecological Rationality: The Normative Study of Heuristics’, in P. M. Todd and G. Gigerenzer (eds), Ecological Rationality: Intelligence in the World (Oxford University Press, 2012).
32 E. Christofides et al., ‘Heuristic Decision-Making About Research Participation in Children with Cystic Fibrosis’, (2016) Social Science & Medicine, 162, 32–40; O’Doherty and Burgess, ‘Developing Psychologically Compelling Understanding’; M. M. Burgess and K. C. O’Doherty, ‘Moving from Understanding of Consent Conditions to Heuristics of Trust’, (2019) American Journal of Bioethics, 19(5), 24–26.
In recent times, biomedical research has begun to tap into larger-than-ever collections of different data types. Such data include medical history, family history, genetic and epigenetic data, information about lifestyle, dietary habits, shopping habits, data about one’s dwelling environment, socio-economic status, level of education, employment and so on. As a consequence, the notion of health data – data that are of relevance for health-related research or for clinical purposes – is expanding to include a variety of non-clinical data, as well as data provided by research participants themselves through commercially available products such as smartphones and fitness bands.Footnote 1 Precision medicine that pools together genomic, environmental and lifestyle data represents a prominent example of how data integration can drive both fundamental and translational research in important domains such as oncology.Footnote 2 All of this requires the collection, storage, analysis and distribution of massive amounts of personal information as well as the use of state-of-the-art data analytics tools to uncover new disease-related patterns.
To date, most scholarship and policy on these issues has focused on privacy and data protection. Less attention has been paid to addressing other aspects of the wicked challenges posed by Big Data health research and even less work has been geared towards the development of novel governance frameworks.
In this chapter, we make the case for adaptive and principle-based governance of Big Data research. We outline six principles of adaptive governance for Big Data research and propose key factors for their implementation into effective governance structures and processes.
For present purposes, the term ‘governance’ alludes to a democratisation of administrative decision-making and policy-making or, to use the words of sociologist Anthony Giddens, to ‘a process of deepening and widening of democracy [in which] government can act in partnership with agencies in civil society to foster community renewal and development.’Footnote 3
Regulatory literature over the last two decades has formalised a number of approaches to governance that seem to address some of the defining characteristics of Big Data health research. In particular, adaptive governance and principles-based regulation appear well-suited to tackle three specific features of Big Data research, namely: (1) the evolving, and thus hardly predictable nature of the data ecosystem in Big Data health research – including the fast-paced development of new data analysis techniques; (2) the polycentric character of the actor network of Big Data and the absence of a single centre of regulation; and (3) the fact that most of these actors do not currently share a common regulatory culture and are driven by unaligned values and visions.Footnote 4
Adaptive governance is based on the idea that – in the presence of uncertainty, lack of evidence and evolving, dynamic phenomena – governance should be able to adapt to the mutating conditions of the phenomenon that it seeks to govern. Key attributes of adaptive governance are the inclusion of multiple stakeholders in governance design,Footnote 5 collaboration between regulating and regulated actors,Footnote 6 the incremental and planned incorporation of evidence in governance solutionsFootnote 7 and openness to cope with uncertainties through social learning.Footnote 8 This is attained by planning evidence collection and policy revision rounds in order to refine the fit between governance and public expectations; distributing regulatory tasks across a variety of actors (polycentricity); designing partially overlapping competences for different actors (redundancy); and by increasing participation in policy and management decisions by otherwise neglected social groups. Adaptive governance thus seems to adequately reflect the current state of Big Data health research as captured by the three characteristics outlined above. Moreover, social learning – a key feature of adaptive governance – can help explore areas of overlapping consensus even in a fragmented actor network like the one that constitutes Big Data research.
Principles-based regulation (PBR) is a governance approach that emerged in the 1990s to cope with the expansion of the financial services industry. Just as Big Data research is driven by technological innovation, financial technologies (the so-called fintech industry) have played a disruptive role for the entire financial sector.Footnote 9 Unpredictability, the accrual of new stakeholders and a lack of regulatory standards and best practices characterise this phenomenon. To respond to this, regulators such as the UK Financial Services Authority (FSA), backed by a number of academic supporters of ‘new governance’ approaches,Footnote 10 have proposed principles-based regulation as a viable governance model.Footnote 11 In this model, regulation and oversight rely on broadly stated principles that reflect regulators’ orientations, values and priorities. Moreover, implementation of the principles is not entirely delegated to specified rules and procedures. Rather, PBR relies on regulated actors to set up mechanisms to comply with the principles.Footnote 12 Principles are usually supplemented by guidance, white papers and other policies and processes to channel the compliance efforts of regulated entities. See further on PBR, Sethi, Chapter 17, this volume.
We contend that PBR is helpful in setting up Big Data governance in the research space because it is explicitly focussed on the creation of some form of normative alignment between the regulator and the regulated; it creates conditions that can foster the emergence of shared values among different regulated stakeholders. Since compliance is rooted neither in box-ticking nor in adherence to precisely specified rules, PBR stimulates experimentation with a number of different oversight mechanisms. This bottom-up approach allows stakeholders to explore a wide range of activities and structures to align with regulatory principles, favouring the selection of more cost-efficient and proportionate mechanisms. Big Data health research faces exactly this need: to create alignment among stakeholders and to cope with the wide latitude of regulatory attitudes that is to be expected in an innovative domain with multiple newcomers.
The governance model that we propose below relies on both adaptive governance – as to its capacity to remain flexible to future evolutions of the field – and PBR – because of its emphasis on principles as sources of normative guidance for different stakeholders.
The framework we propose below provides guidance to actors that have a role in the shaping and management of research employing Big Data; it draws inspiration from the above-listed features of adaptive governance. Moreover, it aligns with PBR in that it offers guidance to stakeholders and decision-makers engaged at various levels in the governance of Big Data health research. As we have argued elsewhere, our framework will facilitate the emergence of systemic oversight functions for the governance of Big Data health research.Footnote 13 The development of systemic oversight relies on six high-order principles aimed at reducing the effects of a fragmented governance landscape and at channelling governance decisions – through both structures and processes – towards an ethically defensible common ground. These six principles do not predefine which specific governance structures and processes shall be put in place – hence the caveat that they represent high-order guides. Rather, they highlight governance features that shall be taken into account in the design of structures and processes for Big Data health research. Equally, our framework is not intended as a purpose-neutral approach to governance. Quite the contrary: the six principles we advance do indeed possess a normative character in that they endorse valuable states of affairs that shall occur as a result of appropriate and effective governance. By the same token, our framework suggests that action should be taken in order to avoid certain kinds of risks that will most likely occur if left unattended. In this section, we will illustrate the six principles of systemic oversight – adaptivity, flexibility, monitoring, responsiveness, reflexivity and inclusiveness – while the following section deals with the effective interpretation and implementation of such principles in terms of both structures and processes.
Adaptivity: adaptivity is the capacity of governance structures and processes to ensure proper management of new forms of data as they are incorporated into health research practices. Adaptivity, as presented here, has also been discussed as a condition for resilience, that is, for the capacity of any given system to ‘absorb disturbances and reorganize while undergoing change so as to still retain essentially the same function, structure, identity and feedbacks.’Footnote 14 This feature is crucial in the case of a rapidly evolving field – like Big Data research – whose future shape, as a consequence, is hard to anticipate.
Flexibility: flexibility refers to the capacity to treat different data types depending on their actual use rather than their source alone. Novel analytic capacities are jeopardising existing data taxonomies, which rapidly renders regulatory categories constructed around them obsolete. Flexibility means, therefore, recognising the impact of technical novelties and, at a minimum, giving due consideration to their potential consequences.
Monitoring: risk minimisation is a crucial aim of research ethics. With the possible exception of highly experimental procedures, the spectrum of physical and psychological harms due to participation in health research is fairly straightforward to anticipate. In the evolving health data ecosystem described so far, however, it is difficult to anticipate upfront what harms and vulnerabilities research subjects may encounter due to their participation in Big Data health research. This therefore requires on-going monitoring.
Responsiveness: despite efforts in monitoring emerging vulnerabilities, risks can always materialise. In Big Data health research, privacy breaches are a case in point. Once personal data are exposed, privacy is lost. No direct remedy exists to re-establish the privacy conditions that were in place before the violation. Responsiveness therefore prescribes that measures are put in place to at least reduce the impact of such violations on the rights, interests and well-being of research participants.
Reflexivity: it is well known that certain health-related characteristics cluster in specific human groups, such as populations, ethnic groups, families and socio-economic strata. Big Data are pushing the classificatory power of research to the next level, with potentially worrisome implications. The classificatory assumptions that drive the use of rapidly evolving data-mining capacities need to be put under careful scrutiny as to their plausibility, appropriateness and consequences. Failing to do so will result in harms to all human groups affected by those assumptions. What is more, public support for, as well as trust in, scientific research may be jeopardised by the reputational effects that can arise if reflexivity and scrutiny are not maintained.
Inclusiveness: the last component of systemic oversight closely resonates with one of the key features of adaptive governance, that is, the need to include all relevant parties in the governance process. The more diverse the data sources that are aggregated, the more difficult it becomes for research participants to exert meaningful control over the expanding cloud of personal data implicated by their participation.Footnote 15 Experimenting with new forms of democratic engagement is therefore imperative for a field that depends on resources provided by participants (i.e. data), but that, at the same time, can no longer anticipate how such resources will be employed, how they will be analysed and with what consequences. See Burgess, Chapter 25.
26.4 Big Data Health Research: Implementing Effective Governance
While there is no universal definition of the notion of effective governance, it alludes in most cases to an alignment between purposes and outcomes, reached through processes that fulfil constituents’ expectations and which project legitimacy and trust onto the involved actors.Footnote 16 This understanding of effective governance fits well with our domain of interest: Big Data health research. In the remainder of this chapter, drawing on literature on the implementation of adaptive governance and PBR, we discuss key issues to be taken into account in trying to derive effective governance structures and oversight mechanisms from the AFIRRM principles.
The AFIRRM framework endorses the use of principles as high-level articulations of what is to be expected by regulatory mechanisms for the governance of Big Data health research. Unlike the use of PBR in financial markets, where a single regulator expects compliance, PBR in the Big Data context responds to the reality that governance functions are distributed among a plethora of actors, such as ethics review committees, data controllers, privacy commissioners, access committees, etc. PBR within the AFIRRM framework offers a blueprint for such a diverse array of governance actors to create new structures and processes to cope with the specific ethical and legal issues raised by the use of Big Data. Such principles have a generative function in a governance landscape that is still in the process of being created to govern those issues.
The key advantage of principles in this respect is that they require making the reasons behind regulation visible to all interested parties, including publics. This amounts to an exercise of public accountability that can bring about normative coherence among actors with different starting assumptions. The AFIRRM principles stimulate a bottom-up exploration of the values at stake and of how compliance with existing legal requirements will be met. In this sense, the AFIRRM principles perform a formal rather than a substantive function, precisely because we assume that the substantive ethical and legal aims of regulation already developed in health research – such as the protection of research participants from the risk of harm – hold true also for research employing Big Data. What the AFIRRM principles do is provide a starting point for deliberation and action that respects existing ethical standards and complies with pre-existing legal rules.
The AFIRRM principles do not envisage that actors in the space of Big Data research will self-regulate, but they do presuppose trust between regulators and regulated entities: regulators need to be confident that regulated entities will do their best to give effect to the principles in good faith. While some of the interests at stake in Big Data health research might be in tension – like the interest of researchers in accessing and distributing data, and the interest of data donors in controlling what their personal data are used for – it is to the advantage of all interested parties to begin with conversations based on core agreed principles, and thereby to develop efficient governance structures and processes that meet stakeholders’ expectations. Practically, this requires all relevant stakeholders to have a say in the development and operationalisation of the principles at stake.
Adaptive governance scholarship has identified typical impediments to effective operationalisation of adaptive mechanisms. A 2012 literature review of adaptive governance, network management and institutional analysis identified three key challenges to the effective implementation of adaptive governance: ill-defined purposes and objectives, unclear governance context and lack of evidence in support of blueprint solutions.Footnote 17
Let us briefly illustrate each of these challenges and explain how systemic oversight tries to avoid them. In the shift from centralised forms of administration and decision-making to less formalised and more distributed governance networks that has occurred over the last three decades,Footnote 18 the identification of governance objectives is no longer straightforward. This difficulty may also be due to the potentially conflicting values of different actors in the governance ecosystem. In this respect, systemic oversight has the advantage of not being normatively neutral. The six principles of systemic oversight determinedly aim at fostering an ethical common ground for a variety of governance actors and activities in the space of Big Data research. What underpins the framework, therefore, is a view of what requires ethical attention in this rapidly evolving field, and how to prioritise actions accordingly. In this way, systemic oversight can provide orientation for a diverse array of governance actors (structures) and mechanisms (processes), all of which are supposed to produce an effective system of safeguards around activities in this domain. Our framework directs attention to critical features of Big Data research and promotes a distributed form of accountability that will, where possible, emerge spontaneously from the different operationalisations of its components. The six components of systemic oversight, therefore, suggest what is important to take into account when considering how to adapt the composition, mandate, operations and scope of oversight bodies in the field of Big Data research.
The second challenge to effective adaptive governance – unclear governance context – refers to the difficulty of mapping the full spectrum of rules, mechanisms, institutions and actors involved in a distributed governance system or systems. Systemic oversight requires mapping the overall governance context in order to understand how best to implement the framework in practice. This amounts to an empirical inquiry into the conditions (structures, mechanisms and rules) in which governance actors currently operate. In a recent study we showed that current governance mechanisms for research biobanks, for instance, are not aligned with the requirements of systemic oversight.Footnote 19 In particular, we showed that systemic oversight can contribute to improve accountability of research infrastructures that, like biobanks, collect and distribute an increasing amount of scientific data.
The third and last challenge to effective operationalisation of adaptive mechanisms has to do with the limits of ready-made blueprint solutions to complex governance models. Political economist and Nobel Laureate Elinor Ostrom has written extensively on this. In her work on socio-ecological systems, Ostrom has convincingly shown that policy actors have the tendency to buy into what she calls ‘policy panaceas’,Footnote 20 that is, ready-made solutions to very complex problems. Such policy panaceas are hardly ever supported by solid evidence regarding the effectiveness of their outcomes. One of the most commonly cited reasons for their lack of effectiveness is that complexity entails high degrees of uncertainty as to the very phenomenon that policy makers are trying to govern.
We saw that uncertainty is characteristic of Big Data research too (see Section 26.2). That is why systemic oversight refrains from prescribing any particular governance solution. While not rejecting traditional predict-and-control approaches (such as informed consent, data anonymisation and encryption), systemic oversight does not put all the regulatory weight on any particular instrument or body. The systemic ambition of the framework lies in its pragmatic orientation towards a plurality of tools, mechanisms and structures that could jointly stabilise the responsible use of Big Data for research purposes. In this respect, our framework acknowledges that ‘[a]daptation typically emerges organically among multiple centers of agency and authority in society as a relatively self-organized or autonomous process marked by innovation, social learning and political deliberation’.Footnote 21
Still, a governance framework’s capacity to avoid known bottlenecks to operationalisation is a necessary but not a sufficient condition of its successful implementation. The further question is how the principles of the systemic oversight model can be incorporated into structures and processes in Big Data research governance. By structures we mean actors and networks of actors involved in governance, organised in bodies charged with oversight, organisational or policy-making responsibilities. Processes, in turn, are the mechanisms, procedures, rules, laws and codes through which actors operate and bring about their governance objectives. Structures and processes define the polycentric, redundant and experimental system of governance that an adaptive governance model intends to promote.Footnote 22
Here we follow the work of Rijke and colleaguesFootnote 23 in identifying three key properties of adaptive governance structures: centrality, cohesion and density. While it is acknowledged that centralised structures can be an effective response to crises and emergencies, centralisation is precisely what the Big Data context renders problematic; our normative response is to call for inclusive social learning among the broad array of stakeholders, subject to the challenge of incomplete representation of relevant interests (see further below). Still, this commitment can help to promote network cohesion by fostering discussion about how to implement the principles, while also promoting the formation of links between governance actors, as density requires. In addition, this can help to ensure that governance roles are fairly distributed among a sufficiently diverse array of stakeholders and that, as a consequence, decisions are not hijacked by technical experts.
The governance space in Big Data research is already populated by numerous actors, such as IRBs, data access committees and advisory boards. These bodies are not necessarily inclusive of a sufficiently broad array of stakeholders and therefore they may not be very effective at promoting social learning. Their composition could thus be rearranged in order to be more representative of the interests at stake and to promote continued learning. New actors could also enter the governance system. For instance, data could be made available for research by data subjects themselves through data platforms.Footnote 24
Networks of actors (structures) operating in the space of health research do so through mechanisms and procedures (processes) such as informed consent and ethics review, as well as data access review, policies on reporting research findings to participants, public engagement activities and privacy impact assessment.
Processes are crucial to effective governance of health research and are a critical component of the systemic oversight approach as their features can determine the actual impact of its principles. Drawing on scholarship in adaptive governance, we present three such features (components) that are central to the appropriate interpretation of the systemic oversight principles.
Social learning: social learning refers to learning that occurs by observing others.Footnote 25 In governance settings that are open to participation by different stakeholders, social learning can occur across different levels and hierarchies of the governance structures. According to many scholars, including Ostrom,Footnote 26 social learning represents an alternative to policy blueprints (see above) – especially when it is coupled with, and leads to, adaptive management. Planned adaptations – that is, previously scheduled rounds of policy revision in light of new knowledge – can be occasions for governance actors to capitalise on each other’s experience and learn about evolving expectations and risks. Such learning exercises can reduce uncertainty and lead to adjustments in mechanisms and rules. The premise of this approach is the realisation that in complex systems characterised by pronounced uncertainty, ‘no particular epistemic community can possess all the necessary knowledge to form policy’.Footnote 27 Social learning – be it aimed at gathering new evidence, at fostering capacity building or at assessing policy outcomes – is relevant to all six components of systemic oversight. The French law on bioethics, for instance, prescribes periodic rounds of nationwide public consultation – the so-called Estates General on bioethics.Footnote 28 This is an example of how social learning can be fostered. Similar social learning can be triggered even at smaller scales – for instance, in local oversight bodies – in order to explore new solutions and alternative designs.
Complementarity: complementarity is the capacity of governance processes both to be functionally compatible with one another and to maintain procedural correspondence with the phenomena they intend to regulate. Functional complementarity refers to the distribution of regulatory functions across a given set of processes exhibiting partial overlap (see redundancy, above). This feature is crucial for both monitoring and reflexivity. Procedural complementarity, on the other hand, refers to the temporal alignment between governance processes and the activities that depend on such processes. One prominent example, in this respect, is the timing of ethics review processes, or of the processing of data access requests.Footnote 29 For instance, the European General Data Protection Regulation (GDPR) requires that privacy breaches be notified within a maximum of 72 hours of their detection. This provision is an example of procedural complementarity that would be of the utmost importance for the principle of responsiveness.
Visibility: governance processes need to be visible, that is, procedures and their scope need to be as publicly available as possible to whomever is affected by them or must act in accordance with them. The notion of regulatory visibility has recently been highlighted by Laurie and colleagues, who argue for regulatory stewardship within ecosystems to help researchers clarify values and responsibilities in health research and navigate its complexities.Footnote 30 Recent work also demonstrates that it is currently difficult to access the policies and standard operating procedures of prominent research institutions such as biobanks. In principle, fair scientific competition may militate against disclosure of technical details about data processing, but it is hard to imagine practical circumstances in which administrators of at least publicly funded datasets would not have incentives to share as much information as possible regarding the way they handle their data. Process visibility goes beyond fulfilling a pre-determined set of criteria (for instance, for auditing purposes). By disclosing governance processes and opportunities for engagement, actors actually offer reasons to be trusted by a variety of stakeholders.Footnote 31 This feature is of particular relevance for the principles of monitoring and reflexivity, as well as for improving the effectiveness of inclusive governance processes.
In this chapter, we have defended adaptive governance as a suitable regulatory approach for Big Data health research by proposing six governance principles to foster the development of appropriate structures and processes to handle critical aspects of Big Data health research. We have analysed key aspects of implementation and identified a number of important features that can make adaptive regulation operational. However, one might legitimately ask: in the absence of a central regulatory actor endowed with clearly recognised statutory prerogatives, how can it be assumed that the AFIRRM principles will be endorsed by the diverse group of stakeholders operating in the Big Data health research space? Clearly, this question does not have a straightforward answer. However, to increase the likelihood of uptake, we have advanced AFIRRM as a viable and adaptable model for the creation of the necessary tools that can deliver on common objectives. Our model is based on a careful analysis of regulatory scholarship vis-à-vis the key attributes of this type of research. We are currently undertaking considerable efforts to introduce AFIRRM to regulators, operators and organisations in the space of research and health policy. We are cognisant of the fact that the implementation of a model like AFIRRM need not be temporally linear. Different actors may take the initiative at different points in time. It cannot be expected that a coherent system of governance will emerge in a synchronically orchestrated manner through the uncoordinated action of multiple stakeholders. Such a path could only be imagined if a central regulator had the power and the will to make it happen. Nothing indicates, however, that regulation will assume a centralised character anytime soon. Nevertheless, polycentricity is not in itself a barrier to the emergence of a coherent governance ecosystem.
Indeed, the AFIRRM principles – in line with their adaptive orientation – rely precisely on polycentric governance to cope with the uncertainty and complexity of Big Data health research.
1 Fitbit Inc., ‘National Institutes of Health Launches Fitbit Project as First Digital Health Technology Initiative in Landmark All of Us Research Program (Press Release)’, (Fitbit, 2019).
2 D. C. Collins et al., ‘Towards Precision Medicine in the Clinic: From Biomarker Discovery to Novel Therapeutics’, (2017) Trends in Pharmacological Sciences, 38(1), 25–40.
3 A. Giddens, The Third Way: The Renewal of Social Democracy (New York: John Wiley & Sons, 2013), p. 69.
4 E. Vayena and A. Blasimme, ‘Health Research with Big Data: Time for Systemic Oversight’, (2018) The Journal of Law, Medicine & Ethics, 46(1), 119–129.
5 C. Folke et al., ‘Adaptive Governance of Social-Ecological Systems’, (2005) Annual Review of Environment and Resources, 30, 441–473.
6 T. Dietz et al., ‘The Struggle to Govern the Commons’, (2003) Science, 302(5652), 1907–1912.
7 C. Ansell and A. Gash, ‘Collaborative Governance in Theory and Practice’, (2008) Journal of Public Administration Research and Theory, 18(4), 543–571.
8 J. J. Warmink et al., ‘Coping with Uncertainty in River Management: Challenges and Ways Forward’, (2017) Water Resources Management, 31(14), 4587–4600.
9 R. J. McWaters et al., ‘The Future of Financial Services-How Disruptive Innovations Are Reshaping the Way Financial Services Are Structured, Provisioned and Consumed’, (World Economic Forum, 2015).
10 R. A. W. Rhodes, ‘The New Governance: Governing without Government’, (1996) Political Studies, 44(4), 652–667.
11 J. Black, ‘The Rise, Fall and Fate of Principles Based Regulation’, (2010) LSE Legal Studies Working Paper, 17.
13 Vayena and Blasimme, ‘Health Research’.
14 B. Walker et al., ‘Resilience, Adaptability and Transformability in Social–Ecological Systems’, (2004) Ecology and Society, 9(2), 4.
15 E. Vayena and A. Blasimme, ‘Biomedical Big Data: New Models of Control over Access, Use and Governance’, (2017) Journal of Bioethical Inquiry, 14(4), 501–513.
16 See, for example, S. Arjoon, ‘Striking a Balance between Rules and Principles-Based Approaches for Effective Governance: A Risks-Based Approach’, (2006) Journal of Business Ethics, 68(1), 53–82; A. Kezar, ‘What Is More Important to Effective Governance: Relationships, Trust, and Leadership, or Structures and Formal Processes?’, (2004) New Directions for Higher Education, 127, 35–46.
17 J. Rijke et al., ‘Fit-for-Purpose Governance: A Framework to Make Adaptive Governance Operational’, (2012) Environmental Science & Policy, 22, 73–84.
18 R. A. W. Rhodes, Understanding Governance: Policy Networks, Governance, Reflexivity, and Accountability (Buckingham: Open University Press, 1997); R. A. W. Rhodes, ‘Understanding Governance: Ten Years On’, (2007) Organization Studies, 28(8), 1243–1264.
19 F. Gille et al., ‘Future-Proofing Biobanks’ Governance’, (2020) European Journal of Human Genetics, 28, 989–996.
20 E. Ostrom, ‘A Diagnostic Approach for Going beyond Panaceas’, (2007) Proceedings of the National Academy of Sciences, 104(39), 15181–15187.
21 D. A. DeCaro et al., ‘Legal and Institutional Foundations of Adaptive Environmental Governance’, (2017) Ecology and Society: A Journal of Integrative Science for Resilience and Sustainability, 22(1), 1.
22 B. Chaffin et al., ‘A Decade of Adaptive Governance Scholarship: Synthesis and Future Directions’, (2014) Ecology and Society, 19(3), 56.
23 Rijke et al., ‘Fit-for-Purpose Governance’.
24 A. Blasimme et al., ‘Democratizing Health Research Through Data Cooperatives’, (2018) Philosophy & Technology, 31(3), 473–479.
25 A. Bandura and R. H. Walters, Social Learning Theory, vol. 1 (Englewood Cliffs, NJ: Prentice-Hall, 1977).
26 Ostrom, ‘A Diagnostic Approach’.
27 D. Swanson et al., ‘Seven Tools for Creating Adaptive Policies’, (2010) Technological Forecasting and Social Change, 77(6), 924–939, 925.
28 D. Berthiau, ‘Law, Bioethics and Practice in France: Forging a New Legislative Pact’, (2013) Medicine, Health Care and Philosophy, 16(1), 105–113.
29 G. Silberman and K. L. Kahn, ‘Burdens on Research Imposed by Institutional Review Boards: The State of the Evidence and Its Implications for Regulatory Reform’, (2011) The Milbank Quarterly, 89(4), 599–627.
30 G. T. Laurie et al., ‘Charting Regulatory Stewardship in Health Research: Making the Invisible Visible’, (2018) Cambridge Quarterly of Healthcare Ethics, 27(2), 333–347.
31 O. O’Neill, ‘Trust with Accountability?’, (2003) Journal of Health Services Research & Policy, 8(1), 3–4.
New technologies, techniques and tests in healthcare, offering better prevention or better diagnosis and treatment, are not manna from heaven. Typically, they are the products of extensive research and development, increasingly enabled by high levels of automation and reliant on large datasets. However, while some will push for a permissive regulatory environment that is facilitative of beneficial innovation, others will push back against research that gives rise to concerns about the safety and reliability of particular technologies, as well as about their compatibility with respect for fundamental values. How, then, are the interests in pushing forward with research into potentially beneficial health technologies to be reconciled with the heterogeneous interests of those who seek to push back against them?
A stock answer to this question is that regulators, neither over-regulating nor under-regulating, should seek an accommodation or a balance of interests that is broadly ‘acceptable’. If the issue is about risks to human health and safety, then regulators – having assessed the risk – should adopt a management strategy that confines risk to an acceptable level; and, if there is a tension between, say, the interest of researchers in accessing health data and the interest of patients in both their privacy and the fair processing of their personal data, then regulators should accommodate these interests in a way that is reasonable – or, at any rate, not manifestly unreasonable.
The central purpose of this chapter is not to argue that this balancing model is always wrong or inappropriate, but to suggest that it needs to be located within a bigger picture of lexically ordered regulatory responsibilities.Footnote 1 In that bigger picture, the paramount responsibility of regulators is to act in ways that protect and maintain the conditions that are fundamental to human social existence (the commons). After that, a secondary responsibility is to protect and respect the values that constitute a group as the particular kind of community that it is. Only after these responsibilities have been discharged do we get to a third set of responsibilities that demand that regulators seek out reasonable and acceptable balances of conflicting legitimate interests. Accordingly, before regulators make provision for a – typically permissive – framework that they judge to strike an acceptable balance of interests in relation to some particular technology, technique or test, they should check that its development, exploitation, availability and application cross none of the community’s red lines and, above all, pose no threat to the commons.
The chapter is in three principal parts. First, in Section 27.2, we start with two recent reports by the Nuffield Council on Bioethics – one a report on the use of Non-Invasive Prenatal Testing (NIPT),Footnote 2 and the other on genome-editing and human reproduction.Footnote 3 At first blush, the reports employ a similar approach, identifying a range of legitimate – but conflicting – interests and then taking a relatively conservative position. However, while the NIPT report exemplifies a standard balancing approach, the genome-editing report implicates a bigger picture of regulatory responsibilities. Second, in Section 27.3, I sketch my own take on that bigger picture. Third, in Section 27.4, I speak to the way in which the bigger picture might bear on our thinking about the regulation of automated healthcare and research technologies. In particular, in this part of the chapter, the focus is on those technologies that power smart machines and devices, technologies that are hungry for human data but then, in their operation, often put humans out of the loop.
27.2 NIPT, Genome-Editing and the Balancing of Interests
In its report on the ethics of NIPT, the Nuffield Council on Bioethics identifies a range of legitimate interests that call for regulatory accommodation. On the one side, there is the interest of pregnant women and their partners in making informed reproductive choices. On the other side, there are interests – particularly of the disability community and of future children – in equality, fairness and inclusion. The question is: how are regulators to ‘align the responsibilities that [they have] to support women to make informed reproductive choices about their pregnancies, with the responsibilities that [they have] … to promote equality, inclusion and fair treatment for all’?Footnote 4 In response to which, the Council, being particularly mindful of the interests of future children – in an open future – and the interest in a wider societal environment that is fair and inclusive, recommends that a relatively restrictive approach should be taken to the use of NIPT.
In support of the Council’s approach and its recommendation, there is a good deal that can be said. For example, the Council consulted widely before drawing up the inventory of interests to be considered: it engaged with the arguments rationally and in good faith; where appropriate, its thinking was evidence-based; and its recommendation is not manifestly unreasonable. If we were to imagine a judicial review of the Council’s recommendation, it would surely survive the challenge.
However, if the Council had given greater weight to the interest in reproductive autonomy, together with the argument that women have ‘a right to know’ and that healthcare practitioners have an interest in doing the best that they can for their patients,Footnote 5 and had accordingly made a much less restrictive recommendation, we could say exactly the same things in its support.
In other words, so long as the Council – and, similarly, any regulatory body – consults widely and deliberates rationally, and so long as its recommendations are not manifestly unreasonable, we can treat its preferred accommodation of interests as acceptable. Yet, in such balancing deliberations, it is not clear where the onus of justification lies or what the burden of justification is; and, in the final analysis, we cannot say why the particular restrictive position that the Council takes is more or less acceptable than a less restrictive position.
Turning to the Council’s second report, it hardly needs to be said that the development of precision gene-editing techniques, notably CRISPR-Cas9, has given rise to considerable debate.Footnote 6 Addressing the ethics of gene editing and human reproduction, the Council adopted a similar approach to that in its report on NIPT. Following extensive consultation – and, in this case, an earlier, more general, reportFootnote 7 – there is a careful consideration of a range of legitimate interests, following which a relatively conservative position is taken. Once again, although the position taken is not manifestly unreasonable, it is not entirely clear why this particular position is taken.
Yet, in this second report, there is a sense that something more than balancing might be at stake.Footnote 8 For example, the Council contemplates the possibility that genome editing might inadvertently lead to the extinction of the human species – or, conversely, that genome editing might be the salvation of humans who have catastrophically compromised the conditions for their existence. In these short reflections about the interests of ‘humanity’, we can detect a bigger picture of regulatory responsibilities.
27.3 The Bigger Picture of Regulatory Responsibilities
In this part of the chapter, I sketch what I see as the bigger – three-tier – picture of regulatory responsibilities and then speak briefly to the first two tiers.
27.3.1 The Bigger Picture
My claim is that regulators have a first-tier ‘stewardship’ responsibility for maintaining the pre-conditions for any kind of human social community (‘the commons’). At the second tier, regulators have a responsibility to respect the fundamental values of a particular human community, that is to say, the values that give that community its particular identity. At the third tier, regulators have a responsibility to seek out an acceptable balance of legitimate interests. The responsibilities at the first tier are cosmopolitan and non-negotiable. The responsibilities at the second and third tiers are contingent, depending on the fundamental values and the interests recognised in each particular community. Conflicts between commons-related interests, community values and individual or group interests are to be resolved by reference to the lexical ordering of the tiers: responsibilities in a higher tier always outrank those in a lower tier. Granted, this does not resolve all issues about trade-offs and compromises because we still have to handle horizontal conflicts within a particular tier. But, by identifying the tiers of responsibility, we take an important step towards giving some structure to the bigger picture.
Regulatory responsibilities start with the existence conditions that support the particular biological needs of humans. Beyond this, however, as agents, humans characteristically have the capacity to pursue various projects and plans whether as individuals, in partnerships, in groups, or in whole communities. Sometimes, the various projects and plans that they pursue will be harmonious; but often – as when the acceptability of the automation of healthcare and research is at issue – human agents will find themselves in conflict with one another. Accordingly, regulators also have a responsibility to maintain the conditions – conditions that are entirely neutral between the particular plans and projects that agents individually favour – that constitute the context for agency itself.
Building on this analysis, the claim is that the paramount responsibility for regulators is to protect, preserve, and promote:
the essential conditions for human existence (given human biological needs);
the generic conditions for human agency and self-development; and,
the essential conditions for the development and practice of moral agency.
These, it bears repeating, are imperatives in all regulatory spaces, whether international or national, public or private. Of course, determining the nature of these conditions will not be a mechanical process. Nevertheless, let me indicate how the distinctive contribution of each segment of the commons might be elaborated.
In the first instance, regulators should take steps to maintain the natural ecosystem for human life.Footnote 9 At minimum, this entails that the physical well-being of humans must be secured: humans need oxygen, they need food and water, they need shelter, they need protection against contagious diseases, if they are sick they need whatever treatment is available, and they need to be protected against assaults by other humans or non-human beings. When the Nuffield Council on Bioethics discusses catastrophic modifications to the human genome or to the ecosystem, it is this segment of the commons that is at issue.
Second, the conditions for meaningful self-development and agency need to be constructed: there needs to be sufficient trust and confidence in one’s fellow agents, together with sufficient predictability to plan, so as to operate in a way that is interactive and purposeful rather than merely defensive. Let me suggest that the distinctive capacities of prospective agents include being able: to form a sense of what is in one’s own self-interest; to choose one’s own ends, goals, purposes and so on (‘to do one’s own thing’); and to form a sense of one’s own identity (‘to be one’s own person’).
Third, the commons must secure the conditions for an aspirant moral community, whether the particular community is guided by teleological or deontological standards, by rights or by duties, by communitarian or liberal or libertarian values, by virtue ethics, and so on. The generic context for moral community is impartial between competing moral visions, values, and ideals; but it must be conducive to ‘moral’ development and ‘moral’ agency in the sense of forming a view about what is the ‘right thing’ to do relative to the interests of both oneself and others.
On this analysis, each human agent is a stakeholder in the commons where this represents the essential conditions for human existence together with the generic conditions of both self-regarding and other-regarding agency. While respect for the commons’ conditions is binding on all human agents, it should be emphasised that these conditions do not rule out the possibility of prudential or moral pluralism. Rather, the commons represents the pre-conditions for both individual self-development and community debate, giving each agent the opportunity to develop his or her own view of what is prudent, as well as what should be morally prohibited, permitted or required.
Beyond the stewardship responsibilities, regulators are also responsible for ensuring that the fundamental values of their particular community are respected. Just as each individual human agent has the capacity to develop their own distinctive identity, the same is true if we scale this up to communities of human agents. There are common needs and interests but also distinctive identities.
In the particular case of the United Kingdom: although there is not a general commitment to the value of social solidarity, arguably this is actually the value that underpins the NHS. Accordingly, if it were proposed that access to NHS patient data – data, as Philip Aldrick has put it, that is ‘a treasure trove … for developers of next-generation medical devices’Footnote 10 – should be part of a transatlantic trade deal, there would surely be an uproar because this would be seen as betraying the kind of healthcare community that we think we are.
More generally, many nation states have expressed their fundamental (constitutional) values in terms of respect for human rights and human dignity.Footnote 11 These values clearly intersect with the commons’ conditions and there is much to debate about the nature of this relationship and the extent of any overlap – for example, if we understand the root idea of human dignity in terms of humans having the capacity freely to do the right thing for the right reason,Footnote 12 then human dignity reaches directly to the commons’ conditions for moral agency.Footnote 13 However, those nation states that articulate their particular identities by reference to their commitment to respect for human dignity are far from homogeneous. Whereas in some communities, the emphasis of human dignity is on individual empowerment and autonomy, in others it is on constraints relating to the sanctity, non-commercialisation, non-commodification and non-instrumentalisation of human life.Footnote 14 These differences in emphasis mean that communities articulate in very different ways on a range of beginning-of-life and end-of-life questions as well as on questions of acceptable health-related research, and so on.
Given the conspicuous interest of today’s regulators in exploring technological solutions, an increasingly important question will be whether, and if so, how far, a community sees itself as distinguished by its commitment to regulation by rules and by human agents. In some smaller-scale communities or self-regulating groups, there might be resistance to a technocratic approach because automated compliance compromises the context for trust and for responsibility. Or, again, a community might prefer to stick with regulation by rules and by human agents because it is worried that, with a more technocratic approach, there might be both reduced public participation in the regulatory enterprise and a loss of flexibility in the application of technological measures.
If a community decides that it is generally happy with an approach that relies on technological measures rather than rules, it then has to decide whether it is also happy for humans to be out of the loop. Furthermore, once a community is asking itself such questions, it will need to clarify its understanding of the relationship between humans and robots – in particular, whether it treats robots as having moral status, or legal personality, and the like.
These are questions that each community must answer in its own way. The answers given speak to the kind of community that a group aspires to be. That said, it is, of course, essential that the fundamental values to which a particular community commits itself are consistent with (or cohere with) the commons’ conditions.
One of the features of the NHS Long Term PlanFootnote 15 – in which the NHS is described as ‘a hotbed of innovation and technological revolution in clinical practice’Footnote 16 – is the anticipated role to be played by technology in ‘helping clinicians use the full range of their skills, reducing bureaucracy, stimulating research and enabling service transformation’.Footnote 17 Moreover, speaking about the newly created unit, NHSX (a new joint organisation for digital, data and technology), the Health Secretary, Matt Hancock, said that this was ‘just the beginning of the tech revolution, building on our Long Term Plan to create a predictive, preventative and unrivalled NHS’.Footnote 18
In this context, what should we make of the regulatory challenge presented by smart machines and devices that incorporate the latest AI and machine learning algorithms for healthcare and research purposes? Typically, these technologies need data on which to train and to improve their performance. While the consensus is that the collection and use of personal data needs governance and that big datasets (interrogated by state-of-the-art algorithmic tools) need it a fortiori, there is no agreement as to what might be the appropriate terms and conditions for the collection, processing and use of personal data or how to govern these matters.Footnote 19
In its recent final report on Ethics Guidelines for Trustworthy AI,Footnote 20 the European Commission (EC) independent high-level expert group on artificial intelligence takes it as axiomatic that the development and use of AI should be ‘human-centric’. To this end, the group highlights four key principles for the governance of AI, namely: respect for human autonomy, prevention of harm, fairness and explicability. Where tensions arise between these principles, then they should be dealt with by ‘methods of accountable deliberation’ involving ‘reasoned, evidence-based reflection rather than intuition or random discretion’.Footnote 21 Nevertheless, it is emphasised that there might be cases where ‘no ethically acceptable trade-offs can be identified. Certain fundamental rights and correlated principles are absolute and cannot be subject to a balancing exercise (e.g. human dignity)’.Footnote 22
In line with this analysis, my position is that while there might be many cases where simple balancing is appropriate, there are some considerations that should never be put into a simple balance. The group mentions human rights and human dignity. I agree. Where a community treats human rights and human dignity as its constitutive principles or values, they act – in Ronald Dworkin’s evocative terms – as ‘trumps’.Footnote 23 Beyond that, the interest of humanity in the commons should be treated as even more foundational (so to speak, as a super-trump).
It follows that the first question for regulators is whether new AI technologies for healthcare and research present any threat to the existence conditions for humans, to the generic conditions for self-development, and to the context for moral development. It is only once this question has been answered that we get to the question of compatibility with the community’s particular constitutive values, and, then, after that, to a balancing judgment. If governance is to be ‘human-centric’, it is not enough that no individual human is exposed to an unacceptable risk or actually harmed. To be fully human-centric, technologies must be designed to respect both the commons and the constitutive values of particular human communities.
Guided by these regulatory imperatives, we can offer some short reflections on the three elements of the commons and how they might be compromised by the automation of research and healthcare.
Famously, Stephen Hawking remarked that ‘the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity’.Footnote 24 As the best thing, AI would contribute to ‘[the eradication of] disease and poverty’Footnote 25 as well as ‘[helping to] reverse paralysis in people with spinal-cord injuries’.Footnote 26 However, on the downside, some might fear that in our quest for greater safety and well-being, we will develop and embed ever more intelligent devices to the point that there is a risk of the extinction of humans – or, if not that, then a risk of humanity surviving ‘in some highly suboptimal state or in which a large portion of our potential for desirable development is irreversibly squandered’.Footnote 27 If this concern is well-founded, then communities will need to be extremely careful about how far and how fast they go with intelligent devices.
Of course, this is not specifically a concern about the use of smart machines in the hospital or in the research facility: the concern about the existential threat posed to humans by smart machines arises across the board; and, indeed, concerns about existential threats are provoked by a range of emerging technologies.Footnote 28 In such circumstances, a regulatory policy of precaution and zero risk is indicated; and while stewardship might mean that the development and application of some technologies that we value has to be restricted, this is better than finding that they have compromised the very conditions on which the enjoyment of such technologies is predicated.
The developers of smart devices are hungry for data: data from patients, data from research participants, data from the general public. This raises concerns about privacy and data protection. While it is widely accepted that our privacy interests – in a broad sense – are ‘contextual’,Footnote 29 it is important to understand not just that ‘there are contexts and contexts’ but that there is a Context in which we all have a common interest. What most urgently needs to be clarified is whether any interests that we have in privacy and data protection touch and concern the essential conditions (the Context).
If, on analysis, we judge that privacy reaches through to the interests that agents necessarily have in the commons’ conditions – particularly in the conditions for self-development and agency – it is neither rational nor reasonable for agents, individually or collectively, to authorise acts that compromise these conditions (unless they do so in order to protect some more important condition of the commons). As Bert-Jaap Koops has so clearly expressed it, privacy has an ‘infrastructural character’: ‘having privacy spaces is an important presupposition for autonomy [and] self-development’.Footnote 30 Without such spaces, there is no opportunity to be oneself.Footnote 31 On this reading, privacy is not so much a matter of protecting goods – informational or spatial – in which one has a personal interest, but of protecting infrastructural goods in which there is either a common interest (engaging first-tier responsibilities) or a distinctive community interest (engaging second-tier responsibilities).
By contrast, if privacy – and, likewise, data protection – is simply a legitimate informational interest that has to be weighed in an all things considered balance of interests, then we should recognise that what each community will recognise as a privacy interest and as an acceptable balance of interests might well change over time. To this extent, our reasonable expectations of privacy might be both ‘contextual’ and contingent on social practices.
As I have indicated, I take it that the fundamental aspiration of any moral community is that regulators and regulatees alike should try to do the right thing. However, this presupposes a process of moral reflection followed by action that accords with one’s moral judgment. In this way, agents exercise judgment in trying to do the right thing, and they do what they do for the right reason in the sense that they act in accordance with their moral judgment. Accordingly, if automated research and healthcare relieve researchers and clinicians of their moral responsibilities, then, however well intended, this might significantly compromise their dignity, qua the conditions for moral agency.Footnote 32
Equally, if robots or other smart machines are used for healthcare and research purposes, some patients and participants might feel that this compromises their ‘dignity’ – robots might not physically harm humans, but even caring machines, so to speak, ‘do not really care’.Footnote 33 The question then is whether regulators should treat the interests of such persons as a matter of individual interest to be balanced against the legitimate interests of others, or as concerns about dignity that speak to matters of either (first-tier) common or (second-tier) community interest.
In this regard, consider the case of Ernest Quintana whose family were shocked to find that, at a particular Californian hospital, a ‘robot’ displaying a doctor on a screen was used to tell Ernest that the medical team could do no more for him and that he would soon die.Footnote 34 What should we make of this? Should we read the family’s shock as simply expressing a preference for the human touch or as going deeper to the community’s constitutive values or even to the commons’ conditions? Depending on how this question is answered, regulators will know whether a simple balance of interests is appropriate.
In this chapter, I have argued that it is not always appropriate to respond to new technologies for healthcare and research simply by enjoining regulators to seek out an acceptable balance of interests. My point is not that we should eschew either the balancing approach or the idea of ‘acceptability’ but that regulators should respond in a way that is sensitised to the full range of their responsibilities.
To the simple balancing approach, with its broad margin for ‘acceptable’ accommodation, we must add the regulatory responsibility to be responsive to the red lines and basic values that are distinctive of the particular community. Any claimed interest or proposed accommodation of interests that crosses these red lines or that is incompatible with the community’s basic values is ‘unacceptable’ – but this is for a different reason to that which applies where a simple balancing calculation is undertaken.
Most fundamentally, however, regulators have a stewardship responsibility in relation to the anterior conditions for humans to exist and for them to function as a community of agents. We should certainly say that any claimed interest or proposed accommodation of interests that is incompatible with the maintenance of these conditions is totally ‘unacceptable’ – but it is more than that. Unlike the red lines or basic values to which a particular community commits itself – red lines and basic values that may legitimately vary from one community to another – the commons’ conditions are not contingent or negotiable. For human agents to compromise the conditions upon which human existence and agency are themselves predicated is simply unthinkable.
Finally, it should be said that my sketch of the regulatory responsibilities is incomplete – in particular, concepts such as the ‘public interest’ and the ‘public good’ need to be located within this bigger picture; and, there is more to be said about the handling of horizontal conflicts and tensions within a particular tier. Nevertheless, the ‘take home message’ is clear. Quite simply: while automated healthcare and research might be efficient and productive, new technologies should not present unacceptable risks to the legitimate interests of humans; beyond mere balancing, new technologies should be compatible with the fundamental values of particular communities; and, above all, these technologies should do no harm to the commons’ conditions – supporting human existence and agency – on which we all rely and which we undervalue at our peril.
1 See, further, R. Brownsword, Law, Technology and Society: Re-imagining the Regulatory Environment (Abingdon: Routledge, 2019), Ch. 4.
2 Nuffield Council on Bioethics, ‘Non-invasive Prenatal Testing: Ethical Issues’, (March 2017); for discussion, see R. Brownsword and J. Wale, ‘Testing Times Ahead: Non-Invasive Prenatal Testing and the Kind of Community that We Want to Be’, (2018) Modern Law Review, 81(4), 646–672.
3 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction: Social and Ethical Issues’, (July 2018).
4 Nuffield Council on Bioethics, ‘Non-Invasive Prenatal Testing’, para 5.20.
5 Compare N. J. Wald et al., ‘Response to Walker’, (2018) Genetics in Medicine, 20(10), 1295; and in Canada, see the second phase of the Pegasus project, Pegasus, ‘About the Project’, www.pegasus-pegase.ca/pegasus/about-the-project/.
6 See, e.g., J. Harris and D. R. Lawrence, ‘New Technologies, Old Attitudes, and Legislative Rigidity’ in R. Brownsword et al. (eds) Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017), pp. 915–928.
7 Nuffield Council on Bioethics, ‘Genome Editing: An Ethical Review’, (September 2016).
8 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction’, paras 3.72–3.78.
9 Compare, J. Rockström et al., ‘Planetary Boundaries: Exploring the Safe Operating Space for Humanity’ (2009) Ecology and Society, 14(2); K. Raworth, Doughnut Economics (Random House Business Books, 2017), pp. 43–53.
10 P. Aldrick, ‘Make No Mistake, One Way or Another NHS Data Is on the Table in America Trade Talks’, The Times, (8 June 2019), 51.
11 See R. Brownsword, ‘Human Dignity from a Legal Perspective’ in M. Duwell et al. (eds), Cambridge Handbook of Human Dignity (Cambridge University Press, 2014), pp. 1–22.
12 For such a view, see R. Brownsword, ‘Human Dignity, Human Rights, and Simply Trying to Do the Right Thing’ in C. McCrudden (ed), Understanding Human Dignity – Proceedings of the British Academy 192 (The British Academy and Oxford University Press, 2013), pp. 345–358.
13 See R. Brownsword, ‘From Erewhon to Alpha Go: For the Sake of Human Dignity Should We Destroy the Machines?’, (2017) Law, Innovation and Technology, 9(1), 117–153.
14 See D. Beyleveld and R. Brownsword, Human Dignity in Bioethics and Biolaw (Oxford University Press, 2001); R. Brownsword, Rights, Regulation and the Technological Revolution (Oxford University Press, 2008).
15 NHS, ‘NHS Long Term Plan’, (January 2019), www.longtermplan.nhs.uk.
16 Ibid., 91.
18 Department of Health and Social Care, ‘NHSX: New Joint Organisation for Digital, Data and Technology’, (19 February 2019), www.gov.uk/government/news/nhsx-new-joint-organisation-for-digital-data-and-technology.
19 Generally, see R. Brownsword, Law, Technology and Society, Ch. 12; D. Schönberger, ‘Artificial Intelligence in Healthcare: A Critical Analysis of the Legal and Ethical Implications’, (2019) International Journal of Law and Information Technology, 27(2), 171–203. For the much-debated collaboration between the Royal Free London NHS Foundation Trust and Google DeepMind, see J. Powles, ‘Google DeepMind and healthcare in an age of algorithms’, (2017) Health and Technology, 7(4), 351–367.
20 European Commission, ‘Ethics Guidelines for Trustworthy AI’, (8 April 2019).
21 Ibid., 13.
22 Ibid., emphasis added.
23 R. Dworkin, Taking Rights Seriously, revised edition (London: Duckworth, 1978).
24 S. Hawking, Brief Answers to the Big Questions (London: John Murray, 2018) p. 188.
25 Ibid., p. 189.
26 Ibid., p. 194.
27 See N. Bostrom, Superintelligence (Oxford University Press, 2014), p. 281 (note 1); M. Ford, The Rise of the Robots (London: Oneworld, 2015), Ch. 9.
28 For an indication of the range and breadth of this concern, see e.g. ‘Resources on Existential Risk’, (2015), www.futureoflife.org/data/documents/Existential%20Risk%20Resources%20(2015-08-24).pdf.
29 See, for example, D. J. Solove, Understanding Privacy (Cambridge, MA: Harvard University Press, 2008); H. Nissenbaum, Privacy in Context (Palo Alto, CA: Stanford University Press, 2010).
30 B. Koops, ‘Privacy Spaces’, (2018) West Virginia Law Review, 121(2), 611–665, 621.
31 Compare, too, M. Brincker, ‘Privacy in Public and the Contextual Conditions of Agency’ in T. Timan et al. (eds), Privacy in Public Space (Cheltenham: Edward Elgar, 2017), pp. 64–90; M. Hu, ‘Orwell’s 1984 and a Fourth Amendment Cybersurveillance Nonintrusion Test’, (2017) Washington Law Review, 92(4), 1819–1904, 1903–1904.
32 Compare K. Yeung and M. Dixon-Woods, ‘Design-Based Regulation and Patient Safety: A Regulatory Studies Perspective’, (2010) Social Science and Medicine, 71(3), 502–509.
33 Compare R. Brownsword, ‘Regulating Patient Safety: Is It Time for a Technological Response?’, (2014) Law, Innovation and Technology, 6(1), 1–29.
34 See M. Cook, ‘Bedside Manner 101: How to Deliver Very Bad News’, Bioedge (17 March 2019), www.bioedge.org/bioethics/bedside-manner-101-how-to-deliver-very-bad-news/12998.
The sheer diversity of topics in health research makes for a daunting task in the development, establishment, and application of oversight mechanisms and various methods of governance. The authors of this section illustrate how this task is made even more complex by emerging technologies, applications and context, as well as the presence of a variety of actors both in the research and the governance landscape. Nevertheless, key themes emerge, and these sometimes trouble existing paradigms and parameters, and shift and widen our regulatory lenses. A key anchor is the relationship between governance and time: be it the urgent nature of research conducted in global health emergencies; the appropriate weight given to historical data in establishing evidence, anticipating future risk, benefit or harm; or the historical and current forces that have shaped regulatory structures as we meet them today. The perspectives explored in this section can be seen to illustrate different kinds of liminality, which result in regulatory complexity but also offer potential for new kinds of imaginaries, norms and processes.
A first kind of shift in lens is created by the nature of research contexts: for example, whether research is carried out in labs, in clinical settings, traditional healing encounters or, indeed, in a pandemic. These spaces might be the site where values, interests or rules conflict, or they might be characterised by the absence of regulation. Additional tension might be brought about in the interaction of what is being regulated with how it is being regulated: emerging interventions in already established processes, traditional interventions in more recently developed but strongly established paradigms, or marginal interventions precipitated to the centre by outside forces (crises, economic profit, unexpected findings, imminent or certain injury or death). These shifts give rise to considerations of flexibility and resilience in regulation, of the legitimacy and authority of different actors, and of the epistemic soundness in the development and deployment of innovative, experimental, or less established practices.
In Chapter 28, Ho addresses the key concept of risk, and its role within the governance of artificial intelligence (AI) and machine learning (ML) as medical devices. Using the illustration of AI/ML as clinical decision support in the diagnosis of diabetic retinopathy, the author situates their position in qualified opposition to those who perceive governance as an impediment to development and economic gain and those who favour more oversight of AI/ML. In managing such algorithms as risk objects in governance, Ho advocates a governance structure that re-characterises risk as a form of iterative learning process, rather than a rule-based one-time evaluation and regulatory approval based on the quantification of future risk.
The theme of regulation as obstacle is also explored in the following chapter (Chapter 29) by Lipworth et al., in the context of autologous mesenchymal stem cell-based interventions. Here, too, the perspective of the authors is set against those who see traditional governance and translational pathways as an impediment to addressing life-threatening and debilitating illnesses. They also resist the reimagination of healthcare as a marketplace (complete with aggressive marketing and dubious claims) where the patient is seen as a consumer, and the decision to access emerging and novel (unproven and potentially risky) interventions merely as a matter of shared decision-making between patient and clinician. The authors recommend strengthening a multipronged governance framework, which includes professional regulation, marketplace regulation, regulation of therapeutic products, and research oversight.
In Chapter 30, Haas and Cloatre also explore the difficult task of aligning interventions and products with established regulatory and translational pathways. Here, however, the challenge comes not from novel or emerging interventions but from traditional or non-conventional medicine, which challenges established governance frameworks based on the biomedical paradigm, and yet on which millions of patients worldwide rely as their primary form of healthcare. Here, uncertainty relates to the epistemic legitimacy of non-conventional forms of knowledge gathering. Actors in conflict with established epistemic processes are informed by historical and contextual evidence and practices that far predate the establishment of current frameworks. Traditional and non-conventional interventions are, nevertheless, pushed towards hegemonic governance pathways, often in ‘scientised and commercial’ forms, in order to gain recognition and legitimacy.
When considering pathways to legitimacy, a key role is played by ethics. In Chapter 31, Pickersgill explores ethics in its multiple forms through the eyes of neuroscience researchers, who in their daily practice experience the ethical dimensions of neuroscience and negotiate ethics as a regulatory tool. Ethics can be seen as an obstacle to good science, and the (institutional) ethics of human research is often seen as prone to obfuscation and lacking clear guidance. This results in novel practices and norms within the community, informed both by a commitment to doing the right thing and by institutional requirements. In order to minimise potential subversion (even well-meant) of ethics in research, Pickersgill advocates the development of governance that arises not only from collaborations between scientists and regulators but also from those who can act as critical friends to both of these groups of actors.
Ethics guidance and ethical practices are also explored by Ganguli-Mitra and Hunt (Chapter 32), this time in the context of research carried out in global health emergencies (GHEs). These contexts are characterised by various factors that complicate ethical norms and practices, as well as trouble existing frameworks and paradigms. GHEs are sites of multiple kinds of practices (humanitarian, medical, public health, development) and of multiple actors, whose goals and norms of conduct might be in conflict in a context that is characterised by urgency and high risk of injury and death. Using the examples of recent emergencies, the authors explore the changing nature of ethics and ethical practices in extraordinary circumstances.
In the final chapter of this section (Chapter 33), Arzuaga offers an illustration of regulatory development, touching upon the many actors, values, interests, and forces explored in the earlier chapters. Arzuaga reports on the governance of advanced therapeutic medicinal products (ATMPs) in Argentina, moving from a situation of non-intervention on the part of the state to the establishment of a governance framework. Here, the role of hard and soft law in adding both resilience and flexibility to regulation is explored, fostering innovation without abdicating ethical concerns. Arzuaga describes early, unsuccessful attempts at regulating stem cell-based interventions, echoing the concerns presented by Lipworth et al., before exploring a more promising exercise in legal foresighting, which included a variety of actors and collaboration, as well as a combination of top-down models and bottom-up, iterative processes.
The regulatory governance of Artificial Intelligence and Machine Learning (AI/ML) technologies as medical devices in healthcare challenges the regulatory divide between research and clinical care that is typical of pharmaceutical products. This chapter considers the regulatory governance of an AI/ML clinical decision support (CDS) software for the diagnosis of diabetic retinopathy as a ‘risk object’ by the Food and Drug Administration (FDA) in the United States (US). The FDA’s regulatory principles and approach may play an influential role in how other countries govern this and other software as a medical device (SaMD). The disruptions that AI/ML technologies can cause are well publicised in the lay and academic media alike, although the more serious ‘risks’ of harm are still essentially anticipatory. In some quarters, there is a prevailing sense that a ‘light-touch’ approach to regulatory governance should be adopted to ensure that the advancement of AI – particularly in ways that are expected to generate economic gain – should not be unduly burdened. Hence, in response to the question of whether regulation of AI is needed now, scholars like Chris Reed have responded with a qualified ‘No’. As Reed explains, the use of the technology in medicine is already regulated by the profession, and regulation will be adapted piecemeal as new AI technologies come into use anyway.Footnote 1 A ‘wait and see’ approach is likely to produce better long-term results than hurried regulation based on a very partial understanding of what needs to be regulated. It is also perhaps consistent with this mind-set that the commercial development and application of AI and AI-based technologies remain largely unregulated.
This chapter takes a different view, and argues that the response should be a qualified ‘Yes’ instead, partly because there is already an existing regulatory framework in place that may be adapted to meet anticipated challenges. As a ‘risk object’, the regulation of AI/ML medical devices cannot be understood and managed separately from the broader ‘risk culture’ within which it is embedded. Contrary to what a ‘command-and-control’ approach suggests, regulatory governance of AI/ML medical devices should not be understood merely as the application of external forces to contain ills that must somehow be managed in order to derive the desired effects. Arguably, it is this limited conception of ‘risks’ and of their relationship with regulation that gives rise to liminality. As Laurie and others clearly explain,Footnote 2 a liminal space is created contemporaneously with the uncertainties generated by new and emerging technologies. Drawing on the works of Arnold van Gennep and Victor Turner, ‘liminality’ is presented as an analytic to engage with the processual and experiential dynamics of transitional and transformational inter-structural boundary or marginal spaces. It is itself an intermediary process in a three-part pattern of experience that begins with separation from an existing order and concludes with re-integration into a new world.Footnote 3 Mapping liminal spaces and the changing boundaries entailed can help to highlight gaps in regulatory regimes.Footnote 4
Risk-based evaluation is often a feature of such liminal spaces, and when they become sites for battles of power and values, ethical issues arise. Whereas liminality has been applied to account for human experiences within regulated spaces, this chapter considers the epistemic quality of ‘risks’ and their situatedness within regulatory governance as a discursive practice and as a matter of social reality. In this respect, regulation is not necessarily extrinsic to its regulatory object, but constitutive of it. Concerns about ‘risks’ from technological innovations and the need to tame them have been central to regulatory governance.Footnote 5 Whereas governance has been a longstanding cultural phenomenon that relates to ‘the system of shared beliefs, values, customs, behaviours and artifacts that members of society use to cope with their world and with one another, and that are transmitted from generation to generation through learning’,Footnote 6 it is the regulatory turn that is especially instructive. Here, a regulatory response is taken to reduce uncertainty and instability by mitigating potential risks and harms and by directing or influencing actors’ behaviour to accord with socially accepted norms and/or to promote desirable social outcomes; regulation, in turn, encompasses any instrument (legal or non-legal in character) that is designed to channel group behaviour.Footnote 7 The high connectivity of AI/ML SaMDs that are capable of adapting to their digital environment in order to optimise performance suggests that the research agenda persists beyond what may currently be limited to the pilot or feasibility stages of medical device trials. If continuous risk-monitoring is required to support the use of SaMDs in a learning healthcare system, more robust and responsive regulatory mechanisms are needed, not less.Footnote 8
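The closing point about continuous risk-monitoring can be made concrete with a small sketch. The function below is purely illustrative (the function name, metric and threshold are my own assumptions, not part of any actual FDA scheme): it flags an adaptive SaMD for regulatory review when its observed post-market sensitivity drifts below the premarket baseline.

```python
import statistics

def needs_regulatory_review(baseline_sensitivity: float,
                            recent_outcomes: list[bool],
                            tolerance: float = 0.05) -> bool:
    """Flag an adaptive SaMD for review when its observed sensitivity
    drifts below the premarket baseline by more than `tolerance`.

    `recent_outcomes` records, for each referable case in a post-market
    monitoring window, whether the algorithm detected it (a deliberate
    simplification of real performance metrics).
    """
    observed = statistics.mean(recent_outcomes)  # fraction of cases detected
    return observed < baseline_sensitivity - tolerance
```

In a learning healthcare system, a check of this kind would run continuously rather than once at approval, which is precisely the shift from one-time evaluation to iterative oversight argued for here.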
28.2 AI/ML Software as Clinical Decision Support
In April 2018, the FDA granted approval for IDx-DR (DEN180001) to be marketed as the first AI diagnostic system that does not require clinician interpretation to detect greater than a mild level of diabetic retinopathy in adults diagnosed with diabetes.Footnote 9 In essence, this SaMD applies an AI algorithm to analyse images of the eye taken with a retinal camera and uploaded to a cloud server. A screening decision is made by the device as to whether the individual concerned is detected with ‘more than mild diabetic retinopathy’ and, if so, the individual is referred to an eye care professional for medical attention. Where the screening result is negative, the individual will be rescreened in twelve months. IDx-DR was reviewed under the FDA’s De Novo premarket review pathway and was granted Breakthrough Device designation,Footnote 10 as the SaMD is novel and of low to moderate risk. On the whole, the regulatory process did not deviate substantially from the existing regulatory framework for medical devices in the USA. A medical device is defined broadly, encompassing everything from low-risk adhesive bandages to sophisticated implanted devices; this breadth is reflected in the definition of the term ‘device’ in Section 201(h) of the Federal Food, Drug and Cosmetic Act.Footnote 11
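The screening workflow just described, a binary output that routes patients either to an eye care professional or to rescreening, can be sketched as a simple decision procedure. This is a hypothetical illustration only; the label and function names are invented and do not reflect IDx-DR’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    referable: bool   # 'more than mild diabetic retinopathy' detected?
    recommendation: str

def triage(algorithm_output: str) -> ScreeningResult:
    """Map the device's screening decision to the follow-up described
    in the text (hypothetical labels)."""
    if algorithm_output == "more_than_mild_dr":
        # Positive screen: refer for medical attention.
        return ScreeningResult(True, "refer to an eye care professional")
    # Negative screen: rescreen in twelve months.
    return ScreeningResult(False, "rescreen in twelve months")
```

The point of the sketch is the regulatory one: the device itself closes the loop from detection to clinical disposition, without a clinician interpreting the images.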
For regulatory purposes, medical devices are classified based on their intended use and indications for use, degree of invasiveness, duration of use, and the risks and potential harms associated with their use. At the classification stage, a manufacturer is not expected to have gathered sufficient data to demonstrate that its proposed product meets the applicable marketing authorisation standard (e.g. data demonstrating effectiveness). Therefore, the focus of the FDA’s classification analysis is on how the product is expected to achieve its primary intended purposes.Footnote 12 The FDA has established classifications for approximately 1700 different generic types of devices and grouped them into sixteen medical specialties referred to as ‘panels’. Each of these generic types of devices is assigned to one of three regulatory classes based on the level of control necessary to assure the safety and effectiveness of the device. The class to which the device is assigned determines, among other things, the type of premarketing submission/application required for FDA clearance to market. All classes of devices are subject to General Controls,Footnote 13 which are the baseline requirements of the FD&C Act that apply to all medical devices. Special Controls are regulatory requirements for Class II devices, and are usually device-specific and include performance standards, postmarket surveillance, patient registries, special labelling requirements, premarket data requirements and operational guidelines. For Class III devices, active regulatory review in the form of premarket approval is required (see Table 28.1).
Table 28.1 FDA medical device classes

| Class | Risk | Level of regulatory controls | Whether clinical trials required | Examples |
| --- | --- | --- | --- | --- |
| I | Low | General | No | Gauze, adhesive bandages, toothbrush |
| II | Moderate | General and special | Maybe | Suture, diagnostic X-rays |
| III | High | General and premarket approval | Yes | Pacemakers, implantable defibrillators, spinal cord stimulators |
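The classification scheme in Table 28.1 can be read as a simple mapping from device class to applicable controls. The snippet below is a schematic paraphrase of the table only, not of the FDA’s actual decision process, which turns on intended use, invasiveness, duration of use and associated risks rather than a lookup.

```python
# Schematic mapping paraphrased from Table 28.1 (illustrative only).
CONTROLS_BY_CLASS = {
    "I":   {"risk": "low",      "controls": ("general",)},
    "II":  {"risk": "moderate", "controls": ("general", "special")},
    "III": {"risk": "high",     "controls": ("general", "premarket approval")},
}

def required_controls(device_class: str) -> tuple[str, ...]:
    """Return the baseline regulatory controls for a device class."""
    return CONTROLS_BY_CLASS[device_class]["controls"]
```

The asymmetry is the regulatory point: every class carries General Controls, while Special Controls and premarket approval are layered on as risk increases.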
Clinical trials of medical devices, where required, are often non-randomised, non-blinded, do not have active control groups, and lack hard endpoints, since randomisation and blinding of patients or physicians for implantable devices will in many instances be technically challenging and ethically unacceptable.Footnote 14 Table 28.2 shows key differences between clinical trials of pharmaceuticals in contrast to medical devices.Footnote 15 Class I and some Class II devices may be introduced into the US market without having been tested in humans through an approval process that is based on predicates. Through what is known as the 510(k) pathway, a manufacturer needs to show that its ‘new’ device is at least as safe and effective as (or substantially equivalent to) a legally marketed predicate device (as was the case for IDx-DR).Footnote 16
| Drug trial phase | Participants | Purpose | Device trial stage | Participants | Purpose |
| --- | --- | --- | --- | --- | --- |
| Phase 0 (pilot/exploratory; not all drugs undergo this phase) | 10–15 participants with disease or condition | Test very small (subtherapeutic) dosage to study effects and mechanisms | Pilot/early feasibility | 10–15 participants with disease or condition | Collect preliminary safety and performance data to guide development |
| Phase I (safety and toxicity) | 10–100 healthy participants | Test safety and tolerance; determine dosing and major adverse effects | Feasibility | 20–30 participants with disease or condition | Assess safety and efficacy of near-final or final device design; guides design of pivotal study |
| Phase II (safety and effectiveness) | 50–200 participants with disease or condition | Test safety and effectiveness; confirm dosing and major adverse effects | Pivotal | >100–300 participants with disease or condition | Establish clinical efficacy, safety and risks |
| Phase III (safety and effectiveness) | >100–1000 participants with disease or condition | Test safety and effectiveness; determine drug–drug interactions and minor adverse effects | – | – | – |
| Phase IV (post-approval) | >1000 | Collect long-term data and adverse effects | Post-approval study | >1000 | Collect long-term data and adverse effects |
The nature of regulatory control is changing; regulatory control does not arise solely through the exertion of regulatory power over a regulated entity but also acts intrinsically from within the entity itself. It is argued that risk-based regulation draws on different knowledge domains to constitute the AI/ML algorithm as a ‘risk object’, and not merely to subjugate it. Risk objectification renders the regulated entity calculable. Control does not thereby arise because the regulated entity behaves strictly in adherence to specific commands but rather because of the predictability of its actions. Where risk cannot be precisely calculated however, liminal spaces may help to articulate various ‘scenarios’ with different degrees of plausibility. These liminal spaces are thereby themselves a means by which uncertainty is managed. Typically, owing to conditions that operate outside of direct regulatory control, liminal spaces can either help to maintain a broader regulatory space to which they are peripheral, or contribute to its re-configuration through a ‘domaining effect’. This aspect will be considered in the penultimate section of this chapter.
28.3 Re-embedding Risk and a Return to Sociality
The regulatory construction of IDx-DR as a ‘risk object’ is accomplished by linking the causal attributes of economic and social risks, and risks to human safety and agency, to its constitutive algorithms reified as a medical device.Footnote 17 This ‘risk object’ is made epistemically ‘real’ when integrated through a risk discourse, by which risk attributions and relations have come to define identities, responsibilities and socialities. While risk objectification has been effective in paving a way forward to market approval for IDx-DR, this technological capability is pushed further into liminality. The study that supported the FDA’s approval was conducted under highly controlled conditions in which a relatively small group of carefully selected patients was recruited to test a diagnostic system with narrow usage criteria.Footnote 18 It is questionable whether the AI/ML feature was itself tested, since the auto-didactic aspect of the algorithm was locked prior to the clinical trial, which greatly constrained the variability of the range of outputs.Footnote 19 At this stage, IDx-DR is not capable of evaluating the most severe forms of diabetic retinopathy, which require urgent ophthalmic intervention. However, IDx-DR is capable of ML, which is a subset of AI and refers to a set of methods that automatically detect patterns in data in order to predict future data trends or to support decision-making under uncertain conditions.Footnote 20 Deep learning (DL) is in turn a subtype of ML (and a subfield of representation learning) that is capable of delivering a higher level of performance and does not require a human to identify and compute the discriminatory features for it.
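The significance of ‘locking’ an adaptive algorithm before a pivotal trial can be sketched in a few lines of code. The class below is a purely hypothetical illustration (it is not IDx-DR’s implementation, and the class and method names are our own): a toy classifier keeps adapting its decision threshold from new labelled data until `lock()` is called, after which its behaviour is fixed and reproducible — the state in which the device is then evaluated.

```python
class ThresholdClassifier:
    """Toy 'AI/ML' classifier: flags a case when a score exceeds a
    learned threshold. A hypothetical sketch, not IDx-DR itself."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.locked = False
        self._scores = []

    def update(self, score, label):
        """Adapt the threshold from new labelled data -- the
        'auto-didactic' behaviour. Ignored once the model is locked."""
        if self.locked:
            return
        self._scores.append((score, label))
        positives = [s for s, y in self._scores if y == 1]
        negatives = [s for s, y in self._scores if y == 0]
        if positives and negatives:
            # Midpoint between the lowest positive and highest negative
            # score seen so far: a crude learned decision boundary.
            self.threshold = (min(positives) + max(negatives)) / 2

    def lock(self):
        """Freeze the algorithm, as done before the pivotal trial."""
        self.locked = True

    def predict(self, score):
        return score >= self.threshold
```

Once locked, further data leave the model unchanged, which is precisely why a trial conducted on a locked model does not test the auto-didactic behaviour itself.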
From the 1980s onwards, DL software has been applied in computer-aided detection systems, and the field of radiomics (a process that extracts a large number of quantitative features from medical images) is broadly concerned with computer-aided diagnosis systems, where DL has enabled the use of computer-learned tumour signatures.Footnote 21 It has the potential to detect abnormalities, make differential diagnoses and generate preliminary radiology reports in the future, but only a few methods are able to manage the wide range of radiological presentations of subtle disease states. In the foreseeable future, unsupervised AI/ML will test the limits of conventional means of regulating medical devices.Footnote 22 The challenges to risk assessment, management and mitigation will be amplified as AI/ML medical devices change rapidly and become less predictable.Footnote 23
Regulatory conservatism reflects a particular positionality and the related interests that are at stake. For many high-level policy documents on AI, competitive advantage for economic gain is a key interest.Footnote 24 This position appears to support a ‘light touch’ approach to regulatory governance of AI in order to sustain technological development and advance national economic interests. If policymakers, as a matter of socio-political construction, consider regulation as impeding technological development, then regulatory governance is unlikely to see meaningful progression. Not surprisingly, the private sector has had a dominant presence in defining the agenda and shape of AI and related technologies. While this is not in and of itself problematic, the narrow regulatory focus and absence of broader participation could be. For instance, it is not entirely clear to what extent the development of AI/ML algorithms is determined primarily by sectoral interests.Footnote 25
Initial risk assessment is essentially consequentialist in its focus on intended use of the SaMD to achieve particular clinical outcomes. Risk characterisation is abstracted to two factors:Footnote 26 (1) significance of the information provided by the SaMD to the healthcare decision; and (2) state of the healthcare situation or condition. Risk is thereby derived from ‘objective’ information that is provided by the manufacturer on intended use of the information provided by the SaMD in clinical management. Such use may be significant in one of three ways: (1) to treat or to diagnose, (2) to drive clinical management or (3) to inform clinical management. The significance of an intended use is then associated with a healthcare situation or condition (i.e. critical, serious or non-serious). Schematically, Table 28.3 presents the risk characterisation framework based on four different levels of impact on the health of patients or target populations. Level IV of the framework (e.g. SaMD that performs diagnostic image analysis for making treatment decisions in patients with acute stroke, or screens for mutable pandemic outbreak that can be highly communicable through direct contact or other means) relates to the highest impact while Level I (e.g. SaMD that analyses optical images to guide next diagnostic action of astigmatism) relates to the lowest.Footnote 27
| State of healthcare situation or condition | Treat or diagnose | Drive clinical management | Inform clinical management |
| --- | --- | --- | --- |
| Critical | IV | III | II |
| Serious | III | II | I |
| Non-serious | II | I | I |

The three right-hand columns give the significance of the information provided by the SaMD to the healthcare decision; cell entries are the SaMD risk categories under the IMDRF framework.
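Because the framework reduces risk to two factors, the categorisation can be expressed as a simple lookup. The sketch below reproduces the category assignments of the IMDRF framework document as we understand them (readers should verify against the IMDRF’s own text); the dictionary encoding and function name are our own illustrative choices, not IMDRF terminology.

```python
# IMDRF SaMD risk categorisation: (state, significance) -> category I-IV.
# The mapping follows our reading of the IMDRF framework; the encoding
# as a dictionary is an illustrative choice, not the IMDRF's own.
SAMD_CATEGORY = {
    ("critical",    "treat or diagnose"):          "IV",
    ("critical",    "drive clinical management"):  "III",
    ("critical",    "inform clinical management"): "II",
    ("serious",     "treat or diagnose"):          "III",
    ("serious",     "drive clinical management"):  "II",
    ("serious",     "inform clinical management"): "I",
    ("non-serious", "treat or diagnose"):          "II",
    ("non-serious", "drive clinical management"):  "I",
    ("non-serious", "inform clinical management"): "I",
}

def samd_category(state: str, significance: str) -> str:
    """Return the SaMD risk category for a given healthcare state and
    significance of the information the SaMD provides."""
    return SAMD_CATEGORY[(state.lower(), significance.lower())]
```

On this mapping, a SaMD performing diagnostic image analysis for treatment decisions in acute stroke (critical state, treat or diagnose) falls in the highest-impact Category IV, matching the Level IV example given above.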
To counter the possible deepening of regulatory impoverishment, regulatory governance as concept and process will need to re-characterise risk management as a form of learning and experimentation rather than a rule-based process, thus placing stronger reliance on human capabilities to imagine alternative futures instead of quantitative ambitions to predict the future. Additionally, a regulatory approach based on the total product lifecycle needs to be taken up. This better accounts for modifications that will be made to the device through real-world learning and adaptation. Such adaptation enables a device to change its behaviour over time based on new data and to optimise its performance in real time with the goal of improving health outcomes. As the FDA’s conventional review procedures for medical devices discussed above are not adequately responsive for assessing adaptive AI/ML technologies, the FDA has proposed that a premarket review mechanism be developed.Footnote 28 This mechanism seeks to introduce a predetermined change control plan in the premarket submission, in order to give effect to the risk categorisation and risk management principles, as well as the total product lifecycle approach, of the IMDRF. The plan will include the types of anticipated modifications (or pre-specifications) and the associated methodology used to implement those changes in a controlled manner while managing risks to patients (referred to as the Algorithm Change Protocol). In essence, the proposed changes will place on manufacturers a greater responsibility to monitor the real-world performance of their medical devices and to make performance data available through periodic updates on what changes were made under the approved pre-specifications and the Algorithm Change Protocol.
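The logic of a predetermined change control plan can be illustrated schematically. The sketch below is hypothetical — the FDA discussion paper prescribes no concrete data format, and every name here is our own: a modification may proceed under the approved plan only if it is among the anticipated pre-specifications and the Algorithm Change Protocol has been followed; anything else would call for a fresh premarket submission.

```python
# Hypothetical sketch of a predetermined change control plan check.
# The FDA discussion paper specifies no format; the field name
# "pre_specifications" and the example entries are our own invention.
APPROVED_PLAN = {
    "pre_specifications": {"retrain_on_new_data", "update_threshold"},
}

def modification_permitted(modification_type: str,
                           change_protocol_followed: bool) -> bool:
    """A change may proceed under the approved plan only if it is an
    anticipated modification AND the Algorithm Change Protocol was
    followed; otherwise a new premarket submission would be needed."""
    return (modification_type in APPROVED_PLAN["pre_specifications"]
            and change_protocol_followed)
```

For example, retraining on new data in accordance with the protocol would pass this check, whereas an unanticipated change to the device’s intended use would not, however carefully it was executed.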
In totality, these proposed changes will enable the FDA to evaluate and monitor, collaboratively with manufacturers, an AI/ML software as a medical device from its premarket development to postmarket performance. The nature of the FDA’s regulatory oversight will also become more iterative and responsive in assessing the impact of device optimisation on patient safety.
As the IMDRF also explains, every SaMD will have its own risk category according to its definition statement, even when it is interfaced with other SaMD or other hardware medical devices, or used as a module in a larger system. Importantly, manufacturers are expected to have an appropriate level of control to manage changes during the lifecycle of the SaMD. The IMDRF labels any modifications made throughout the lifecycle of the SaMD, including its maintenance phase, as ‘SaMD Changes’.Footnote 29 Software maintenance is in turn defined in terms of postmarket modifications that could occur in the software lifecycle processes identified by the International Organization for Standardization.Footnote 30 It is generally recognised that testing of software is not sufficient to ensure safety in its operation. Safety features need to be built into the software at the design and development stages, and supported by quality management and postmarket surveillance after the SaMD has been installed. Postmarket surveillance includes the monitoring, measurement and analysis of quality data: logging and tracking complaints, resolving technical issues, determining the causes of problems and the actions needed to address them, and identifying, collecting, analysing and reporting on the critical quality characteristics of the products developed. However, monitoring software quality alone does not guarantee that the objectives for a process are being achieved.Footnote 31
As a concern of the Quality Management System (QMS), the IMDRF requires that maintenance activities preserve the integrity of the SaMD without introducing new safety, effectiveness, performance or security hazards. It recommends that a risk assessment – including considerations relating to patient safety, the clinical environment, and the technology and systems environment – should be performed to determine whether the changes affect the SaMD categorisation and the core functionality of the SaMD as set out in its definition statement. The proposed QMS complements the risk categorisation framework through its goal of incorporating good software quality and engineering practices into the device. The principles underpinning the QMS are set out in terms of an organisational support structure, lifecycle support processes, and a set of realisation and use processes for assuring safety, effectiveness and performance. These principles have been endorsed by the FDA in its final guidance, which describes an internally agreed understanding (among regulators) of clinical evaluation and of the principles for demonstrating the safety, effectiveness and performance of the device, as well as the activities that manufacturers can take to clinically evaluate their device.Footnote 32
28.4 Regulatory Governance as Participatory Learning System
In this penultimate section, it is argued that the regulatory approach considered in the preceding sections is intended to support a participatory learning system comprising at least two key features: (1) a platform and/or mechanisms that enable constructive engagement with, and participation of, members of society; and (2) the means by which a common fund of knowledges (explained below) may be pooled to generate anticipatory knowledge that could guide collective action. In some instances, institutionalisation could advance this agenda, but it is beyond the scope of this chapter to examine this possibility to a satisfactory degree.
There is a diverse range of modalities through which the constituents of a society engage in collaborative learning. As Annelise Riles’s PAWORNET illustrates, each modality has its own goals, character, strengths and limitations. In her study, Riles observes that networkers did not understand themselves to share a set of values, interests or culture.Footnote 33 Instead, they understood themselves to be sharing their involvement in a certain network that was a form of institutionalised association devoted to information sharing. What defined networkers most of all was the fact that they were personally and institutionally connected to, or knowledgeable about, the world of specific institutions and networks. In particular, it was the work of creating documents, organising conferences or producing funding proposals that generated a set of personal relations that drew people together and also created divisions of its own. In the author’s own study,Footnote 34 ethnographic findings illustrate how the ‘publics’ of human stem cell research and oocyte donation were co-produced with an institutionalised ‘bioethics-as-public-policy’ entity known as the Bioethics Advisory Body. In that context, the ‘publics’ comprised institutions and a number of individuals – often institutionally connected – that represented a diverse set of values, interests and perhaps cultures (construed in terms of their day-to-day practices at the least). These ‘publics’ resemble a network in a number of ways. They were brought into a particular set of relationships within a deliberative space created mainly by the consultation papers and reinforced through a variety of means that included public meetings, conferences and feedback sessions.
Arguably, even individual feedback from a public outreach platform known as ‘REACH’ encompassed a certain kind of pre-existing (sub-) network that has been formed with a view to soliciting relatively more spontaneous and independent, uninvited forms of civil participatory action. But this ‘network’ is not a static one. It varied with, but was also shaped by, the broader phenomenon of science and expectations as to how science ought to be engaged. In this connection, Riles’s observation is instructive: ‘It is not that networks “reflect” a form of society, therefore, nor that society creates its artifacts … Rather, it is all within the recursivity of a form that literally speaks about itself’.Footnote 35
A ‘risk culture’ that supports learning and experimentation rather than rule-based processes must embed the operation of AI and related technologies as ‘risk objects’ within a common fund of knowledges. Legal processes are inherent to understanding the risk, such as that of a repeat sexual offence under ‘Megan’s Law’, which encompasses the US community notification statutes relating to sexual offenders.Footnote 36 Comprising three tiers, this risk assessment process determines the scope of community notification. In examining the constitutional basis of Megan’s Law, Mariana Valverde et al. observe that ‘the courts have emphasised the scientific expertise that is said to be behind the registrant risk assessment scale (RRAS) in order to argue that Megan’s Law is not a tool of punishment but rather an objective measure to regulate a social problem’.Footnote 37 However, reliance on Megan’s Law as grounded in objective scientific knowledge has given rise to an ‘intermediary knowledge in which legal actors – prosecutors and judges – are said not only to be more fair but even more reliable and accurate in determining a registrant’s risk of re-offence’.Footnote 38 In this, the study also illustrates a translation from scientific knowledge and processes to legal ones, and how the ‘law’ may be cognitively and normatively open.
Finally, the articulation of possible harms and dangers as ‘risks’ involves the generation of ‘anticipatory knowledge’, which is defined in terms of the ‘social mechanisms and institutional capacities involved in producing, disseminating, and using such forms [as] … forecasts, models, scenarios, foresight exercises, threat assessments, and narratives about possible technological and societal futures’.Footnote 39 Like Ian Hacking’s ‘looping effect’, anticipatory knowledge concerns knowledge-making about the future, and it can operate as a means of gap-filling. Hugh Gusterson’s study of the Reliable Replacement Warhead (RRW) program is illustrative of this point: under the program, US weapons laboratories were to design new and highly reliable nuclear weapons that would be safe to manufacture and maintain.Footnote 40 Gusterson shows that the struggle over the RRW program, initiated by the US Congress in 2004, occurred across four intersecting ‘plateaus of nuclear calculations’ – geopolitical, strategic, enviropolitical and technoscientific – each with its own contending narratives of the future. He indicates that ‘advocates must stabilise and align anticipatory knowledge from each plateau of calculation into a coherent-enough narrative of the future in the face of opponents seeking to generate and secure alternative anticipatory knowledges’.Footnote 41 Hence the interconnectedness of the four plateaus of calculation, including the trade-offs entailed, was evident in the production of anticipatory knowledge vis-à-vis the RRW program. The issues of performativity and the ‘social construction of ambiguity’ were also evident. Gusterson observes that, being craft items, no two nuclear weapons are exactly alike. However, the proscription of testing through detonation meant that both performativity and ambiguity over reliability became matters of speculation, determined through extrapolation from the past to fill knowledge ‘gaps’ in the present and future.
This attempt at anticipatory knowledge creation also prescribed a form that the future was to take. Applying a similar analysis from a legal standpoint, Graeme Laurie and others explain that foresighting as a means of devising anticipatory knowledge is neither simple opinion surveying nor mere public participation.Footnote 42 It must instead be directed at the discovery of shared values, the development of shared lexicons, the forging of a common vision of the future and the taking of steps to realise the vision with the understanding that this is being done from a position of partial knowledge about the future. As we have considered earlier on in this chapter, this visionary account captures the approach that has been adopted by the IMDRF impressively well.
Liminality highlights the need for a processual-oriented mode of regulation in order to recognise the flexibility and fluidity of the regulatory context (inclusive of its objects and subjects) and the need for iterative interactions, as well as the capacity to provide non-directive guidance.Footnote 43 If one considers law as representing nothing more than certainty, structure and directed agency, then we should rightly be concerned as to whether the law can envision and support the creation of genuinely liminal regulatory spaces, which are typified by uncertainty, anti-structure and an absence of agency.Footnote 44 The crucial contribution of regulatory governance, however, is its conceptualisation of law as an epistemically open enterprise, in respect of which learning and experimentation are possible.
1 C. Reed, ‘How Should We Regulate Artificial Intelligence?’, (2018) Philosophical Transactions of the Royal Society, Series A 376(2128), 20170360.
2 G. Laurie, ‘Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces In-Between?’, (2016) Medical Law Review, 25(1), 47–72; G. Laurie, ‘What Does It Mean to Take an Ethics+ Approach to Global Biobank Governance?’, (2017) Asian Bioethics Review, 9(4), 285–300; S. Taylor-Alexander et al., ‘Beyond Regulatory Compression: Confronting the Liminal Spaces of Health Research Regulation’, (2016) Law Innovation and Technology, 8(2), 149–176.
3 Laurie, ‘Liminality and the Limits of Law’, 69.
4 Taylor-Alexander et al., ‘Beyond Regulatory Compression’, 172.
5 R. Brownsword et al., ‘Law, Regulation and Technology: The Field, Frame, and Focal Questions’ in R. Brownsword et al. (eds), The Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017), pp. 1–36.
6 D. G. Bates and F. Plog, Cultural Anthropology (New York: McGraw-Hill, 1990), p. 7.
7 R. Brownsword, Law, Technology and Society: Re-Imagining the Regulatory Environment (Abingdon: Routledge, 2019), p. 45.
8 B. Babic et al., ‘Algorithms on Regulatory Lockdown in Medicine: Prioritizing Risk Monitoring to Address the ‘Update Problem’’, (2019) Science, 366(6470), 1202–1204.
9 Food and Drug Administration, ‘FDA Permits Marketing of Artificial Intelligence-Based Device to Detect Certain Diabetes-related Eye Problems’, (FDA New Release, 11 April 2018).
10 Food and Drug Administration, ‘Classification of Products as Drugs and Devices & Additional Product Classification Issues: Guidance for Industry and FDA Staff’, (FDA, 2017).
11 Federal Food, Drug, and Cosmetic Act (25 June 1938), 21 USC §321(h).
12 The regulatory approaches adopted in the European Union and the United Kingdom are broadly similar to that of the FDA. See: J. Ordish et al., Algorithms as Medical Devices (Cambridge: PHG Foundation, 2019).
13 Food and Drug Administration, ‘Regulatory Controls’, (FDA, 27 March 2018), www.fda.gov/medical-devices/overview-device-regulation/regulatory-controls.
14 G. A. Van Norman, ‘Drugs and Devices: Comparison of European and US Approval Processes’, (2016) JACC Basic to Translational Science, 1(5), 399–412.
15 Details in Table 28.2 are adapted from the following sources: International Organization for Standardization, ‘Clinical Investigation of Medical Devices for Human Subjects – Good Clinical Practice’, (ISO, 2019), ISO/FDIS 14155 (3rd edition); Genesis Research Services, ‘Clinical Trials – Medical Device Trials’, (Genesis Research Services, 5 September 2018), www.genesisresearchservices.com/clinical-trials-medical-device-trials/; B. Chittester, ‘Medical Device Clinical Trials – How Do They Compare with Drug Trials?’, (Master Control, 7 May 2020), www.mastercontrol.com/gxp-lifeline/medical-device-clinical-trials-how-do-they-compare-with-drug-trials-/.
16 J. P. Jarow and J. H. Baxley, ‘Medical Devices: US Medical Device Regulation’, (2015) Urologic Oncology: Seminars and Original Investigations, 33(3), 128–132.
17 A. Bowser et al., Artificial Intelligence: A Policy-Oriented Introduction (Washington, DC: Woodrow Wilson International Center for Scholars, 2017).
18 M. D. Abràmoff et al., ‘Pivotal Trial of an Autonomous AI-Based Diagnostic System for Detection of Diabetic Retinopathy in Primary Care Offices’, (2018) NPJ Digital Medicine, 1, 39.
19 P. A. Keane and E. J. Topol, ‘With an Eye to AI and Autonomous Diagnosis’, (2018) NPJ Digital Medicine, 1, 40.
20 K. P. Murphy, Machine Learning: A Probabilistic Perspective (Cambridge, MA: MIT Press, 2012).
21 M. L. Giger, ‘Machine Learning in Medical Imaging’, (2018) Journal of the American College of Radiology, 15(3), 512–520.
22 E. Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (New York: Basic Books, 2019); A. Tang et al., ‘Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology’, (2018) Canadian Association of Radiologists Journal, 69(2), 120–135.
23 Babic et al., ‘Algorithms’; M. U. Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies’, (2016) Harvard Journal of Law & Technology, 29(2), 354–400.
24 Executive Office of the President, Artificial Intelligence, Automation and the Economy (Washington, DC: US Government, 2016); House of Commons Science and Technology Committee, Robotics and Artificial Intelligence: Fifth Report of Session 2016–17, HC 145 (London, 12 October 2016).
25 C. W. L. Ho et al., ‘Governance of Automated Image Analysis and Artificial Intelligence Analytics in Healthcare’, (2019) Clinical Radiology, 74(5), 329–337.
26 IMDRF Software as a Medical Device (SaMD) Working Group, ‘Software as a Medical Device: Possible Framework for Risk Categorization and Corresponding Considerations’, (International Medical Device Regulators Forum, 2014), para. 4.
27 Ibid., p. 14, para. 7.2.
28 Food and Drug Administration, ‘Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback’, (US Department of Health and Human Services, 2019).
29 IMDRF SaMD Working Group, ‘Software as a Medical Device (SaMD): Key Definitions’, (International Medical Device Regulators Forum, 2013).
30 International Organization for Standardization, ‘ISO/IEC 14764:2006 Software Engineering – Software Life Cycle Processes – Maintenance (2nd Edition)’, (International Organization for Standardization, 2006).
31 IMDRF SaMD Working Group, ‘Software as a Medical Device (SaMD): Application of Quality Management System’, IMDRF/SaMD WG/N23 FINAL 2015 (International Medical Device Regulators Forum, 2015), para. 7.5.
32 Food and Drug Administration, ‘Software as a Medical Device (SAMD): Clinical Evaluation’, (US Department of Health and Human Services, 2017).
33 A. Riles, The Network Inside Out (Ann Arbor, MI: University of Michigan Press, 2001), pp. 58–59 and p. 68.
34 C. W. L. Ho, Juridification in Bioethics (London: Imperial College Press, 2016).
35 Riles, The Network, p. 69.
36 M. Valverde et al., ‘Legal Knowledges of Risk’ in Law Commission of Canada (ed.), Law and Risk (Vancouver, BC: University of British Columbia Press, 2005), pp. 86–120, at pp. 103 and 106.
37 Ibid., p. 106.
39 N. Nelson et al., ‘Introduction: The Anticipatory State: Making Policy-relevant Knowledge About the Future’, (2008) Science and Public Policy, 35(8), 546–550.
40 H. Gusterson, ‘Nuclear Futures: Anticipatory Knowledge, Expert Judgment, and the Lack that Cannot Be Filled’, (2008) Science and Public Policy, 35(8), 551–560.
41 Ibid., 553.
42 G. Laurie et al., ‘Foresighting Futures: Law, New Technologies, and the Challenges of Regulating for Uncertainty’, (2012) Law, Innovation and Technology, 4(1), 1–33.
43 Laurie, ‘Liminality and the Limits of Law’, 68–69; Taylor-Alexander et al., ‘Beyond Regulatory Compression’, 158.
44 Laurie, ‘Liminality and the Limits of Law’, 71.
Clinical innovation is ubiquitous in medical practice and is generally viewed as both necessary and desirable. While innovation has been the source of considerable benefit, many clinical innovations have failed to demonstrate evidence of clinical benefit and/or caused harm. Given uncertainty regarding the consequences of innovation, it is broadly accepted that it needs some form of oversight. But there is also pushback against what is perceived to be obstruction of access to innovative interventions. In this chapter, we argue that this pushback is misguided and dangerous – particularly because of the myriad competing and conflicting interests that drive and shape clinical innovation.
While the therapeutics lifecycle is usually thought of as one in which research precedes clinical application, it is common for health professionals to offer interventions that differ from standard practice, and that have either not (yet) been shown to be safe or effective or have been shown to be safe but not yet subjected to large phase 3 trials. This practice is often referred to as ‘clinical innovation’.Footnote 1 The scope of clinical innovation is broad, ranging from minor alterations to established practice – for example using a novel suturing technique – to more significant departures from standard practice – for example using an invasive device that has not been formally tested in any population.
For the most part, clinical innovation is viewed as necessary and desirable. Medicine has always involved the translation of ideas into treatment and it is recognised that ideas originate in the clinic as well as in the research setting, and that research and practice inform each other in an iterative manner.Footnote 2 It is also recognised that the standard trajectory of research followed by health technology assessment, registration and subsidisation may be too slow for patients with life-limiting or debilitating diseases and that clinical innovation can provide an important avenue for access to novel treatments.Footnote 3 There are also limitations to the systems that are used to determine what counts as ‘standard’ practice because it is up to – usually commercial – sponsors to seek formal registration for particular indications.Footnote 4
While many clinical innovations have positively transformed medicine, others have failed to demonstrate evidence of clinical benefit,Footnote 5 or exposed patients to considerable harm – for example, the use of transvaginal mesh for the treatment of pelvic organ prolapse.Footnote 6 Many innovative interventions are also substantially more expensive than traditional treatments,Footnote 7 imposing costs on both patients and health systems. It is therefore broadly accepted that innovation requires some form of oversight. In most jurisdictions, oversight of innovation consists of a combination of legally based regulations and less formal governance mechanisms. These, in turn, can be focused on:
1. the oversight of clinical practice by professional organisations, medical boards, healthcare complaints bodies and legal regimes;
2. the registration of therapeutic products by agencies such as the US Food and Drug Administration, the European Medicines Agency and Australia’s Therapeutic Goods Administration;
3. consumer protection, such as laws aimed at identifying and punishing misleading advertising; and
4. the oversight of research when innovation takes place in parallel with clinical trials or is accompanied by the generation of ‘real world evidence’ through, for example, clinical registries.
The need for some degree of oversight is relatively uncontroversial. But there is also pushback against what is perceived to be obstruction of access to innovative interventions.Footnote 8 There are two main arguments underpinning this position. First, it is argued that existing forms of oversight create barriers to clinical innovation. Salter and colleagues, for example, view efforts to assert external control over clinical innovation as manifestations of conservative biomedical hegemony that deliberately hinders clinical innovation in favour of more traditional translational pathways.Footnote 9 It has also been argued that medical negligence law deters clinical innovationFootnote 10 and that health technology regulation is excessively slow and conservative, denying patients the ‘right to try’ interventions that have not received formal regulatory approval.Footnote 11
Second, it is argued that barriers are philosophically and politically inappropriate on the grounds that patients are not actually ‘patients’, but rather ‘consumers’. According to these arguments, consumers should be free to decide for themselves what goods and services they wish to purchase without having their choices restricted by regulation and governance systems – including those typically referred to as ‘consumer’ (rather than ‘patient’) protections. Following this line of reasoning, Salter and colleaguesFootnote 12 argue that decisions about access to innovative interventions should respect and support ‘the informed health consumer’ who:
assumes she/he has the right to make their own choices to buy treatment in a health care market which is another form of mass consumption…Footnote 13
and who is able to draw on:
a wide range of [information] sources which include not only the formally approved outlets of science and state but also the burgeoning information banks of the internet.Footnote 14
There are, however, several problems with these arguments. First, there is little evidence to support the claim that there is, in fact, an anti-innovative biomedical hegemony that is creating serious barriers to clinical innovation. While medical boards can censure doctors for misconduct, and the legal system can find them liable for trespass or negligence, these wrongs are no easier to prevent or prove in the context of innovation than in any other clinical context. Product regulation is similarly facilitative of innovation, with doctors being free to offer interventions ‘off-label’ and patients being allowed to apply for case-by-case access to experimental therapies. The notion that current oversight systems are anti-innovative is therefore not well founded.
Second, it is highly contestable that patients are ‘simply’ consumers – and doctors are ‘simply’ providers of goods and services – in a free market. For several reasons, healthcare functions as a very imperfect market: there is often little or no information available to guide purchases; there are major information asymmetries – exacerbated by misinformation on the internet; and patients may be pressured into accepting interventions when they have few, if any, other therapeutic options.Footnote 15 Furthermore, even if patients were consumers acting in a marketplace, it would not follow that the marketplace should be completely unregulated, for even the most libertarian societies have regulatory structures in place to prevent bad actors misleading people or exploiting them financially (e.g. through false advertising, price fixing or offering services that they are unqualified to provide).
This leaves one other possible objection to the oversight of clinical innovation – that patients are under the care of professionals who are able to collaborate with them in making decisions through shared decision-making. Here, the argument is that innovation (1) should not be overseen because it is an issue that arises between a doctor and a patient, and (2) does not need to be overseen because doctors are professionals who have their patients’ interests at heart. These are compelling arguments because they are consistent with both the emphasis on autonomy in liberal democracies and with commonly accepted ideas about professionals and their obligations.
Two objections can, however, be raised. First, these arguments ignore the fact that professionalism is concerned not only with patient well-being but also with commitments to the just distribution of finite resources, furthering scientific knowledge and maintaining public trust.Footnote 16 The second problem with these arguments is that they are premised on the assumption that all innovating clinicians are consistently alert to their professional obligations and willing to fulfil them. Unfortunately, this assumption is open to doubt. To illustrate this point, we turn to the case of autologous mesenchymal stem cell-based interventions.
29.3 The Case of Autologous Mesenchymal Stem Cell Interventions
Stem cell-based interventions are procedures in which stem cells – cells that have the potential to self-replicate and to differentiate into a range of different cell types – or cells derived from stem cells are administered to patients for therapeutic purposes. Autologous stem cell-based interventions involve administering cells to the same person from whom they were obtained. The two most common sources of such stem cells are blood and bone marrow (haematopoietic) cells and connective tissue (mesenchymal) cells.
Autologous haematopoietic stem cells are extracted from blood or bone marrow and used to reconstitute the bone marrow and immune system following high dose chemotherapy. Autologous mesenchymal cells are extracted most commonly from fat and then injected – either directly from the tissue extracts or after expansion in the laboratory – into joints, skin, muscle, blood stream, spinal fluid, brain, eyes, heart and so on, in order to ‘treat’ degenerative or inflammatory conditions. The hope is that, because mesenchymal stem cells may have immunomodulatory properties, they may support tissue regeneration.
The use of autologous haematopoietic stem cells is an established standard of care therapy for treating certain blood and solid malignancies and there is emerging evidence that they may also be beneficial in the treatment of immunological disorders, such as multiple sclerosis and scleroderma. In contrast, evidence to support the use of autologous mesenchymal stem cell interventions is weak and limited to only a small number of conditions (e.g. knee osteoarthritis).Footnote 17 And even in these cases, it is unclear what the precise biological mechanism is and whether the cells involved should even be referred to as ‘stem cells’Footnote 18 (we use this phrase in what follows for convenience).
Despite this, autologous mesenchymal stem cell interventions (henceforth AMSCIs) are offered for a wide range of conditions for which there is no evidence of effectiveness, including spinal cord injury, motor neuron disease, dementia, cerebral palsy and autism.Footnote 19 Clinics offering these and other claimed ‘stem cell therapies’ have proliferated globally, primarily in the private healthcare sector – including in jurisdictions with well-developed regulatory systems – and there are now both domestic markets and international markets based on stem cell tourism.Footnote 20
While AMSCIs are relatively safe, they are far from risk-free, with harm potentially arising from the surgical procedures used to extract cells (e.g. bleeding from liposuction), the manipulation of cells outside of the body (e.g. infection) and the injection of cells into the bloodstream (e.g. immunological reactions, fever, emboli) or other tissues (e.g. cyst formation, microcalcifications).Footnote 21 Despite these risks, many of the practitioners offering AMSCIs have exploited loopholes in product regulation to offer these interventions to large numbers of patients.Footnote 22 To make matters worse, these interventions are offered without obvious concern for professional obligations, as evident in aggressive and misleading marketing, financial exploitation and poor-quality evidence-generation practices.
First, despite limited evidence of efficacy and safety, AMSCIs are marketed aggressively through clinic websites, advertisements and appearances in popular media.Footnote 23 This is inappropriate both because the interventions being promoted are experimental and should therefore be offered to the minimum number of patients outside the context of clinical trials, and because the marketing is often highly misleading. In some cases, this takes the form of blatant misinformation – for example, claims that AMSCIs are effective for autism, dementia and motor neuron disease. In other cases, consumers are misled by what have been referred to as ‘tokens of legitimacy’. These include patient testimonials, references to incomplete or poor-quality research studies, links to scientifically dubious articles and conference presentations, displays of certification and accreditation from unrecognised organisations, use of meaningless titles such as ‘stem cell physician’ and questionable claims of ethical oversight. Advertising of AMSCIs is also rife with accounts of biological processes that give the impression that autologous stem cells are entirely safe – because they come from the patient’s own body – and possess almost magical healing qualities.Footnote 24
Second, AMSCIs are expensive, with patients paying thousands of dollars (not including follow-up care or the costs associated with travel).Footnote 25 In many cases, patients take drastic measures to finance access to stem cells, including mortgaging their houses and crowd-sourcing funding from their communities. Clinicians offering AMSCIs claim that such costs are justified given the complexities of the procedures and the lack of insurance subsidies to pay for them.Footnote 26 However, the costs of AMSCIs seem to be determined by the business model of the industry and by a determination of ‘what the market will bear’ – which in the circumstances of illness, is substantial. Furthermore, clinicians offering AMSCIs also conduct ‘pay-to-participate’ clinical trials and ask patients to pay for their information to be included in clinical registries. Such practices are generally frowned upon as they exacerbate the therapeutic misconception and remove any incentive to complete and report results in a timely manner.Footnote 27
Finally, contrary to the expectation that innovating clinicians should actively contribute to generating generalisable knowledge through research, clinics offering AMSCIs have proliferated in the absence of robust clinical trials.Footnote 28 Furthermore, providers of AMSCIs tend to overstate what is known about efficacyFootnote 29 and to misrepresent what trials are for, arguing that they simply ‘measure and validate the effect of (a) new treatment’.Footnote 30 Registries that have been established to generate observational evidence about innovative AMSCIs are similarly problematic because participation is voluntary, outcome measures are subjective and results are not made public. There are also problems with the overall framing of the registries, which are presented as alternatives – rather than supplements – to robust clinical trials.Footnote 31 And because many AMSCIs are prepared and offered in private practice, there is lack of oversight and independent evaluation of what is actually administered to the patient, making it impossible to compare outcomes in a meaningful way.Footnote 32
While it is possible that doctors offering autologous stem cell interventions simply lack awareness of the norms relating to clinical innovation, this seems highly unlikely, as many of these clinicians are active participants in policy debates about innovation and are routinely censured for behaviour that conflicts with accepted professional obligations. A more likely explanation, therefore, is that the clinicians offering autologous stem cell interventions are motivated not (only) by concern for their patients’ well-being, but also by other interests such as the desire to make money, achieve fame and satisfy their intellectual curiosity. In other words, they have competing and conflicting interests that override their concerns for patient well-being and the generation of valid evidence.
Unfortunately, the case of AMSCIs is far from unique. Other situations in which clinicians appear to be abusing the privilege of using their judgement to offer non-evidence-based therapies include orthopaedic surgeons over-using arthroscopies for degenerative joint disease,Footnote 33 assisted reproductive technology specialists who offer unproven ‘add-ons’ to traditional in-vitro fertilisationFootnote 34 and health professionals engaging in irresponsible off-label prescribing of psychotropic medicines.Footnote 35
Clinicians in all of these contexts are embedded in a complex web of financial and non-financial interests such as the desire to earn money, create product opportunities, pursue intellectual projects, achieve professional recognition and career advancement, and develop knowledge for the good of future patientsFootnote 36 – all of which motivate their actions. Clinicians are also susceptible to biases such as ‘optimism bias’, which might lead them to over-value innovative technologies, and they are subject to external pressures, such as industry marketingFootnote 37 and pressure from patients desperate for a ‘miracle cure’.Footnote 38
With these realities in mind, arguments against the oversight of innovation – or, more precisely, a reliance on consumer choice – become less compelling. Indeed, it could be argued that the oversight of innovation needs to be strengthened in order to protect patients from exploitation by those with competing and conflicting interests. That said, it is important that the oversight of clinical innovation does not assume that all innovating clinicians are motivated primarily by personal gain and, correspondingly, that it does not stifle responsible clinical innovation.
In order to strike the right balance, it is useful – following Lysaght and colleaguesFootnote 39 – for oversight efforts to be framed in terms of, and account for, three separate functions: a negative function (focused on protecting consumers and sanctioning unacceptable practices, such as through tort and criminal law); a permissive function (concerned with frameworks that license health professionals and enable product development, such as through regulation of therapeutic products); and a positive function (dedicated to improving professional ethical behaviour, such as through professional registration and disciplinary systems). With that in mind, we now present some examples of oversight mechanisms that could be employed.
Those with responsibility for overseeing clinical practice need to enable clinicians to offer innovative treatments to selected patients outside the context of clinical trials, while at the same time preventing clinicians from exploiting patients for personal or socio-political reasons. Some steps that could be taken to both encourage responsible clinical innovation and discourage clinicians from acting on conflicts of interest might include:
requiring that all clinicians have appropriate qualifications, specialisation, training and competency;
mandating disclosure of competing and conflicting interests on clinic websites and as part of patient consent;
requiring that consent be obtained by an independent health professional who is an expert in the patient’s disease (if necessary at a distance for patients in rural and remote regions);
ensuring that all innovating clinicians participate in clinical quality registries that are independently managed, scientifically rigorous and publicly accessible;
requiring independent oversight to ensure that appropriate product manufacturing standards are met;
ensuring adequate pre-operative assessment, peri-operative care and post-operative monitoring and follow-up;
ensuring that patients are not charged excessive amounts for experimental treatments, primarily by limiting expenses to cost-recovery; and
determining that some innovative interventions should be offered only in a limited number of specialist facilities.
Professional bodies (such as specialist colleges), professional regulatory agencies, clinical ethics committees, drugs and therapeutics committees and other institutional clinical governance bodies would have an important role to play in ensuring that such processes are adhered to.
There may also be a need to extend current disciplinary and legal regimes regarding conflicts of interest (or at least ensure better enforcement of existing regimes). Many professional codes of practice already require physicians to be transparent about, and refrain from acting on, conflicts of interest. And laws in some jurisdictions already recognise that financial interests should be disclosed to patients, that patients should be referred for independent advice and that innovating clinicians need to demonstrate concern for patient well-being and professional consensus.Footnote 40
With respect to advertising, there is a need to prevent aggressive and misleading direct-to-consumer advertising while still ensuring that all patients who might benefit from an innovative intervention are aware that such interventions are being offered. With this in mind, it would seem reasonable to strengthen existing advertising oversight (which, in many jurisdictions, is weak and ad hoc). It may also be reasonable to prohibit innovating clinicians from advertising interventions directly to patients – including indirectly through ‘educational’ campaigns and media appearances – and instead develop systems that alert referring doctors to the existence of doctors offering innovative interventions.
Those regulating access to therapeutic products need to strike a balance between facilitating timely access to the products that patients want, and ensuring that those with competing interests are not granted licence to market products that are unsafe or ineffective. In this regard, it is important to note that product regulation is generally lenient when it comes to clinical innovation and it is arguable that there is a need to push back against current efforts to accelerate access to health technologies – efforts that are rapidly eroding regulatory processes and creating a situation in which patients are being exposed to an increasing number of ineffective and unsafe interventions.Footnote 41 In addition, loopholes in therapeutic product regulation that can be exploited by clinicians with conflicts of interest should be predicted and closed wherever possible.
Although clinical innovation is not under the direct control of research ethics and governance committees, such committees have an important role to play in ensuring that those clinical trials and registries established to support innovation are not distorted by commercial and other imperatives. The task for such committees is to strike a balance between assuming that all researcher/innovators are committed to the generation of valid evidence and placing excessive burdens on responsible innovators who wish to conduct high-quality research. In this regard, research ethics committees could:
ensure that participants in trials and registries are informed about conflicts of interest;
ensure that independent consent processes are in place so that patients are not pressured into participating in research or registries; and
consider whether it is ever acceptable to ask patients to ‘pay to participate’ in trials or in registries.
Research ethics committees also have an important role in minimising biases in the design, conduct and dissemination of innovation-supporting research. This can be achieved by ensuring that:
trials and registries have undergone rigorous, independent scientific peer review;
data are collected and analysed by independent third parties (e.g. Departments of Health);
data are freely available to any researcher who wants to analyse them; and
results – including negative results – are widely disseminated in peer-reviewed journals.
While this chapter has focused on traditional ‘top-down’ approaches to regulation and professional governance, it might also be possible to make use of what Devaney has referred to as ‘reputation-affecting’ regulatory approaches.Footnote 42 Such approaches would reward those who maintain their independence or manage their conflicts effectively with reputation-enhancing measures such as access to funding and publication in esteemed journals. In this regard, other parties not traditionally thought of as regulators – such as employing institutions, research funders, journal reviewers and editors and the media – might have an important role to play in the oversight of clinical innovation.
Importantly, none of the oversight mechanisms we have suggested here would discourage responsible clinical innovation. Indeed, an approach to the oversight of clinical innovation that explicitly accounts for the realities of competing and conflicting interests could make it easier for well-motivated clinicians to obtain the trust of both individual patients and broader social licence to innovate.
Clinical innovation has an important and established role in biomedicine and in the development and diffusion of new technologies. But it is also the case that claims about patients’ – or consumers’ – rights, and about the sanctity of the doctor–patient relationship, can be used to obscure both the risks of innovation and the vested interests that drive some clinicians’ decisions to offer innovative interventions. In this context, adequate oversight of clinical innovation is crucial. After all, attempts to exploit the language and concept of innovation not only harm patients, but also threaten legitimate clinical innovation and undermine public trust. Efforts to push back against the robust oversight of clinical innovation need, therefore, to be viewed with caution.
1 W. Lipworth et al., ‘The Need for Beneficence and Prudence in Clinical Innovation with Autologous Stem Cells’, (2018) Perspectives in Biology and Medicine, 61(1), 90–105.
2 P. L. Taylor, ‘Overseeing Innovative Therapy without Mistaking It for Research: A Function‐Based Model Based on Old Truths, New Capacities, and Lessons from Stem Cells’, (2010) The Journal of Law, Medicine & Ethics, 38(2), 286–302.
3 B. Salter et al., ‘Hegemony in the Marketplace of Biomedical Innovation: Consumer Demand and Stem Cell Science’, (2015) Social Science & Medicine, 131, 156–163.
4 N. Ghinea et al., ‘Ethics & Evidence in Medical Debates: The Case of Recombinant Activated Factor VII’, (2014) Hastings Center Report, 44(2), 38–45.
5 C. Davis, ‘Drugs, Cancer and End-of-Life Care: A Case Study of Pharmaceuticalization?’, (2015) Social Science & Medicine, 131, 207–214; D. W. Light and J. Lexchin, ‘Pharmaceutical Research and Development: What Do We Get for All That Money?’, (2012) BMJ, 345, e4348; C. Y. Roh and S. H. Kim, ‘Medical Innovation and Social Externality’, (2017) Journal of Open Innovation: Technology, Market, and Complexity, 3(1), 3; S. Salas-Vega et al., ‘Assessment of Overall Survival, Quality of Life, and Safety Benefits Associated with New Cancer Medicines’, (2017) JAMA Oncology, 3(3), 382–390.
6 K. Hutchinson and W. Rogers, ‘Hips, Knees, and Hernia Mesh: When Does Gender Matter in Surgery?’, (2017) International Journal of Feminist Approaches to Bioethics, 10(1), 26.
7 Davis, ‘Drugs, Cancer’; T. Fojo et al., ‘Unintended Consequences of Expensive Cancer Therapeutics – The Pursuit of Marginal Indications and a Me-Too Mentality that Stifles Innovation and Creativity: The John Conley Lecture’, (2014) JAMA Otolaryngology – Head and Neck Surgery, 140(12), 1225–1236; S. C. Overley et al., ‘Navigation and Robotics in Spinal Surgery: Where Are We Now?’, (2017) Neurosurgery, 80(3S), S86.
8 D. Cohen, ‘Devices and Desires: Industry Fights Toughening of Medical Device Regulation in Europe’, (2013) BMJ, 347, f6204; C. Di Mario et al., ‘Commentary: The Risk of Over-regulation’, (2011) BMJ, 342, d3021; O. Dyer, ‘Trump Signs Bill to Give Patients Right to Try Drugs’, (2018) BMJ, 361, k2429; S. F. Halabi, ‘Off-label Marketing’s Audiences: The 21st Century Cures Act and the Relaxation of Standards for Evidence-based Therapeutic and Cost-comparative Claims’, (2018) American Journal of Law & Medicine, 44(2–3), 181–196; M. D. Rawlins, ‘The “Saatchi Bill” will Allow Responsible Innovation in Treatment’, (2014) BMJ, 348, g2771; Salter et al., ‘Hegemony in the Marketplace’.
9 Salter et al., ‘Hegemony in the Marketplace’.
10 Rawlins, ‘The “Saatchi Bill”’.
11 Dyer, ‘Trump Signs Bill’.
12 Salter et al., ‘Hegemony in the Marketplace’.
13 Ibid., 159.
15 T. Cockburn and M. Fay, ‘Consent to Innovative Treatment’, (2019) Law, Innovation and Technology, 11(1), 34–54; T. Hendl, ‘Vulnerabilities and the Use of Autologous Stem Cells for Medical Conditions in Australia’, (2018) Perspectives in Biology and Medicine, 61(1), 76–89.
16 Medical Professionalism Project, ‘Medical Professionalism in the New Millennium: A Physicians’ Charter’, (2002) Lancet, 359(9305), 520–522.
17 H. Iijima et al., ‘Effectiveness of Mesenchymal Stem Cells for Treating Patients with Knee Osteoarthritis: A Meta-analysis Toward the Establishment of Effective Regenerative Rehabilitation’, (2018) NPJ Regenerative Medicine, 3(1), 15.
18 D. Sipp et al., ‘Clear Up this Stem-cell Mess’, (2018) Nature, 561, 455–457.
19 M. Munsie et al., ‘Open for Business: A Comparative Study of Websites Selling Autologous Stem Cells in Australia and Japan’, (2017) Regenerative Medicine, 12(7); L. Turner and P. Knoepfler, ‘Selling Stem Cells in the USA: Assessing the Direct-to-Consumer Industry’, (2016) Cell Stem Cell, 19(2), 154–157.
20 I. Berger et al., ‘Global Distribution of Businesses Marketing Stem Cell-based Interventions’, (2016) Cell Stem Cell, 19(2), 158–162; D. Sipp et al., ‘Marketing of Unproven Stem Cell–Based Interventions: A Call to Action’, (2017) Science Translational Medicine, 9(397); M. Sleeboom-Faulkner and P. K. Patra, ‘Experimental Stem Cell Therapy: Biohierarchies and Bionetworking in Japan and India’, (2011) Social Studies of Science, 41(5), 645–666.
21 G. Bauer et al., ‘Concise Review: A Comprehensive Analysis of Reported Adverse Events in Patients Receiving Unproven Stem Cell‐based Interventions’, (2018) Stem Cells Translational Medicine, 7(9), 676–685; T. Lysaght et al., ‘The Deadly Business of an Unregulated Global Stem Cell Industry’, (2017) Journal of Medical Ethics, 43, 744–746.
22 Sipp et al., ‘Clear Up’.
23 T. Caulfield et al., ‘Confronting Stem Cell Hype’, (2016) Science, 352(6287), 776–777; A. K. McLean et al., ‘The Emergence and Popularisation of Autologous Somatic Cellular Therapies in Australia: Therapeutic Innovation or Regulatory Failure?’, (2014) Journal of Law and Medicine, 22(1), 65–89; Sipp et al., ‘Clear Up’.
24 Munsie et al., ‘Open for Business’; Sipp et al., ‘Marketing’.
25 A. Petersen et al., ‘Therapeutic Journeys: The Hopeful Travails of Stem Cell Tourists’, (2014) Sociology of Health and Illness, 36(5), 670–685.
26 Worldhealth.net, ‘Why Is Stem Cell Therapy So Expensive?’, (WorldHealth.Net, 2018), www.worldhealth.net/news/why-stem-cell-therapy-so-expensive/.
27 D. Sipp, ‘Pay-to-Participate Funding Schemes in Human Cell and Tissue Clinical Studies’, (2012) Regenerative Medicine, 7(6s), 105–111.
28 Sipp et al., ‘Clear Up’.
29 Sipp et al., ‘Marketing’.
30 R. T. Bright, ‘Submission to the TGA Public Consultation: Regulation of Autologous Stem Cell Therapies: Discussion Paper for Consultation’, (Macquarie Stem Cell Centres of Excellence, 2015), 4, www.tga.gov.au/sites/default/files/submissions-received-regulation-autologous-stem-cell-therapies-msc.pdf.
31 Adult Stem Cell Foundation, ‘Adult Stem Cell Foundation’, www.adultstemcellfoundation.org; M. Berman and E. Lander, ‘A Prospective Safety Study of Autologous Adipose-Derived Stromal Vascular Fraction Using a Specialized Surgical Processing System’, (2017) The American Journal of Cosmetic Surgery, 34(3), 129–142; International Cellular Medicine Society, ‘Open Treatment Registry’, (ICMS, 2010), www.cellmedicinesociety.org/attachments/184_ICMS%20Open%20Treatment%20Registry%20-%20Overview.pdf.
32 Sipp et al., ‘Marketing’.
33 P. F. Stahel, ‘Why Do Surgeons Continue to Perform Unnecessary Surgery?’, (2017) Patient Safety in Surgery, 11(1), 1.
34 J. Wise, ‘Show Patients Evidence for Treatment “Add-ons”, Fertility Clinics are Told’, (2019) BMJ, 364, I226.
35 P. Sugarman et al., ‘Off-Licence Prescribing and Regulation in Psychiatry: Current Challenges Require a New Model of Governance’, (2013) Therapeutic Advances in Psychopharmacology, 3(4), 233–243.
36 T. E. Chan, ‘Legal and Regulatory Responses to Innovative Treatment’, (2012) Medical Law Review, 21(1), 92–130; T. Keren-Paz and A. J. El Haj, ‘Liability versus Innovation: The Legal Case for Regenerative Medicine’, (2014) Tissue Engineering Part A, 20(19–20), 2555–2560; J. Montgomery, ‘The “Tragedy” of Charlie Gard: A Case Study for Regulation of Innovation?’, (2019) Law, Innovation and Technology, 11(1), 155–174; K. Raus, ‘An Analysis of Common Ethical Justifications for Compassionate Use Programs for Experimental Drugs’, (2016) BMC Medical Ethics, 17(1), 60; P. L. Taylor, ‘Innovation Incentives or Corrupt Conflicts of Interest? Moving Beyond Jekyll and Hyde in Regulating Biomedical Academic-Industry Relationships’, (2013) Yale Journal of Health Policy, Law, and Ethics, 13(1), 135–197.
37 Chan, ‘Legal and Regulatory Responses’; Taylor, ‘Innovation Incentives’.
38 Chan, ‘Legal and Regulatory Responses’.
39 T. Lysaght et al., ‘A Roundtable on Responsible Innovation with Autologous Stem Cells in Australia, Japan and Singapore’, (2018) Cytotherapy, 20(9), 1103–1109.
40 Cockburn and Fay, ‘Consent’; Keren-Paz and El Haj, ‘Liability versus Innovation’.
41 J. Pace et al., ‘Demands for Access to New Therapies: Are There Alternatives to Accelerated Access?’, (2017) BMJ, 359, j4494.
42 S. Devaney, ‘Enhancing the International Regulation of Science Innovators: Reputation to the Rescue?’, (2019) Law, Innovation and Technology, 11(1), 134–154.
Governments and stakeholders have struggled to find common ground on how to regulate research into different (‘proven’ or ‘unproven’) practices. Research on traditional, alternative and complementary medicines is often characterised as following weak research protocols and as producing evidence too poor to stand the test of systematic reviews, thus rendering the results of individual case studies insignificant. Although millions of people rely on traditional and alternative medicine for their primary care needs, the regulation of research into, and practice of, these therapies is governed by biomedical parameters. This chapter examines how, despite efforts to accommodate other forms of evidence, the regulation of research concerning traditional and alternative medicines remains ambiguous as to what sort of evidence – and therefore what sort of research – can be used by regulators when deciding how to deal with practices that are not based on biomedical epistemologies. Building on ideas from science and technology studies (STS), in this chapter we analyse different approaches to the regulation of traditional and non-conventional medicines adopted by national, regional and global governmental bodies and authorities, and we identify challenges to the inclusion of other modes of ‘evidence’ based on traditional and hybrid epistemologies.
Non-conventional medicines are treatments that are not integrated into conventional medicine and are not necessarily delivered by a person with a degree in medical science. This may include complementary, alternative and traditional healers who may derive their knowledge from local or foreign knowledges, skills or practices.Footnote 1 For the World Health Organization (WHO), traditional medicine may be based on explicable or non-explicable theories, beliefs and experiences of different indigenous cultures.Footnote 2 That being said, traditional medicine is often included within the umbrella term of ‘non-conventional medicine’ in countries where biomedicine is the norm. However, this is often considered a misnomer insofar as traditional medicine may be the main source of healthcare in many countries, independent of its legitimate or illegitimate status. Given the high demand for traditional and non-conventional therapies, governments have sought to bring these therapies into the fold of regulation, yet the processes involved in accomplishing this task have been complicated by the tendency to rely on biomedicine’s standards of practice as a baseline.
For example, the absence of – or the limited nature of – data produced by traditional and non-conventional medicine research, and the unsatisfactory methodologies that do not stand the test of internationally recognised norms and standards for research involving human subjects, have been cited as common barriers to the development of legislation and regulation of traditional and non-conventional medicine.Footnote 3 In 2019, the WHO reported that 99 out of 133 countries considered the absence of research to be one of the main challenges to regulating these fields.Footnote 4 At the same time, governments have been reluctant to integrate traditional and non-conventional medicines as legitimate healthcare providers because their research is not based on the ‘gold standard’, namely multi-phase clinical trials.Footnote 5 Without evidence produced through conventional research methodologies, it is argued that people are at risk of falling prey to charlatans who peddle magical cures – namely placebos without any concrete therapeutic value – or that money is wasted on therapies and products based on outdated or disparate bodies of knowledge rather than systematic clinical research.Footnote 6 While governments have recognised to some extent the need to accommodate traditional and non-conventional medicines for a variety of reasonsFootnote 7 – including the protection of cultural rights, consumer rights, health rights, intellectual property and biodiversityFootnote 8 – critics suggest that there is no reason why these modalities of medicine should be exempted from providing quality evidence.Footnote 9
Picking up on some of these debates, this chapter charts the challenges arising from attempts to regulate issues relevant to research in the context of traditional and alternative medicine. From the outset, it explores what kinds of evidence and what kinds of research are accepted in the contemporary regulatory environment. It outlines some of the sticky points arising out of debates about research on traditional and non-conventional medicines, in particular, the role of placebo effects and evidence. Section 30.4 explores two examples of research regulation: the WHO’s Guidelines for Methodologies on Research and Evaluation of Traditional Medicine and the European Directive on Traditional Herbal Medicinal Products (THMPD). Both incorporate mixed methodologies into research protocols and allow the use of historical data as evidence of efficacy, thus recognising the specificity of traditional and non-conventional medicine. However, we argue that these strategies may themselves become subordinated to biomedical logics, calling into question the extent to which other epistemologies or processes are allowed to shape what is considered acceptable evidence. Section 30.5 focuses on the UK as an example of how other processes and rationalities, namely economic governmentalities, shape the spaces that non-conventional medicine can inhabit. Section 30.6 untangles and critically analyses the assumptions and effects arising out of the process of deciding what counts as evidence in healthcare research regulations. It suggests that despite attempts to include different modalities, ambiguities persist due to the acknowledged and unacknowledged hierarchies of knowledge-production explored in this chapter. The last section opens up a conversation about what is at stake when the logic underpinning the regulation of research creates a space for difference, including different medical traditions and different understandings of what counts as evidence.Footnote 10
Evidence-based medicine (EBM) stands for the movement which holds that the scientific method allows researchers to find the best evidence available in order to make informed decisions about patient care. To find the best evidence possible, on the premise that the many is more significant than the particular, EBM relies on multiple randomised controlled trials (RCTs), the evidence from which is eventually aggregated and compared.Footnote 11 Evidence is hierarchically organised, whereby meta-reviews and systematic reviews based on RCTs stand at the top, followed by non-randomised controlled trials, observational studies with comparison groups, case series and reports, single case studies, expert opinion, community evidence and individual testimonies at the bottom. In addition to this reliance on quantity, the quality of the research matters. Overall, this means that the best evidence is based on data from blinded trials, which show a causal relation between therapeutic interventions and their effects, and isolate results from placebo effects.
From a historical perspective, the turn to blinded tests represented a significant shift in medical practice insofar as it diminished the relevance of expert opinion, which was itself based on a hierarchy of knowledge that tended to value authority and theory over empirical evidence. Physicians used to prescribe substances, such as mercury, that, although believed to be effective for many ailments, were later found to be highly toxic.Footnote 12 Thus, the notion of evidence arising out of blinded trials closed the gap between science and practice, and also partially displaced physicians’ authority. Blinded trials and placebo controls had other effects: they became a tool to demarcate ‘real’ medicine from ‘fake’ medicine, proper doctors from ‘quacks’ and ‘snake-oil’ peddlers. By exposing the absence of a causal relationship between a therapy and its physical effect, some therapies, and the knowledges associated with them, were rebranded as fraudulent or superstitious. While the placebo effect might retrospectively explain why some of these discarded therapies were seen as effective, in practice, EBM’s hierarchy of evidence dismisses patients’ subjective accounts.Footnote 13 And while explanations of the placebo effect side-lined the role of autosuggestion in therapeutic interventions, they clarified neither the source nor the benefits of such self-suggestion.
Social studies suggest that the role of imagination has been overlooked as a key element mediating therapeutic interactions. Phoebe Friesen argues that, rather than being an ‘obstacle’ that modern medicine needed to overcome, imagination ‘is a powerful instrument of healing that can, and ought to be, subjected to experimental investigations.’Footnote 14 At the same time, when the positive role of the placebo effect and self-suggestion has been raised, scholarship has pointed out dilemmas that remain unsolved. For example: is it ethical to give a person a placebo in the conduct of research on non-orthodox therapies, and if so, when is it justifiable and for which conditions? Or, could public authorities justify the use of taxpayers’ money for so-called ‘sham’ treatments when people themselves, empowered by consumer choice rhetoric and patient autonomy, demand it? As elaborated in this chapter, some governments have been challenged for using public money to fund therapies deemed to be ‘unscientific’, while others have tightened control, fearing that self-help gurus, regarded as ‘cultish’ sect-leaders, are exploiting vulnerable patients.
To the extent that the physiological mechanisms of both placebo and nocebo effects remain unclear, there does not seem to be a place in mainstream public healthcare for therapies that do not fit the EBM model, because it is difficult to justify them politically and judicially, especially as healthcare regulations rely heavily on science to demonstrate public accountability.Footnote 15 And yet, while the importance of safety, quality and efficacy of therapeutic practices cannot be easily dismissed, the reliance on EBM as a method to demarcate effective from non-effective therapies dismisses too quickly the reasons why people are attracted to these therapies. When it comes to non-conventional medicines, biomedicine and the scientific method do not factor in issues such as patient choice or the social dimension of medical practice.Footnote 16 In that respect, questions as to how non-conventional medicine knowledges can demonstrate whether or not they are effective signal broader concerns. First, is it possible to disentangle public accountability from its reliance on science in order to resolve the ethical, political, social and cultural dilemmas embedded in the practice of traditional and alternative medicine? Second, if we are to broaden the scope of how evidence is assessed, are there other processes or actors that shape what is considered effective from the perspective of healthcare regulation, for example, patient choice or consumer rights? And, finally, if science is not to be considered the sole arbiter of healing, what spaces are afforded for other epistemologies of healing? Without necessarily answering all of these questions, the aim of this chapter is to signpost a few sticky points in these debates.
The next section explores three examples, at different jurisdictional levels – national, regional and international – of how healthcare regulators have sought to provide guidelines on how to incorporate other types of evidence into research dealing with traditional and non-conventional medicine.
30.4 Integration as Subordination: Guidelines and Regulations on Evidence and Research Methodologies
Traditional medicine has been part of the WHO’s political declarations and strategies born in the lead-up to the 1978 Declaration of Alma-Ata.Footnote 17 Since then, the WHO has been at the forefront of developing regulations aimed at carving out spaces for traditional medicines. However, the organisation has moved away from its original understanding of health, which was more holistic and focused on social practices of healing. Regional political mobilisations underpinned by postcolonial critiques of scientific universalism were gradually displaced by biomedical logics of health from the 1980s onwards.Footnote 18 This approach, favouring biomedical standards of practice, can be appreciated to some extent in the ‘General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine’, which are prefaced by the need to improve research data and methodologies with a view to furthering the regulation and integration of traditional herbal medicines and procedure-based therapies.Footnote 19 The guidelines state that conventional methodologies should not hamper people’s access to traditional therapies and instead reaffirm the plurality of non-orthodox practices.Footnote 20 Noting the great diversity of practices and epistemologies framing traditional medicine, the guidelines re-organised them around two broad classifications: medicines and procedure-based therapies.
Based on these categories, the guidelines suggest that efficacy can be demonstrated through different research methodologies and types of evidence, including historical evidence of traditional use. To ensure safety and efficacy standards are met, herbal medicines ought first to be differentiated through botanical identification based on scientific Latin plant names. Meanwhile, the guidelines leave some room for the use of historical records as traditional evidence of efficacy and safety, which should be demonstrated through a variety of sources including literature reviews, the theories and concepts of systems of traditional medicine, as well as clinical trials. They also affirm that the post-marketing surveillance systems used for conventional medicines are relevant in monitoring, reporting and evaluating adverse effects of traditional medicine.
More importantly, the guidelines contemplate the use of mixed methodologies, whereby EBM can make up for gaps in evidence of efficacy in traditional medicine. And, where claims are based on different traditions, for example, Traditional Chinese Medicine (TCM) and Western Herbalism, the guidelines require evidence linking them together; where there is none, scientific evidence should be the basis. If there are any contradictions between them, ‘the claim used must reflect the truth, on balance of the evidence available’.Footnote 21 Although these research methodologies give the impression of integrating traditional medicine into the mainstream, the guidelines reflect policy transformations since the late 1980s, when plants appeared more clearly as medical objects in the Declaration of Chiang Mai.Footnote 22 Drawing on good manufacturing practice guidelines as tools to assess the safety and quality of medicines, WHO guidelines and declarations between 1990 and 2000 increasingly framed herbal medicines as objects of both pharmacological research and healthcare governance.Footnote 23
The WHO’s approach resonates with contemporary European Union legislation, namely Directive 2004/24/EC on the registration of traditional herbal medicinal products.Footnote 24 This Directive also appears to be more open to qualitative evidence based on historical sources, but ultimately subordinates evidence to the biomedical mantra of safety and quality that characterises the regulation of conventional medicines. Applications for traditional herbal medicines must demonstrate thirty years of traditional use of the herbal substances or combinations thereof, of which fifteen years should be in the European Union (EU). In comparison with conventional medicines, which require multi-phase clinical trials in humans, the Directive simplifies the authorisation procedure by admitting bibliographic evidence of efficacy. However, applications must be supplemented with non-clinical studies – namely, toxicology studies – especially if the herbal substance or preparation is not listed in the Community Pharmacopoeia.Footnote 25 In the end, these regulations subordinate traditional knowledges to the research concepts and methodologies of conventional medicine. Research centres for non-conventional medicines in the EU also align their mission statements with integration-based approaches, whereby the inclusion of traditional and non-conventional medicine is premised on their modernisation through science.Footnote 26 However, as we argue in the next section, science is not the sole arbiter of what comes to be excluded or not in the pursuit of evidence. Indeed, drawing on the UK as a case study, we argue that economic rationalities are part of the regulatory environment shaping what is or is not included as evidence in healthcare research.
30.5 Beyond Evidence: The Economic Reasoning of Clinical Guidelines
Despite there being no specific restrictions preventing the use of non-conventional treatments within the National Health Service (NHS), authorities involved in the procurement of health or social care work have been under increasing pressure to define the hierarchy of scientific evidence in public affairs. For example, under the threat of judicial review, the Charity Commission opened a consultation that produced new guidance for legal caseworkers assessing applications from charities promoting the use of complementary and alternative medicine. Charities have to define their purpose and how this benefits publics. For example, if the declared purpose is to cure cancer through yoga, a charity will have to demonstrate evidence of public benefit, based on accepted sources of evidence and EBM’s ‘recognised scales of evidence’. Although observations, personal testimonies and expert opinion are not excluded per se, they cannot substitute for scientific medical explanation.Footnote 27 For the Commission, claims that fail this scientifically based standard are to be regarded as cultural or religious beliefs.
There have also been more conspicuous ways in which evidence, as understood through a ‘scientific-bureaucratic-medicine’ model, has been used to limit the space for non-conventional medicines.Footnote 28 Clinical guidelines are a key feature of this regulatory model, which has been increasingly institutionalised in the UK since the 1980s. The main body charged with this task is the National Institute for Health and Care Excellence (NICE), a non-departmental public body given statutory footing through the Health and Social Care Act 2012. The purpose of NICE clinical guidelines is to reduce variability in both the quality and availability of treatments and care and to confirm an intervention’s effectiveness. Although not compulsory, compliance with the guidelines is the norm and exceptions are ‘both rare and carefully documented’Footnote 29 because institutional performance is tied to their implementation and non-adherence may have a financial impact.Footnote 30 Following a campaign by ‘The Good Thinking Society’, an anti-pseudoscience charity, NHS bodies across London, Wales and the North of England have stopped funding homeopathic services.Footnote 31 Meanwhile, an NHS England consultation also led to a ban on the prescription of products considered to be of ‘low clinical value’, such as homeopathic and herbal products. Responding to critics, the Department of Health defended its decision to defund non-conventional medicine products, stating they were neither clinically nor cost effective.Footnote 32 However, it is also worth noting that outside of the remit of publicly funded institutions, traditional and non-conventional medicines have been tolerated, or even encouraged, as a solution to relieve the pressure from austerity healthcare policies. For example, the Professional Standards Authority (PSA) has noted that accredited registered health and social care practitioners – which include acupuncturists, sports therapists, aromatherapy practitioners, etc. 
– could help relieve critical demand for NHS services.Footnote 33 This raises questions about what counts as evidence and how different regulators respond to specific practices that are not based on biomedical epistemologies, particularly what sort of research is acceptable in healthcare policy-making. What we have sought to demonstrate in this section is the extent to which, under the current regulatory landscape, the production of knowledge has become increasingly enmeshed with various layers of laws and regulations drafted by state and non-state actors.Footnote 34 Although the discourse has focused on problems with the kind of evidence and research methodologies used by advocates of non-conventional medicine, a bureaucratic application of EBM in the UK has limited access to traditional and non-conventional medicines in the public healthcare sector. In addition to policing the boundaries between ‘fake’ and ‘real’ medicines, clinical guidelines also delimit which therapies should or should not be funded by the state. Thus, this chapter has sketched the links between evidence-based medicine and law, and the processes that influence what kind of research and what kind of evidence are appropriate for the purpose of delivering healthcare. Regulation, whether through laws implementing the EU Directive on the registration of traditional herbal medicinal products or through clinical guidelines produced by NICE, can be seen as operating as a normative force shaping healthcare knowledge production. The final section analyses the social and cultural dimensions of knowledge production and argues that the contemporary regulatory approaches discussed in the preceding sections assume that non-conventional knowledges follow a linear development. Premised upon notions of scientific progress and modernity, this view ultimately fails to grasp the complexity of knowledge-production and the hybrid nature of healing practices.
30.6 Regulating for Uncertainty: Messy Knowledges and Practices
Hope for a cure, dissatisfaction with medical authority, highly bureaucratised healthcare systems and limited access to primary healthcare are among the many reasons that drive people to try untested and unregulated pills and practice-based therapies from traditional and non-conventional medicines. While EBM encourages a regulatory environment averse to miracle medicines, testimonies of overnight cures and home-made remedies, Lucas Richert argues that ‘unknown unknowns fail to dissuade the sick, dying or curious from experimenting with drugs’.Footnote 35 The problem, however, is the assumption that medicines, and also law, progress in a linear trajectory – in other words, that unregulated drugs become regulated through standardised testing and licensing regulations that carefully assess medicines’ quality, safety and efficacy before and after they are approved for the market.Footnote 36
Instead, medicines’ legal status may not always follow this linear evolution. We have argued so far that the regulatory environment of biomedicine demarcates boundaries between legitimate knowledge-makers/objects and illegitimate ones, such as street/home laboratories and self-experimenting patients.Footnote 37 But ‘evidence’ also acts as a signpost for a myriad of battles between different stakeholders (patient groups, doctors, regulators, industry, etc.) to secure some kind of authority over what is legitimate or not.Footnote 38 Thus, by looking beyond laboratories and clinical settings, and expanding the scope of research to the social history of drugs, STS scholarship suggests that the legal regulation of research and medicines is based on more fragmented and dislocated encounters between different social spaces where experimentation happens.Footnote 39 For example, Mei Zhan argues that knowledges are ‘always already impure, tenuously modern, and permanently entangled in the networks of people, institutions, histories, and discourses within which they are produced’.Footnote 40 This means that neither ‘Western’ biomedical science nor ‘traditional’ medicines have ever been static, hermetically sealed spaces. Instead, therapeutic interventions and encounters are often ‘uneven’ and messy, linking dissimilar traditions and bringing together local and global healing practices, to the point that they constantly disturb assumptions about ‘the Great Divides’ in medicine. For example, acupuncture’s commodification and marketisation in Western countries reflects how Traditional Chinese Medicine has been transformed through circulation across time and space, enlisting various types of actors from different professional healthcare backgrounds – such as legitimate physicians, physiotherapists, nurses, etc. – as well as lay people who have not received formal training in a biomedical profession.
New actors with different backgrounds take part in the negotiations for medical legitimacy and authority that are central to the reinvention of traditional and non-conventional medicine. These are processes of ‘translocation’ – understood as the circulation of knowledges across different circuits of exchange value – which reconfigure healing communities worldwide.Footnote 41
So, in the process of making guidelines, decisions and norms about research on traditional and non-conventional medicines, the notion of ‘evidence’ could also signify a somewhat impermanent conclusion to a struggle between different actors. As a social and political space, the integration of traditional and non-conventional medicine is not merely a procedural matter dictated by the logic of the medical sciences. Instead, what is or is not accepted as legitimate is constantly ‘remodelled’ by political, economic and social circumstances.Footnote 42 In that sense, Stacey Langwick argues that evidence stands at the centre of ontological struggles, rather than simply contestations of authority, insofar as it is a ‘highly politicized and deeply intimate battle over who and what has the right to exist’.Footnote 43 For her, the determination of what counts as evidence is at the heart of struggles of postcoloniality. When regulations based on EBM discard indigenous epistemologies of healing, or the hybrid practices of individuals and communities who pick up knowledge in fragmented fashion, they also categorise their experiences, histories and effects as non-events. This denial compounds the political and economic vulnerability of traditional and non-conventional healers insofar as their survival depends on their ability to adapt their practice to conventional medicine by mimicking biomedical practices and norms.Footnote 44 Hence, as Marie-Andrée Jacob argues, the challenge for traditional and non-conventional medicines lies in translating ‘the alternativeness of its knowledge into genuinely alternative research practices’ and in contributing to the reimagining of alternative models of regulation.Footnote 45
This chapter has analysed how regulators respond to questions of evidence in traditional and non-conventional medicines. It argued that regulatory strategies tend to subordinate data that are not based on EBM’s hierarchies of evidence, allowing regulators to demarcate the boundaries of legitimate research and to situate the ‘oddities’ of non-conventional medicines outside of science (e.g. as ‘cultural’ or ‘religious’ issues in the UK’s case). As exemplified through the analysis of specific guidelines and regulations of research on traditional and non-conventional medicines, the regulatory environment favours the translation and transformation of traditional and non-conventional medicines into scientised and commercial versions of themselves as the price of legitimacy and authority. Drawing on STS scholarship, we have suggested understanding these debates as political and social struggles reflecting changes in how people heal themselves and others in social communities that are in constant flux. More importantly, they reflect the struggles of healing communities seeking to establish their own viability and right to exist within the dominant scientific-bureaucratic model of biomedicine. This chapter has teased out the limits of research regulation on non-conventional medicines, insofar as practices and knowledges are already immersed in constantly shifting processes, transformed by the very efforts to pin them down into coherent and artificially closed-off systems. By pointing out the messy configurations of social healing spaces, we hope to open up a space of discussion with the other chapters in this section. Indeed, how can we widen the lens of research regulation, and accommodate non-conventional medicines, without compromising the safety and quality of healthcare interventions? At the very minimum, research on regulation could engage with the social and political context of medicine-taking, and further the understanding of how and why patients seek one therapy over another.
1 P. Lannoye, ‘Report on the Status of Non-Conventional Medicine’, (Committee on the Environment, Public Health and Consumer Protection, 6 March 1997).
2 WHO, ‘WHO Global Report on Traditional and Complementary Medicine 2019’, (WHO, 2019).
3 E. Ernst, ‘Commentary on: Close et al. (2014) A Systematic Review Investigating the Effectiveness of Complementary and Alternative Medicine (CAM) for the Management of Low Back and/or Pelvic Pain (LBPP) in Pregnancy’, (2014) Journal of Advanced Nursing, 70(8), 1702–1716; WHO, ‘General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine’, (WHO, 2000).
4 WHO, ‘Global Report on Traditional and Complementary Medicine 2019’, (WHO, 2019).
5 House of Lords, Select Committee on Science and Technology: Sixth Report (2000, HL).
6 M. K. Sheppard, ‘The Paradox of Non-evidence Based, Publicly Funded Complementary Alternative Medicine in the English National Health Service: An Explanation’, (2015) Health Policy, 119(10), 1375–1381.
7 The International Bioethics Committee (IBC) of the United Nations Educational, Scientific and Cultural Organization (UNESCO), the World Intellectual Property Organisation (WIPO), the World Trade Organisation (WTO) and WHO have stated support for the protection of traditional knowledges, including traditional medicines.
8 Such as the European Red List of Medicinal Plants, which documents species endangered by human economic activities and loss of biodiversity.
9 K. Hansen and K. Kappel, ‘Complementary/Alternative Medicine and the Evidence Requirement’ in M. Solomon et al. (eds), The Routledge Companion to Philosophy of Medicine (New York and Abingdon: Routledge, 2016).
10 M. Zhan, Other-Worldly: Making Chinese Medicine through Transnational Frames (Durham, NC: Duke University Press, 2009); C. Schurr and K. Abdo, ‘Rethinking the Place of Emotions in the Field through Social Laboratories’, (2016) Gender, Place and Culture, 23(1), 120–133.
11 D. L. Sackett et al., ‘Evidence Based Medicine: What It Is and What It Isn’t’, (1996) British Medical Journal, 312(7023), 71–72.
12 R. Porter, The Greatest Benefit to Mankind: A Medical History of Humanity from Antiquity to the Present (New York: Fontana Press, 1999).
13 A. Wahlberg, ‘Above and Beyond Superstition – Western Herbal Medicine and the Decriminalizing of Placebo’, (2008) History of the Human Sciences, 21(1), 77–101; A. Harrington, ‘The Many Meanings of the Placebo Effect: Where They Came From, Why They Matter’, (2006) BioSocieties, 1(2), 181–193; P. Friesen, ‘Mesmer, the Placebo Effect, and the Efficacy Paradox: Lessons for Evidence Based Medicine and Complementary and Alternative Medicine’, (2019) Critical Public Health, 29(4), 435–447.
14 Friesen, ‘Mesmer’, 436.
15 B. Goldacre, ‘The Benefits and Risks of Homeopathy’, (2007) Lancet, 370(9600), 1672–1673.
16 E. Cloatre, ‘Regulating Alternative Healing in France, and the Problem of “Non-Medicine”’, (2018) Medical Law Review, 27(2), 189–214.
17 WHO, ‘Declaration of Alma-Ata, International Conference on Primary Health Care, Alma-Ata, USSR, 6–12 September 1978’, (WHO, 1978).
18 S. Langwick, ‘From Non-aligned Medicines to Market-Based Herbals: China’s Relationship to the Shifting Politics of Traditional Medicine in Tanzania’, (2010) Medical Anthropology, 29(1), 15–43.
19 WHO, ‘General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine’, (WHO, 2000).
21 Ibid., 42.
22 O. Akerele et al. (eds) Conservation of Medicinal Plants (Cambridge University Press, 1991).
23 M. Saxer, Manufacturing Tibetan Medicine: The Creation of an Industry and the Moral Economy of Tibetanness (New York: Berghahn Books, 2013).
24 Directive 2004/24/EC of the European Parliament and of the Council of 31 March 2004 amending, as regards traditional herbal medicinal products, Directive 2001/83/EC on the Community code relating to medicinal products for human use, OJ 2004 No. L136, 30 April 2004.
25 T. P. Fan et al., ‘Future Development of Global Regulations of Chinese Herbal Products’, (2012) Journal of Ethnopharmacology, 140(3), 568–586.
26 V. Fønnebø et al., ‘Legal Status and Regulation of CAM in Europe Part II – Herbal and Homeopathic Medicinal Products’, (CAMbrella, 2012).
27 Charity Commission for England and Wales, ‘Operational Guidance (OG) 304 Complementary and Alternative Medicine’, (Charity Commission for England and Wales, 2018).
28 S. Harrison and K. Checkland, ‘Evidence-Based Practice in UK Health Policy’ in J. Gabe and M. Calnan (eds), The New Sociology of Health Service (Abingdon: Routledge, 2009).
29 Ibid., p. 126.
30 R. McDonald and S. Harrison, ‘The Micropolitics of Clinical Guidelines: An Empirical Study’, (2004) Policy and Politics, 32(2), 223–239.
31 The Good Thinking Society, ‘NHS Homeopathy Spending’, (The Good Thinking Society, 2018), www.goodthinkingsociety.org/projects/nhs-homeopathy-legal-challenge/nhs-homeopathy-spending/.
32 UK Government and Parliament, ‘Stop NHS England from Removing Herbal and Homeopathic Medicines’, (UK Government and Parliament, 2017), www.petition.parliament.uk/petitions/200154.
33 Professional Standards Authority, ‘Untapped Resources: Accredited Registers in the Wider Workforce’, (Professional Standards Authority, 2017).
34 M. Jacob, ‘The Relationship between the Advancement of CAM Knowledge and the Regulation of Biomedical Research’ in J. McHale and N. Gale (eds), The Routledge Handbook on Complementary and Alternative Medicine: Perspectives from Social Science and Law (Abingdon: Routledge, 2015), p. 359.
35 L. Richert, Strange Trips: Science, Culture, and the Regulation of Drugs (Montreal: McGill-Queen’s University Press, 2018), p. 174.
36 J. Barnes, ‘Pharmacovigilance of Herbal Medicines: A UK Perspective’, (2003) Drug Safety, 26(12), 829–851.
37 E. Cloatre, ‘Law and Biomedicine and the Making of “Genuine” Traditional Medicines in Global Health’, (2019) Critical Public Health, 29(4), 424–434.
38 Richert, Strange Trips, pp. 56–76.
39 J. Kim, ‘Alternative Medicine’s Encounter with Laboratory Science: The Scientific Construction of Korean Medicine in a Global Age’, (2007) Social Studies of Science, 37(6), 855–880.
40 Zhan, Other-Worldly, p. 72.
41 Ibid., p. 18.
42 Richert, Strange Trips, p. 172.
43 S. A. Langwick, Bodies, Politics and African Healing: The Matter of Maladies in Tanzania (Indiana University Press, 2011), p. 233.
44 Ibid., p. 223.
45 Jacob, ‘CAM Knowledge’, p. 358.
31.1 IntroductionFootnote 1
Over the last decade or so, sociologists and other social scientists concerned with the development and application of biomedical research have come to explore the lived realities of regulation and governance in science. In particular, the instantiation of ethics as a form of governance within scientific practice – via, for instance, research ethics committees (RECs) – has been extensively interrogated.Footnote 2 Social scientists have demonstrated the reciprocally constitutive nature of science and ethics, which renders problematic any assumption that ethics simply follows (or stifles) science in any straightforward way.Footnote 3
This chapter draws on and contributes to such discussion through analysing the relationship between neuroscience (as one case study of scientific work) and research ethics. I draw on data from six focus groups with scientists in the UK (most of whom worked with human subjects) to reflect on how ethical questions and the requirements of RECs as a form of regulation are experienced within (neuro)science. The focus groups were conducted in light of a conceptual concern with how ‘issues and identities interweave’; i.e. how personal and professional identities relate to how particular matters of concern are comprehended and engaged with, and how those engagements themselves participate in the building of identities.Footnote 4 The specific analysis presented is informed by the work of science and technology studies (STS) scholar Sheila Jasanoff and other social scientists who have highlighted the intertwinement of knowledge with social order and practices.Footnote 5 In what follows, I explore issues that the neuroscientists I spoke with deem to be raised by their work, and characterise how both informal ideas about ethics and formal ethical governance (e.g. RECs) are experienced and linked to their research. In doing so, I demonstrate some of the lived realities of scientists who must necessarily grapple with the heterogeneous forms of health-related research regulation the editors of this volume highlight in their Introduction, while seeking to conduct research with epistemic and social value.Footnote 6
31.2 Negotiating the Ethical Dimensions of Neuroscience
It is well known that scientists are not lovers of the bureaucracies of research management, which are commonly taken to include the completion of ethical review forms. This was a topic of discussion in the focus groups: one scientist, for instance, spoke of the ‘dread’ (M3, Group 5) felt at the prospect of applying for ethical approvals. Such an idiom will no doubt be familiar to many lawyers, ethicists and regulatory studies scholars who have engaged with life scientists about the normative dimensions of their work.
Research governance – specifically, ethical approvals – could, in fact, be seen as having the potential to hamper science, without necessarily making it more ethical. In one focus group (Group 1), three postdoctoral neuroscientists discussed the different terms ethics committees had asked them to use in recruitment materials. One scientist (F3) expressed irritation that another (F2) was required to alter a recruitment poster, in order that it clearly stated that participants would receive an ‘inconvenience allowance’ rather than be ‘paid’. The scientists did not think that this would facilitate recruitment into a study, nor enable it to be undertaken any more ethically. F3 described how ‘it’s just so hard to get subjects. Also if you need to get subjects from the general public, you know, you need these tricks’. It was considered that changing recruitment posters would not make the research more ethical – but it might prevent it happening in the first place.
All that being said, scientists also feel motivated to ensure their research is conducted ‘ethically’. As the power of neuroimaging techniques increases, it is often said that it becomes all the more crucial for neuroscientists to engage with ethical questions.Footnote 7 The scientists in my focus groups shared this sentiment, commonly expressed by senior scientists and ethicists. As one participant reflected, ‘the ethics and management of brain imaging is really becoming a very key feature of […] everyday imaging’ (F2, Group 4). Another scientist (F1, Group 2) summarised the perspectives expressed by all those who participated in the focus groups:
I think the scope of what we can do is broadening all the time and every time you find out something new, you have to consider the implications on your [research] population.
What scientists consider to be sited within the territory of the ‘ethical’ is wide-ranging, underscoring the scope of neuroscientific research, and the diverse institutional and personal norms through which it is shaped and governed. One researcher (F1, Group 2) reflected that ethical research was not merely that which had been formally warranted as such:
I think when I say you know ‘ethical research’, I don’t mean research passed by an ethics committee I mean ethical to what I would consider ethical and I couldn’t bring myself to do anything that I didn’t consider ethical in my job even if it’s been passed by an ethics committee. I guess researchers should hold themselves to that standard.
Conflicts between what was formally deemed ethical and what scientists felt was ethical were not altogether rare. In particular, instances of unease and ambivalence around international collaboration were reflected upon in some of the focus group discussions. Specifically, these were in relation to collaboration with nations that the scientists perceived as having relatively lax ethical governance as compared to the UK. This could leave scientists with a ‘slight uneasy feeling in your stomach’ (F2, Group 4). Despite my participants constructing some countries as being more or less ‘ethical’, no focus group participant described any collaborations having collapsed as a consequence of diverging perspectives on ethical research. However, the possibility that differences between nations exist, and that these differences could create problems in collaboration, was important to the scientists I spoke with. There was unease attached to collaborating with a ‘country that doesn’t have the same ethics’ (F2, Group 4). To an extent, then, an assumption of a shared normative agenda seemed to have significance as an underpinning for cross-national team science.
The need to ensure confidentiality while also sharing data with colleagues and collaborators was another source of friction. This was deemed to be a particularly acute issue for neuroscience, since neuroimaging techniques were seen as being able to generate and collect particularly sensitive information about a person (given both the biological salience of the brain and the role of knowledge about it in crafting identities).Footnote 8 The need to separate data from anything that could contribute to identifying the human subject it was obtained from impacted scientists’ relationships with their research. In one focus group (Group 3), M3 pointed out that no longer were scientists owners of data, but rather, they were responsible chaperones for it.
Fears were expressed in the focus groups that neuroscientific data might inadvertently impact upon research participants, for instance, affecting their hopes for later life, legal credibility and insurance premiums. Echoing concerns raised in both ethics and social scientific literatures, my participants described a wariness about any attempt to predict ‘pathological’ behaviours, since this could result in the ‘labelling’ (F1, Group 4) or ‘compartmentalising’ (F2, Group 4) of people.Footnote 9 As such, these scientists avoided involving themselves in research that necessarily entailed children, prisoners, or ‘vulnerable people’ (F2, Group 4). Intra-institutional tensions could emerge when colleagues were carrying out studies that the scientists I spoke with did not regard as ethically acceptable.
Some focus group participants highlighted the hyping of neuroscience, and argued that it was important to resist this.Footnote 10 These scientists nevertheless granted the possibility that some of the wilder promises made about neuroscience (e.g. ‘mind reading’) could one day be realised – generating ethical problems in the process:
there’s definitely a lot of ethical implications on that in terms of what the average person thinks that these methods can do and can’t do, and what they actually can do. And if the methods should get to the point where they could do things like that, to what extent is it going to get used in what way. (F1, Group 1)
Scientists expressed anxiety about ‘develop[ing] your imaging techniques’ but then being unable to ‘control’ the application of these (F2, Group 4). Yet, not one of my participants stated that limits should be placed on ‘dangerous’ research. Developments in neuroscience were seen neither as intrinsically good nor as essentially bad, with nuclear power sometimes invoked as a similar example of how, to their mind, normativity adheres to deployments of scientific knowledge rather than its generation. More plainly: the rightness or wrongness of new research findings was believed to ‘come down to the people who use it’ (F1, Group 1), not to the findings per se. Procedures almost universally mandated by RECs were invoked as a way of giving licence to research: ‘a good experiment is a good experiment as long as you’ve got full informed consent, actually!’ (F1, Group 3). Another said:
I think you can research any question you want. The question is how you design your research, how ethical is the design in order to answer the question you’re looking at. (F2, Group 2)
Despite refraining from some areas of work themselves due to the associated social and ethical implications, my participants either found it difficult to think of anything that should not be researched at all, or asserted that science should not treat anything as ‘off-limits’. One scientist laughed in mock horror when asked if there were any branches of research that should not be progressed: ‘Absolutely not!’ (F1, Group 3). This participant described how ‘you just can’t stop research’, and prohibitions in the UK would simply mean scientists in another country would conduct those studies instead. In this specific respect, ethical issues seemed to be somewhat secondary to the socially produced sense of competition that appears to drive forward much biomedical research.
31.3 Incidental Findings within Neuroimaging Research
The challenge of what to do with incidental findings is a significant one for neuroscientists, and a matter that has exercised ethicists and lawyers (see Postan, Chapter 23 in this volume).Footnote 11 They pose a particular problem for scientists undertaking brain imaging. Incidental findings have been defined as ‘observations of potential clinical significance unexpectedly discovered in healthy subjects or in patients recruited to brain imaging research studies and unrelated to the purpose or variables of the study’.Footnote 12 The possibilities and management of incidental findings were key issues in the focus group discussions I convened, with a participant in one group terming them ‘a whole can of worms’ (F1, Group 3). Another scientist reflected on the issue, and their talk underscores the affective dimensions of ethically challenging situations:
I remember the first time [I discovered an incidental finding] ’cos we were in the scanner room we were scanning the child and we see it online basically, that there might be something. It’s a horrible feeling because you then, you obviously at this point you know the child from a few hours, since a few hours already, you’ve been working with the child and it’s … you have a personal investment, emotional investment in that already but the important thing is then once the child comes out of the scanner, you can’t say anything, you can’t let them feel anything, you know realise anything, so you have to be just really back to normal and pretend there’s nothing wrong. Same with the parents, you can’t give any kind of indication to them at all until you’ve got feedback from an expert, which obviously takes so many days, so on the day you can’t let anything go and no, yeah it was, not a nice experience. (F2, Group 2)
Part of the difficulties inherent in this ethically (and emotionally) fraught area lies in the relationality between scientist and research subject. Brief yet close relationships between scientists and those they research are necessary to ensure the smooth running of studies.Footnote 13 This intimacy, though, makes the management of incidental findings even more challenging. Further, the impacts of ethically significant issues on teamwork and collaboration are complex; for instance, what happens if incidental findings are located in the scans of co-workers, rather than previously unknown research subjects? One respondent described how these would be ‘even more difficult to deal with’ (F1, Group 1). Others reflected that they would refrain from ‘helping out’ by participating in a colleague’s scan when, for instance, refining a protocol. This was due to the potential of neuroimaging to inadvertently reveal bodily or psychological information that they would not want their colleagues to know.
The challenge of incidental findings is one that involves a correspondence between a particular technical apparatus (i.e. imaging methods that could detect tumours) and an assemblage of normative imperatives (which perhaps most notably includes a duty of care towards research participants). This correspondence is reciprocally impactful: as is well known, technoscientific advances shift the terrain of ethical concern – but so too does the normative shape the scientific. In the case of incidental findings, for example, scientists increasingly felt obliged to cost an (expensive) radiologist into their grants, to inspect each participant’s scan; a scientist might ‘feel uncomfortable showing anybody their research scan without having had a radiologist look at it to reassure you it was normal’ (F1, Group 3). Hence, ‘to be truly ethical puts the cost up’ (F2, Group 4). Not every scientist is able to command such sums from funders, who might also demand more epistemic bang for the buck when faced with increasingly costly research proposals. What we can know is intimately linked to what we can, and are willing to, spend. And if being ‘truly ethical’ indeed ‘puts the cost up’, then what science is sponsored, and who undertakes this, will be affected.
31.4 Normative Uncertainties in Neuroscience
Scientific research using human and animal subjects in the UK is widely felt to be an amply regulated domain of work. We might, then, predict that issues like incidental findings can be rendered less challenging to deal with through recourse to governance frameworks. Those neuroscientists who exclusively researched animals indeed regarded the parameters and procedures defining what was acceptable and legal in their work to be reasonable and clear. In fact, strict regulation was described as enjoining self-reflection about whether the science they were undertaking was ‘worth doing’ (F1, Group 6). This was not, however, the case for my participants working with humans. Rather, they regarded regulation in general as complicated, as well as vague: in the words of two respondents, ‘too broad’ and ‘open to interpretation’ (F1, Group 2), and ‘a bit woolly’ and ‘ambiguous’ (F2, Group 2). Take, for instance, the Data Protection Act: in one focus group (Group 3) a participant (F1) noted that a given university would ‘take their own view’ about what was required by the Act, with different departments and laboratories in turn developing further – potentially diverging – interpretations.
Within the (neuro)sciences, procedural ambiguity can exist in relation to what scientists, practically, should do – and how ethically valorous it is to do so. Normative uncertainty can be complicated further by regulatory multiplicity. The participants of one focus group, for example, told me about three distinct yet ostensibly nested ethical jurisdictions they inhabited: their home department of psychology, their university medical school and their local National Health Service Research Ethics Committee (NHS REC). The scientists I spoke with understood these to have different purviews, with different procedural requirements for research, and different perspectives on the proper enactment of ethical practices, such as obtaining informed consent in human subjects research.
Given such normative uncertainty, scientists often developed what we might term ‘ethical workarounds’. By this, I mean that they sought to navigate situations where they were unsure of what, technically, was the ‘right’ thing to do by establishing their own individual and community norms for the ethical conduct of research, which might only be loosely connected to formal requirements. In sum, they worked around uncertainty by developing their own default practices that gave them some sense of surety. One participant (F1, Group 2) described this in relation to drawing blood from people who took part in her research. To her mind, this should be attempted only twice before being abandoned. She asserted that this was not formally required by any research regulation, but instead was an informal standard to which she and colleagues nevertheless adhered.
In the same focus group discussion, another scientist articulated a version of regulatory underdetermination to describe the limits of governance:
not every little detail can be written down in the ethics and a lot of it is in terms of if you’re a researcher you have to you know make your mind up in terms of the ethical procedures you have to adhere to yourself and what would you want to be done to yourself or not to be done … (F2, Group 2)
Incidental findings were a key example of normative uncertainty and the ethical workarounds that resulted from this. Although ‘not every little detail can be written down’, specificity in guidelines can be regarded as a virtue in research that is seen to have considerable ethical significance, and where notable variations in practice were known to exist. The scientist quoted above also discussed how practical and ethical decisions must be made as a result of the detection of clinically relevant incidental findings, but that their precise nature was uncertain: scientists were ‘struggling’ due to being ‘unsure’ what the correct course of action should be. Hence, ‘proper guidelines’ were invoked as potentially useful, but these were seemingly considered to be hard to come by.
The irritations stimulated by a perceived lack of clarity on the ethically and/or legally right way to proceed are similarly apparent in the response of this scientist to a question about her feelings upon discovering, for the first time, a clinically relevant incidental finding in the course of her neuroimaging work:
It was unnerving! And also because it was the first time I wasn’t really sure how to deal with it all, so I had to go back in the, see my supervisor and talk to them about it and, try to find out how exactly we’re dealing now with this issue because I wasn’t aware of the exact clear guidelines. (F2, Group 2)
Different scientists and different institutions were reported to have ‘all got a different way of handling’ (F2, Group 4) the challenge of incidental findings. Institutional diversity was foregrounded, such as in the comments of F1 (Group 1). She described how when working at one US university ‘there was always a doctor that had to read the scans so it was just required’. She emphasised how there was no decision-making around this on behalf of the scientist or the research participant: it was simply a requirement. On the other hand, at a different university this was not the case – no doctor was on call to assess neuroimages for incidental findings.
An exchange between two researchers (F1 and F2, Group 2) also illustrates the problems of procedural diversity. Based in the same university but in different departments, they discussed how the complexities of managing incidental findings were related, in part, to practices of informed consent. The dialogue is too lengthy to reproduce fully here, but two key features stood out. First, differences existed in whether the study team would, in practice, inform a research subject’s physician in the event of an incidental finding: in F2’s case, it was routine for the physician to be contacted, but F1’s participants could opt out of this. However, obtaining physician contact details was itself a tricky undertaking:
we don’t have the details of the GP so if we found something we would have to contact them [the participant] and we’d have to ask them for the GP contact and in that case they could say no, we don’t want to, so it’s up to them to decide really, but we can’t actually say anything directly to them what we’ve found or what we think there might be because we don’t know, ’cos the GP then will have to send them to proper scans to determine the exact problem, ’cos our scans are obviously not designed for any kind of medical diagnosis are they? So I suppose they’ve still got the option to say no. (F2, Group 2)
It is also worth noting at this point the lack of certitude of the scientists I spoke with about where directives around ethical practice came from, and what regulatory force these had. F1 (Group 1) and F2 (Group 2) above, for instance, spoke about how certain processes were ‘just required’ or how they ‘have to’ do particular things to be ‘ethical’. This underscores the proliferation and heterogeneity of regulation the editors of this volume note in their Introduction, and the challenges that busy and already stretched professionals face in comprehending and negotiating it in practice.
The ethical aspects of science often require discursive and institutional work to become recognised as such, and managed thereafter. In other words, for an issue to be regarded as specifically ethical, scientists and universities need to, in some sense, agree that it is; matters that ethicists, for instance, might take almost for granted as being intrinsically normative can often escape the attention of scientists themselves. After an issue has been characterised by researchers as ethical, addressing it can necessitate bureaucratic innovation, and the reorganisation of work practices (including new roles and changing responsibilities). Scientists are not always satisfied with the extent to which they are able, and enabled, to make these changes. The ethics of neuroscience, and the everyday conversations and practices that come into play to deal with them, can also have epistemic effects: ethical issues can and do shape scientists’ relationships with their work, research participants, and processes of knowledge-production itself.
The scientists I spoke with listed a range of issues as having ethical significance, to varying degrees. Key among these were incidental findings. The scientists also engaged in what sociologist Steven Wainwright and colleagues call ‘ethical boundary work’; i.e. they sometimes erected boundaries between scientific matters and normative concerns, but also collapsed these when equating good science with ethical science.Footnote 14 This has the effect of enabling scientists to present research they hold in high regard as being normatively valorous, while also bracketing off ethical questions they consider too administratively or philosophically challenging to deal with as being insufficiently salient to science itself to necessitate sustained engagement.
Still, though, ethics is part and parcel of scientific work and of being a scientist. Normative reflection is, to varying degrees, embedded within the practices of researchers, and can surface not only in focus group discussions but also in corridor talk and coffee room chats. This is, in part, a consequence of the considerable health-related research regulation to which scientists are subject. It is also a consequence of the fact that scientists are moral agents: people who live and act in a world with other persons, and who have an everyday sense of right and wrong. This sense is inevitably and essentially context-dependent; it inflects their scientific practice and is contoured in turn by it. It is these interpretations of regulation in conjunction with the mundane normativity of daily life that intertwine to constitute scientists’ ethical actions within the laboratory and beyond, and in particular that cultivate their ethical workarounds in conditions of uncertainty.
In this chapter I have summarised and discussed data regarding how neuroscientists construct and regard the ethical dimensions of their work, and reflected on how they negotiate health-related research regulation in practice. Where does this leave regulators? For a start, we need more sustained, empirical studies of how scientists comprehend and negotiate the ethical dimensions of their research in actual scientific work, in order to ground the development and enforcement of regulation.Footnote 15 What is already apparent, however, is that any regulation that demands actions requiring sharp changes in practice, to no clear benefit to research participants, scientists, or wider society, is unlikely to invite adherence. Nor are frameworks likely to do so if they demand that scientists act in ways they consider unethical, or if they place unrealistic burdens on scientists (e.g. liaising with GPs without the knowledge of research participants) that leave them anxious and afraid that they are, for instance, ‘breaking the law’ when failing to act in a practically unfeasible way.
It is important to recognise that scientists bring to bear their everyday ethical expertise to their research, and it is vital that this is worked with rather than ridden over. At the same time, it takes a particular kind of scientist to call into question the ethical basis of their research or that of close colleagues, not least given an impulse to conflate good science with ethical science. Consequently, developing regulation in close collaboration with scientists also needs the considered input of critical friends to both regulators and to life scientists (including but not limited to social scientific observers of the life sciences). This would help mitigate the possibility of the inadvertent reworking or even subverting of regulation designed to protect human subjects by well-meaning scientists who inevitably want to do good (in every sense of the word) research.
1 This chapter revisits and reworks a paper previously published as: M. Pickersgill, ‘The Co-production of Science, Ethics and Emotion’, (2012) Science, Technology & Human Values, 37(6), 579–603. Data are reproduced by kind permission of the journal and content used by permission of the publisher, SAGE Publications, Inc.
2 M. M. Easter et al., ‘The Many Meanings of Care in Clinical Research’, (2006) Sociology of Health & Illness, 28(6), 695–712; U. Felt et al., ‘Unruly Ethics: On the Difficulties of a Bottom-up Approach to Ethics in the Field of Genomics’, (2009) Public Understanding of Science, 18(3), 354–371; A. Hedgecoe, ‘Context, Ethics and Pharmacogenetics’, (2006) Studies in History and Philosophy of Biological and Biomedical Sciences, 37(3), 566–582; A. Hedgecoe and P. Martin, ‘The Drugs Don’t Work: Expectations and the Shaping of Pharmacogenetics’, (2003) Social Studies of Science, 33(3), 327–364; B. Salter ‘Bioethics, Politics and the Moral Economy of Human Embryonic Stem Cell Science: The Case of the European Union’s Sixth Framework Programme’, (2007) New Genetics & Society, 26(3), 269–288; S. Sperling, ‘Managing Potential Selves: Stem Cells, Immigrants, and German Identity’, (2004) Science & Public Policy, 31(2), 139–149; M. N. Svendsen and L. Koch, ‘Between Neutrality and Engagement: A Case Study of Recruitment to Pharmacogenomic Research in Denmark’, (2008) BioSocieties, 3(4), 399–418; S. P. Wainwright et al., ‘Ethical Boundary-Work in the Embryonic Stem Cell Laboratory’, (2006) Sociology of Health & Illness, 28(6), 732–748.
3 M. Pickersgill, ‘From “Implications” to “Dimensions”: Science, Medicine and Ethics in Society’, (2013) Health Care Analysis, 21(1), 31–42.
4 C. Waterton and B. Wynne, ‘Can Focus Groups Access Community Views?’ in R. S. Barbour and J. Kitzinger (eds), Developing Focus Group Research: Politics, Theory and Practice (London: Sage, 1999), pp. 127–143, 142. The methodology of these focus groups is more fully described in the following: M. Pickersgill et al., ‘Constituting Neurologic Subjects: Neuroscience, Subjectivity and the Mundane Significance of the Brain’, (2011) Subjectivity, 4(3), 346–365; M. Pickersgill et al., ‘The Changing Brain: Neuroscience and the Enduring Import of Everyday Experience’, (2015), Public Understanding of Science, 24(7), 878–892; Pickersgill, ‘The Co-production of Science’.
5 S. Jasanoff (ed.), States of Knowledge: The Co-Production of Science and Social Order (London: Routledge, 2004), pp. 1–12; P. Brodwin, ‘The Coproduction of Moral Discourse in US Community Psychiatry’, (2008) Medical Anthropology Quarterly, 22(2), 127–147.
6 See Introduction of this volume; A. Ganguli-Mitra, et al., ‘Reconfiguring Social Value in Health Research through the Lens of Liminality’, (2017) Bioethics, 31(2), 87–96.
7 M. J. Farah, ‘Emerging Ethical Issues in Neuroscience’, (2002) Nature Neuroscience, 5(11), 1123–1129; T. Fuchs, ‘Ethical Issues in Neuroscience’, (2006) Current Opinion in Psychiatry, 19(6), 600–607; J. Illes and É. Racine, ‘Imaging or Imagining? A Neuroethics Challenge Informed by Genetics’, (2005) American Journal of Bioethics, 5(2), 5–18.
8 E. Postan, ‘Defining Ourselves: Personal Bioinformation as a Tool of Narrative Self-conception’, (2016) Journal of Bioethical Inquiry, 13(1), 133–151. See also Postan, Chapter 23 in this volume.
9 Farah, ‘Emerging Ethical Issues’; Illes and Racine, ‘Imaging or Imagining?’; M. Gazzaniga, The Ethical Brain (Chicago: Dana Press, 2005).
10 Hedgecoe and Martin, ‘The Drugs Don’t Work’, 8.
11 T. C. Booth et al., ‘Incidental Findings in “Healthy” Volunteers during Imaging Performed for Research: Current Legal and Ethical Implications’, (2010) British Journal of Radiology, 83(990), 456–465; N. A. Scott et al., ‘Incidental Findings in Neuroimaging Research: A Framework for Anticipating the Next Frontier’, (2012) Journal of Empirical Research on Human Research Ethics, 7(1), 53–57; S. A. Tovino, ‘Incidental Findings: A Common Law Approach’, (2008) Accountability in Research, 15(4), 242–261.
12 J. Illes et al., ‘Incidental Findings in Brain Imaging Research’, (2006) Science, 311(5762), 783–784, 783.
13 S. Cohn, ‘Making Objective Facts from Intimate Relations: The Case of Neuroscience and Its Entanglements with Volunteers’, (2008) History of the Human Sciences, 21(4), 86–103; S. Shostak and M. Waggoner, ‘Narration and Neuroscience: Encountering the Social on the “Last Frontier of Medicine”’, in M. D. Pickersgill and I. van Keulen, (eds), Sociological Reflections on the Neurosciences (Bingley: Emerald, 2011), pp. 51–74.
14 Wainwright et al., ‘Ethical Boundary-Work’.
15 M. Pickersgill et al., ‘Biomedicine, Self and Society: An Agenda for Collaboration and Engagement’, (2019) Wellcome Open Research, 4(9).
Global health emergencies (GHEs) are situations of heightened and widespread health crisis that usually require the attention and mobilisation of actors and institutions beyond national borders. Conducting research in such contexts is ethically imperative, but it also requires particular ethical and regulatory scrutiny. While global health emergency research (GHER) serves a crucial function of learning how to improve care and services for individuals and communities affected by war, natural disasters or epidemics, conducting research in such settings is also challenging at various levels. Logistics are difficult, funding is elusive, risks are elevated and likely to fluctuate, social and institutional structures are particularly strained, and infrastructure may be destroyed. GHER is diverse. It includes biomedical research, such as studies on novel vaccines and treatments, or on appropriate humanitarian and medical responses. Research might also include the development of novel public health interventions, or measures to strengthen public health infrastructure and capacity building. Social science and humanities research might also be warranted, in order to develop future GHE responses that better support affected individuals and populations. Standard methodologies, including those related to ethical procedures, might be particularly difficult to implement in such contexts.
The ethics of GHER relates to a variety of considerations. First are the ethical and justice-based justifications to conduct research at all in conditions of emergency. Second, the ethics of GHER considers whether research is designed and implemented in an ethically robust manner. Finally, ethical issues also relate to questions arising in the course of carrying out research studies. GHER is characterised by a heterogeneity (of risk, nature, contexts, urgency, scope) which itself gives rise to various kinds of ethical implications:Footnote 1 why research is done, who conducts research, where and when it is conducted, what kind of research is done and how. It is therefore difficult to fully capture the range of ethical considerations that arise, let alone provide a one-size-fits-all solution to such questions. Using illustrations drawn from research projects conducted during GHEs, we discuss key ethical and governance concerns arising in GHER – beyond those traditionally associated with biomedical research – and explore the future direction of oversight for GHER. After setting out the complex context of GHER, we illustrate the various ethical issues associated with justifying research, as well as considerations related to context, social value and engagement with the affected communities. Finally, we explore some of the new orientations and lenses in the governance of GHER through recent guidelines and emerging practices.
32.2 The Context of Global Health Emergency Research
GHEs are large-scale crises that affect health and that are of global concern (epidemics, pandemics, as well as health-related crises arising from conflicts, natural disasters or forced displacement). They are characterised by various kinds of urgency, driven by the need to rapidly and appropriately respond to the needs of affected populations. However, effective responses, treatments or preventative measures require solid evidence bases, and the establishment of such knowledge is heavily dependent on findings from research (including biomedical research) carried out in contexts of crises.Footnote 2 As the Council for International Organizations of Medical Sciences (CIOMS) guidelines point out: ‘disasters can be difficult to prevent and the evidence base for effectively preventing or mitigating their public health impact is limited’.Footnote 3 Generating relevant knowledge in emergencies is therefore necessary to enhance the care of individuals and communities, for example through treatments, vaccines or improved palliative care. Research can also consolidate preparedness for public health and humanitarian responses (including triage protocols) and contribute to capacity building (for example, by training healthcare professionals) in order to strengthen health systems in the long run. Ethical considerations and regulation must therefore adapt to immediate and urgent issues, as well as contribute to developing sustainable and long-term processes and practices.
Adding to this is the fact that responses to GHEs involve a variety of actors: humanitarian responders, health professionals, public health officials, researchers, media, state officials, armed forces, national governments and international organisations. Actors conducting both humanitarian work and research can encounter particular ethical challenges, given the very different motivations behind response and research. Such dual roles might, at times, pull in different directions and therefore warrant added ethical scrutiny and awareness, even where such actors might be best placed to deliver both aims, given their presence and knowledge of the context, and especially if they have existing relationships with affected communities.Footnote 4 Medical and humanitarian responses to GHEs are difficult contexts for ethical deliberation – for ethics review and those involved in research governance – where various kinds of motivations and values collide, potentially giving rise to conflicting values and aims, or to incompatible lines of accountabilityFootnote 5 (for example, towards humanitarian versus research organisations or towards national authorities versus international organisations).
Given the high level of contextual and temporal complexity, and the heightened vulnerability to harm of those affected by GHEs, there is a broad consensus within the ethics literature that research carried out in such contexts requires both a higher level of justification and careful ongoing ethical scrutiny. Attention to vulnerability is, of course, not new to research ethics. It has catalysed many developments in the field, such as the establishment of frameworks, principles and rules aiming to ensure that participants are not at risk of additional harm, and that their interests are not sacrificed to the needs and goals of research. It has also been a struggle in research governance, however, to find appropriate regulatory and oversight measures that are not overly protectionist; ones that do not stereotype and silence individuals and groups but ensure that their interests and well-being are protected. The relationship between research and vulnerability becomes particularly knotty in contexts of emergency. How should we best attend to vulnerability when it is pervasive?Footnote 6 On the one hand, all participants are in a heightened context of vulnerability when compared to populations under ordinary circumstances. On the other hand, those individuals who suffer from systematic and structural inequality, disadvantage and marginalisation will also see their potential vulnerabilities exacerbated in conditions of public health and humanitarian emergencies. The presence of these multiple sources and forms of vulnerability adds to the difficulty in determining whether research and its design are ethically justified.
32.3 Justifying Research: Why, Where and When?
While research is rightly considered an integral part of humanitarian and public health responses to GHEs,Footnote 7 and while there may indeed, as the WHO suggests, be an ‘ethical obligation to learn as much as possible, as quickly as possible’,Footnote 8 research must be ethically justified on various fronts. At a minimum, GHER must not impede current humanitarian and public health responses, even as it is deployed with the aim of improving future responses. Nor should it drain existing resources and skills. Additionally, the social value of such research derives from its relevance to the particular context and the crisis at hand.Footnote 9 Decisions regarding location, recruitment of participants, as well as study design (including risk–benefit calculations) must ensure that scientific and social value are not compromisedFootnote 10 in the process. The Working Group on Disaster Research Ethics (WGDRE), formed in response to the 2004 Indian Ocean tsunami, has argued that while ethical research can be conducted in contexts of emergencies, such research must respond to local needs and priorities, in order to avoid being opportunistic.Footnote 11 Similar considerations were reiterated during the 2014–2016 Ebola outbreak. Concern was expressed that ‘some clinical sites could be perversely incentivized to establish research collaborations based on resources promised, political pressure or simply the powers of persuasion of prospective researchers – rather than a careful evaluation of the merits of the science or the potential benefit for patients. Some decision-makers at clinical sites may not have the expertise to evaluate the scientific merits of the research being proposed’.Footnote 12 Such observations reflect considerations that have been identified in a range of GHE settings.
The question of social value is not only related to the ultimate or broad aims of research. Specific research questions can only be justified if these cannot be investigated in non-crisis conditions,Footnote 13 and, as specified above, where the answers to such questions are expected to be of benefit to the community in question – or to relevantly similar communities – be it during the current emergency or in the future. Relatedly, research should be conducted within settings that are most likely to benefit from the generation of such knowledge, perhaps because they are the site of cyclical disasters or endemic outbreaks that frequently disrupt social structures. Given the heightened precarity of GHE contexts, the risk of exposing study participants to additional harm is particularly salient, and such potential risk must therefore be systematically justified. If considerations of social value are key, these need to extend to priority-setting in GHER. Yet, the funding and development of GHER is not immune to how research priority is set globally. Consequently, this divergence (between the kind of research that is currently being funded and developed, and the research that might be required in specific contexts of crisis) will present particular governance challenges at the local, national and global levels. Stakeholders from contexts of scarce resources have warned that priority-setting in GHE might mirror broader global research agendas, where the health concerns and needs of low- and middle-income countries (LMICs) are systematically given lower priority.Footnote 14 The global research agenda is not generally directed by the specific needs arising from crises (especially crises in resource-poor contexts), and yet the less well-funded and less resilient health systems of LMICs frequently bear the brunt and severity of crises.
The ethical challenges associated with conducting research in contexts of crisis are therefore consistently present at all levels: from the broader global research agenda to the choice of context and participants, and from how research is designed and conducted to how research data and findings are used and shared.
32.4 Justifying Research: What and How?
GHER includes a wide range of activities, from minimally invasive collection of dataFootnote 15 and systems research aimed at strengthening health infrastructure,Footnote 16 to more controversial procedures including testing of experimental therapeutics and vaccines.Footnote 17 A common issue in GHER, one that has arisen prominently during recent epidemics and infectious disease outbreaks, is the challenge to long-established standards and trial designs, in particular to what is known as the ‘gold standard’: randomised, double-blind clinical trials as the standard developmental pathway for new drugs and interventions. The ethical intuitions and debates often pull in different directions. As discussed earlier in the chapter, the justification for conducting research in crises must be ethically robust, as must research design and deployment. Equally, in the context of the COVID-19 pandemic, a strong argument has been made for the need to ensure methodologically rigorous research design and not to accept lower scientific standards as a form of ‘pandemic research exceptionalism’.Footnote 18 At the time of writing, human challenge trials – the intentional infection of research participants with the virus – proposed as a way to accelerate the development of a vaccine for the novel coronavirus, remain ethically and scientifically controversial. While some commentators have suggested that this may be a rapid and necessary route to vaccine development,Footnote 19 others have argued that the criteria for ethical justification of human challenge studies, including social value and fair participant selection, are not likely to be met.Footnote 20
Such tensions are particularly heightened in contexts of high infection rates, morbidity and mortality. During the 2014–2016 Ebola outbreak in West Africa, several unregistered interventions were approved for use as investigational therapeutics. Importantly, while these were approved for emergency use, they were to be deployed under the MEURI scheme: ‘monitored emergency use of unregistered and experimental interventions (MEURI)’,Footnote 21 that is, through a process where results of an intervention’s use are shared with the scientific and medical community, rather than under the medical label of ‘compassionate use’. This approach allows clinical data to be compiled and thus contributes to the process of generating generalisable evidence. The deployment of experimental drugs was once again considered – alongside the deployment of experimental vaccines – early during the 2018 Ebola outbreak in the Democratic Republic of the Congo.Footnote 22 This time, regulatory and ethical frameworks were in place to approve access to five investigational therapeutics under the MEURI scheme,Footnote 23 two of which have since shown promise during the clinical trials conducted in 2018. The first Ebola vaccine, approved in 2019, was tested through ring vaccine trials first conducted during the 2014–2016 West African outbreak. Methods and study designs need to be aligned with the needs of the humanitarian response, and yet it is not an easy task to translate the values of humanitarian responses onto research design. How experimental interventions should be deployed under the MEURI scheme was heavily debated and contested by local communities, who saw these interventions as their only and last resort against the epidemic.
While success stories in GHER heavily depend on global cooperation, suitable infrastructure and often collaboration between the public and private sectors, such interventions are unlikely to succeed without the collaboration and engagement of local researchers and communities, and without establishing a relationship based on trust. Engaging with communities and establishing relationships of trust and respect are key to successful research endeavours in all contexts, but are particularly crucial where social structures have broken down and where individuals and communities are at heightened risk of harm. Community engagement, especially for endeavours not directly related to response and medical care, is also particularly challenging to implement. These challenges are most significant in sudden-onset GHEs such as earthquakes,Footnote 24 if prior relationships do not exist between researchers and the communities. During the 2014–2016 Ebola outbreak, the infection and its spread caused ‘panic in the communities by the lack of credible information and playing to people’s deepest fears’.Footnote 25 Similarly, distrust arose during the subsequent outbreak in eastern DRC, a region already affected by conflict, where low trust in institutions and officials resulted in low acceptance of vaccines and a spread of the virus.Footnote 26 Likewise, in the aftermath of Hurricane Katrina there was widespread frustration and distrust of the US federal response by those engaged in civil society and community-led responses.Footnote 27 However, such contexts have also given rise to new forms of solidarity and cooperation.
The recent Ebola outbreaks, the aftermath of Katrina, the 2004 Indian Ocean tsunami and the Fukushima disaster have also given rise to unprecedented levels of engagement and leadership by members of the affected communities.Footnote 28 Given that successful responses to GHEs are heavily dependent on trust as well as on the engagement and ownership of response activities by local communities, there is little doubt that successful endeavours in GHER will also depend on establishing close, trustworthy and respectful collaborations between researchers, responders, local NGOs, civil society and members of the affected population.
The difficulty of conducting GHER is compounded by much complexity at the level of regulation, governance and oversight. Those involved in research in these contexts are working in and around various ethical frameworks including humanitarian ethics, medical ethics, public health ethics and research ethics. Each framework has traditionally been developed with very different actors, values and interests in mind. Navigating these might result in various kinds of conflicts or dissonance, and at the very least make GHER a particularly challenging endeavour. Such concerns are then compounded by regulatory complexity, including existing national laws and guidelines, international regulations and guidance produced by different international bodies (for example, the International Health Regulations 2005 by the WHO and Good Clinical Practice by the National Institute for Health Research in the United Kingdom), all of which are engaged in a context of urgency, shifting timelines and rapidly evolving background conditions. Two recent pieces of guidance are worth highlighting in this context. The first are the revised CIOMS guidelines, published in 2016, which have a newly added entry (Guideline 20) specifically addressing GHER. The CIOMS guidelines recognise that ‘[d]isasters unfold quickly and study designs need to be chosen so that studies will yield meaningful data in a rapidly evolving situation. Study designs must be feasible in a disaster situation but still appropriate to ensure the study’s scientific validity’.Footnote 29 While reaffirming the cornerstones of research ethics, Guideline 20 also refers to the need to ensure equitable distribution of risks and benefits; the importance of community engagement; the need for flexibility and expediency in oversight while providing due scrutiny; and the need to ensure the validity of informed consent obtained under conditions of duress. 
CIOMS also responds to the need for flexible and alternative study designs and suggests that GHER studies should ideally be planned ahead and that generic versions of protocols could be pre-reviewed prior to a disaster occurring.
Although acting at a different governance level to CIOMS, the Nuffield Council on Bioethics has also recently published a report on GHER,Footnote 30 engaging with emerging ethical issues and echoing the central questions and values reflected in current discussions and regulatory frameworks. Reflecting on the lessons learned from various GHEs over the last couple of decades, the report encourages the development of an ethical compass for GHER that focuses on respect, reducing suffering, and fairness.Footnote 31 The report is notable for recommending that GHER endeavours attend not just to whose needs are being met (that is, questions of social value and responsiveness) but also to who has been involved in defining those needs. In other words, the report reminds us that beyond principles and values guiding study design and implementation, ethical GHER requires attention to a wider ethics ecosystem that includes all stakeholders, and that upholding fairness is not only a feature of recruitment or access to the benefits of research, but must also exist in collaborative practices with local researchers, authorities and communities.
All guidelines and regulations need interpretation on the ground,Footnote 32 at various levels of governance, as well as by researchers themselves. The last couple of decades have seen a variety of innovative and adaptive practices being developed for GHER, including the establishment of research ethics committees specifically associated with humanitarian organisations. Similarly, many research ethics committees that are tasked with reviewing GHER protocols have adapted their standard procedures in line with the urgency and developing context of GHEs.Footnote 33 Such strategies include convening ad-hoc meetings, prioritising these protocols in the queue for review, waiving deadlines, having advisors pre-review protocols and conducting reviews by teleconference.Footnote 34 Another approach can be found in the development of pre-approved, or pre-reviewed, protocol templates, which allow research ethics committees to conduct an initial review of proposed research ahead of a crisis occurring, or to review generic policies for the transfer of samples and data. Following their experience in reviewing GHER protocols during the 2014–2016 Ebola outbreak, members of the World Health Organization Ethics Review Committee recommended the formation of a joint research ethics committee for future GHEs.Footnote 35 A need for greater involvement and interaction between ethics committees and researchers has been indicated by various commentators, pointing to the need for ethical review to be an ongoing and iterative process. One such model for critical and ongoing engagement, entitled ‘real-time responsiveness’,Footnote 36 proposes a more dynamic approach to ethics oversight for GHER, including more engagement between researchers, research ethics committees and advisors once the research is underway. An iterative review process has been proposed for research in ordinary contextsFootnote 37 but is particularly relevant to GHER, given the urgency and rapidly changing context.
It is also important to consider how to promote and sustain the ethical capacities of researchers in humanitarian settings. Such capacities include the following, which have been linked to ethical humanitarian action:Footnote 38 foresighting (the ability to anticipate potential harms), attentiveness (especially to the social and relational dynamics of particular GHE contexts), and responsiveness to the often-shifting features of a crisis and their implications for the conduct of the research. These capacities point to the role of virtues, in addition to guidelines and principles, in the context of GHER. As highlighted by O’Mathúna, humanitarian research calls for virtuous action on the part of researchers in crisis settings ‘to ensure that researchers do what they believe is ethically right and resist what is unethical’.Footnote 39 Ethics therefore is not merely a feature of approval or bureaucratic procedure. It must be actively engaged with at various levels and by all involved, including by researchers themselves.
32.6 New Orientations and Lenses
As outlined above, GHEs present a distinctive context for the conduct of research. Tailored ethics guidance for GHER has been developed by various bodies, and it has been acknowledged that GHER can be a challenging fit for standard models of ethics oversight and review. As a result, greater flexibility in review procedures has been endorsed, while emphasising the importance of upholding rigorous appraisal of protocols. Particular attention has been given to the proportionality of ethical scrutiny to the ethical concerns (risk of harm, issues of equity, situations of vulnerability) associated with particular studies. Novel approaches, such as the preparation and pre-review of generic protocols, have also been incorporated into more recent guidance documents (e.g. CIOMS) and implemented by research ethics committees associated with humanitarian organisations.Footnote 40 These innovations reflect the importance of temporal sensitivity in GHER and in its review. As well as promoting timely review processes for urgent protocols, scrutiny is also needed to identify research that does not need to be conducted in an acute crisis and whose initiation ought to be delayed.
Discussions about GHER, and on disaster risk reduction more broadly, also point to the importance of preparedness and anticipation. Sudden onset events and crises often require quick response and reaction. Nonetheless, there are many opportunities to lay advance groundwork for research and also for research ethics oversight. In this sense, pre-review of protocols, careful preparation of standard procedures, and even research ethics committees undertaking their own planning procedures for reviewing GHER, are all warranted. It also suggests that while methodological innovation and adaptive designs may be required, methodological standards should be respected in crisis research and can be promoted with more planning and preparation.
Research conducted in GHEs presents a particularly difficult context in terms of governance. While each kind of emergency presents its own particular challenges, there are recurring patterns, characterised by urgency in terms of injury and death, extreme temporal constraints, and uncertainty in terms of development and outcome. Research endeavours have to be ushered through a plethora of regulations at various levels, not all of which have been developed with GHER in mind. Several sectors are necessarily involved: humanitarian, medical, public health and political, to name just a few. Conducting research in these contexts is necessary, however, in order to contribute to a robust evidence base for future emergencies. Ethical considerations are crucial in the implementation and interpretation of guidance, and in rigorously evaluating the justification for research. Governance must find a balance between the protection of research participants, who find themselves in particular circumstances of precarity, and the need for flexibility, preparedness and responsiveness as emergencies unfold. Novel ethical orientations suggest the need, at times, to rethink established procedures, such as one-off ethics approval or gold standard clinical trials, as well as to establish novel ethical procedures and practices, such as specially trained ethics committees and pre-approval of research protocols. However, the ethics of such research also suggests that time, risk and uncertainty should not work against key ethical considerations relating to social value and fairness in recruitment, or against meaningful and ongoing engagement with the community in all phases of response and research. A dynamic approach to the governance of GHER will also require supporting the ability of researchers, ethics committees and those governing research to engage with and act according to the ethical values at stake.
1 M. Hunt et al., ‘Ethical Implications of Diversity in Disaster Research’, (2012) American Journal of Disaster Medicine, 7(3), 211–221.
2 N. M. Thielman et al., ‘Ebola Clinical Trials: Five Lessons Learned and a Way Forward’, (2016) Clinical Trials, 13(1), 86–86.
3 Council for International Organizations of Medical Sciences, ‘International Ethical Guidelines for Health-related Research Involving Humans’, (CIOMS, 2016), Guideline 20.
4 A. Levine, ‘Academics Are from Mars, Humanitarians Are from Venus: Finding Common Ground to Improve Research during Humanitarian Emergencies,’ (2016) Clinical Trials, 13(1), 79–82.
5 Nuffield Council on Bioethics, ‘Research in Global Health Emergencies: Ethical Issues,’ (Nuffield Council on Bioethics, 2020).
6 C. Tansey et al., ‘Familiar Ethical Issues Amplified’, (2017) BMC Medical Ethics, 1891, 1–12.
7 Thielman et al., ‘Ebola Clinical Trials’.
8 WHO, ‘Guidance For Managing Ethical Issues In Infectious Disease Outbreaks’, (WHO, 2016), 30.
10 CIOMS, ‘International Ethical Guidelines’, Commentary to Guideline 20.
11 A. Sumathipala et al., ‘Ethical Issues in Post-disaster Clinical Interventions and Research: A Developing World Perspective. Key Findings from a Drafting and Consensus Generating Meeting of the Working Group on Disaster Research Ethics (WGDRE) 2007’, (2010) Asian Bioethics Review, 2(2), 124–142.
12 Thielman et al. ‘Ebola Clinical Trials’, 85.
13 Tansey et al., ‘Familiar Ethical Issues’.
14 Sumathipala et al., ‘Ethical Issues’.
15 Nuffield Council on Bioethics, ‘Briefing Note: Zika – Ethical Considerations’, (Nuffield Council on Bioethics, 2016).
16 S. Qari et al., ‘Preparedness and Emergency Response Research Centers: Early Returns on Investment in Evidence-based Public Health Systems Research’, (2014) Public Health Reports, 129(4), 1–4.
17 A. Rid and F. Miller, ‘Ethical Rationale for the Ebola “Ring Vaccination” Trial Design’, (2016) American Journal of Public Health, 106(3), 432–435.
18 A. J. London and J. Kimmelman, ‘Against Pandemic Research Exceptionalism’, (2020) Science, 368(6490), 476–477.
19 E. Zamrozik and M. J. Selgelid, ‘Covid-19 Human Challenge Studies: Ethical Issues’, (2020) Lancet Infectious Disease, www.thelancet.com/journals/laninf/article/PIIS1473-3099(20)30438-2/fulltext.
20 S. Holm, ‘Controlled Human Infection with SARS-CoV-2 to Study COVID-19 Vaccine and Treatments: Bioethics in Utopia,’ (2020) Journal of Medical Ethics, 0, 1–5.
21 WHO, ‘Ethical Issues Related to Study Design for Trials on Therapeutics for Ebola Virus Disease’, (WHO, 2014), 2.
22 E. C. Hayden, ‘Experimental Drugs Poised for Use in Ebola Outbreak,’ Nature (18 May 2018), www.nature.com/articles/d41586-018-05205-x.
23 WHO, ‘Ebola Virus Disease – Democratic Republic of Congo’, WHO (31 August 2018), www.who.int/csr/don/31-august-2018-ebola-drc/en/.
24 Tansey et al., ‘Familiar Ethical Issues’, 24.
25 A. Saxena and M. Gomes, ‘Ethical Challenges to Responding to the Ebola Epidemic: The World Health Organization Experience’, (2016) Clinical Trials, 13(1), 96–100.
26 P. Vince et al., ‘Institutional Trust and Misinformation in the Response to the 2018–2019 Ebola Outbreak in North Kivu, DR Congo: A Population-based Survey’, (2019) Lancet, 19(5), 529–356.
27 Nuffield Council on Bioethics, ‘Research in Global Health Emergencies’, 41.
28 Footnote Ibid., 32–36.
29 CIOMS, ‘International Ethical Guidelines’, Guideline 20.
30 Nuffield Council on Bioethics, ‘Research in Global Health Emergencies’.
31 Footnote Ibid., xvi–xvii.
32 Footnote Ibid., 29.
33 M. Hunt et al., ‘The Challenge of Timely, Responsive and Rigorous Ethics Review of Disaster Research: Views of Research Ethics Committee Members’, (2016) PLoS ONE, 11(6), e0157142.
35 E. Alirol et al., ‘Ethics Review of Studies during Public Health Emergencies – The Experience of the WHO Ethics Review Committee During the Ebola Virus Disease Epidemic’, (2017) BMC Medical Ethics, 18(1), 8.
36 L. Eckenwiler et al., ‘Real-Time Responsiveness for Ethics Oversight During Disaster Research’, (2015) Bioethics, 29(9), 653–661.
37 A. Ganguli-Mitra et al., ‘Reconfiguring Social Value in Health Research Through the Lens of Liminality’, (2017) Bioethics, 31(2), 87–96.
38 N. Pal et al., ‘Ethical Considerations for Closing Humanitarian Projects: A Scoping Review’, (2019) Journal of International Humanitarian Action, 4(1), 1–9.
39 D. O’Mathúna, ‘Research Ethics in the Context of Humanitarian Emergencies’, (2015) Journal of Evidence-Based Medicine, 8(1), 31–35, 31.
40 D. Schopper et al., ‘Innovations in Research Ethics Governance in Humanitarian Settings’, (2015) BMC Medical Ethics, 16(1), 7–8.
Research in the field of regenerative medicine, especially that which uses cells and tissues as therapeutic agents, has given rise to new products called ‘advanced therapies’ or advanced therapeutic medicinal products (ATMPs). These cutting-edge advances in biomedical research have generated new areas for research at both an academic and industrial level and have posed new challenges for existing regulatory regimes applicable to therapeutic products. The leading domestic health regulatory agencies in the world, such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA), have regulated therapeutic tissues and cells as biological medicines and are currently making efforts to establish a harmonised regulatory system that facilitates the process of approval and implementation of clinical trials.
In the mid-2000s, the Argentine Republic did not have any regulations governing ATMPs, and governance approaches to them were weak and diverse. Although the process of designing a governance framework posed significant challenges, Argentina started to develop a regulatory framework in 2007. After more than ten years of work, this objective was achieved thanks to local efforts and the support of academic institutions and regulatory agencies from countries with more mature regulatory frameworks. By 2019, Argentina was leading the creation of harmonised regulatory frameworks in Latin America.
In this chapter I will show how the framework was developed from a position of state non-intervention to the implementation of a governance framework that includes hard and soft law. I will identify the main objectives that drove this process, the role of international academic and regulatory collaborations, milestones and critical aspects of the construction of normative standards and the ultimate governance framework, and the lessons learned, in order to be able to transfer them to other jurisdictions.
33.2 The Evolution of Regulation of Biotechnology in Argentina: Agricultural Strength and Human Health Fragmentation
Since its advent in the middle of the 1990s, modern biotechnology has represented an opportunity for emerging economies to build capacity alongside high-income countries, thereby blurring the developed/developing divide in some areas (i.e. it represents a ‘leapfrog’ technology similar to mobile phones). For this to occur, and for maximum benefit to be realised, an innovation-friendly environment had to be fostered. Such an environment does not abdicate moral limits or public oversight but is characterised by regulatory clarity and flexibility.Footnote 1 The development of biotechnology in the agricultural sector in Argentina is an example of this. Although it had not been a technology-producing country, Argentina faced a series of favourable conditions that allowed the rapid adoption of genetically modified crops.Footnote 2 At the same time, significant institutional decisions were made, especially with regard to biosecurity regulations, with the creation of the National Commission for Agricultural Biotechnology (CONABIA) in 1991.Footnote 3 These elements, together with the fact that Argentina has 26 million hectares of arable land, made the potential application of these technologies in Argentina – and outside the countries of origin of the technology, especially the USA – possible. This transformed Argentina into an exceptional ‘landing platform’ for the rapid adoption of these biotechnological developments. The massive incorporation of Roundup Ready (RR) soybean is explained by the reduction of its production costs and by the expansion of arable land. This positioned Argentina as the world’s leading exporter of genetically modified soybean and its derivatives.Footnote 4
The development of biotechnologies directed at human health was more complex and uncertain, unfolding in a more contested and dynamic setting. As a result, the field evolved at a much slower pace, with regulation also developing more slowly and involving a greater number of stakeholders. This context, as will be demonstrated below, offered opportunities for developing new processual mechanisms aimed at soliciting and responding to the views and concerns of diverse stakeholders.Footnote 5
33.3 First Steps in the Creation of A Governance Framework for Cell Therapies
The direct antecedent of stem cells for therapeutic purposes is the hematopoietic progenitor cell (HPC), which has been extracted from bone marrow to treat blood diseases for more than fifty years and is considered an ‘established practice’.Footnote 6 HPC transplantation is regulated by the Transplant Act 1993, and its regulatory authority is the National Institute for Transplantation (INCUCAI), which adopted regulations governing certain technical and procedural aspects of this practice in 1993 and 2007.Footnote 7 This explains the rationale by which many countries – including Argentina – started regulating cell therapies under a transplantation legal framework. However, Argentina’s active pursuit of regenerative medicine research aimed at developing stem cell solutions to health problems required something more, and despite its efforts to promote this research, there were no regulations or studies related to ethics and the law in this field.Footnote 8
In 2007, the Advisory Commission on Regenerative Medicine and Cellular Therapies (Commission) was created under the National Agency of Promotion of Science and Technology (ANPCYT), and the Office of the Secretary of Science and Technology was transformed into the Ministry of Science, Technology and Productive Innovation (MOST) in 2008.Footnote 9 The Commission comprised Argentinian experts in policy, regulation, science and ethics, and was set up initially with the objective of advising the ANPCYT in granting funds for research projects in regenerative medicine.Footnote 10 However, faced with a legal gap and the increasing offer of unproven stem cell treatments to patients, this new body became the primary conduit for identifying policy needs around stem cell research and its regulation, including how existing regulatory institutions in Argentina, such as INCUCAI and the National Administration of Drugs, Food and Medical Technology (ANMAT), would be implicated.
The Commission promoted interactions with a wide range of stakeholders from the public and private sectors, the aim being to raise awareness and interest regarding the necessity of forging a governance framework for research and product approval in the field of regenerative medicine. In pursuing this ambitious objective, the Commission wanted to benefit from lessons from other regions and countries.Footnote 11 In 2007, a Collaborative Agreement was signed between the Argentine Secretary of Science and Technology and the University of Edinburgh’s AHRC SCRIPT Centre (the Arts and Humanities Research Council Research Centre for Studies in Intellectual Property & Technology Law).Footnote 12 This collaboration, addressed in greater detail below, extended to 2019 and was a key factor in the construction of the regulatory framework for ATMPs in Argentina.
33.4 From Transplants to Medicines
In 2007, in an attempt to halt the delivery of untested stem cell–based treatments that were not captured by the existing regulatory regime applicable to HPCs, the Ministry of Health issued Resolution MS 610/2007, under the Transplant Act 1993. The 610/2007 Resolution states that ‘activities related to the use of human cells for subsequent implantation in humans fall within the purview of the Transplant Authority (INCUCAI)’.Footnote 13 This Resolution formally recognises INCUCAI’s competence to deal with activities involving the implantation of cellular material into humans. However, it is very brief and does not specify which types of cell it applies to, nor the specific procedures (kinds of manipulation) to which cells may be subject, an issue that is, in any event, beyond the scope of the Act.Footnote 14 This Resolution is supplemented by Regulatory Decree 512/95, which, in Article 2, states that ‘any practice that involves implanting of human cells that does not fall within HPC transplantation is radically new and therefore is considered as experimental practice until it is demonstrated that it is safe and effective’.
To start a new experimental practice, researchers or medical practitioners must seek prior authorisation from INCUCAI by submitting a research protocol signed by the medical professional or team leader who will conduct the investigation, complying with all requirements of the regulations, including the provision of written informed consent signed by the research subjects, who must not be charged any monies to participate in the procedure. In May 2012, INCUCAI issued Resolution 119/2012, a technical standard to establish requirements and procedures for the preparation of cellular products. Substantively, it is in harmony with international standards of good laboratory and manufacturing practices governing this matter. However, very few protocols have been filed with INCUCAI since 2007, and the delivery of unproven stem cell treatments continued to grow, a situation that exposed INCUCAI’s difficulties in policing the field and reversing the growth of health scams.Footnote 15
Another regulatory attempt was the imposition of an obligation to register certain cell-based products as biological medicaments. The ANMAT issued two regulations under the Medicines Act 1964:Footnote 16 Dispositions 7075/2011 and 7729/2011. These define ‘biological medicinal products’ as ‘products derived from living organisms like cells or tissues’, a definition that captures stem cell preparations, and both Dispositions categorise such preparations as ATMPs. Cell-based or biological medicaments must be registered with the National Drugs Registry (REM), and approval for marketing, use and application in humans falls within the scope of the Medicines Act and its implementing regulations. Cellular medicine manufacturers must register with the ANMAT as manufacturing establishments, and they must request product registration before marketing or commercialising their products.
Importantly, the ANMAT regulations do not apply in cases where ATMPs are manufactured entirely by an authorised medical centre, to be used exclusively in that centre. In that case, approval remains with the local health authority. Like all regulations issued by the national Ministry of Health under the Medicines Act, the provisions of Dispositions 7075/2011 and 7729/2011 apply only in areas of national jurisdiction, in cases where interprovincial transit is implicated, or where ATMPs are imported or exported. In short, the Medicines Act is not applicable so long as the product does not leave the geographic jurisdiction of the province. And across the provinces, regulatory solutions were inconsistent: in one province, for example, such products were regulated as transplants, and in another as medicines.
As alluded to above, while these imperfect regulatory attempts were pursued, the offer of unproven treatments with cells continued to grow. As in many countries, it was usual to find publications in the media reporting the – almost magical – healing power of stem cells, with little or no supporting evidence, and such claims had great impact on public opinion and on the decisions of individual patients. Moreover, the professionals offering these ‘treatments’ took refuge in the independence of medical practice and the autonomy that it offers, but it seems clear that some of the practices reported were directly contrary to established professional ethics, and they threatened the safety of patients receiving the treatments.Footnote 17 In addition to the safety issues, given that these were experimental therapies (not proven to be safe and effective), health insurers refused to cover them (and one can anticipate the same antipathy to indemnifying patients who chose to accept them and were injured by them). Indeed, patients filed judicial actions demanding payment for such treatments by both health insurance institutions and the national and provincial state (as guarantors of public health).Footnote 18
The regulatory regime – by virtue of its silence, its imperfect application to regenerative medicine and concomitant practices, and its shared authority between national and provincial bodies – permitted unethical practices to continue, and decisions of some courts have mandated the transfer of funds from the state (i.e. the social welfare system) to the medical centres offering these experimental cellular therapies. In short, the regime established a poorly coordinated regulatory patchwork that was proving to be insufficient to uniformly regulate regenerative medicine – and stem cell – research and its subsequent translation into clinical practice and treatments as ATMPs. Moreover, attempts by regulatory authorities to stop these practices, though valiant, also proved ineffective.
33.5 Key Drivers for the Construction of the Governance Framework
The landscape described above endured until 2017, when the Interministerial Commission for Research and Medicaments of Advanced Therapies (Interministerial Commission) was created. This new body, jointly founded by the Ministry of Science and Technology (MOST) and the Ministry of Health (MOH), the latter of which also oversaw INCUCAI and ANMAT, was set up to:
1. Advise the MOST and MOH in the subjects of their competence.
2. Review current regulations on research, development and approval of products in order to propose, and raise for the approval of the competent authority, a comprehensive and updated regulatory framework for advanced therapies.
3. Promote dissemination, within the scientific community and the population more broadly, of the state of the art relating to ATMPs.
Led by a coordinator appointed by the MOST, the Interministerial Commission focused its efforts first on adopting a new regulatory framework that was harmonised with the EMA and FDA, and that recognised the strengths of local institutions in fulfilling its objectives. The strategy to create the governance framework was centred on three levels of norms: federal law, regulation and soft law. The proposal was accepted by both Ministries, and efforts were made to put into force, first, the regulatory framework and soft law in order to stop the delivery of unproven treatments. These elements would then remain in force while a bill of law was sent to the National Parliament.
In September 2018, the new regulatory framework was issued through ANMAT Disposition 179/2018, together with an amendment to the Transplant Act giving INCUCAI competence to deal with: hematopoietic progenitor cells (HPCs) in their different collection modalities; the cells, tissues and/or starting materials that originate, compose or form part of devices, medical products and medicines; and cells of human origin for autologous use, used in the same therapeutic procedure with minimal manipulation and performing the same function as at their origin.
The Interministerial Commission benefitted immensely from the work of the original Commission, which was formed in 2007 and which collaborated across technical fields and jurisdictional borders for a decade, moving Argentina from a position of no regulation for ATMPs to one of imperfect regulation (limited by the conditions of the time). The work of the original Commission included:
1. Undertaking studies on the legislation of Argentina and other countries to better understand how these technical developments might be shaped by law (i.e. through transplant, medicines or a sui generis regime).
2. Proposing a governance framework adapted to the Argentine legal and cultural context, harmonised with European and US normative frameworks.
3. Communicating this initiative to all interested sectors and managing complex relationships to promote debate in society, and then translate learnings from that debate into a normative/governance plan.Footnote 19
The work of the Commission was advanced through key collaborations; first and foremost with the University of Edinburgh (2007–2019). This collaboration had several strands and an active institutional relationship.Footnote 20
Other collaborations involved the Spanish Agency for Medicaments, the Argentine judiciaryFootnote 21 and the creation of the Patient Network for Advanced Therapies (APTA Network) to provide patients with accurate information about advances in science and their translation into healthcare applications. All this was accompanied by interactions with a range of medical societies in order to establish, across different areas of medicine, a scientific position against the offer of unproven treatments.Footnote 22
33.6 Current Legal/Regulatory Framework
The current legal framework in force, as proposed by the Interministerial Commission, is the result of collaborative work focused on identifying the different processes involved in the research and approval of ATMPs and on establishing effective articulation between its parts. It consists of laws and regulations and establishes the coordinated intervention of both authorities, ANMAT and INCUCAI, in the approval of research and products. The system operates as follows:
1. The Medicines Act establishes ANMAT’s competence to regulate, at the national level, the scientific and technical requirements applicable to clinical pharmacology studies, the authorisation of manufacturing establishments, and the production, registration and authorisation of commercialisation, and