
Part II - European Regulation of Medical Devices

Introduction

Published online by Cambridge University Press:  31 March 2022

I. Glenn Cohen
Affiliation:
Harvard Law School, Massachusetts
Timo Minssen
Affiliation:
University of Copenhagen
W. Nicholson Price II
Affiliation:
University of Michigan, Ann Arbor
Christopher Robertson
Affiliation:
Boston University
Carmel Shachar
Affiliation:
Harvard Law School, Massachusetts

In: The Future of Medical Device Regulation: Innovation and Protection, pp. 47–114. Publisher: Cambridge University Press. Print publication year: 2022.
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

Like the United States’ Food and Drug Administration (FDA), regulators in other jurisdictions seek to address the increasing significance of data-driven digital health products and their interface with medical AI and machine learning. This also holds true for the European Union (EU) and its member states, as well as the United Kingdom. To be lawfully marketed within the European Union, all medical devices and in vitro diagnostic medical devices must meet the CE marking requirements under the relevant EU regulatory frameworks.Footnote 1 On May 25, 2017, two major regulatory changes simultaneously entered into force, both highly relevant for medical device manufacturers: EU Regulation 2017/745 on medical devices (MDR) and EU Regulation 2017/746 on in vitro diagnostic medical devices (IVDR).Footnote 2 In reaction to the COVID-19 pandemic’s impact on medical device stakeholders, and with patient health and safety as a guiding principle, the application date of the MDR was postponed from May 2020 to May 2021.Footnote 3 This decision was met with a sigh of relief, since it gave stakeholders more time to prepare for – and comply with – the new regulatory framework. However, in light of new technical developments and capabilities, many uncertainties and challenges remain to be addressed.

As the contributions in this part demonstrate, this also concerns the broader legislative framework within which the European medicines agencies and the so-called notified bodies will have to operate. In addition to product-specific regulations, these authorities will have to consider a great number of recent laws, guidance documents, policy papers, strategy announcements, and initiatives, such as the European Health Data Space (EHDS). They will have to deal with a broad variety of relevant topics, ranging from health data protection, social justice, and cybersecurity to liability, competition law, and intellectual property rights. These interacting initiatives are expected to have a major impact not only on the specific regulation of medical devices, but also on the wider governance of medical devices and health data uses. To achieve the most beneficial outcome for patients and to alleviate potential risks, it is important to consider these developments from a holistic perspective. After all, most systems are only as strong as their weakest link, in both regional and international contexts.

That this holds particularly true in the cybersecurity context is highlighted by Elisabetta Biasin and Erik Kamenjasevic’s chapter, “Cybersecurity of Medical Devices: Regulatory Challenges in the European Union.” In light of recent cyberattacks on digital hospital systems and medical devices, which have also become a major issue during the COVID-19 pandemic, their chapter delivers an important contribution to the law of medical devices and cybersecurity. In particular, the authors analyze and discuss the interface of the EU medical devices legal framework with EU cybersecurity policy objectives. Highlighting a great number of recent threats and challenges, the authors conclude that “the adequate level of cybersecurity and resilience of medical devices is one of the crucial elements for maintaining the daily provision of health care services.” To provide a step forward in mitigating these challenges, the authors offer several recommendations that EU regulators should consider, ranging from better guidelines on specific security standards to improved cooperation between competent national authorities.

This certainly also applies to the health data protection context, as explained by Hannah van Kolfschooten in the next chapter, “The mHealth Power Paradox: Improving Data Protection in Health Apps through Self-Regulation in the European Union.” The author asks “whether and to what extent self-regulation by app stores may contribute to the level of health data protection in the European Union?” To answer this question, she explores health data protection issues regarding mHealth apps and analyzes the EU legal framework governing them. Concentrating on the most relevant stipulations of the EU’s General Data Protection Regulation (GDPR),Footnote 4 the author discusses the “benefits and risks of industry self-regulation as an alternative means to protect data protection rights in light of current mHealth regulation practices by Apple’s App Store and Google’s Google Play.” This allows her to propose several improvements to self-regulation in this field.

The GDPR is also at the center of the next chapter, by Janos Meszaros, Marcelo Corrales Compagnucci, and the author of this introduction. In their chapter, “The Interaction of the Medical Device Regulation and the GDPR: Do European Rules on Privacy and Scientific Research Impair the Safety and Performance of AI Medical Devices?,” the authors analyze a variety of GDPR stipulations on deidentification and scientific research that help “research organizations to use personal data with fewer restrictions compared to data collection for other purposes.” Under these exemptions, organizations may process specific types of data for a secondary purpose without consent. However, the authors caution that the definition and legal requirements of scientific research differ among EU Member States. Since the new EU Medical Device Regulations 2017/745 and 2017/746 require compliance with the GDPR, they argue that this legal uncertainty “might result in obstacles for the use and review of input data for medical devices,” and call for “more harmonized rules, to balance individuals’ rights and the safety of medical devices.”

Next, Barry Solaiman and Mark Bloom consider a topic that has become increasingly important in recent years: “AI, Explainability, and Safeguarding Patient Safety in Europe: Towards a Science-Focused Regulatory Model.” Their chapter examines “the efforts made by regulators in Europe to develop standards concerning the explainability of artificial intelligence (AI) systems used in wearables.” Recent attempts by governments to monitor and contain the spread of the COVID-19 pandemic have certainly accelerated the increasingly invasive use of such wearables and hence the need for such standards. The authors also point out that “one key challenge for scientists and regulators is to ensure that predictions are understood and explainable to legislators, policymakers, doctors, and patients to ensure informed decision making.” Examining the operation of AI networks, the authors welcome a series of recent UK and EU guidelines for such networks and applications. But they also point out that those guidelines will ultimately be restricted by the available technology. The authors therefore argue that European legislators and regulators should devote more effort to developing minimum standards on the explainability of such technologies, which should be “leveled-up progressively as the technology improves.” Acknowledging the need for appropriate human oversight and liability, they contend that those standards should be “informed by the computer science underlying the technology to identify the limitations of explainability,” and that “the technology should advance to help them decipher networks intelligibly.”

Finally, Helen Yu’s chapter, “Regulation of Digital Health Technologies in the European Union: Intended versus Actual Use,” focuses on “how the classification rules and postmarket surveillance system provisions of the EU Medical Devices Regulation (MDR) need to anticipate and address the actual use of DHTs.” She warns that courts and regulators have so far not been “consistent on the circumstances under which manufacturers are held responsible for known or encouraged ‘misuse’ of their products.” She therefore stresses the importance of adequately addressing “the potential harm caused to consumers who use digital health technologies (DHTs) beyond the manufacturer’s intended purpose” and highlights the “need for a framework to re-classify and regulate DHTs based on evidence of actual use.”

Overall, the contributions in this section clearly demonstrate how EU and US regulators, legislators, developers, and users of medical devices face very similar challenges. This applies both at the micro level – with regard to the evaluation of particular medical devices – and at the macro level, concerning the wider legal frameworks and ramifications that are so important for the safe and efficient functioning of such devices. However, it was also shown that the various attempts to address these challenges, and to reach acceptable trade-offs with regard to safety, efficacy, privacy, and other values, differ in some respects across the pond. Against this background, and considering the great variety of opportunities and risks in the increasingly complex value chains of modern medical devices, it seems more important than ever to improve international collaboration in the area and to align regulatory and legislative approaches across the globe.

4 Cybersecurity of Medical Devices: Regulatory Challenges in the European Union

Elisabetta Biasin and Erik Kamenjasevic
4.1 Introduction
4.1.1 Context

Ensuring cybersecurity in the health care sector is a growing concern. The increasing digitalization of health care service providers has enabled cyberattack techniques against them to become more liquid, flexible, and able to exploit all possible paths of entry rapidly.Footnote 1 For example, such an attack may target critical assets of hospitals, which include both the IT infrastructure and network-connected medical devices. A successful cyberattack on IT infrastructure may cause significant disruption to the provision of essential health care services.Footnote 2 When a cyberattack concerns a medical device, it may put the health and safety of patients at severe risk.Footnote 3 This disquiet appears even greater at the time of the worldwide COVID-19 outbreak. Reports on cyberattacks on medical devices issued during this pandemic revealed how hackers use various techniques to gain access to individuals’ sensitive health-related information for different gains.Footnote 4

Regulators around the globe have increasingly pursued medical device cybersecurity as a policy objective over the past years. For example, the US Food and Drug Administration (FDA) issued its first general principles for Networked Medical Devices Containing Off-the-Shelf Software in 2005, followed by the 2014 and 2016 Guidance for Premarket Submission and Postmarket Management of Cybersecurity in Medical Devices. In March 2020, the International Medical Device Regulators Forum (IMDRF) issued its Principles and Practices for Medical Device Cybersecurity, while in the European Union (EU), the first piece of guidance was issued only in July 2020 (with the first version from December 2019) by the European Commission’s (EC) Medical Device Coordination Group (MDCG).

4.1.2 Ambition

Discussions revolving around the regulation of medical devices and their cybersecurity are a recent trend in the academic literature.Footnote 5 Many contributions analyze the US system, while fewer concern the EU one.Footnote 6 This chapter aims to contribute to the literature on the law of medical devices and cybersecurity by assessing the level of maturity of the EU medical devices legal framework and EU cybersecurity policy objectives.Footnote 7 The analysis starts with an outline of cybersecurity-related aspects of the EU Medical Devices Regulation (MDR).Footnote 8 This is followed by a critical analysis of regulatory challenges stemming from the MDR, through the lens of the MDCG Guidance. The following section concerns the regulatory challenges stemming from other legal frameworks, including the Cybersecurity Act,Footnote 9 the Network and Information Systems (NIS) Directive,Footnote 10 the General Data Protection Regulation (GDPR),Footnote 11 and the Radio Equipment Directive (RED),Footnote 12 since they all become applicable when it comes to ensuring the cybersecurity of medical devices. Here the analysis demonstrates that regulatory challenges persist due to regulatory specialization,Footnote 13 which has led to regulatory overlapping, fragmentation risks, regulatory uncertainty, and duplication.Footnote 14 In the final section, the chapter provides concluding remarks as well as recommendations for regulators dealing with the cybersecurity of medical devices in the European Union.

4.2 How Does the EU Medical Devices Regulation Deal with the Cybersecurity of Medical Devices?

The provisions of the EU Medical Devices Regulation (MDR)Footnote 15 primarily address manufacturers of medical devices, defined as “the natural or legal person who manufactures or fully refurbishes a device or has a device designed, manufactured, or fully refurbished and markets that device under its name or trademark.”Footnote 16 No explicit reference to cybersecurity is made in the main part of the MDR. However, it does set out some essential cybersecurity-related requirements that manufacturers have to implement in a medical device.Footnote 17

When putting a medical device on the market or into service, Article 5(1) of the MDR obliges its manufacturer to ensure that the device is compliant with the MDR obligations when used in accordance with its intended purpose. According to Article 5(2) of the MDR, “a medical device shall meet the general safety and performance requirements” (also including the cybersecurity-related requirements)Footnote 18 “set out in Annex I [of the MDR] … taking into account the intended purpose.”Footnote 19 The intended purpose is defined in Article 2(12) as “the use for which a device is intended according to the data supplied by the manufacturer on the label, in the instructions for use or in promotional or sales materials or statements and as specified by the manufacturer in the clinical evaluation.” As part of the general requirements set in Annex I of the MDR, “devices shall achieve the performance intended by the manufacturer”Footnote 20 and be designed in a way suitable for the intended use. They shall be safe and effective, and associated risks shall be acceptable when weighed against the benefits to patients and the level of protection of health and safety, taking into account the state of the art.Footnote 21

Moreover, “[m]anufacturers shall establish, implement, document, and maintain a risk management system.”Footnote 22 This system also includes risk-control measures to be adopted by manufacturers for the design and manufacture of a device, which shall conform to safety principles and the state of the art.Footnote 23 A medical device designed to be used with other devices or equipment as a whole (including the connection system between them) has to be safe and should not impair the specified performance of the device.Footnote 24

Furthermore, a medical device shall be designed and manufactured in a way that removes, as far as possible, risks associated with possible negative interaction between software and the IT environment within which it operates.Footnote 25 If a medical device is intended to be used with another device, it shall be designed so that the interoperability and compatibility are reliable and safe.Footnote 26 A medical device incorporating electronic programmable systems, including software, or standalone software as a medical device, “shall be designed to ensure repeatability, reliability, and performance according to the intended use,”Footnote 27 and “appropriate means have to be adopted to reduce risks or impairment of the performance.”Footnote 28 A medical device should be developed and manufactured according to the state of the art and by respecting the principles of the development lifecycle, risk management (including information security), verification, and validation.Footnote 29 Lastly, manufacturers shall “set out minimum requirements concerning hardware, IT network characteristics, and IT security measures, including protection against unauthorized access.”Footnote 30 Concerning information to be supplied together with the device, manufacturers must inform users about residual risks,Footnote 31 provide warnings requiring immediate attention on the labelFootnote 32 and, for electronic programmable system devices, give information about minimum requirements concerning hardware, IT networks’ characteristics, and IT security measures (including protection against unauthorized access), necessary to run the software as intended.Footnote 33

4.3 Regulatory Challenges Stemming from the MDR Analyzed Through the Lens of the MDCG Guidance on Cybersecurity for Medical Devices

The Medical Device Coordination Group (MDCG) of the European Commission endorsed its Guidance on Cybersecurity for Medical Devices (Guidance) in December 2019,Footnote 34 in which it dealt with the cybersecurity-related provisions embedded in the MDR. It is necessary to mention at the outset that the MDCG Guidance is not a legally binding document. Hence, in case of disagreement, manufacturers could decide not to follow it – which might have an impact on the overall harmonizing purpose of the MDR and lead to divergence in the application of EU principles and laws at the Member State level. Nevertheless, being the first guiding document on this topic issued by the EC for the medical devices sector, it is an essential step in further elaborating on specific MDR cybersecurity-related provisions.

As already mentioned in the previous section, the MDR does not expressly refer to cybersecurity.Footnote 35 Nor does the MDCG Guidance define the terms “cybersecurity,” “security-by-design,” and “security-by-default.” Instead, the Guidance only provides an outline of the MDR’s provisions relating to the cybersecurity of medical devices and points out conceptual links between safety and security.Footnote 36 Leaving these terms theoretical and undefined does not facilitate their practical implementation by the stakeholders concerned.

Moreover, the MDCG Guidance makes no reference to the definitions provided by the Cybersecurity Act (CSA).Footnote 37 Establishing a connection in the soft-law instrument (i.e., the Guidance) with the latter would imply a reference to a hard-law definition. This link could serve to reduce the ambiguity of the term, and it might help achieve more coherence within the EU cybersecurity regulatory framework as a whole.Footnote 38 The proposed approach would ultimately be beneficial for manufacturers, as it would bring more clarity to the interpretation of MDR requirements.

The MDCG Guidance stresses the importance of “recogniz[ing] the roles and expectations of all stakeholders”Footnote 39 regarding joint responsibility and states its “substantial alignment” with the International Medical Device Regulators Forum (IMDRF) Principles and Practices for Medical Device Cybersecurity.Footnote 40 To this end, achieving a satisfactory level of cybersecurity of a medical device concerns manufacturers, suppliers, health care providers, patients, integrators, operators, and regulators. Manufacturers are bound by the majority of the provisions in the MDR. Integrators of a medical device are, among others, responsible for assessing a reasonable level of security, while operators need to ensure the required level of security for the operational environment and that personnel are properly trained on cybersecurity issues. At the same time, health care professionals are responsible for a device being used according to the description of the intended use, while patients and consumers need to “employ cyber-smart behaviour.”Footnote 41 All of these stakeholders are an equally important part of the cybersecurity chain,Footnote 42 and each is responsible for ensuring a secure environment in which a device can operate smoothly for the ultimate benefit of patients’ safety.

Nevertheless, the MDCG Guidance fails to elaborate on how exactly the joint responsibility of different stakeholders is influenced by, or conflicts with, other applicable laws, in particular the Network and Information Systems (NIS) Directive,Footnote 43 the General Data Protection Regulation (GDPR),Footnote 44 and the Cybersecurity Act (CSA).Footnote 45 Since the expert group did not tackle these laws in detail in theory, it is also hard to imagine how the interested stakeholders operating within the medical devices domain are supposed to implement, in practice, different pieces of legislation divergent in scope and applicability.Footnote 46 Hence, the MDCG should consider adopting a more holistic approach in the future when determining the meaning of “joint responsibility,” as this would help in analyzing relevant aspects of other horizontal legislation and, eventually, in achieving a more coherent cybersecurity regulatory framework.

Finally, what seems to have been heavily overlooked, for unclear reasons, is the applicability of the Radio Equipment Directive (RED),Footnote 47 which is not even mentioned in the MDCG Guidance. The RED’s cybersecurity-related provisions and their interaction with the MDR, as well as with the other laws applicable to the cybersecurity of medical devices, are explained below.

4.4 Regulatory Challenges Stemming from Other Legal Frameworks Applicable to Medical Devices

Regulation of cybersecurity is a complex task. Cybersecurity is an area in which different policy fields need to be combined (horizontal consistency), and where measures need to be taken at both levels – the European Union and Member States (vertical consistency).Footnote 48 Regulation of medical devices is complex, too, as it is a multi-levelFootnote 49 legal framework characterized by specialization and fragmentation.Footnote 50 Regulating the cybersecurity of medical devices implies bearing the complexities of both legal frameworks. In this regard, we identified four regulatory challenges: regulatory overlapping; fragmentation risks; regulatory uncertainty; and duplication. We clarify the first two challenges as relating to horizontal consistency requirements, the third to vertical requirements, and the fourth to a combination thereof. Finally, we envisage specialization and fragmentation as a common denominator of all four challenges.

4.4.1 Regulatory Overlapping: CSA Certification Schemes and the MDR

On the one hand, the MDR provides the possibility to obtain a certificate demonstrating compliance with its security requirements. On the other hand, the CSA sets up a new and broader framework of cybersecurity certifications for ICT products, processes, and services. The CSA appears to be inevitably relevant for medical devices’ cybersecurity, since medical devices may fall under the definition of an ICT product.Footnote 51

Some stakeholders have questioned the applicability of the CSA rules and the operability of European Cybersecurity Certification Schemes (ECCS) for health care.Footnote 52 They expressed concerns with regard to overlaps between the MDR and cybersecurity certification schemes and requirements.Footnote 53 For instance, COCIR (the European trade association representing the medical imaging, radiotherapy, health ICT, and electromedical industries) claimed that “[a] specific certification scheme for medical devices is … not necessary as the MDR introduces security requirements that will become part of the certification for receiving the CE mark.”Footnote 54 Such a scenario may bring duplication of requirements for manufacturers, as well as for the authorities overseeing manufacturers’ compliance. Ultimately, this could also imply conflicts between authorities’ respective competences.

The MDCG Guidance does not provide clarifications on the applicability of the CSA in this context. It contains only one reference to the CSA in the whole body of the document.Footnote 55 The reference is purely descriptiveFootnote 56 and does not resolve the applicability question. Against this background, the CSA clarifies that the health care sector should be one of its priorities.Footnote 57 The MDCG or the EU regulator should provide further guidance tackling aspects relevant to cybersecurity certification schemes for medical devices. This could be done, for instance, by explaining how the MDR cybersecurity-related requirements apply when an ICT product is considered to be a medical device and what type of certification schemes would be relevant. Furthermore, regulators could specify that, for ICT products not qualifying as a medical device, the CSA should remain the general rule.

4.4.2 Fragmentation Risks: Voluntariness of Certification Mechanisms

As seen in Section 4.4.1, the CSA has established certification mechanisms for ensuring the cybersecurity of ICT products. Manufacturers of medical devices may join them voluntarily.Footnote 58 However, EU Member States may establish a mandatory certification mechanism in their territories, since the CSA provides that “[t]he cybersecurity certification shall be voluntary unless otherwise specified by Union law or Member State law” (emphasis added).Footnote 59 In practice, this provision implies that some Member States may impose an obligation to obtain a cybersecurity certification, while others would leave it voluntary. Manufacturers would thus be obliged to obtain a cybersecurity certificate to market a device in one Member State, while the same would not be required in another.

This hypothesis could provoke diverging mechanisms in the internal market and could lead to regulatory shopping.Footnote 60 Manufacturers could also face additional compliance costs in aligning with different national requirements. Moreover, this could create fragmentation risks for the EU market: national requirements could diverge, and supervisory authorities could interpret different rules following different interpretative approaches.Footnote 61 Therefore, overarching regulatory strategies to bring more consistency amongst the Member States should aim at ensuring coordination and cooperation amongst competent authorities.

4.4.3 Regulatory Uncertainty: Security Requirements between the MDR and the Radio Equipment Directive (RED)

The RED establishes a regulatory framework for the making available on the EU market and putting into service of radio equipment. Certain types of medical devices (such as pacemakers or implantable cardioverter defibrillators) are likely to fall under the scope of the Directive and thus be subject to its security requirements.Footnote 62 The RED’s simultaneous application with the MDR may raise issues in practice. Notably, such parallel application leads to the question of whether the RED security rules are complementary or redundant to the MDR.Footnote 63

The European Commission developed guidance (the RED Guide)Footnote 64 to assist in the interpretation of the RED. However, the RED Guide only states that an overlap issue covering the same hazard might be resolved by giving preference to the more specific EU legislation.

Similarly, the more general EC guidelines on EU product rules (the Blue Guide)Footnote 65 explain, first, that two or more EU legislative acts can cover the same product, hazard, or impact. Second, they provide that the issue of overlap might be resolved by giving preference to the more specific law. This, the EC explains, “usually requires a risk analysis of the product, or sometimes an analysis of the intended purpose of the product, which then determines the applicable legislation.”Footnote 66 In other words, except for cases where the applicability of one law has obvious priority over the other, a medical device’s manufacturer is left with a choice of the applicable legislation. On the one hand, this approach could imply a significant burden for virtuous manufacturers in justifying the applicable law. On the other hand, such regulatory uncertainty could lead less virtuous manufacturers to exploit the “functional overlaps” of the two regulations and to “choose only” compliance with the RED. This could be particularly significant for low-risk medical devices, for which the decision on the intended medical purpose – and thus the law’s scrutiny – is left to the responsibility of the manufacturer.Footnote 67

The MDCG Guidance does not provide any help in this regard. For no apparent reason, it overlooks the applicability of the RED, which should be present in the Guidance. For example, the MDCG could provide examples of cases to which the RED applies, together with its opinion on the relevance of the RED’s cybersecurity-related requirements. This solution would help resolve regulatory uncertainty and assist manufacturers in their decisions concerning the applicability of requirements stemming from different pieces of legislation.

4.4.4 Duplication: The Notification of Medical Devices Security Incidents

Incident notification is an evident example of how specialization and decentralization have provoked the proliferation of administrative authorities with supervisory tasks. This is particularly true for medical devices, where three different legal frameworks for incident notification apply: the MDR (on serious incident notification),Footnote 68 the GDPR (on data breach notification),Footnote 69 and the NISD (on security incident notification obligations).Footnote 70 Each piece of legislation requires notification to different authorities: the MDR to competent authorities, the GDPR to supervisory authorities, and the NISD to national authorities or Computer Security Incident Response Teams (CSIRTs) (depending on the incident reporting model chosen by the Member State).Footnote 71 The criteria determining whether an incident must be notified to an authority differ in scope and in the objectives pursued by the different pieces of legislation. Nonetheless, it could happen in practice that a security incident concerning a medical device must be notified at the same time to the MDR, NISD, and GDPR competent and/or supervisory authorities.Footnote 72

In this case, notification of a security incident implies administrative oversight by three (or more) different authorities. Such a circumstance could cause duplication of tasks and costly compliance procedures for manufacturers and health care stakeholders in general.Footnote 73 Some stakeholders have already pointed out that “increasing numbers of organizations … need to be informed about a single security incident,” and “[i]n some examples, multiple competent authorities in a single country.”Footnote 74

A possible approach that could simplify the whole process would be to “adopt a more centralized approach to avoid duplication and confusion.”Footnote 75 A further step could be taken by enhancing cooperation mechanisms between these authorities and harmonizing security incident notification procedures at a vertical level across the Member States, as well as at a horizontal level by considering different policy fields and their regulatory objectives.

4.5 Conclusions and Recommendations

An adequate level of cybersecurity and resilience of medical devices is crucial for maintaining the daily provision of health care services. Above all, it is pivotal for mitigating risks to patients’ health and safety. On the one hand, the ongoing debate on the topic in the United States and, more recently, in the European Union shows an increasing level of awareness amongst regulators, manufacturers, health care professionals, and other involved stakeholders. On the other hand, the research presented in this chapter shows that the existing EU legal framework dealing with medical devices’ cybersecurity poses significant regulatory challenges. As a step toward mitigating these challenges, the EU regulator might consider the following recommendations:

  1. Establish a more robust connection between the MDCG Guidance and EU cybersecurity (hard) laws, especially the CSA and its definitions of cybersecurity, security-by-design, and security-by-default. Ensuring consistent use of terminology across different pieces of legislation (binding and non-binding) would also help manufacturers meet the requirements by bringing more clarity to the interpretation of the MDR’s cybersecurity-related provisions.

  2. Clarify the meaning and implications of “joint responsibility” in its interplay with other applicable laws (in particular the NISD, GDPR, and CSA). Further explanation of how exactly the responsibility stemming from one piece of legislation applicable to a specific stakeholder is influenced by, or conflicts with, the responsibility of another stakeholder (stemming from the same or a different piece of legislation) would be a meaningful tool to guide manufacturers in complying with all the relevant laws.

  3. Clarify the scope of application of the CSA’s certification mechanisms and the MDR security requirements. In particular, the EU regulator should explain how the MDR’s cybersecurity-related requirements apply to an ICT product that also falls under the definition of a medical device, and what type of certification schemes would be relevant.

  4. Provide guidance on the application of the RED and its interaction with the MDR and other laws applicable to the cybersecurity of medical devices.

  5. Ensure cooperation between competent national authorities (e.g., for incident notifications) in order to achieve timely compliance with the requirements and to avoid duplication of compliance efforts.

5 The mHealth Power Paradox: Improving Data Protection in Health Apps through Self-Regulation in the European Union

Hannah van Kolfschooten
5.1 Introduction: mHealth Apps: Promise or Threat?

An increasing number of European Union (EU) citizens use mobile apps to monitor their own fitness, lifestyle, or general health to take control over their health outside of a clinical setting.Footnote 1 This growing trend is reflected in the content of mobile app stores: self-monitoring mobile health (mHealth) apps such as running trackers and medication reminders are omnipresent. While mHealth apps are said to hold great potential for empowering individuals, the apps also constitute threats to users’ fundamental rights in the European Union.Footnote 2 The main risk is posed by the extensive processing and sharing of health data with third parties by mHealth apps. Users have limited awareness of, and control over, who has access to their health data.Footnote 3 This leads to a paradox: users turn to mHealth to increase self-empowerment, but at the same time surrender power due to this lack of data control.Footnote 4

These risks are further compounded by the lack of effective EU regulation. The EU legal framework on health and the protection of patients’ rights does not apply to self-monitoring mHealth app users.Footnote 5 Furthermore, while the EU’s General Data Protection Regulation (GDPR) provides a solid legal framework for the protection of health data, in practice many mHealth apps do not comply with its provisions.Footnote 6 When traditional legislative regulation does not lead to the intended effect, complementary alternative forms of regulation may be the solution.Footnote 7 In the context of health data protection in mHealth apps, mobile app distribution platforms (app stores) may be well positioned to improve health data protection by means of self-regulation. App stores in the European Union already occupy an important place in this regard by offering top-down regulation of third-party mHealth apps distributed on their platforms by means of app review procedures. App stores require app developers to comply with certain rules as part of a preapproval process and remove noncompliant apps. This “gatekeeping function” empowers app stores to influence app developers’ conduct: a form of industry self-regulation.Footnote 8 Starting from this premise, the purpose of this chapter is to evaluate whether and to what extent self-regulation by app stores may contribute to the level of health data protection in the European Union.

The chapter is structured as follows. First, it outlines health data protection issues concerning mHealth apps (Section 5.2). Next, it describes the EU legal framework governing mHealth apps, focusing on the GDPR (Section 5.3). Subsequently, it discusses the benefits and risks of industry self-regulation as an alternative means to protect data protection rights in light of current mHealth regulation practices by Apple’s App Store and Google’s Google Play (Section 5.4). Finally, this chapter proposes several improvements to self-regulation in this field (Section 5.5), which will provide the basis for conclusions (Section 5.6).

5.2 Health Privacy Issues in Self-Monitoring mHealth Apps

Popular examples of mHealth apps include calorie counters, apps to monitor menstruation cycles, and running trackers. These types of apps continuously monitor users’ behavior over an extended period of time. While the focus of mHealth apps ranges from health to fitness and lifestyle, all of them collect large amounts of health-related data, such as biometric data, data concerning vital body functions, and health indicators. Most of these data qualify as “data concerning health” within the meaning of the GDPR.Footnote 9 Health data should be understood in a broad manner.Footnote 10 The GDPR’s definition of health data implies that information about users’ weight, blood pressure, and tobacco and alcohol consumption is considered health data because this information is scientifically linked to health or disease risks.Footnote 11 Furthermore, certain types of information may not be health data as such, but may transform into health data when monitoring takes place over a longer period of time (e.g., average steps per month), or when the data is combined with other data sources (e.g., daily calorie intake and a social media profile).Footnote 12

The risk of a violation of users’ fundamental rights is high, since misuse of health data may be irreversible and have long-term effects on data subjects’ lives and social environments.Footnote 13 Several studies show that the extensive processing of health data by mHealth apps poses numerous threats to privacy.Footnote 14 This is mainly because health data is a valuable commodity: big data companies are increasingly interested in health data, which is scarce because of its expensive collection process.Footnote 15 mHealth apps may therefore encourage users to provide more health data in order to increase profits. Passively collected data, such as calculated overviews of average steps, are regularly collected beyond users’ control.Footnote 16 Moreover, mHealth apps often use standard Terms of Service, setting the rules on a “take it or leave it” basis.Footnote 17 Consequently, users are often unaware of the exact type and volume of data collected.Footnote 18

Additional concerns arise with regard to the user’s control over access to the collected health data. Most apps provide for the possibility to disclose information to an “undefined (future) audience.”Footnote 19 For example, many apps share health data among unspecified users to provide comparisons, and app operators may sell health data to third parties, such as advertisers and insurance companies.Footnote 20 Apps often do not provide the option to consent granularly: users have to consent to all recipients and all types of data at once.Footnote 21 In conclusion, the extensive processing and third-party sharing of health data by mHealth apps compromises users’ control and therefore poses threats to users’ privacy rights.
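The contrast between all-or-nothing and granular consent can be made concrete with a small, purely illustrative sketch. The class and names below are hypothetical (not drawn from any app store guideline or GDPR text); the point is simply that consent can be recorded per data category and per recipient, with sharing denied by default:

```python
from dataclasses import dataclass, field

@dataclass
class GranularConsent:
    """Illustrative consent ledger: one entry per (data category, recipient) pair."""
    # Maps (category, recipient) -> True (granted) or False (withdrawn).
    grants: dict = field(default_factory=dict)

    def give(self, category: str, recipient: str) -> None:
        self.grants[(category, recipient)] = True

    def withdraw(self, category: str, recipient: str) -> None:
        self.grants[(category, recipient)] = False

    def may_share(self, category: str, recipient: str) -> bool:
        # Default deny: absence of a recorded grant means no sharing.
        return self.grants.get((category, recipient), False)

consent = GranularConsent()
consent.give("heart_rate", "research_partner")
# Sharing is permitted only for the exact pair the user consented to:
assert consent.may_share("heart_rate", "research_partner")
assert not consent.may_share("heart_rate", "advertiser")
# Withdrawal affects only that pair, not every recipient at once.
consent.withdraw("heart_rate", "research_partner")
assert not consent.may_share("heart_rate", "research_partner")
```

Under a scheme like this, consenting to a comparison feature would not automatically authorize sales to insurers, which is precisely the bundling problem described above.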

5.3 The Effectiveness of EU Legal Protection of Health Data in mHealth Apps
5.3.1 Inapplicability of the EU Health Framework

In the European Union, health privacy in technology is regulated via multiple legal instruments. At the national level, health privacy is protected through patients’ rights frameworks. One basic right can be identified in all Member States: medical confidentiality. Medical confidentiality entails both the patient’s right to confidentiality of personal data and the duty of health professionals to keep this data confidential.Footnote 22 However, mHealth app users are generally not regarded as patients, either by app developers or in their own experience, since the apps do not serve a medical purpose and health professionals are not involved.Footnote 23 Therefore, users are not protected under the patients’ rights framework.

At the EU level, health technology is mainly regulated through the Medical Devices Regulation (MDR).Footnote 24 Software, including apps, may also fall under the MDR.Footnote 25 However, in order to qualify as a medical device, the intended purpose of the app needs to fall within one of the medical purpose categories stipulated by the MDR.Footnote 26 As most self-monitoring mHealth apps (monitoring fitness, general health, or wellbeing) are not intended for medical purposes but instead focus on general health, they usually do not qualify as medical devices.Footnote 27 The MDR specifically excludes software intended for general purposes and for lifestyle and wellbeing purposes.Footnote 28 However, when apps do have an intended medical purpose, for example, self-monitoring apps prescribed by a physician, the MDR may apply. In any case, the MDR protects health privacy primarily by reference to the GDPR.Footnote 29

5.3.2 The GDPR Protects Health Data in Theory

The main instrument for health privacy protection in the European Union is the GDPR. The GDPR provides individuals with several rights concerning personal data processing.Footnote 30 The GDPR applies to mHealth apps available in the European Union.Footnote 31 The basic premise of the GDPR is that every processing of personal data must be underpinned by a legal basis.Footnote 32 Moreover, it imposes duties on data processors and controllers and confers rights on data subjects in order to increase control.Footnote 33 Data subjects’ rights include the right to information,Footnote 34 the right to access,Footnote 35 and the right to withdraw consent.Footnote 36 Furthermore, the GDPR provides for a special data protection regime for health data, which stipulates a general prohibition on the processing of health data but provides for limited derogations.Footnote 37 However, these derogations are arguably inapplicable to mHealth apps, because app developers do not process health data in the public interestFootnote 38 and are not bound by professional secrecy.Footnote 39 Therefore, typically, health data can only be processed in mHealth apps when users provide their explicit consent.Footnote 40 This implies that the data subject must give an “express statement of consent.”Footnote 41 The GDPR’s extensive protection of data rights in combination with the strict health data regime gives it the potential to sufficiently protect mHealth users’ health data.

5.3.3 But the GDPR Does Not Effectively Protect Health Data in Practice

However, several empirical studies show that many mHealth apps do not comply with relevant GDPR provisions on health data.Footnote 42 For example, a study of twenty mHealth apps available in the European Union found that the majority do not comply with provisions on user consent: 55 percent of the analyzed apps provide information about the app provider’s privacy policy before registration, only 5 percent ask for consent every time the user shares additional personal information, none of the apps comply with the requirement of “explicit” consent expressed through specific questions or an online form, and only 35 percent offer the possibility to withdraw consent and thereby delete health data.Footnote 43 Another analysis of the privacy policies of thirty-one EU mHealth apps shows that none complied with the right to information: only 42 percent mentioned the right to object and 58 percent the rights to rectification and access.Footnote 44 A different study of twenty-four mHealth apps shows that 79 percent send users’ health data to third parties in a nontransparent manner.Footnote 45

Thus, in practice, many mHealth apps do not seem to comply with the GDPR. This can be explained by the fact that apps are often developed by individuals located all over the world with little understanding of applicable data protection legislation.Footnote 46 Furthermore, given the great number of available apps, regulatory oversight is hampered by insufficient resources.Footnote 47 The majority of Member States do not have an entity responsible for the regulatory oversight of mHealth apps.Footnote 48 Awareness of this lack of oversight may also result in lower compliance. In sum, the GDPR offers a relevant and sufficient legal framework for the protection of health data, but lack of compliance and enforcement makes the GDPR a practically ineffective instrument for protecting mHealth users. Therefore, as long as compliance is not strengthened, traditional legislative regulation does not suffice.

5.4 Self-Regulation by App Stores as a Solution to Improve Health Data Protection

When traditional (legislative) regulation does not lead to the intended effect, complementary alternative forms of regulation, such as self-regulation, may be the solution.Footnote 49 While the important role of app stores in securing GDPR compliance has been recognized by the European Union on several occasions,Footnote 50 and the role of digital platforms in protecting fundamental rights online is a popular topic in legal scholarship, the discussion seems to focus mainly on social media platforms and does not elaborate on app stores.Footnote 51 However, app stores may be well positioned to improve health data protection by means of self-regulation.

5.4.1 Self-Regulation in Data Protection

Industry self-regulation can be defined as “a regulatory process whereby an industry-level, as opposed to a governmental- or firm-level, organisation … sets and enforces rules and standards relating to the conduct of firms in the industry.”Footnote 52 Often-mentioned benefits of self-regulation are flexibility in adapting rules to technological changes, greater quality of rules, and more commitment to the rules.Footnote 53 However, self-regulation also has its limitations, specifically with regard to fundamental rights protection. Self-regulation instruments often lack effective enforcement and monitoring mechanisms. Furthermore, in some cases, self-regulation instruments are not consistent with other existing regulation, which makes the overall regulatory system increasingly complex. Other challenges include risks of favoritism and lack of accountability.Footnote 54

In the context of data protection, self-regulation by the industry is becoming more common. Companies often choose to complement existing legislation with self-regulatory instruments for reasons of protecting consumer interests, increasing public trust and reputation, and combatting negative public opinions.Footnote 55 Also, self-regulation has been given prominence in the context of data protection at the EU level: the GDPR supports and encourages self-regulation by businesses in the form of codes of conduct and Binding Corporate Rules.Footnote 56 Moreover, the European Commission has (so far unsuccessfully) taken steps to set up a voluntary Privacy Code of Conduct on mHealth apps for app developers.Footnote 57

5.4.2 App Stores as Privacy Regulators

With regard to industry self-regulation of mHealth apps in the European Union, app stores already play an important role through top-down regulation of third-party mHealth apps distributed on their platforms by means of app review procedures.Footnote 58 The app ecosystem works as follows: in order to distribute their apps to the general public, app developers need to publish them in app stores for consumers to download onto their mobile devices. App stores require app developers to comply with certain rules as part of a preapproval process and remove noncompliant apps. This “gatekeeping function” empowers app stores to influence app developers’ conduct.Footnote 59 App stores are therefore the central orchestrators of the app ecosystem and have a large amount of control over consumers.Footnote 60

App stores are not regulated under the GDPR. They do not themselves qualify as data processors or controllers under the GDPR, as they do not exercise any control over users’ personal data but simply provide a platform for app providers to offer their apps.Footnote 61 However, app stores can influence the manner in which third-party apps – which do qualify as data processors – handle data protection.Footnote 62 Moreover, they are encouraged by the GDPR to fulfil this role.Footnote 63 In this regard, app stores conduct a form of industry self-regulation.Footnote 64 While app stores impose these rules on third-party apps voluntarily (albeit encouraged by the GDPR), self-regulation is not voluntary from the app developers’ point of view. To examine app stores’ behavior toward the privacy of mHealth apps, and to assess the effectiveness of these existing practices for health data protection in mHealth apps, this chapter performs a case-study analysis of the Apple App Store and Google Play, today’s leading app stores.Footnote 65

5.4.3 Case Studies
5.4.3.1 Apple App Store

In order to submit apps to the Apple App Store, app developers must register for the Apple Developer Program, governed by the Apple Developer Program License Agreement.Footnote 66 Furthermore, the Apple App Store reviews all submitted apps and app updates according to the App Store Review Guidelines.Footnote 67 As shown in Table 5.1, these Guidelines contain specific rules on mHealth apps and state that these apps may be reviewed with greater scrutiny.Footnote 68 The Guidelines also contain general provisions on the processing of personal data and privacy. First, apps must include a privacy policy explaining how users can exercise their rights regarding data retention and deletion and how they can withdraw consent.Footnote 69 Second, data collection must be based on user consent, and users must be provided with an easily accessible and understandable option to withdraw consent.Footnote 70 Third, apps should minimize data collection.Footnote 71 With regard to the sharing of data with third parties, user consent is required.Footnote 72 Furthermore, apps should not attempt to build a user profile on the basis of collected data.Footnote 73 The Apple Developer Program License Agreement also states that app developers must take user privacy into account and comply with privacy legislation.Footnote 74

Table 5.1 Health data protection in app store policies

Source: author’s analysis (2020)

Furthermore, as can be seen in Table 5.1, the Guidelines contain explicit rules on health data processed by mHealth apps.Footnote 75 First, apps may not use or disclose collected health data to third parties for the purpose of advertising, marketing, or other data-mining purposes.Footnote 76 In addition, apps may not use health data for targeted or behavioral advertising.Footnote 77 However, they may use or disclose health data for the purposes of improving health management and health research, but only with user permission.Footnote 78 Second, app developers may not write inaccurate data into mHealth apps.Footnote 79 Third, mHealth apps may not store health information in iCloud.Footnote 80

5.4.3.2 Google Play

Google Play’s review criteria are outlined in the Developer Distribution Agreement and the Developer Program Policies.Footnote 81 The Agreement functions as a legally binding contract between the app developer and Google.Footnote 82 With regard to the processing of personal data, the Agreement states that apps should comply with applicable data protection laws.Footnote 83 More specifically, apps must inform users of what personal data is processed, provide a privacy notice, and offer adequate data protection. Furthermore, apps may only use personal data for the purposes the user has consented to.Footnote 84 As shown in Table 5.1, the Agreement does not specifically mention mHealth apps or health data.

The Developer Program Policies provide more guidance on the processing of personal (health) data. With regard to the processing of personal data, the Policies state that apps intended to abuse or misuse personal data are strictly prohibited.Footnote 85 Furthermore, apps must be transparent about the collection, use, and sharing of personal data.Footnote 86 As to sensitive personal data, which likely also includes health data, the Policies state that collection and use should be limited to purposes directly related to the functionality of the app. Furthermore, an accessible privacy policy must be posted within the app itself; it must also disclose the types of parties the sensitive data is shared with.Footnote 87 Moreover, the in-app disclosure must contain a request for the user’s consent prior to data processing, requiring affirmative user action. These permission requests must clearly state the purposes of the data processing or transfers. Furthermore, personal data may only be used for purposes the user has consented to.Footnote 88 The Policies do not contain explicit provisions on mHealth apps, except for a prohibition on false or misleading health claims.Footnote 89

5.4.3.3 Case Study Analysis

The above examination of the app stores’ guidelines shows that app stores are indeed concerned with privacy issues. However, it is questionable whether this leads to a higher level of protection of mHealth app users’ health privacy. Both app stores’ guidelines state that apps must comply with privacy legislation and integrate a privacy policy. However, the level of detail of the respective privacy provisions differs significantly. While the Apple App Store specifically recalls most of the GDPR’s data protection principles and data subjects’ rights, Google Play’s privacy guidelines are formulated in somewhat vague terms and do not mention data subjects’ rights. Google Play’s guidelines therefore do not offer app developers the guidance they need on how to protect personal data, specifically with regard to data subjects’ rights. This entails a strong risk that users’ rights will simply end up in the fine print of the app’s privacy policy and will not lead to better privacy protection in practice.

Furthermore, while the Apple App Store has specific guidelines on health data processing, Google Play’s Policies mention only “sensitive personal data.” This lack of specific regulation of health data does not reflect the risky nature of this type of data and therefore does not raise awareness of the need for protection. Most notably, both sets of guidelines lack a provision on the “explicit consent” for health data processing that the GDPR requires of app developers. While both contain provisions on user consent, no distinction is made between “regular” and “explicit” consent, and thus no clarification on how to obtain explicit consent is offered. This puts privacy at risk, as control over health data is not sufficiently protected.

Both sets of guidelines state that noncompliant apps will be removed, but do not elaborate on the structure of the monitoring process. Actual enforcement of the guidelines therefore risks uncertainty and inconsistency, which does not ensure compliance with the GDPR. After all, app stores likely face the same capacity problems as data protection authorities, and it could take months before noncompliant apps are taken down. The differences between the respective guidelines also raise compliance issues, as they create a risk of unequal standards of protection for iOS and Android users.

Taken together, it can be concluded that current self-regulation practices, Google Play’s especially, do not live up to their potential and do not adequately ensure mHealth app users’ control over their health data. However, given the central position of app stores, self-regulation by app stores may still contribute to a higher level of health data protection if certain amendments are made to the content and form of their policies. Recommendations on how to improve the policies are set out in the next section.Footnote 90

5.5 Recommendations to Improve Current App Store Self-Regulation Practices

App stores occupy a powerful position in the mHealth app sector. By setting the requirements for mHealth apps to be listed on and removed from their platforms, they hold the most promising means of improving the level of users’ health data protection. Their current self-regulation practices could be improved on multiple fronts. First, app stores could provide app developers with clearer guidelines on data processing obligations and data subjects’ rights. This should include stating all applicable obligations and rights under the GDPR and providing practical guidance on how to implement them adequately in apps. For example, app stores could issue technical guidelines on how to include consent withdrawal mechanisms in apps. Translating privacy rights into technical measures will enhance adequate understanding and implementation by app developers.Footnote 91 Furthermore, app stores could make data subject rights and principles part of their contractual agreements with app developers to further strengthen compliance.Footnote 92
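What “translating privacy rights to technical measures” might look like can be illustrated with a minimal, hypothetical sketch. The class below is the author of this edit’s illustration, not an actual app store requirement or GDPR-mandated design: it shows a consent record under which withdrawal both stops further processing and erases the data collected under that consent, mirroring the GDPR’s right to withdraw consent and right to erasure.

```python
import datetime

class ConsentRecord:
    """Illustrative consent record for one processing purpose."""

    def __init__(self, purpose: str):
        self.purpose = purpose
        self.given_at = datetime.datetime.now(datetime.timezone.utc)
        self.withdrawn_at = None
        self.collected = []  # data points gathered under this consent

    def record(self, data_point: str) -> None:
        # Processing must stop once consent has been withdrawn.
        if self.withdrawn_at is not None:
            raise PermissionError("consent withdrawn: processing must stop")
        self.collected.append(data_point)

    def withdraw(self) -> int:
        """Withdraw consent and erase the associated data; returns count erased."""
        self.withdrawn_at = datetime.datetime.now(datetime.timezone.utc)
        erased = len(self.collected)
        self.collected.clear()
        return erased

# Usage sketch:
consent = ConsentRecord("sleep_tracking")
consent.record("sleep: 7h")
consent.record("sleep: 6h")
consent.withdraw()           # erases both stored data points
assert consent.collected == []
```

A technical guideline of this kind would make withdrawal verifiable in app review: a reviewer could check that withdrawal is as easy to trigger as the initial grant and that it actually deletes the data, rather than merely hiding a toggle in the settings.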

Second, specific provisions on health data protection should be included in order to highlight its importance and the increased privacy risks. These provisions should at least include the requirement to obtain explicit consent for health data processing and provide technical guidance on how to implement this.Footnote 93 There should also be specific provisions limiting the sharing of health data with third parties and its possible commercial use. Additionally, app stores can further strengthen users’ control by requiring apps to include user reporting tools for data protection infringements, or by providing these tools in the app store itself.Footnote 94 Furthermore, app stores should commit to raising awareness of the risks of health data processing. For instance, a standard text on the risks could be provided in the guidelines, which app developers would be required to include in their privacy policies. App stores could educate users about the risks by adding “health data processing warnings” to the download environment.

Moreover, app stores could strengthen user protection if they were to harmonize their policies and engage in a shared EU Code of Conduct under the GDPR.Footnote 95 GDPR codes are voluntary tools that set out specific data protection rules. They provide a detailed rulebook for controllers and processors in a specific sector. Bodies representing a sector – such as app stores – can create codes to aid GDPR compliance.Footnote 96 Codes have to be approved by the European Data Protection Board (EDPB), and compliance is monitored by an accredited, independent supervisor.Footnote 97 Consequently, present self-regulation would turn into coregulation, and current guidelines would be replaced or supplemented by this GDPR code. App stores could make app developers’ adherence to the code a requirement for offering apps on their platforms. This would have more effect than current self-regulation initiatives, as preapproval of the code by the EDPB would give it greater authority and the monitoring mechanism would lead to better compliance. Moreover, the unequal level of protection and the risks of legal uncertainty and inconsistency would be minimized.Footnote 98 For mHealth app users’ health privacy, a GDPR code would provide more transparency regarding apps’ approaches to data processing.Footnote 99 For example, the code would have to specify all applicable rights related to control over health data, explicit consent included.Footnote 100

The preceding sections support the conclusion that app stores could positively impact GDPR compliance, and thus strengthen mHealth users’ health privacy, by engaging in a GDPR code with specific health data safeguards. While there is no guarantee that app stores will make these changes, there are compelling reasons for them to do so. Foremost, the increased legal certainty offers app stores a competitive advantage: it reduces the complexity of app developers’ entrepreneurial process, which may positively impact app stores’ businesses.Footnote 101 For app developers, a code would be beneficial because it could be used to demonstrate compliance with the GDPR.Footnote 102 Furthermore, app stores would benefit from good privacy practices by third-party apps, as these would likely also enhance their own trustworthiness. In this regard, privacy can be seen as a positive marketing statement.Footnote 103 Moreover, both Apple and Google were stakeholders in the European Commission’s attempt at a voluntary mHealth Privacy Code of Conduct, which shows their interest in such an initiative.

5.6 Conclusion: Improved App Store Self-Regulation Strengthens Health Privacy

Paradoxically, the wish to achieve self-empowerment by using mHealth apps leads users to surrender power due to a lack of control over their health data. While the GDPR offers a solid solution for the protection of mHealth app users’ health data in theory, it lacks practical effectiveness. Self-regulation of third-party apps by app stores, by means of review procedures, could fill this regulatory gap and thereby contribute to the level of health data protection in the European Union. However, the case studies performed here show that current self-regulation does not fulfil this promise. Nonetheless, given the platforms’ central and powerful position in the sector, complementary regulation of mHealth apps by app stores may still be the most promising means of improving the level of health data protection of mHealth app users. This conclusion sheds light on the heavily debated role of the European Union in regulating technological phenomena and related fundamental rights risks: in some cases, the sector itself is in a better position to regulate these risks and enforce legal compliance than independent supervisory authorities. This finding is in line with the European Union’s growing tendency to promote and support self-regulation structures to supplement EU legislation.

Despite the important role of app stores in achieving this, the ultimate responsibility for safeguarding users’ health privacy lies with the mHealth app developers and providers that process health data. mHealth apps should provide users with adequate means to exercise their privacy rights by ensuring concrete and effective opportunities to control decisions regarding health data processing. In this regard, effective possibilities for the actual enforcement of self-regulation standards are of key importance. While app store self-regulation may steer mHealth app developers in the right direction by translating the GDPR’s privacy provisions into technical preapproval requirements, compliance with the relevant privacy provisions is also aided by increased awareness among mHealth users, developers, and health data brokers of the risks mHealth apps entail for individual fundamental rights. The European Union could play a central role in accomplishing this, helping mHealth users achieve the desired self-empowerment by bringing the GDPR to life in mHealth apps.

6 The Interaction of the Medical Device Regulation and the GDPR: Do European Rules on Privacy and Scientific Research Impair the Safety and Performance of AI Medical Devices?

Janos Meszaros, Marcelo Corrales Compagnucci, and Timo Minssen

Stipulations on deidentification and scientific research in the European General Data Protection Regulation (GDPR) help research organizations to use personal data with fewer restrictions than apply to data collected for other purposes. Under these exemptions, organizations may process specific data for a secondary purpose without consent. However, the definition and legal requirements of scientific research differ among EU Member States. Since the new EU Medical Device Regulations 2017/745 and 2017/746 require compliance with the GDPR, the failure to come to grips with these concepts creates misunderstandings and legal issues. We argue that this might result in obstacles to the use and review of input data for medical devices. This could lead not only to forum shopping but also to safety risks. The authors discuss to what extent scientific research should benefit from the research exemption and deidentification rules under the GDPR. Furthermore, this chapter analyzes recently released guidelines and discussion papers to examine how input data is reviewed by EU regulators. Ultimately, we call for more harmonized rules to balance individuals’ rights and the safety of medical devices.

6.1 Introduction

Artificial intelligence (AI) and big data have a significant impact on society,Footnote 1 as many aspects of our lives have become subject to data processing.Footnote 2 This “datafication” has also led to a rapid transformation in the delivery of health care services.Footnote 3 The new generation of medical devices represents one example of technological advance that could substantially protect and improve public health.Footnote 4 Many of these rely heavily on data and AI algorithms to prevent, diagnose, treat, and monitor sources of epidemic diseases.Footnote 5

Though opening a world of new opportunities, rapid advances in AI medical devices have resulted in a number of highly complex dilemmas, tradeoffs, and uncertainties regarding the applicability and appropriateness of the current legal framework. Many of these legal and ethical issues relate to privacy and data protection. The European General Data Protection Regulation (GDPR)Footnote 6 is of particular importance in that respect. Focusing on the GDPR, the following chapter discusses the risk that AI medical device systems may run afoul of the requirement for data subjects’ sufficiently informed consent, since they collect, process, and transfer sensitive personal data in unexpected ways without giving adequate prior notice, choices of participation, and other options.Footnote 7 At the same time, such data can be important to ensure the safety and effectiveness of such devices. Considering the consequential need for reasonably sound tradeoffs, we argue that current legal frameworks and definitions need to be harmonized and refined. We refer to the typical lifecycle in the collection and processing of health data via medical devices (Section 6.2) to highlight the challenges and legal risks at each phase. Section 6.3 examines the new EU regulations for Medical Devices (MDR)Footnote 8 and In Vitro Diagnostic Medical Devices (IVDR)Footnote 9 with a special focus on the MDR. In this section, we seek in particular to identify and iron out the missing links between the GDPR and the MDR. Section 6.4 discusses our main findings and summarizes recommendations. This provides the basis for our conclusions in Section 6.5.

6.2 Collection and Processing of Health Data Under the GDPR

Modern health care systems and medical devices collect and process vast amounts of data, which may enhance an individual’s health care experience directly and indirectly through scientific research and policy planning. Nevertheless, obtaining informed consentFootnote 10 or authorization from a large number of data subjects can be challenging and result in disproportionate cost and effort.Footnote 11 For instance, the Italian government provided the health dataFootnote 12 of 61 million Italian citizens to IBM Watson Health, without obtaining patient consent.Footnote 13 The agreement between the Italian government and IBM underlined that IBM alone would retain rights to the results of the research, which it could then license to third parties.Footnote 14 Instead of acquiring consent for the secondary processing, the most realistic option for privacy protection is to offer citizens an opt-out, such as the national data opt-out systemFootnote 15 in England.Footnote 16

In general, the processing of sensitive data (e.g., health data) is prohibited under the GDPR. This can be a crucial issue in the case of AI-augmented medical devices, since the sensitivity and specificity of an algorithm are only as good as the data it is trained on. For instance, if an algorithm is only trained on genetic material derived from European Caucasians, it may not provide accurate information that can be generalized to individuals of other groups. However, the GDPR enables the processing of sensitive data for public interest, public health, and scientific research purposes, if there are appropriate safeguards for the rights and freedom of individuals. While the GDPR does not fully specify what those safeguards are, it indicates that their purpose is to “ensure that technical and organizational measures are in place in order to ensure respect for the principle of data minimization.”Footnote 17 Such measures may include de-identification methods (for example, anonymization and pseudonymization) provided that the intended use of the data can still be fulfilled. However, differing requirements of national laws toward the application of these exemptions and de-identification methods often hinder the application of AI medical devices at the EU level. In Sections 6.2.1–6.2.3, we consider the most salient problems.

6.2.1 Public Interest and Public Health

Public interest and public health can serve as a legal basis for the secondary use of health data. The GDPR posits several levels of public interest, such as “general” and “important.”Footnote 18 However, the level of public interest in AI medical devices is still not clear and may fall under different categories. This makes it difficult to determine whether personal data may be processed with or without consent to develop and update these devices. Deciding on the level of public interest is as challenging as it is relevant. Medical devices need to be safe and reliable; malfunctions could potentially cost lives. Therefore, the public interest and public health could be linked to the intended use and classification of these devices.

6.2.2 Scientific Research

There are situations in which data was not initially collected for research or health care purposes, for instance when a smartwatch measures a wearer’s heart rate. Such data can later be useful for research purposes, to find unseen correlations. The collected data provides valuable information for future research, but reaching users to obtain their approval for the secondary purpose would pose a significant burden, if it is possible at all. This can lead to controversial scenarios, such as the Google DeepMindFootnote 19 case in the United Kingdom, where the Royal Free Hospital under the National Health Service (NHS)Footnote 20 provided the personal data of 1.6 million patients to Google DeepMind without their consent. Google’s AI medical device was an app called “Streams,” used as part of a trial to test, diagnose, and detect acute kidney injury. Public concerns and corroborative research suggested that Google DeepMind failed to comply with the provisions enshrined in data protection law.Footnote 21

The GDPR aims to ease the restrictions on the processing of sensitive data by explicitly allowing processing for research purposes. To use this legal basis, data controllers need to apply appropriate safeguards (e.g., pseudonymization and anonymization) under EU and Member State laws.Footnote 22 The GDPR defines scientific research in a broad manner, which includes “technological development and demonstration, fundamental research, applied research and privately funded research” conducted by both public and private entities.Footnote 23 However, the definition of research is found in the RecitalsFootnote 24 of the GDPR, which are not legally binding by themselves. Several EU Member States, such as Germany and Finland, do not define “scientific research” in their laws. Instead, these States define the limits and requirements of research through the regulation of their authorities responsible for this field.Footnote 25 Other Member States, such as Austria, regulate scientific research by referring to the OECD’s Frascati Manual.Footnote 26,Footnote 27 The OECD Frascati Manual includes definitions of basic concepts, data collection guidelines, and classifications for compiling research and development statistics. However, the Frascati Manual never defines “scientific research” as such, even though it uses the term in a number of instances throughout the text. Furthermore, the application of the research exemption can lead to different interpretations. For instance, in Ireland, the application of the research exemption by the Health Research Consent Declaration Committee is significantly stricter than that by the Medical Research Council in the United Kingdom.Footnote 28 Hence, the Member States need to restrict the scope of scientific research, since overly broad interpretations might undermine the goals of the GDPR.
These diverse rules on data collection pose hurdles for improving the safety of medical devices, since processing new data for updating is crucial, and the different requirements and barriers in Member States undermine the collection of reliable and diverse datasets. Germany’s new Digital Healthcare ActFootnote 29 is a good example of promoting the use of low-risk medical devices and ensuring better usability of health data for research purposes. The Act entitles persons covered by statutory health insurance to benefit from digital health applications and contains provisions to make demographic data from health insurers more usable for research purposes.Footnote 30

6.2.3 Deidentification

Deidentification methods represent a broad spectrum of tools and techniques to protect the data subject’s privacy. In general, the strength of the deidentification scales with a loss in data utility and value.Footnote 31 The two ends of this spectrum are clear: personal data without any deidentification, which can directly identify the data subject, and anonymous data, which cannot identify individuals.Footnote 32 Between these two ends lies a wide range of methods and techniques that need further clarification. The GDPR clarifies that pseudonymized data is a type of personal data.Footnote 33 However, the definition of pseudonymization is too broad to establish what is required to reach an adequate level of deidentification. Recognizing the broad spectrum of deidentification techniques and acknowledging them as an “appropriate safeguard” enables the development of regulatory guidance that encourages the maximum use of deidentification, and it may open the door for the safe secondary use of data in scientific research.
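
The difference between the two ends of this spectrum can be sketched in a few lines of Python. The record fields and the keyed-hash approach are illustrative assumptions rather than requirements drawn from the GDPR: a keyed hash (HMAC) replaces the direct identifier, but anyone holding the key can re-link the record, which is why pseudonymized data remains personal data under the GDPR.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it would be held separately
# by the data controller under strict access controls.
SECRET_KEY = b"held-separately-by-the-controller"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is reproducible for anyone holding SECRET_KEY, so the
    result can be re-linked to the individual and therefore remains
    personal data under the GDPR."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "NHS-1234567", "heart_rate": 72}

# Pseudonymized: the identifier is replaced but re-identifiable via the key.
pseudonymous = {**record, "patient_id": pseudonymize(record["patient_id"])}

# Naively "anonymized": the identifier is removed entirely; whether this
# truly prevents re-identification depends on the remaining attributes.
anonymous = {k: v for k, v in record.items() if k != "patient_id"}
```

The sketch also illustrates why the middle of the spectrum needs clarification: dropping the identifier column, as in the last line, may still leave quasi-identifiers that allow re-identification when combined with other datasets.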

Public interest, public health, and scientific research represent a broad exemption from the GDPR’s prohibition of the processing of sensitive data. These legal bases also require safeguards, such as deidentification techniques. However, their application across the Member States is not uniform. This may trigger unnecessary legal risks in the development and deployment of AI medical devices and takes us directly to what has been called the “update problem”:Footnote 34 how can regulators, as well as reliable developers and producers, determine when an updated AI behaves differently enough that a new assessment is needed? It is challenging to ensure that AI medical devices conform to all the rules and technical requirements without posing risks beyond those assessed during the premarket review.Footnote 35 Considering that updating medical devices inherently risks introducing new hazards without fresh approval, it is crucial to validate the data they are learning from. Therefore, regulators and product manufacturers need to implement a risk reassessment and incident-report framework, which includes ongoing evaluation and mitigation strategies throughout the whole lifecycle of AI medical devices, in particular during the service deployment and operation phases. For this, harmonized rules on the collection and processing of health data, as well as review systems and processes of medical devices, would be necessary in the EU Member States.

6.3 The EU Medical Device Regulation

To keep up with advances in science and technology, two new EU regulations on medical devices and in vitro diagnostic medical devices entered into force on May 25, 2017.Footnote 36 They will progressively replace the existing directivesFootnote 37 after a staggered transitional period.Footnote 38

The MDR clarifies that data protection rules need to be applied when medical devices process personal data.Footnote 39 Therefore, if a medical device regulated by the MDR collects personal data, it also falls under the GDPR. The MDR differentiates among three classes of medical devices, depending on their level of risk:

  1. Class I devices, posing low/medium risk (e.g., wheelchairs);

  2. Class IIa and IIb devices, representing medium/high-level risk (e.g., x-ray devices);

  3. Class III, high-risk devices (e.g., pacemakers).

In the case of low-risk (Class I) medical devices, such as a smartwatch, privacy might often prevail over the secondary use of personal data to develop and improve these devices. In the case of high-risk (Class III) devices, the safety of the medical device might outweigh patient privacy. AI medical devices with a medium risk level (Class II), such as medical image processing software, may be considered to have at least a general level of public interest. However, developing high-risk devices does not mean that manufacturers can automatically process health data without consent. Careful consideration is necessary on a case-by-case basis, with strong safeguards, under the oversight of authorities.

Medical devices in the European Union need to undergo a conformity assessment to demonstrate that they meet legal requirements. The conformity assessment usually involves an audit of the manufacturer’s quality system and, depending on the type of device, a review of technical documentation from the manufacturer on the safety and performance of the device.Footnote 40 Manufacturers can place a CE (Conformité Européenne) mark on their medical device after passing the assessment. The EU Member States can designate accredited notified bodies to conduct conformity assessments. A notified body within the European Union is an entity designated by an EU competent authority to assess the conformity of medical devices before they are placed on the market. Companies are free to choose the notified body they engage with.Footnote 41 There are more than fifty EU notified bodies in total that can certify according to the Medical Device Directives. However, not all of these notified bodies can certify all categories of medical device products. When authorities scrutinize an AI/ML medical device during the approval process, the opaque nature of the AI application and its algorithms makes it challenging to know clearly how they were developed and have evolved.Footnote 42 It is not clear how notified bodies can review the input data of AI medical devices. First, reviewing large and complex datasets requires special knowledge and technical expertise, which might be lacking, or not at the same level, within all the notified bodies of the European Union. Second, there are medical devices developed outside of the European Union. Reviewing the datasets used for developing them might trigger data protection and data transfer jurisdictional issues. The datasets might contain sensitive data of individuals from countries outside Europe, making data sharing challenging and posing a hurdle for part of the review process.
For instance, the Health Insurance Portability and Accountability Act (HIPAA) and state regulations in the United States, and Japanese regulations on personal dataFootnote 43 might not allow the sharing of sensitive data with the notified bodies in the EU Member States. Moreover, sharing anonymized data might not be sufficient to review input data thoroughly. Third, there is a great variety of data-processing software and methods among companies operating in different countries, which makes it extremely challenging to review these devices uniformly on the same level.

The European Medicines Agency and several notified bodies are already preparing for the changes brought by AI medical devices. The European Medicines Agency and the Heads of Medicines Agencies (HMA) Big Data Task Force (BDTF)Footnote 44 recently released two reportsFootnote 45 to help European regulators and stakeholders realize the potential of big data in terms of public health and innovation. Since the biggest issues in the European Union currently are the decentralization of health data and regulatory tasks, the reports focus on providing guidance and resources for data quality and discoverability to build up computing and analytical capacity. The most ambitious recommendation of the BDTF is the establishment of an EU platform, the Data Analysis and Real World Interrogation Network (DARWIN), to access and analyze health care data from across the European Union. This platform would create a European network of databases with verified quality and strong data security. It is intended to be used to inform regulatory decision making with robust evidence from health care practice. The reports highlight the following actions for the European Union:

  1. Ensure sufficient expertise and capacities within the European network (in all the notified bodies in the Member States), so that AI medical devices can be assessed appropriately.

  2. Enable regulatory evaluation of clinical data submitted by drug manufacturers for approval where the data has been processed by AI algorithms or where part of the analysis, such as patient selection, involved AI methods.

  3. Enable regulatory use of AI in internal processes at authorities and notified bodies, for instance, applying natural language processing to received texts, or reviewing image data submitted to support a clinical claim from a drug manufacturer.

  4. Approve AI-based health apps in devices intended for clinical decision making.

The reports also clarify that the European Union cannot accept opaque algorithms performing without checks and balances. Algorithm code should be more transparent (feature selection, code, original dataset) and available for targeted review by regulators and notified bodies. The reports state that the outcomes of and changes to algorithm use (safety and efficacy) need to be subject to post-marketing surveillance mechanisms, much as drug safety is monitored after marketing authorization. By way of comparison, the European Union’s approach to the assessment of medical devices is slightly different from the FDA’s in the United States. While the reports suggest that the European Union is still focusing on the transparency of AI applications, the FDA also pays special attention to the excellence and trustworthiness of the companies developing AI medical devices during the precertification process.Footnote 46 Figure 6.1 below shows the flow of health data for developing AI medical devices in the European Union.

Figure 6.1. The processing of health data for developing AI medical devices

6.4 Discussion

The effective collection and processing of relevant health data is the first step to making AI medical devices that work properly. This is particularly relevant during the COVID-19Footnote 47 outbreak, as the foreseeable reuse of health data for scientific purposes leads to a rise in the number of organizations manufacturing AI medical devices.Footnote 48 The US Sentinel system is a great example of monitoring the safety of medical devices and securely sharing and reusing the collected information.Footnote 49 Our analysis suggests, however, that the processing and review of input data for medical devices, as well as the definition of specific data uses, are not fully harmonized in the European Union. This issue stems from the fact that health care systems and scientific research are mainly regulated by the EU Member States, resulting in diverse legal environments and barriers to processing health data. Thus, the GDPR and the Medical Device Regulation have not reached a sufficient level of harmonization in this field. This may result in unnecessary legal risks in the development and deployment of AI medical devices, particularly in the case of the “update problem.”Footnote 50 Therefore, harmonized rules on the collection and processing of health data, as well as review systems and processes of medical devices, would be necessary in the EU Member States.

The “update problem” is still not sufficiently addressed, and little work has thoroughly examined how AI medical devices are developed and built from the perspectives of public interest and data protection law. To build these devices, data-intensive research is necessary. However, at what cost? Strong privacy protection may hinder the development, effectiveness, and precision of AI products and services. Globally, there is a drive to create competitive pharmaceutical and health care industries. As a result, the developers of AI medical devices and services have enjoyed a privileged position, since they have been able to further use health data with fewer restrictions, and sometimes without adequate consent.Footnote 51 On the one hand, this could save lives and minimize treatment costs.Footnote 52 Increased precision due to better and more data might even help to identify, monitor, and correct potential risks of bias in the data. On the other hand, this situation might lead to the further use of sensitive data with less control and increasing risks of privacy breaches.

To address this dilemma and achieve reasonable tradeoffs, we suggest the following measures to advance the assessment of the safety and efficacy of AI medical devices in the European Union. First, we believe that the expected level of public interest in the case of the secondary use of health data for developing AI medical devices must be clarified for different categories of medical devices, considering both the intended and unintended use scenarios.Footnote 53 Second, we propose to regulate the definition and requirements of scientific research at the EU level to harmonize the secondary use of health data. This would be crucial for providing a sufficient amount of quality data for machine learning in the case of AI medical devices. Moreover, collecting personal data and processing it for a purpose in the public interest should not result in a product or service that negatively affects the data subject’s rights. Third, we think that more guidance would be necessary on the safeguards and the expected level of deidentification of health data, without overconfidently relying on them. Fourth, we call upon the EMA and notified bodies to be properly prepared for the review of (large) datasets, since such data are the foundation of AI medical devices.
While opening and assessing opaque algorithms is challenging for regulators, we believe that a reasonable level of transparency should be required to allow for sufficient regulatory review of medical device systems.Footnote 54 This does not necessarily imply that every single computational step must be traceable.Footnote 55 For instance, some algorithms could still be utilized to construct a transparent and trusted AI system “as long as the assumptions and limitations, operational protocols, data properties, and output decisions can be systematically examined and validated.”Footnote 56 Fifth, we recommend harmonizing the conformity assessments of notified bodies to ensure safety, allow for European-wide reporting of unwanted incidents, and avoid forum shopping. Sixth, and finally, we propose to develop special regulation and oversight for AI research to allow for better coordination and compliance assessment in view of the great variety of separate regulations concerning data protection, health care, and medical research.

6.5 Conclusion

Harnessing the full benefits of AI-driven medical devices offers many opportunities, in particular in health crisis situations such as the ongoing COVID-19 pandemic. However, many legal risks and lingering questions remain unsolved. The European Union does not yet have the means to fully exploit the benefits of this data due to heterogeneous health care systems with different content, terminologies, and structures.Footnote 57 In addition, the European Union currently has no pan-European data network and is lagging behind other regions in delivering answers to health care-related regulatory questions.Footnote 58 Although the GDPR and the Medical Device Regulations aim to address some of these challenges by harmonizing the processing of data and the risk assessment of AI medical devices in the European Union, these areas still remain diversified. To enhance the performance and safety of medical devices, it will be important to improve the dialogue between data protection authorities, ethical review boards, notified bodies, and medicines agencies. The recommendations discussed in this chapter attempt to enhance this dialogue for a better understanding and alignment between the medical device sector, regulators, public research programs, and data protection standards.Footnote 59 This could form the basis for a legal debate on the circumstances under which researchers’ access to health data held by private companies can be justified on the basis of public interest and research exemptions.Footnote 60 Considering the increasing importance of public-private partnerships and AI-driven medical devices, proactive initiatives to that effect appear more important than ever.Footnote 61 The ongoing implementation of the EU strategies concerning AI, data, and medical innovation plays an important role in that regard.
This has not only resulted in the evolving formation of the European Health Data Space,Footnote 62 but also in the adoption of a new EU Data Governance ActFootnote 63 and the proposal of an AI Regulation,Footnote 64 which provides for regulatory sandboxes for low-risk devices. It is the hope of the authors that these developments will improve the current situation.

7 AI, Explainability, and Safeguarding Patient Safety in Europe: Toward a Science-Focused Regulatory Model

Barry Solaiman and Mark G. Bloom

7.1 Introduction

This chapter explores the efforts made by regulators in Europe to develop standards concerning the explainability of artificial intelligence (AI) systems used in wearables. Diagnostic health devices such as fitness trackers, smart health watches, ECG and blood pressure monitors, and other biosensors are becoming more user-friendly, computationally powerful, and integrated into society. They are used to track the spread of infectious diseases, monitor health remotely, and predict the onset of illness before symptoms arise. At their foundation are complex neural networks making predictions from a plethora of data. While their use has been growing, the COVID-19 pandemic will likely accelerate that rise as governments grapple with monitoring and containing the spread of infectious diseases. One key challenge for scientists and regulators is to ensure that predictions are understood and explainable to legislators, policymakers, doctors, and patients to ensure informed decision making.

Two arguments are made in this chapter. First, regulators in Europe should develop minimum standards on explainability. Second, those standards should be informed by the computer science underlying the technology, to identify the limitations of explainability. Recently, several reports have been published by the European Commission and the National Health Service (NHS) in the United Kingdom (UK). This chapter examines the operation of AI networks alongside those guidelines, finding that, while they make good progress, they will ultimately be limited by the available technology. Further, despite much being said about the opaqueness of neural networks, human beings have significant oversight over them. The finger of liability will remain pointed toward humans, but the technology should advance to help them decipher networks intelligibly. As computer scientists enhance the technology, lawmakers should set minimum standards that are leveled up progressively as the technology improves.

7.2 Wearables in Health Care

Wearables are devices designed to stay on the body and collect health data such as heart rate, temperature, and oxygenation levels.Footnote 1 Smartwatches, chest belts, clothing, ingestible electronics, and many others are converging with the internet-of-things (IoT) and cloud computing to become powerful diagnostics for more than seventy conditions.Footnote 2 The technology has advanced rapidly, with GPUs, CPUs, and increasing RAM being adopted, opening possibilities for deep learning.Footnote 3 Despite these advances, adoption remains low in the health care setting overall, being in the early stages of the Gartner Hype Cycle.Footnote 4 Nevertheless, the trend is moving toward greater adoption. The COVID-19 pandemic in 2020 may accelerate the development of telemedicine, monitoring patients remotely, predicting disease, and mapping the spread of illnesses.Footnote 5 An example of the technology’s use can be seen in England under an NHS pilot program where patients were fitted with a Wi-Fi-enabled armband. This monitored vital signs remotely, such as respiratory rates, oxygen levels, pulse, blood pressure, and body temperature. AI was able to monitor patients in real-time, leading to a reduction in readmission rates, home visits, and emergency admissions. Algorithms were able to identify warning signs in the data, alerting the patient and caregiver.Footnote 6 This example aligns with a broader trend of adoption.Footnote 7 The largest NHS hospital trusts have signed multiyear deals to increase the number of wearables used for remote digital health assessments and monitoring.Footnote 8 This allows doctors to monitor their patients away from the hospital setting, both before and after medical procedures.

7.3 Human Neural Networks?

Underpinning such technologies is complex computer science. A device can predict illness, but it cannot explain why it made a prediction, which raises several legal issues. A targeted legal strategy cannot be realistically devised without understanding the technology driving it. Lawyers are unlikely to become master coders or algorithm developers, but they can have a reasonable understanding of where most efforts are needed. By examining what drives AI, more technically aware discussions can be generated in the legal sphere.

AI is an umbrella term used for different forms of “machine learning.” This includes “supervised” and “unsupervised” learning, which entails making predictions by analyzing data.Footnote 9 The former involves predefined labels used to assign the data to relevant groups, whereas the latter searches for common features in the data to classify it. A subset of “machine learning” is “deep learning,” which consists of artificial neural networks (ANNs) used for autonomous learning. There are various architectures, but the primary example here is of a deep supervised learning network with labeled data.Footnote 10 Such networks are the most numerous in operation and can illustrate how deep learning works and where the legal issues may arise.

Figure 7.1 depicts a neural network. An ANN begins with an “input layer” on the left.Footnote 11 The example is an image of a cerebellum, which the ANN converts into many “neurons” (represented by the grid of squares). Each neuron is assigned a value, called its “activation,” which for a black-and-white image reflects the brightness of the corresponding pixel. The activation could, for example, be higher for brighter pixels (where the cerebellum is) and lower for darker pixels (outside the cerebellum). Every neuron is represented in the input layer. The example shows four neurons, but in practice the ANN will have as many input neurons as there are pixels in the image.

Figure 7.1: Example of an ANN

The example also shows two hidden layers in the middle, but in practice there will be many more. In reality, the layers are not hidden to the programmer, but their sheer number makes the ANN virtually undecipherable – much like a human brain. The activations in the input layer (the black circles) influence what is activated in the first hidden layer (the light grey circles), which in turn influences further activations. At the end is the output layer with several choices (cerebellum, frontal lobe, or pituitary gland). The ANN gives the highest value to its choice (here, cerebellum, the dark grey circle). Between the neurons are connections called “weights” (represented as lines) whose values are determined by a mathematical function. The weighted sum of the activations in one layer determines which neurons are activated in the next layer. For example, the weighted sum over the input layer has activated the first, third, and sixth neurons in the first hidden layer. Humans can also influence those activations by adding a “bias” to alter the value required for an activation.
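The mechanics just described can be sketched in a few lines of code. This is a minimal illustration only, not the network in Figure 7.1: the input values, weights, and biases below are invented for the example.

```python
# Minimal sketch of one forward step in an ANN: each neuron in the next
# layer sums its weighted inputs, adds a bias, and "activates" via a
# squashing function. All numbers here are invented for illustration.
import numpy as np

def forward_layer(activations, weights, biases):
    """Return the next layer's activations from the previous layer's."""
    z = weights @ activations + biases      # weighted sum plus bias
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid keeps values in (0, 1)

# Four input neurons, e.g., pixel brightness values between 0 and 1.
inputs = np.array([0.9, 0.1, 0.8, 0.2])

# Hypothetical weights and biases for a hidden layer of six neurons.
rng = np.random.default_rng(0)
hidden_weights = rng.normal(size=(6, 4))
hidden_biases = np.zeros(6)

hidden = forward_layer(inputs, hidden_weights, hidden_biases)
print(hidden)  # six activation values, each between 0 and 1
```

Neurons whose weighted sums are large end up with activations near 1 (strongly activated); adding a bias shifts the threshold a neuron must cross before it activates.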

In practice, the number and complexity of the connections create an undecipherable matrix of distinct weights and biases. The choice of output cannot be explained, which is where the term “black box” algorithm arises. Despite this, humans play a central role. They give the network training data consisting of many prelabeled images of cerebellums, pituitary glands, and frontal lobes, and the network is trained on that data. The process of data moving from left to right is called “forward propagation.” The weights between the neurons are initially random, which produces random outputs. To correct the ANN, its outputs are compared against the labels indicating the correct answer; a separate validation data set is then used to check its accuracy. In response to each error, the ANN works backward (“backpropagation”) from the output layer, through the hidden layers to the input layer, adjusting the weights and biases as it moves along. The network becomes more accurate through repetition.
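The training process described above – random initial weights, forward propagation, and backpropagation of errors – can likewise be illustrated with a toy example. Everything here is synthetic: the “images” are random four-value inputs and the labeling rule is invented, so this is a sketch of the technique rather than any real diagnostic system.

```python
# Illustrative sketch only: training a tiny network by backpropagation on
# synthetic labeled data. The inputs and labeling rule are invented.
import numpy as np

rng = np.random.default_rng(1)

# Toy "images": four-value inputs labeled 0 or 1 (stand-ins for two
# classes such as cerebellum vs. frontal lobe).
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(float)   # synthetic labeling rule

# One hidden layer of six neurons; weights start random, so the network's
# initial outputs are effectively random guesses.
W1, b1 = rng.normal(size=(6, 4)), np.zeros(6)
W2, b2 = rng.normal(size=6), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

lr = 0.5
for _ in range(300):                      # repetition improves accuracy
    for x, label in zip(X, y):
        h = sigmoid(W1 @ x + b1)          # forward propagation
        out = sigmoid(W2 @ h + b2)
        # Backpropagation: work backward from the output error,
        # adjusting weights and biases layer by layer.
        d_out = out - label
        d_h = d_out * W2 * h * (1 - h)
        W2 -= lr * d_out * h
        b2 -= lr * d_out
        W1 -= lr * np.outer(d_h, x)
        b1 -= lr * d_h

accuracy = np.mean((np.array([predict(x) for x in X]) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The weights begin random and the first outputs are meaningless; only through repeated forward and backward passes over the labeled data does the network become accurate, which is the “repetition” the text refers to.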

Deep supervised learning networks are well suited to diagnostics. Inputs, such as scans, come in a standardized format, providing a useful source of structured data for input, training, and validation. The process becomes highly accurate because of the numerous hidden layers and connections. However, the black-box nature of an ANN should not be overstated. Humans are significantly involved: labeling data, feeding it to the network, providing feedback, setting biases, interpreting outputs, and putting them into practice.

Most studies of wearable data have focused on supervised learning architectures.Footnote 12 However, data derived from wearables is often unlabeled and unstructured, which favors unsupervised learning, which identifies patterns in the data to make predictions.Footnote 13 These techniques raise more complex legal issues because humans are less involved. They remain a work in progress, but they will become more prominent.Footnote 14 Indeed, a growing number of studies analyze wearable data using unsupervised ANNs. One study proposes an unsupervised ANN to classify and recognize human activities.Footnote 15 It was able to recognize human activities through a combination of data obtained from magnetometers and accelerometers in wearables.Footnote 16 Another study analyzed data from 3D accelerometers on the wrist and hip, a skin temperature sensor, an ECG electrode, a respiratory effort sensor, and an oximeter, amongst others.Footnote 17 The unsupervised network yielded 89 percent accuracy in detecting human activities (walking, cycling, playing football, or lying down).Footnote 18 Another approach has analyzed gestures to detect daily patterns that might indicate when older persons require assistance.Footnote 19
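A flavor of how unsupervised learning finds structure in unlabeled wearable data can be given with a simple clustering sketch. The sensor readings below are synthetic, and plain k-means clustering is used as a stand-in for the more sophisticated unsupervised ANNs in the studies cited.

```python
# Hedged sketch: unsupervised pattern-finding on unlabeled wearable-style
# data using k-means. All sensor values below are synthetic.
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    """Plain k-means: alternate assigning points to the nearest center
    and moving each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

# Synthetic (movement intensity, heart rate) readings for two regimes:
# resting and walking. No labels are ever given to the algorithm.
rng = np.random.default_rng(42)
resting = rng.normal([0.1, 60], [0.05, 5], size=(100, 2))
walking = rng.normal([1.2, 110], [0.2, 10], size=(100, 2))
data = np.vstack([resting, walking])

labels, centers = kmeans(data, k=2)
# The algorithm separates the two activity regimes without ever being
# told which reading belongs to which activity.
```

This is the core idea behind the activity-recognition studies above: the network (or here, the clustering routine) groups readings by similarity alone, and humans only attach meaning to the groups afterward.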

These are a small sample of studies increasingly utilizing unsupervised learning architectures in wearables. The underlying point is that such technologies are being used more frequently, which raises legal issues surrounding explainability. At the same time, humans must still train the networks. The processes within the hidden layers are complex to decipher, but humans pretrain and oversee the process.Footnote 20 Consequently, while the legal implications must be deciphered, the autonomous nature of these systems should not be overstated.

7.4 Explainability and the Law

Explainability refers to ex-ante explanations of an ANN’s functionality, and ex-ante or ex-post explanations of the decisions taken, such as the rationale, the weighting, and the rules.Footnote 21 It requires that humans can understand and trace decisions.Footnote 22 However, the regulation of an ANN is as complex as its operation, which is problematic in health care. While shortcomings in the explainability of AI systems will not necessarily lead to liability, explainability is one important factor. The key point of interaction between explainability and liability is at the fact-finding or evidence stage. It may be difficult to prove the harm caused by a neural network because one cannot explain how a certain input resulted in a specific output, or that a deficiency in that process caused the harm.Footnote 23 The circumstances in which explainability becomes important in liability analyses are broad.

Problems may arise where harm is caused to a patient because the doctor did not follow the appropriate standard of care.Footnote 24 Price notes how, in the current climate, the risk of liability for doctors relying on AI recommendations is significant because the practice is “too innovative to have many adherents.”Footnote 25 Algorithm developers might be liable as well. However, in Europe, the applicable laws are incoherent. The Product Liability Directive (85/374/EEC) holds manufacturers liable for defective products. Proving that an ANN was defective requires technical expertise, but even experts cannot explain the hidden layers of a network.Footnote 26 There is also the problem that ANNs are autonomous and change over time. While the European Union has taken a strict approach to manufacturers being liable for the safety of products throughout their lifecycle, it acknowledges that the Directive should be revisited to account for products that may change or be altered, thereby leaving manufacturers in legal limbo.Footnote 27

There are also medical device regulations, but half of developers in the United Kingdom do not intend to seek CE Mark classification because it is uncertain whether algorithms can be classified as medical devices.Footnote 28 There are medical device conformity assessments, but there are no standards for validating algorithms or for regulating adaptive algorithms.Footnote 29 Also, while manufacturers must carry out risk assessments before products are placed on the market, these assessments quickly become outdated because ANNs continuously evolve.Footnote 30 Doctors, for their part, may be negligent when advising patients based on AI recommendations that later cause harm. There are also questions about whether a person can consent to flawed medical advice from an ANN. These challenges are recognized in Europe, where several reports were published in 2019 and 2020.

7.5 Guidelines

In the European Union, there are Guidelines, a White Paper, and an Assessment List regarding AI, all geared toward developing a future regulatory framework. As to the first, the European Commission set up an “independent group,” which released the Ethics Guidelines for Trustworthy AI in 2019, seen as a “starting point” for discussions about AI and premised on respect for human autonomy, prevention of harm, fairness, and explicability.Footnote 31 The White Paper (which builds upon the Guidelines) was published in 2020 and outlines an approach to AI based on “excellence and trust.”Footnote 32 It notes that while AI can improve prevention and diagnosis in health care, black box algorithms create difficulties of legal enforcement.Footnote 33 The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a self-assessment list published in July 2020.Footnote 34

7.5.1 Guidelines, Explainability, and the GDPR

In the Guidelines, the principle of “explicability” is of primary relevance. It requires that AI processes and decisions are transparent and explainable to those involved.Footnote 35 The Guidelines emphasize that this may not always be possible with black box algorithms and, “in those circumstances, other explicability measures (e.g., traceability, auditability, and transparent communication on system capabilities) may be required.”Footnote 36 Auditability and transparent communication are likely within easiest reach from a technical standpoint. The accuracy of the training data used can be verified, and the specific tasks undertaken by humans developing the network can be checked. Traceability is the greatest challenge owing to the hidden layers.

The Guidelines highlight several principles that may help in realizing explainability. First, “human agency,” which refers to humans understanding AI systems and challenging them.Footnote 37 AI can shape human behavior and should support informed decision making.Footnote 38 The issue is whether a doctor is liable for advice given that was informed by AI recommendations. Of relevance is Article 22 of the General Data Protection Regulation (GDPR) concerning automated decision making and profiling, which protects individuals from decisions “based solely on automated processing, including profiling, which produces legal effects.”Footnote 39 The Information Commissioner’s Office (ICO) in the United Kingdom requires that individuals have the right to obtain human intervention, to express their point of view, to receive an explanation of the decision, and to challenge it.Footnote 40

Taken at its most extreme, a medical decision based solely on the automated processing of an ANN would automatically infringe Article 22 and trigger a right to an explanation. However, it has been argued that a “right to explanation” does not exist under the GDPR, but rather a limited right to be “informed” of system functionality.Footnote 41 In other words, there is a right only to ex ante explanations of system functionality at the data collection stage, rather than ex post explanations of the decisions made once the data has been propagated and an output generated.Footnote 42 Furthermore, a right to explanation has existed for many years in different jurisdictions but has not led to greater transparency, because copyright protections have precluded algorithms from being revealed.Footnote 43 The general distinction is that persons might be entitled to know of the specific data used in a neural network, but are not entitled to know the weights, biases, and statistical values.Footnote 44 The right is therefore very narrow and would, in any case, be limited to those bringing a claim, rather than amounting to a general law on explainability setting minimum standards.

Further, it would be a rare scenario indeed for a decision to be made “solely” by AI as required under Article 22. In practice, AI is used to supplement informed decisions rather than make them. It is also unlikely that AI outputs can result solely from “automated” processing because humans are always involved in some capacity.Footnote 45 Most fundamentally, the wording of Article 22 requires that automated processing has “legal effects” on the individual. However, an ANN will not interfere with the right not to consent, nor with the right to withdraw consent once it has been given. That said, the law might protect those relying on wearable tech that gives flawed advice interfering with their right to informed consent.

Matters are further complicated by an interrelated provision in the GDPR concerning “profiling,” which is any “automated processing of personal data” used to predict aspects concerning a person’s health.Footnote 46 Wearables may combine individual health data with broader user data to provide individualized advice. Users relying on them would be unable to assess why the advice was given or to challenge it, which undermines the aims of “human agency” in the Guidelines. Additionally, nothing in the law appears to preclude automated processing where the individual consents.Footnote 47 The law could protect individuals by requiring a minimum level of explainability in such cases.

A related matter is “human oversight and autonomy.” This is most practically achieved through “human-on-the-loop” or “human-in-command” approaches.Footnote 48 The former requires that humans can both intervene in designing a system and monitor it. The latter refers to holistic oversight over a network. The Guidelines recommend that the less oversight a human has, the more extensive the testing and the stricter the governance required.Footnote 49 However, for neural networks to work at all, humans must intervene in and monitor a system, both granularly and holistically. Without such oversight, a neural network would produce “garbage” outputs. Networks can be tricked easily, and even slight changes to the data can cause them to fail.Footnote 50 The Guidelines, therefore, overstate the significance of these principles.

A further principle is “technical robustness and safety.” Networks could be required to change procedures or to ask for human intervention before continuing an operation when encountering a problem. A network should indicate the likelihood of errors occurring, be reliable, and have reproducible outputs.Footnote 51 This requires adequate transparency, which entails principles of “traceability” and “communication.”Footnote 52 Traceability means documenting outputs of the ANN, the labeled data, the datasets, and data processes.Footnote 53 Communication means revealing the AI’s capabilities and limitations.Footnote 54 Returning to the GDPR, Article 22(3) requires “the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

Two matters arise here: first, what human involvement means; second, when humans should become involved. The former could mean humans replacing automated decisions without algorithmic help; a human decision taking the algorithmic assessment into account; or humans monitoring the input data based on a person’s objections, with a new decision made solely by the network.Footnote 55 It could also mean that a data controller must provide ex-ante justifications for any inferences drawn about the subject’s data to determine whether the inference was unreasonable.Footnote 56 A risk-based approach could determine the latter: the riskier the recommendation by an ANN, the more checks required.Footnote 57 However, this would be limited to procedural rather than substantive validation, such as appropriately training doctors to use AI.Footnote 58 Further, a risk-based approach would still be unable to assess the reasons for AI recommendations.

Much remains undetermined regarding what these factors mean in practice for explainability. The White Paper recognizes that these principles are not covered under current legislation and promises feedback later.Footnote 59 For now, it proposes distinct forms of human oversight such as blocking AI systems not reviewed and validated by humans; allowing systems to operate temporarily as long as human intervention occurs afterward; ensuring close monitoring of networks by humans once they are in operation and that networks can be deactivated when problems arise; or imposing operational constraints on networks during the design phase.Footnote 60 Such oversight could assist in finding inaccurate input data, problematic inferences, or other flaws in the algorithm’s reasoning.Footnote 61 It could form part of procedural evaluations of black-box algorithms noted by Price.Footnote 62 However, a key question is how such factors may apply in practice, which is why the Commission also released an Assessment List (ALTAI). The ALTAI list contains two questions on explainability, but they are minimalist. The first asks whether the decision of a neural network was explained to users. The second asks whether users were continuously surveyed about whether they understood the decisions of a network.Footnote 63 There are other potentially useful questions regarding human oversight and the other principles noted above, but it is the NHSX approach that is of most practical significance.

7.5.2 Practical Implementation

The NHS Code of Conduct for Data-Driven Health and Care Technology may provide a practical solution. Principle 7 focuses on explainability. It states: “Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision.”Footnote 64 The outputs should be explained to those relying on them; the learning methodology of the ANN should be transparent; the learning model and its functionality should be specified, along with its strengths and limitations; and compliance with data protection should be demonstrated.Footnote 65

To assist developers, there is a “how-to” guide detailing what is expected when developing AI.Footnote 66 Four processes are relevant here. First, report the type of algorithm developed and how it was trained, demonstrating that adequate care was given to ethical considerations in the input data.Footnote 67 For this, a “model card” or checklist approach is proposed for explaining those aspects of the ANN.Footnote 68 Second, provide evidence of the algorithm’s effectiveness through external validation, communicating early with NHSX on the proposed method of continuous audit of inputs and outputs and how they were determined.Footnote 69 Third, explain the algorithm to those relying on its outputs, detail the level of human involvement, and develop language that is understandable to the layperson.Footnote 70 Fourth, explain how a decision was “made on the acceptable use of the algorithm in the context of it being used.”Footnote 71 This may involve speaking to patient groups to assess their thinking on the acceptable uses of AI and monitoring their reactions to gauge acceptance of the technology.Footnote 72
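As an illustration of the first process, a “model card” might be represented as a simple structured record. The field names and values below are hypothetical, chosen for illustration rather than prescribed by the NHSX guide.

```python
# Hypothetical "model card"-style record summarizing the aspects of an ANN
# that the NHSX how-to guide asks developers to report. All field names
# and values are illustrative assumptions, not an official schema.
model_card = {
    "algorithm_type": "deep supervised learning network (image classifier)",
    "training": {
        "data_source": "prelabeled brain scans (synthetic example)",
        "ethical_review": "input data screened for bias and valid consent",
    },
    "validation": {
        "method": "external validation on an independent dataset",
        "continuous_audit": "inputs and outputs logged for ongoing review",
    },
    "human_involvement": "clinicians review every output before clinical use",
    "limitations": "not validated for pediatric scans; accuracy may drift",
}

# A completeness check: the record must cover each reported aspect.
required = {"algorithm_type", "training", "validation",
            "human_involvement", "limitations"}
missing = required - model_card.keys()
print("missing fields:", sorted(missing))  # prints: missing fields: []
```

A checklist of this kind does not open the black box, but it does make the human contributions – data sourcing, validation, oversight, known limitations – auditable, which is precisely what the Code asks developers to demonstrate.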

The Code is significant because it indicates how minimum standards for explainability might operate in the context of an ANN. However, it remains undetermined how the factors might be realized or whether a uniform approach would work for all neural networks. A pilot Trustworthy AI Assessment List has been proposed in the Commission’s Guidelines with questions on traceability and explainability.Footnote 73 The questions on traceability concern detailing the method of programming and testing – those on explainability concern the ability to interpret outputs and ensuring that they can be explained. The questions are useful but remain idealistic given the difficulty of deriving meaning from the hidden layers. The technological limitations mean that other ideas in the Guidelines are more practicable at present. This includes a “white list” of rules that must always be followed and “black list” restrictions that must never be transgressed.Footnote 74

While such requirements could provide minimum standards for explainability, some aspects of neural networks remain unexplainable. If networks do not provide insight into their continuously evolving reasoning, no checklist can yield detailed insight into their decisions. For this reason, researchers are developing new technologies for “algorithmic transparency,” including auditing techniques and interactive visualization systems.Footnote 75 It is beyond the scope of this chapter to explore these in detail, but one example involves the creation of a “deep visualization” toolbox that examines the activation of individual neurons.Footnote 76 Working backward, researchers can map out different neurons and determine which ones influence which others. The activated neurons can be viewed in real time to see which parts of an image each neuron is highlighting.Footnote 77 As this technology develops further, lawyers and policymakers should remain alert to incorporating standards developed in this field into the explainability requirements of guidelines and regulations. One day, they could form part of the minimum standards for explainability.

7.6 Conclusion

The foundations for setting minimum standards concerning explainability have now been established. However, there are shortcomings in AI-enhanced technology, such as wearables, which undermine informed decision making for doctors, patients, and others. This is problematic because wearables will become ever more heavily relied upon for a wide variety of medical purposes. Further, doctors and patients ought to know why neural networks produce specific outputs. In time, scientists will develop more sophisticated models of explainability. Regulators, doctors, patients, and scientists should work together to ensure that those advances filter into the relevant guidelines as they develop – a gradual and flexible “leveling up” that keeps pace with the science. In this manner, lawyers and policymakers should take responsibility for better understanding the technology underlying those systems. As such, they should become more familiar with and knowledgeable about neural networks, the use of input data, training data, how data propagates, and how “learning” occurs. This will be key for creating standards that are relevant, sound, and justified. While laws and guidelines in the future will indicate the path to be pursued, some matters will take concerted interdisciplinary efforts to resolve.

8 Regulation of Digital Health Technologies in the European Union: Intended versus Actual UseFootnote *

Helen Yu

The functionality of digital health technologies (DHTs), such as wearable devices and virtual assistants, is increasingly being used to make personal health and medical decisions. If manufacturers of DHTs are able to avoid regulation of their products as medical devices by marketing them as “lifestyle and well-being” devices, the potential harm caused to consumers who use DHTs beyond the manufacturer’s intended purpose will not be adequately addressed. This chapter argues that a framework is needed to reclassify and regulate DHTs based on evidence of actual use.

This chapter focuses on how the classification rules and postmarket surveillance system provisions of the EU Medical Devices Regulation (MDR) need to anticipate and address the actual use of DHTs. To date, courts and regulators have not been consistent on the circumstances under which manufacturers are held responsible for known or encouraged “misuse” of their products. By defining a postmarket surveillance requirement for manufacturers of DHTs to acquire knowledge of the actual use of their products, informed regulatory decisions based on impact can be made. Actual use information can also help establish, in a liability claim, that the risk caused by a reasonably foreseeable misuse of DHTs was known to the manufacturer, should consumers suffer harm from relying on statements or representations, made or implied, when using DHTs to self-manage their health. Moreover, if data generated by DHTs is to be used to make regulatory decisions under the 2020 revision of the Good Clinical Practice, the MDR must proactively regulate technologies that have an actual impact on public health.

8.1 Introduction

The functionality of digital health technologies (DHTs), such as wearable devices and virtual assistants, is being promoted as an essential tool to empower people to take control and responsibility of their own health and wellness. Examples of wearable devices referred to in this chapter include devices that track health and fitness-related data such as heart rate, activity level, sleep cycles, caloric intake, and the like. An example of a virtual assistant is Amazon Echo, with its technology to analyze the user’s voice to detect and determine “physical or emotional abnormality” and to provide targeted content related to a particular medicine sold by a particular retailer to address the detected problem.Footnote 1

There is significant literature on the potential benefits of DHTs in reducing costs and the burden on the health care system, for example, by providing patients with options to self-manage health from home.Footnote 2 DHTs are also attributed with the ability to help detect early warning signs of potentially serious health conditions, alerting users to irregularities and leading to investigations that detect illnesses that may otherwise have gone unnoticed, with potentially tragic consequences.Footnote 3 While health care providers generally recognize DHTs as useful tools, there is also evidence that these very same technologies are increasingly being used by the public in a manner that potentially increases health care costs in the long run.Footnote 4 One study reported an increase in physician-“digitalchondriac” interaction, where patients demand immediate attention from medical professionals based on troubling key health indicators detected by wearable devices, which may or may not be accurate.Footnote 5 On the other end of the spectrum, some patients elect to bypass traditional health service structures and formalities and take medical and health decisions into their own hands, at great risk to themselves, instead of consulting a medical professional. Some doctors recount stories of patients taking prescription medication in response to an irregular reading from their wearable device without understanding or inquiring about the risk of taking a higher than recommended dosage of medication.Footnote 6

DHTs have been setting off alarms for users to take note and control of their health, but there are also data and reports suggesting that many of those alarms turn out to be false. As consumers increasingly engage in self-monitoring and self-care with the help of DHTs, health practitioners need to respond to patient confusion and anxiety created by data generated by DHTs.Footnote 7 Because the accuracy of DHTs can vary greatly, with a margin of error as high as 25 percent across different devices,Footnote 8 health practitioners bear the added burden of treating patients who, despite lacking medical training, seek medical intervention for illnesses self-diagnosed from the internet by attributing to themselves symptoms detected by unreliable DHTs. The next section will examine the applicable regulatory framework in the European Union to better understand what oversight mechanisms are available to ensure the safety and efficacy of DHTs in view of evidence of how consumers actually use these devices to make personal health and medical decisions.

8.2 The EU Medical Devices Regulation

Medical devices are recognized as essential to the health and wellbeing of European citizens, and legislation is essential to ensure the safety and efficacy of medical devices for the protection of public health.Footnote 9 The new EU Medical Devices Regulation (MDR)Footnote 10 will become applicable in May 2021, replacing the existing Medical Devices Directive (MDD).Footnote 11 The MDR attempts to modernize the MDD by introducing new concepts, definitions, and rules that may be applicable to DHTs. For example, the definition of a medical device in the MDR includes the new qualifying language “prediction and prognosis of disease.”Footnote 12 In principle, this definition should capture the collection, monitoring, processing, and evaluation of physiological data associated with DHTs, since such devices claim to be capable of predicting, or providing a prognosis of, future disease from the data they collect.

However, the MDR clearly states “software for general purposes, even when used in a health care setting, or software intended for lifestyle and well-being purposes is not considered a medical device.”Footnote 13 It is the intended purpose, as opposed to the technological features and capabilities of a device, that determines whether a DHT will be regulated under the MDR. Intended purpose is defined as “the use for which a device is intended according to the data supplied by the manufacturer on the label, in the instructions for use or in promotional or sales materials or statements and as specified by the manufacturer in the clinical evaluation.”Footnote 14 Because the regulatory framework can be challenging for many startups with limited resources, marketing DHTs as health and wellness devices, rather than as medical devices that require a higher degree of regulatory compliance, is a pragmatic business decision – but at what potential cost to public health? Even for larger companies, Apple CEO Tim Cook stated that the regulatory process and degree of adherence required for the Apple Watch would prevent Apple from continuing to innovate and remain competitive in the medical product marketplace.Footnote 15

In brief, the classification rules and procedures under the MDR are based on the potential risk a particular device poses to the user, having regard to the technical design and manufacture of the device.Footnote 16 Currently, a significant number of wearables are classified as Class I noninvasive devices.Footnote 17 However, under the MDR, the introduction of a more nuanced classification system and a more involved assessment procedure may increase the regulatory scrutiny of DHTs. For example, software intended to monitor physiological processes will be considered Class IIa, and software intended to monitor vital physiological parameters would be classified as Class IIb.Footnote 18 As the classification level increases, the applicable safety rules and conformity assessments also become stricter. However, the increased classification level applies only to “active devices intended for diagnosis and monitoring,” which again does not include DHTs that manufacturers self-declare as intended for “lifestyle and well-being purposes.”Footnote 19

While efforts continue to focus on what types of innovations fall into the definition of a medical device and within which classification level, this chapter focuses on how the public actually uses and interfaces with these products, regardless of the regulatory classification. There is increasing evidence to suggest that consumers use DHTs to help with medical care decision making despite the manufacturer’s stated intent.Footnote 20 Although the MDR attempts to establish a contemporary legislative framework to ensure better protection of public health and safety, the point where DHTs “not intended for medical purposes” and the use of pharmaceuticals intersect raises a myriad of legal, ethical, and policy implications. Pharmaceuticals, which are highly regulated, are reportedly being used by the public to make self-determined medication decisions based solely on information derived from DHTs,Footnote 21 which are not as well regulated under the MDR. Understandably, the regulatory framework should focus on technologies that pose the greatest risk to patients and their data security. However, as discussed in greater detail below, “misuse” of lower-risk devices beyond the manufacturer’s intended use could raise significant public health risks not previously contemplated. Some DHTs proclaim medical benefits but disclaim that the device is intended for health and wellbeing purposes only.Footnote 22 If manufacturers of DHTs are able to avoid the higher regulatory burden associated with having their products classified as medical devices, the question is what legal framework exists to hold manufacturers responsible for known “misuse” of their products, and whether consumer protection laws will provide adequate redress for the potential harm caused to consumers who nevertheless use DHTs beyond the manufacturer’s stated purpose to make personal health and medical decisions.

Although the vast majority of DHTs pose a very low risk of harm to consumers, there is increasing evidence that many of these devices are not as accurate as described or fail to work at all.Footnote 23 Without an oversight mechanism to detect and respond to the health risks arising from the actual use of low-risk devices beyond the manufacturer’s stated intended use, many consumers could be adversely affected throughout the lifecycle of the product without recourse. To bring a medical device onto the EU market, the manufacturer must complete the CE marking process to verify that the device meets all the regulatory requirements under the MDR. However, for Class I devices, the manufacturer is responsible for self-certification under the CE marking process.Footnote 24 Policy proposals that would permit lower-risk devices to be brought to market more efficiently, on the condition that postmarketing data on safety and effectiveness are collected as part of a mandatory renewal or reevaluation process, have previously been considered.Footnote 25 While postmarket safety and efficacy data may be used to assess whether a DHT continues to qualify for a low-risk classification level, such data fall short of providing an evidence-based reason to reclassify a DHT based on the potential risk arising from how consumers actually use these devices, regardless of their safety and efficacy profile.

8.3 Intended versus Actual Use

While DHTs are intended to modify behavior to improve health and wellness, an unintended consequence of their functionalities is that consumers are increasingly using them to make personal health and medical decisions.Footnote 26 DHTs collect and monitor the same kinds of physiological data that medical devices do, and they can be used in combination with apps that interpret such data to provide medical advice. The line between DHTs and medical devices therefore becomes increasingly blurred, particularly to the consumer, as new devices and improvements to well-established wearables allow the monitoring and assessment of a range of medical risk factors.Footnote 27 According to a recent survey, 71 percent of physicians say they use digital health data to inform their own personal health decisions,Footnote 28 and another survey found that consumers are increasingly using wearables to make critical health care decisions rather than merely to monitor physical activity and lifestyle.Footnote 29

However, the majority of manufacturers provide no empirical evidence to support the effectiveness of their products, in part because the applicable regulation does not require them to do so.Footnote 30 Recent reports indicate an increase in incidents of wearables sending otherwise healthy people to doctors because of incorrect and inaccurate readings.Footnote 31 Meanwhile, manufacturers of popular consumer devices continue to insist that their products, unless otherwise specified, are not medical devices and should not be held to such a standard,Footnote 32 despite marketing those devices as being able to “help improve wellness, disease management and prevention.”Footnote 33 Experts agree that wearable devices cannot be expected to deliver medical-grade accuracy, nor should consumers demand such high scientific quality from DHTs. However, as users become increasingly reliant on DHTs that may foster anything from a false sense of security at one end of the spectrum to misguided self-diagnosis at the other, calls have grown for legal solutions and regulatory oversight to address issues of consumer harm and accountability.Footnote 34

To avoid liability, manufacturers typically rely on disclaimers, even though users are known to ignore such information.Footnote 35 Legal measures are available to address direct-to-consumer marketing practices involving fraudulent or misleading advertising. For example, in a 2018 dispute alleging that Fitbit’s PurePulse heart rate tracker was grossly inaccurate (with claims including false advertising, common law fraud, and breach of implied warranty), the court allowed the class action to proceed, stating that “[g]iven the magnitude of the aberrant heart rate readings and multiple allegations that the devices under-report heart rate, [plaintiff] has plausibly alleged an ‘unreasonable safety hazard’ that may arise when users rely on Fitbit heart rate readings during exercise.”Footnote 36 Similarly, the FDA also monitors medical product communications to ensure they are consistent with the product’s regulatory authorization.Footnote 37 However, the FDA has stated that it will only oversee “medical devices whose functionality could pose a risk to patient safety if the [device] were to not function as intended.”Footnote 38 More specifically, the FDA has stated that it does not intend to regulate general wellness products.Footnote 39 In other words, if the manufacturer’s stated intention is for a DHT to be used for “lifestyle and well-being” purposes, then any use by the public outside the intended use falls outside the scope of the FDA regulatory framework.

Courts and policy makers seem to support consumer demand for the reliability of DHTs.Footnote 40 However, courts have not always been consistent about the circumstances under which manufacturers are held responsible for known or encouraged “misuse” of their products.Footnote 41 Nor have they provided clear or predictable guidance on what constitutes reasonably foreseeable misuse, such that manufacturers should have known their product was being used for a purpose for which it was not intended.Footnote 42 Because of the legal duty to anticipate and take precautions against unintended but reasonably foreseeable uses of products, manufacturers have always been expected to be apprised of the potential “misuses” of their products. Generally, under the reasonable foreseeability standard, manufacturers can be held liable for injuries caused by a product even if the consumer fails to use the product as intended, but the consumer must show that the actual use rendered the product defective and that this was known or should have been known to the manufacturer.Footnote 43 In practice, it can be difficult to determine which unintended uses, and which harms arising from such uses, are reasonably foreseeable, with some responsibility of prudence being placed on the consumer.Footnote 44 It would likely be difficult to establish legal liability under the consumer protection framework for harms arising from the known use of DHTs by consumers who rely on these devices to make medical and health decisions rather than using them for health and wellness purposes only.

There is an opportunity for the MDR to implement a reclassification framework based on evidence of actual use to provide better regulatory oversight, especially as the functionality of DHTs continues to expand toward health care by detecting and measuring an increasing number of physiological parameters associated with health conditions. Manufacturers should not be able to circumvent higher regulatory burdens by remaining willfully blind to increasing evidence that consumers, empowered by promotional statements or representations made or implied, use DHTs as a means to take control of and responsibility for their own health and wellness.Footnote 45 However, according to the Court of Justice of the European Union, “[where] a product is not conceived by its manufacturer to be used for medical purposes, its certification as a medical device cannot be required.”Footnote 46 In other words, how a device is actually used has no bearing on how the device is regulated if the manufacturer’s stated intention is that the product is not a medical device. Nevertheless, the postmarket surveillance (PMS) requirement under the MDR may be used to require manufacturers to proactively understand how their products are being used by the public, thereby better aligning regulatory purposes with public health objectives.

8.4 Postmarket Surveillance (PMS) under the MDR

Under the MDR, the PMS system is a proactive procedure in which manufacturers, in cooperation with other economic actors, collect, review, and report on experiences with devices on the market, with the aim of identifying any need for corrective or preventive measures.Footnote 47 One of the new features of the MDR is the concept of a PMS plan, which requires manufacturers to define the process for collecting, assessing, and investigating incidents and market-related experiences reported by health care professionals, patients, and users concerning events related to a medical device.Footnote 48 According to the MDR, the PMS plan “shall be suited to actively and systematically gathering, recording and analysing relevant data on the quality, performance and safety of a device throughout its entire lifetime, and to drawing the necessary conclusions and to determining, implementing and monitoring any preventive and corrective actions.”Footnote 49 Given the growing demand for a proactive rather than the current passive, reactive approach to PMS,Footnote 50 the implementation of the PMS plan under the MDR may be an avenue to address the concerns associated with the actual use of DHTs beyond the manufacturer’s stated intended use. The ability to identify risks and take corrective measures in a timely manner is vital for any regulatory framework. Clear guidance on the implementation of the PMS plan is essential to improve the delivery of health care to consumers with the help of DHTs.

Arguably, the PMS plan can be interpreted to include an obligation to collect postmarketing data on consumer use of DHTs as part of a mandatory reevaluation process to assess the appropriate classification level and the regulatory requirements a DHT must meet in order to remain on the market. By defining a PMS requirement for manufacturers of DHTs to acquire knowledge of the actual use of their products in order to maintain their lower classification status, regulators can make informed decisions based on data and evidence. Actual-use information can also help establish, in a liability claim, that the risk caused by a reasonably foreseeable misuse of a DHT was known to the manufacturer, should consumers suffer harm from relying on statements or representations, made or implied, when using DHTs to self-manage their health.

However, the MDR is not particularly clear on the extent of the PMS obligation, stating that the PMS plan should be “proportionate to the risk class and appropriate for the type of device.”Footnote 51 For Class I devices, a PMS report based on the PMS plan shall be “updated when necessary and made available to the competent authority upon request,”Footnote 52 and there is no clarification of how often information should be collected. The elements and types of information to be collected for the PMS plan include adverse events, data on nonserious incidents and undesirable side effects, safety updates, trend reporting, relevant specialist or technical literature, and feedback and complaints from users.Footnote 53 Although information about actual use is not specifically mentioned, it may be captured under trend reporting, which is intended to include incidents “that could have a significant impact on the benefit analysis … which may lead to risks to the health or safety of patients, users or other persons that are unacceptable when weighed against the intended benefits.”Footnote 54 The reclassification of devices is contemplated for reasons of public health based on new scientific evidence or on information that becomes available in the course of vigilance and market surveillance activities.Footnote 55 Evidence of actual use collected as part of the PMS plan can therefore serve as grounds for reclassification to anticipate and address the actual use of DHTs. However, even with the ability to reclassify, the classification rules and the definition of noninvasive versus active devices based on intended use render this process a vicious cycle. Furthermore, reclassification based on PMS requires a request to the Commission by a Member State and consultation with the Medical Device Coordination Group,Footnote 56 making the process bureaucratically cumbersome and unlikely to be used in practice. Without clearer implementation guidelines and a better alignment of the PMS objectives with the classification rules, the PMS plan could become a toothless oversight mechanism.

PMS allows for continuous vigilance, not only to ensure the quality, safety, and efficacy of devices but also to ensure the appropriate level of regulatory adherence based on how a device is actually being used. With the rapid proliferation and advancement of DHTs, striking the right balance between the appropriate regulatory burden and the benefit that DHTs promise to bring to the public health system will require a collaborative effort between manufacturers, regulators, health care providers, and consumers. DHTs could prove to be a good secondary diagnostic tool, with their ability to constantly monitor and collect detailed longitudinal data that can be used to track progress and understand patterns.Footnote 57 A deeper understanding of patients through their health data is one of the keys to improving health, especially in managing chronic conditions that are primarily driven by unhealthy lifestyles.Footnote 58 As the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use continues to consider revisions to the Good Clinical Practice (GCP) guideline to enable the use of real-world evidence, such as patient data derived from or influenced by consumer use of DHTs, reliable oversight and regulation of DHTs become even more pressing to ensure that the data are reliable and appropriately collected and interpreted to serve as evidence for informing future regulatory decisions. The use of DHT-derived data as meaningful real-world evidence for the purposes contemplated by the GCP presumes that the data are reliable and that there is a relationship between the use of the DHT and the clinical relevance of the data.Footnote 59 A PMS system that is aligned with the classification rules to adapt regulatory oversight of DHTs based on actual use and actual impact on consumer health will better support the use of real-world evidence or data derived from DHTs for the purposes of the GCP. Interpreting the PMS plan to include a proactive requirement that manufacturers self-report and stay informed of not only the safety and efficacy of their products but also how their products are being used will help users and regulators make more informed decisions. This requirement also aligns with the EU responsible research and innovation policy objectives of ensuring that the innovation process is interactive, transparent, and responsive to public interests and concerns.Footnote 60 The PMS plan may be resource intensive; however, innovation will still be encouraged by allowing manufacturers to continue to benefit from easier access to the market without a higher regulatory burden imposed at the outset. Furthermore, the collection of actual-use information may constitute know-how that can be used to facilitate follow-on innovation and ultimately increase competition. A risk-based regulatory framework that promotes innovation, protects patient safety, and avoids overregulation of DHTs can be achieved if clear objectives and a robust structure are defined for the PMS system.

8.5 Conclusion

With the introduction of the PMS plan in the MDR, industry, regulators, health care professionals, and consumers have the opportunity to work together to define the oversight parameters and mechanisms needed to realize the potential of DHTs as a health care tool. If consumer DHTs are being advertised as providing medical-grade resultsFootnote 61 and are therefore being used by consumers as medical devices, the MDR needs to provide adaptive mechanisms that respond to how DHTs are actually used. Leveraging the PMS plan to require manufacturers to proactively monitor, collect, and report on the actual use of DHTs by consumers in order to continue to qualify for classification and regulation as lower-risk devices will promote accountability and provide an evidence-based oversight mechanism within the MDR to garner public trust. Interpreting the PMS plan to require manufacturers to report on how their products are actually used, as part of a mandatory reevaluation process to continually assess the classification of the device regardless of its safety and efficacy profile, will ensure greater consumer protection. To achieve this, the MDR must provide clearer implementation guidelines that better align PMS obligations with the classification rules applicable to DHTs. As some medical professionals have advocated, if medical decisions will be made from information generated by DHTs, then such DHTs require proportionate regulatory oversight.Footnote 62

Footnotes

4 Cybersecurity of Medical Devices Regulatory Challenges in the European Union

The authors wish to thank: Prof. W. Nicholson Price II, Charlotte Ducuing, and Jessica Schroers for their helpful comments and feedback. The research leading to these results has received funding from the European Union’s Horizon2020 Research and Innovation Programme, under Grant Agreement no 787002.

1 Enrico Frumento, Cybersecurity and the Evolutions of Healthcare: Challenges and Threats Behind its Evolution, in m_Health Current and Future Applications 115 (Giuseppe Andreoni et al. eds., 2019).

2 This happened, for instance, during the Wannacry malware attacks for several trustees of the UK National Healthcare System (NHS). See Finnian Bamber et al., Nat’l Audit Office, Investigation: Wannacry Cyber-Attack and the NHS (2018).

3 As was demonstrated in 2018 by a team of researchers, an attacker could cause pacemakers to deliver a deadly shock or stop an insulin pump from providing the needed insulin to a patient. See Sally Shin & Josh Lipton, Security Researchers Say They Can Hack Medtronic Pacemakers, CNBC (Aug. 17, 2018), www.cnbc.com/2018/08/17/security-researchers-say-they-can-hack-medtronic-pacemakers.html.

4 See Laurens Cerulus, Hackers Use Fake WHO Emails to Exploit Coronavirus Fears, POLITICO (Mar. 13, 2020), www.politico.eu/article/hackers-use-fake-who-emails-to-exploit-coronavirus-fears-for-gain/?fbclid=IwAR379JroScZEggppneFxEQqMpYfKP9M0Rg90k1lB-xziGkIH_3Byy1NtKjE; Mathew M. Schwartz, COVID-19 Complication: Ransomware Keeps Hitting Healthcare, Bank Info Security (Mar. 16, 2020), www.bankinfosecurity.com/covid-19-complication-ransomware-keeps-hitting-hospitals-a-13941.

5 See Deborah Eskenasy, Le dispositif médical à la recherche d’un nouveau cadre juridique 38 (Nov. 30, 2016) (unpublished PhD dissertation) (remarks on legal literature on medical devices law).

6 See, for example, Charlotte A. Tschider, Enhancing Cybersecurity for the Digital Health Marketplace, 26 Ann. Health L. 1 (2017); Louiza Doudin, Networked Medical Devices: Finding a Legislative Solution to Guide Healthcare into the Future, 40 Seattle U. L. Rev. 1085 (2017).

7 Joint Communication to the European parliament, the Council, the European Economic and Social Committee and the Committee of the Regions Cybersecurity strategy of the European union: an open, safe and secure cyberspace, JOIN (2013) 1 final (Feb. 7, 2013) [hereinafter EC 2013 Cybersecurity Strategy].

8 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017, on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, 2017 O.J. (L 117/1) [hereinafter MDR].

9 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), 2019 O.J. (L 151) [hereinafter CSA].

10 Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016, concerning measures for a high common level of security of network and information systems across the Union, 2016 O.J. (L 194) [hereinafter NISD].

11 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) [hereinafter GDPR].

12 Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonization of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC, 2014 O.J. (L 153) [hereinafter RED].

13 See Emmanuelle Mathieu et al., 2011, Regulatory Agencies and Multi-Actor Regulatory Governance: A Method to Study Regulatory Fragmentation, Specialization, Coordination and Centralization (unpublished manuscript) (2011), www.academia.edu/20494619/Regulatory_agencies_and_multi-actor_regulatory_governance_A_method_to_study_regulatory_fragmentation_specialization_coordination_and_centralization (on the notion of specialization and fragmentation).

14 In this chapter, we will refer to “cybersecurity” in two different ways. In a general way, we mean “cybersecurity” as a policy objective pursued by the European Union – having regard to the EC 2013 Cybersecurity Strategy (see supra Footnote note 7). When used in a specific way, we refer to the definition provided by the CSA, art. 4: “a set of activities to protect network and information systems the users of such systems, and other persons affected by cyber threats.”

15 MDR, supra Footnote note 8.

16 Footnote Id. art. 2(30).

17 See Medical Devices Coordination Group, Guidance on Cybersecurity of medical devices (Dec. 2019) [MDCG, Guidance] (complete list of the cybersecurity requirements).

19 MDR, supra Footnote note 8, art. 5(2).

20 Footnote Id. Annex I, req. 1.

22 Footnote Id. req. 3.

23 Footnote Id. req. 4.

24 Footnote Id. req. 14.1.

25 Footnote Id. req. 14.2.(d).

26 Footnote Id. req. 14.5.

27 Footnote Id. req. 17.1.

29 Footnote Id. req. 17.2.

30 Footnote Id. req. 17.4.

31 Footnote Id. req. 23.1.(g).

32 Footnote Id. req. 23.2.(m).

33 Footnote Id. req. 23.4.(ab).

34 MDCG, Guidance, supra Footnote note17.

35 See Elisabetta Biasin, Medical Devices Cybersecurity: A Growing Concern?, CITIP Blog (Sept. 26, 2019), www.law.kuleuven.be/citip/blog/medical-devices-cybersecurity-a-growing-concern/, (a concise overview of cybersecurity, EU guidance and the MDR).

36 MDCG, Guidance, supra Footnote note 17, at 7.

37 Footnote Id. at 9.

38 See Gloria González Fuster & Lina Jasmontaite, Cybersecurity Regulation in the European Union: The Digital, the Critical and Fundamental Rights, in The Ethics of Cybersecurity 119 (Markus Christen et al. eds., 2020) (for an overview of the coherence problem in the EU cybersecurity legal framework).

39 MDCG, Guidance, supra Footnote note 17, at 12.

41 Footnote Id. at 13.

42 See Erik Kamenjasevic, Protect the Weakest Link in a Cyber-Security Chain – Protect the Human, CITIP Blog (Mar. 20, 2018), www.law.kuleuven.be/citip/blog/protect-the-weakest-link-in-a-cyber-security-chain-protect-the-human/.

43 NISD, supra Footnote note 10.

44 GDPR, supra Footnote note 11.

45 CSA, supra Footnote note 9.

46 Further elaboration on these laws could have been done, by the same expert group, based on art. 3(5) and 12 of the Medical Devices Coordination Group Rules of Procedure. art. 3(5) states that the Chair of the MDCG or the working group may invite, on a case-by-case basis, experts and other third parties with specific competence in a subject on the agenda to participate in the meetings or provide written contributions. art. 12 provides that the Commission services shall provide technical, scientific, and logistical support for the MDCG and any of its working groups.

47 RED, supra Footnote note 12.

48 Ramses Wessel, Towards EU Cybersecurity Law: Regulating a New Policy Field in Research Handbook on Int’l Law & Cyberspace 405 (Nicholas Tsagourias et al. eds., 2015).

49 See Nupur Choudhoury & Ramses Wessel, Conceptualising Multilevel Regulation in the EU: A Legal Translation of Multilevel Governance?, 18(3) Eur. L.J. 335 (2012).

50 See supra Section 4.1.2.

51 CSA, art. 2(12).

52 See, e.g., COCIR, Advancing Cybersecurity of Health and Digital Technologies (Mar. 27, 2019), www.cocir.org/uploads/media/19036_COC_Cybersecurity_web.pdf.

54 Footnote Id. at 6.

55 See MDCG, Guidance, supra Footnote note 17.

57 CSA, art. 56(3).

58 CSA, art. 56(2).

60 DIGITALEUROPE, Cybersecurity Act: DIGITALEUROPE Urges Colegislators to Ensure Certification Schemes Do Not Lead to More Market Fragmentation in Europe (June 11, 2018), www.digitaleurope.org/wp/wp-content/uploads/2019/01/DIGITALEUROPE%20Cybersecurity%20Act%2011%20June.pdf (stakeholders’ concerns over the CSA’s fragmentation risks).

61 See Jan Rommel et al., Specialisation and Fragmentation in Regulatory Regimes, in Government of Public Management 6971 (Patrick Lægreid et al. eds., 2010).

62 Amongst the many other aspects, the RED foresees technical features for the protection of privacy, personal data, misuse, interoperability, network functioning, and compliance regarding the combination of radio equipment and software. See RED, art. (3)(3), lett. (d) and (e). Since they relate to network and information systems, the two articles are considered for the purposes of the present chapter as cybersecurity-related requirements.

63 Due to overlapping elements, manufacturers must refer to different notified bodies to meet obligations stemming from different legislations. In practice this adds another level of complexity. See BSI, Medical Devices complying with the Radio Equipment Directive, www.bsigroup.com/meddev/LocalFiles/ja-jp/Technologies/BSI-md-Radio-devices-ja-JP.pdf.

64 European Commission, Guide to the Radio Equipment Directive 2014/53/EU, Version of 19 December 2018 (2018) [hereinafter EC, RED Guide].

65 European Commission, The ‘Blue Guide’ on the EU Interpretation of EU Product Rules (2014) [hereinafter EC, Blue Guide].

66 EC, Blue Guide, 22.

67 See Eugenio Mantovani & Pedro Cristobal Bocos, Are mHealth Apps Safe? The Intended Purpose Rule, Its Shortcomings and the Regulatory Options under the EU Medical Devices Framework, in Mobile E-Health 251–76 (Hannah R. Marston et al. eds., 2017) (on pitfalls of the “intended purpose” notion in medical devices law).

68 MDR, art. 87.

69 GDPR, art. 33–4.

70 NISD, art. 14.

71 There are four different incident reporting models: centralized, distributed, decentralized, hybrid. See ENISA, EU MS Response Development Status Report (2019) 89.

72 According to NISD, art. 4(1)(7), a security incident is an event having an actual adverse effect on the security of network and information systems. Such an event, if it involves the processing of personal data, could also qualify as a “personal data breach” (cfr GDPR, art. 4(1)(12). Finally, a security incident could also be a “serious incident” under the MDR meaning art. 4(1)(54), for instance, when the incident directly or indirectly leads to a serious public health threat, or the death of a patient. See MDCG Guidance, Annex II (examples of cybersecurity incidents/serious incidents).

73 Including health care providers, when considered as “operators of essential services,” according to NISD (art. 4(1)(4)).

74 See COCIR, supra Footnote note 52, at 8.

5 The mHealth Power Paradox Improving Data Protection in Health Apps through Self-Regulation in the European Union

1 Incisive Health International, Taking the Pulse of eHealth in the EU: An Analysis of Public Attitudes to eHealth Issues in Austria, Bulgaria, Estonia, France, Germany, Italy, and the UK (2017).

2 European Commission, Green Paper on mobile Health (“mHealth”) (2014).

3 Keith Spiller et al., Data Privacy: Users’ Thoughts on Quantified Self Personal Data, in Self-Tracking: Empirical and Philosophical Investigations 111–24 (Btihaj Ajana ed., 2018).

4 Federica Lucivero & Karin R. Jongsma, A Mobile Revolution for Healthcare? Setting the Agenda for Bioethics, 44 J. Med. Ethics 685, 685–9 (2018).

5 Commission Staff Working Document on the existing EU legal framework applicable to lifestyle and wellbeing apps Accompanying the document Green Paper on mobile Health (“mHealth”) (2014); See also Recital 19 of the MDR.

6 Quinn Grundy et al., Data Sharing Practices of Medicines Related Apps and the Mobile Ecosystem: Traffic, Content, and Network Analysis, 364 BMJ l920 (2019); Achilleas Papageorgiou et al., Security and Privacy Analysis of Mobile Health Applications: The Alarming State of Practice, PP IEEE Access 1–1 (2018).

7 Anil K. Gupta & Lawrence J. Lad, Industry Self-Regulation: An Economic, Organizational, and Political Analysis, 8 AMR 416, 416–25 (1983).

8 Adrian Fong, The Role of App Intermediaries in Protecting Data Privacy, 25 Int’l J.L. & Info. Tech. 85, 85114 (2017).

9 GDPR, 2016 O.J. (L 119) Recital 35.

10 Art. 29 Data Protection Working Party, Annex – health data in apps and devices (2015) 2.

12 Footnote Id. at 3–5.

13 Z v. Finland (1997) 25 Eur. Ct. H.R. 371, 94–6.

14 See generally Dominik Leibenger et al., Privacy Challenges in the Quantified Self Movement – An EU Perspective, 2016 Proc. on Privacy Enhancing Techs. 315, 315–34 (2016).

15 Grazia Cecere et al., Economics of Free Mobile Applications: Personal Data as a Monetization Strategy 45 (2018).

16 Papageorgiou et al., supra Footnote note 6.

18 Kirsten Ostherr et al., Trust and Privacy in the Context of User-Generated Health Data, 4 Big Data & Soc’y (2017).

19 Marjolein Lanzing, The Transparent Self, 18 Ethics & Info. Tech. 9, 916 (2016).

20 Leibenger et al., supra Footnote note 14.

21 Commission Staff Working Document, supra Footnote note 5.

22 Tamara K. Hervey & Jean V. McHale, European Union Health Law (2015).

23 Commission Staff Working Document, supra Footnote note 5.

24 NB: Regulation (EU) 2017/745 (MDR) will replace the current Directive 93/42/EEC in May 2020.

25 CJEU, Case C-329/16 (SNITEM).

26 See Helen Yu, Regulation of Digital Health Technologies in the EU: Intended versus Actual Use, in The Future of Medical Device Regulation: Innovation and Protection (I. Glenn Cohen et al. eds., 2021).

27 European Commission, Guidance Document Medical Devices – Scope, Field of Application, Definition – Qualification and Classification of Stand Alone Software (2016).

28 MDR, Recital 19.

29 MDR, art. 109–10.

30 GDPR, 2016 O.J. (L 119) Recitals 7, 63 GDPR.

31 GDPR, art. 2–3, 2016 O.J. (L 119); European Data Protection Supervisor, Opinion 1/2015 Mobile Health: Reconciling technological innovation with data protection (2015).

32 GDPR, art. 6, 2016 O.J. (L 119).

33 GDPR. 2016 O.J. (L 119) Chapter III.

34 GDPR, art. 12–13, 2016 O.J. (L 119).

35 GDPR, art. 15, 2016 O.J. (L 119).

36 GDPR, art. 7(3), 2016 O.J. (L 119).

37 GDPR, art. 9, 2016 O.J. (L 119).

38 GDPR, art. 9(2)(b–j), 2016 O.J. (L 119).

39 GDPR, art. 9(3), 2016 O.J. (L 119).

40 GDPR, art. 9(2)(a), 2016 O.J. (L 119).

41 Data Protection Working Party, art. 29, 2016 O.J. (L 119), Guidelines on consent under Regulation 2016/679 (2018) 18–19; GDPR, art. 32.

42 See generally Grundy et al., supra Footnote note 6.

43 Papageorgiou et al., supra Footnote note 6.

44 Trix Mulder, Health Apps, Their Privacy Policies and the GDPR, 10 Eur. J. L. and Tech. (2019).

45 Grundy et al., supra Footnote note 6.

46 Fong, supra Footnote note 8, at 98.

47 David Wright, Enforcing Privacy: Regulatory, Legal and Technological Approaches 2931 (David Wright & Paul De Hert eds., 2016).

48 Carrie Beth Peterson et al., From Innovation to Implementation: eHealth in the WHO European Region (2016).

49 OECD, Alternatives to Traditional Regulation (2013) at 47; Gupta & Lad, supra note 7, at 417.

50 European Union Agency for Cybersecurity, Privacy and Data Protection in Mobile Applications 16 (2018); Article 29 Data Protection Working Party, supra note 41, at 11–12.

51 See, e.g., Christina Angelopoulos et al., Study of Fundamental Rights Limitations for Online Enforcement through Self-Regulation 96 (2015).

52 Gupta & Lad, supra note 7, at 417.

53 Rebecca Ong, Mobile Communication and the Protection of Children 247–9 (2010).

54 OECD, supra note 49, at 6–7, 42.

55 Artyom Dogtiev, App Stores List, Business of Apps (2019), www.businessofapps.com/guide/app-stores-list/.

56 GDPR, art. 40, 47, 2016 O.J. (L 119).

57 European Commission, supra note 27.

58 Apple App Store, App Store Review Guidelines (2019), https://developer.apple.com/app-store/review/guidelines/; Google Play, Google Play Developer Distribution Agreement (2019), https://play.google.com/intl/ALL_uk/about/developer-distribution-agreement.html/.

59 Fong, supra note 8, at 96–8; Luis Hestres, App Neutrality: Apple's App Store and Freedom of Expression Online, 7 Int'l J. Comm. 1265, 1265–80 (2013).

60 The Netherlands Authority for Consumers & Markets, Market Study into Mobile App Stores 40 (2019).

61 European Union Agency for Cybersecurity, supra note 50.

63 GDPR, Recital 78, 2016 O.J. (L 119).

64 Fong, supra note 8.

65 Dogtiev, supra note 55.

66 Apple App Store, Apple Developer Program License Agreement (2020), www.imperial.ac.uk/media/imperial-college/staff/web-guide/public/Apple-Developer-Agreement.pdf.

67 Apple App Store, supra note 58.

68 Id. at § 1.4.1.

69 Apple App Store, App Store Review Guidelines (Sept. 12, 2019), https://developer.apple.com/app-store/review/guidelines/, § 5.1.1 (i).

70 Id. at § 5.1.1 (ii).

71 Id. at § 5.1.1 (iii).

72 Id. at § 5.1.2 (i)–(ii).

73 Id. at § 5.1.2 (iii).

74 Apple Developer Program License Agreement 2020, supra note 66, at § 3.3.7–3.3.11.

75 App Store Review Guidelines Sept. 12, 2019, supra note 69, at § 5.1.3.

76 Id. at § 5.1.3 (i).

77 Id. at § 3.1.7.

78 Id. at § 5.1.3 (i).

79 Id. at § 5.1.3 (ii).

81 Google Play, Google Play Developer Distribution Agreement (Nov. 5, 2019), https://play.google.com/intl/ALL_uk/about/developer-distribution-agreement.html/.

82 Id. at § 2.1.

83 Id. at § 4.6.

84 Id. at § 4.8.

85 Google Play, Google Play Developer Program Policies (2019), https://play.google.com/about/developer-content-policy/ under “Privacy, security and deception.”

89 Id. under "Unapproved Substances."

90 This section does not consider intermediary liability under the e-Commerce Directive.

91 Article 29 Data Protection Working Party, supra note 41.

92 Fong, supra note 8, at 108–11.

93 Masooda Bashir et al., Online Privacy and Informed Consent: The Dilemma of Information Asymmetry, 25 Proc. of the Ass'n for Info. Science and Tech. 1, 1–10 (2015).

94 Daithi Mac Sithigh, App Law Within: Rights and Regulation in the Smartphone Age, 21 Int’l J. L. & Info. Tech. 154, 154–86 (2013).

95 GDPR, art. 40, 2016 O.J. (L 119).

96 European Data Protection Board, Guidelines 1/2019 on Codes of Conduct and Monitoring Bodies under Regulation 2016/679 (2019) 6.

97 Id. at 8; GDPR, art. 40(5), 40(9), 41(1), 2016 O.J. (L 119).

98 Maximilian von Grafenstein, Co-Regulation and the Competitive Advantage in the GDPR: Data Protection Certification Mechanisms, Codes of Conduct and the “State of the Art” of Data Protection-by-Design, in Research Handbook on Privacy and Data Protection Law: Values, Norms and Global Politics (forthcoming).

99 European Data Protection Board, supra note 96, at 7–9.

100 GDPR, art. 40(2), 2016 O.J. (L 119).

101 von Grafenstein, supra note 98.

102 See GDPR, art. 24(3), 2016 O.J. (L 119), and § 3.2.3; European Data Protection Board, supra note 96, at 9.

103 Mulder, supra note 44.

6 The Interaction of the Medical Device Regulation and the GDPR: Do European Rules on Privacy and Scientific Research Impair the Safety and Performance of AI Medical Devices?

Acknowledgement: This research is supported by a Novo Nordisk Foundation grant for a scientifically independent Collaborative Research Program in Biomedical Innovation Law (grant agreement number NNF17SA0027784).

1 Marcelo Corrales Compagnucci, Big Data, Databases and ‘Ownership’ Rights in the Cloud 4, 38, 40 (2020); Marcelo Corrales & Paulius Jurčys, Nudging Cloud Providers through Intermediary Services in New Technology, Big Data and the Future of Law, 154–5 (Marcelo Corrales et al. eds., 2017).

2 Viktor Mayer-Schönberger & Kenneth Cukier, Big Data: A Revolution that Will Transform How We Live, Work, and Think (Mariner Books ed., 2013).

3 Alessandro Blasimme & Effy Vayena, Towards Adaptive Governance in Big Data Health Research: Implementing Regulatory Principles (Oct. 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3501545; Corrales Compagnucci, supra note 1, at 4, 39, 40, 299.

4 Ugo Pagallo et al., The Rise of Robotics & AI: Technological Advances and Normative Dilemmas 113 (2018).

5 Marcelo Corrales Compagnucci et al., Homomorphic Encryption: The Holy Grail for Big Data Analytics and Legal Compliance in the Pharmaceutical and Healthcare Sector, 3 EPLR 144, 145–55 (2019).

6 Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance), 2016 O.J. (L 119) 4.5, at 1–88 (EU) [hereinafter GDPR].

7 Timo Minssen et al., The EU-US Privacy Shield Regime for Cross-Border Transfers of Personal Data Under the GDPR: What Are the Legal Challenges and How Might These Affect Cloud-Based Technologies, Big Data, and AI in the Medical Sector?, 4 EPLR 34, 34–50 (2020); Marcelo Corrales Compagnucci et al., Lost on the High Seas without a Safe Harbor or a Shield? Navigating Cross-Border Data Transfers in the Pharmaceutical Sector after Schrems II Invalidation of the EU-US Privacy Shield, 4 EPLR 153, 153–60 (2020).

8 Regulation 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, 2017 O.J. (L 117) 5.5, at 1–175 (EU) [hereinafter MDR].

9 Regulation 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU, 2017 O.J. (L 117) 5.5, at 176–332 (EU) [hereinafter IVDR].

10 Consent is defined by GDPR, art. 4(11), as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

11 Paul R. Burton et al., Policies and Strategies to Facilitate Secondary Use of Research Data in the Health Sciences, 46 International Journal of Epidemiology 1732, 1732–3 (2017).

12 The health data included demographic data; all medical conditions, diagnoses, and their treatment; emergency and other hospital visits, including dates and times; prescriptions and their costs; genomic data and information about any cancers; and much else besides.

13 Elad Leshem, IBM Watson Health AI gets access to full health data of 61 m Italians, Medium (Jan. 18, 2018), https://medium.com/@qData/ibm-watson-health-ai-gets-access-to-full-health-data-of-61m-italians-73f85d90f9c0.

14 Glyn Moody, Detailed Medical Records of 61 Million Italian Citizens to Be Given to IBM for Its “Cognitive Computing” System Watson, Privacy News Online (May 22, 2017), www.privateinternetaccess.com/blog/detailed-medical-records-61-million-italian-citizens-given-ibm-cognitive-computing-system-watson/.

15 NHS Digital, National Data Opt-out, https://digital.nhs.uk/services/national-data-opt-out.

16 Janos Meszaros & Chih-Hsing Ho, Building Trust and Transparency? Challenges of the Opt-Out System and the Secondary Use of Health Data in England, 19 Med. L. Int'l 159, 159–81 (2019).

17 GDPR, art. 89(1).

18 Janos Meszaros & Chih-Hsing Ho, Big Data and Scientific Research: The Secondary Use of Personal Data Under the Research Exemption in the GDPR, Acta Juridica Hungarica 403, 403–19 (2018).

19 DeepMind Technologies is a British artificial intelligence company founded in 2010, currently owned by Google through Alphabet Inc.

20 Royal Free is one of the largest health care providers in Britain’s publicly funded National Health Service.

21 Janos Meszaros et al., Nudging Consent and the New Opt-Out System to the Processing of Health Data in England, in Legal Tech and the New Sharing Economy, 61, 68 (Marcelo Corrales Compagnucci et al. eds., 2019).

22 GDPR, art. 9(2)(j).

23 GDPR, Recital 159.

24 In EU law, Recitals are usually placed at the beginning of the legal text. They introduce the legislation, explain the reasons for its provisions, and clarify legislative goals. Recitals are normally not binding as such; they may, however, influence how courts interpret the law or shape further legislation, and in that way achieve binding effect.

25 See, e.g., German Research Foundation requirements for funding scientific research, www.dfg.de/en/research_funding/principles_dfg_funding/index.html.

26 Organisation for Economic Co-operation and Development (OECD), Frascati Manual: Guidelines for Collecting and Reporting Data on Research and Experimental Development (2015).

28 Mary Donnelly & Maeve McDonagh, Health Research, Consent and the GDPR Exemption (Apr. 2, 2019). This is a pre-edited version of M. Donnelly & M. McDonagh, Health Research, Consent and the GDPR Exemption, 26 Eur. J. Health L. 97, 97–119 (2019).

29 Gesetz für eine bessere Versorgung durch Digitalisierung und Innovation (Digitale-Versorgung-Gesetz – DVG) [Digital Healthcare Act] of 9 December 2019, BGBl I at 2562 (Germany, 2019). Compare also Germany’s new Hospital Future Act, Gesetz für ein Zukunftsprogramm Krankenhäuser (Krankenhauszukunftsgesetz – KHZG), G. v. 23.10.2020 BGBl. I S. 2208 (Nr. 48).

30 Sara Gerke et al., Germany’s Digital Health Reforms in the COVID-19 Era: Lessons and Opportunities for Other Countries, 3 npj Digit. Med. 94 (2020).

31 Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA L. Rev. 1701 (2010); U. of Colorado Law Legal Studies Research Paper No. 9-12 (Aug. 13, 2009), https://ssrn.com/abstract=1450006.

32 The GDPR sets a strict standard for anonymization. Unlike the Health Insurance Portability and Accountability Act (HIPAA) in the United States, which exempts data from regulation if eighteen specific identifiers are removed, the GDPR treats data as anonymous only when it cannot be identified by any means by any person (GDPR, Recital 26).

33 However, many scholars challenge the idea that pseudonymized data constitutes personal data in all cases. See, e.g., Miranda Mourby et al., Are "Pseudonymised" Data Always Personal Data? Implications of the GDPR for Administrative Data Research in the UK, 34 Computer Law and Security Review 222, 222–33 (2018); Anne Bahr & Irene Schlünder, Code of Practice on Secondary Use of Medical Data in European Scientific Research Projects, 5 International Data Privacy Law 279, 279–91 (2015).

34 B. Babic et al., Algorithms on Regulatory Lockdown in Medicine. Prioritize Risk Monitoring to Address the “Update Problem,” 366 Science 1202, 1202–4 (2019).

35 Glenn Cohen et al., The European AI Strategy: Implications and Challenges for Digital Health (forthcoming, Lancet Digital Health).

37 Council Directive 90/385/EEC on Active Implantable Medical Devices (AIMDD) (1990); Council Directive 93/42/EEC on Medical Devices (MDD) (1993); Council Directive 98/79/EC on in vitro Diagnostic Medical Devices (IVDMD) (1998).

38 On 23 April 2020, the Council and the Parliament adopted Regulation 2020/561, amending Regulation 2017/745 on medical devices as regards the dates of application of certain of its provisions. This Regulation postpones the date of application of most Medical Devices Regulation provisions by one year – until 26 May 2021. The postponement takes pressure off national authorities, notified bodies, manufacturers, and other actors so they can focus fully on urgent priorities related to the COVID-19 crisis. The corresponding date of application of the IVDR (Regulation 2017/746) remains the same (May 2022).

39 Regulation 2017/745 Recital 47, arts. 62(4)(h), 72(3), 92(4), 110(1)–(2) (EU).

41 European Medicines Agency, Questions & Answers on Implementation of the Medical Devices and In Vitro Diagnostic Medical Devices Regulations, ((EU) 2017/745 and (EU) 2017/746) (Oct. 21, 2019) Rev.1 EMA/37991/2019.

42 US Food & Drug Admin., Executive Summary for the Patient Engagement Advisory Committee Meeting, Artificial Intelligence (AI) and Machine Learning (ML) in Medical Devices (Oct. 22, 2020); William Nicholson Price II, Regulating Black-Box Medicine, 116 Mich. L. Rev. 421 (2017).

43 Act on the Protection of Personal Information (Act No. 57 of May 30, 2003, as amended, APPI).

44 The HMA/EMA Task Force on Big Data was established in 2017 to report on the challenges and opportunities posed by big data in medicine regulation.

45 See, e.g., HMA-EMA Joint Big Data Taskforce Phase I and Phase II reports on “Evolving Data-Driven Regulation” (2019), www.ema.europa.eu/en/documents/other/hma-ema-joint-big-data-taskforce-phase-ii-report-evolving-data-driven-regulation_en.pdf.

46 US Food & Drug Admin., Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback (2019).

47 European Data Protection Board, Guidelines 04/2020 on the Use of Location Data and Contact Tracing Tools in the Context of the COVID-19 Outbreak (adopted Apr. 21, 2020).

48 European Data Protection Supervisor, A Preliminary Opinion on Data Protection and Scientific Research (Jan. 6, 2020), https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf.

49 US Food & Drug Admin., FDA’s Sentinel Initiative, https://www.fda.gov/safety/fdas-sentinel-initiative (Nov. 26, 2020).

50 B. Babic et al., Algorithms on Regulatory Lockdown in Medicine. Prioritize Risk Monitoring to Address the “Update Problem,” 366 Science 1202, 1202–4 (2019).

51 Julia Powles & Hal Hodson, Google DeepMind and Healthcare in an Age of Algorithms, 7 Health Tech. 351, 351–67 (Dec. 2017).

52 Jonathan H. Chen & Steven M. Asch, Machine Learning and Prediction in Medicine – Beyond the Peak of Inflated Expectations, 376 N. Engl. J. Med. 2507 (2017).

53 Helen Yu, Regulation of Digital Health Technologies in the EU: Intended vs Actual Use, in The Future of Medical Device Regulation: Innovation and Protection (I. Glenn Cohen et al. eds., 2021); see also Timo Minssen et al., When Does Stand-Alone Software Qualify as a Medical Device in the European Union? – The CJEU's Decision in SNITEM and What It Implies for the Next Generation of Medical Devices, 28 Med. L. Rev. 615, 615–24 (2020).

54 Gary Humphreys, Regulating Digital Health (interviewing Timo Minssen), 98 Bulletin of the World Health Organization (2020), www.who.int/bulletin/volumes/98/4/20-020420.pdf.

57 European Medicines Agency, A Common Data Model for Europe? Why? Which? How? Workshop report from a meeting held at the European Medicines Agency 10, 10–11 (Dec. 2017), www.ema.europa.eu/en/documents/report/common-data-model-europe-why-which-how-workshop-report_en.pdf.

58 Id. at 31.

59 Cf. European Data Protection Supervisor, A Preliminary Opinion on Data Protection and Scientific Research, EDPS (Jan. 6, 2020), https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf.

61 Press release. Commission and Germany’s Presidency of the Council of the EU underline importance of the European Health Data Space, https://ec.europa.eu/commission/presscorner/detail/en/IP_20_2049 (Nov. 11, 2020).

62 Id.

63 Proposal for a Regulation of the European Parliament and of the Council on European Data Governance (Data Governance Act), COM/2020/767 final. Cf. www.euractiv.com/section/digital/news/data-governance-new-eu-law-for-data-sharing-adopted/.

64 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final.

7 AI, Explainability, and Safeguarding Patient Safety in Europe: Toward a Science-Focused Regulatory Model

1 Aras D. Dargazany et al., Wearable DL: Wearable Internet-of-Things and Deep Learning for Big Data Analytics – Concept, Literature and Future, 1 Mobile Info. Systems 4 (2018).

2 NHSX, Artificial Intelligence: How to Get it Right – Putting Policy into Practice for Safe Data-Driven Innovation in Health and Care, 18 (Oct. 2019), www.nhsx.nhs.uk/media/documents/NHSX_AI_report.pdf; Sara Gerke et al., Ethical and Legal Issues of Ingestible Electronic Sensors, 2 Nature Electronics 329 (2019).

3 Sourav Bhattacharya et al., From Smart to Deep: Robust Activity Recognition on Smartwatches Using Deep Learning, IEEE (2016), https://userpages.umbc.edu/~nroy/courses/shhasp18/papers/From%20Smart%20to%20Deep%20Robust%20Activity%20Recognition%20on%20Smartwatches%20Using%20Deep%20Learning.pdf.

4 NHSX, supra note 2, at 20; Department of Health and Social Care (UK), The AHSN Network: Accelerating Artificial Intelligence in Health and Care (2018), https://wessexahsn.org.uk/img/news/AHSN%20Network%20AI%20Report-1536078823.pdf.

5 Fight Covid-19 through the Power of the People, Stan. Med. (2020), https://innovations.stanford.edu.

6 Moni Miyashita & Michael Brady, The Health Care Benefits of Combining Wearables and AI, Harv. Bus. Rev. (2019), https://hbr.org/2019/05/the-health-care-benefits-of-combining-wearables-and-ai.

7 Such adoption may lead to unintended consequences, such as unregulated yet sophisticated apps marketed as low-level medical devices, which may leave doctors overburdened with requests. See Helen Yu, Regulation of Digital Health Technologies in the EU: Intended versus Actual Use, in The Future of Medical Device Regulation: Innovation and Protection (I. Glenn Cohen et al. eds., 2021).

8 Laura Donnelly, NHS Experiment in AI Will See Whole City Offered Virtual Hospital Appointments and Diagnosis by Chatbot, Telegraph (Jan. 23, 2020), www.telegraph.co.uk/news/2020/01/23/nhs-experiment-ai-will-see-whole-city-offered-virtual-hospital/.

9 Also “reinforcement” learning. Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 830 (3rd ed. 2010).

10 See, e.g., deep Boltzmann machines, spiking neural networks. Dargazany et al., supra note 1, at 7.

11 But What Is a Neural Network?, YouTube (Oct. 5, 2017), www.youtube.com/watch?v=aircAruvnKk; Russell & Norvig, supra note 9; Ron Sun, Connectionism and Neural Networks, in The Cambridge Handbook of Artificial Intelligence (Keith Frankish & William M. Ramsey eds., 2014).

12 See, e.g., Oscar D. Lara et al., A Survey on Human Activity Recognition Using Wearable Sensors, 15 IEEE Commc’n Surveys & Tutorial 1199 (2012).

13 Dargazany et al., supra note 1, at 5–6; Russell & Norvig, supra note 9, at 695.

14 Dargazany et al., supra note 1, at 15; Stan. Med., supra note 5.

15 Lukun Wang, Recognition of Human Activities Using Continuous Autoencoders with Wearable Sensors, 16 Sensors 189, 23 (2016).

16 Id. at 15.

17 Miikka Ermes et al., Detection of Daily Activities and Sports with Wearable Sensors in Controlled and Uncontrolled Conditions, 12 IEEE Transactions on Information Technology in Biomedicine 20, 21 (2008).

18 Id. at 24–5.

19 Alessandra Moschetti et al., Towards an Unsupervised Approach for Daily Gesture Recognition in Assisted Living Applications, 17 IEEE Sensors Journal 8395, 8402 (2017).

20 Bhattacharya et al., supra note 3, at 2.

21 Sandra Wachter et al., Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 7 Int’l Data Privacy L. 76, 78 (2017).

22 European Commission, Ethics Guidelines for Trustworthy AI: High-Level Expert Group on Artificial Intelligence 18 (Apr. 8, 2019).

23 Expert Group on Liability and New Technologies, Liability for Artificial Intelligence and Other Emerging Digital Technologies, European Commission 1, 54 (2019), https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en/format-PDF; European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, 13 (2020), https://templatearchive.com/ai-white-paper/.

24 W. Nicholson Price II et al., Potential Liability for Physicians Using Artificial Intelligence, 322 JAMA 1765, 1765 (2019).

25 W. Nicholson Price II, Medical Malpractice and Black-Box Medicine, in Big Data, Health Law, and Bioethics 301 (I. Glenn Cohen et al. eds., 2018).

26 European Commission, supra note 22, at 13.

27 European Commission, Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics (2020), https://ec.europa.eu/info/sites/info/files/report-safety-liability-artificial-intelligence-feb2020_en_1.pdf; this is known as the "update problem." See I. Glenn Cohen et al., The European Artificial Intelligence Strategy: Implications and Challenges for Digital Health, 2 Lancet Digital Health e376, e377 (2020), www.thelancet.com/action/showPdf?pii=S2589-7500%2820%2930112-6; on the "system view" approach to regulation, see Sara Gerke et al., The Need for a System View to Regulate Artificial Intelligence/Machine Learning-Based Software as Medical Device, 3 npj Digit. Med. 53 (2020); Timo Minssen et al., Regulatory Responses to Medical Machine Learning, J. L. & Biosciences 1, 6 (2020).

28 Regulation on Medical Devices (Regulation 2017/745) (EU); In Vitro Diagnostic Medical Device Regulation (IVDR) (Regulation 2017/746) (EU); NHSX, How to Get It Right, supra note 2, at 22.

30 European Commission, supra note 27, at 6.

31 European Commission, supra note 22, at 3.

32 European Commission, supra note 23.

33 European Commission, supra note 23, at 1, 10.

34 European Commission, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment (2020), https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment.

35 Id. at 13.

37 Id. at 16.

39 GDPR 2016/679 and the UK Data Protection Act 2018 (DPA).

40 Information Commissioner’s Office, Right Not to Be Subject to Automated Decision-Making (2020), https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-law-enforcement-processing/individual-rights/right-not-to-be-subject-to-automated-decision-making/ [hereinafter ICO].

41 Wachter et al., supra note 21, at 79, 90; further, an individual's right to know how personal data is evaluated is significantly curtailed by ECJ jurisprudence. See Sandra Wachter & Brent Mittelstadt, A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI, 1 Colum. Bus. L. Rev. 1, 67 (2019).

42 Id. at 82.

43 Id. at 86.

44 Id. at 87.

45 Id. at 92.

46 GDPR, art. 4(4); see also ICO, supra note 40.

47 GDPR, art. 22(2)(c), art. 9(2).

48 European Commission, supra note 22, at 16.

50 Jory Heckman, DARPA: Next Generation Artificial Intelligence in the Works, Federal News Network (Mar. 1, 2018), https://federalnewsnetwork.com/technology-main/2018/03/darpa-next-generation-artificial-intelligence-in-development/.

51 European Commission, supra note 22, at 17.

52 Id. at 18.

55 Sandra Wachter et al., Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, 31 Harv. J. L. & Tech. 842, 873 (2018).

56 Wachter & Mittelstadt, supra note 41, at 7.

57 By developers and independent external auditors. Price, supra note 25, at 295, 301.

58 Id. at 304.

59 European Commission, supra note 22, at 9.

60 Id. at 21.

61 Wachter, supra note 55, at 37.

62 Price, supra note 25, at 305.

63 European Commission, supra note 34, at 14–15.

66 Id. at 29.

67 Id. at 31.

68 Id. at 31; Margaret Mitchell et al., Model Cards for Model Reporting, FAT* '19: Conference on Fairness, Accountability, and Transparency 1, 3 (Jan. 2019).

69 NHSX, How to Get It Right, supra note 2, at 32; this approach aligns with Leong Tze Yun's recommendation that AI systems should be systematically examined and validated; see Gary Humphreys, Regulating Digital Health, 98 Bulletin of the World Health Organization 235, 235 (2020), www.who.int/bulletin/volumes/98/4/20-020420.pdf.

73 European Commission, supra note 22, at 24–31.

74 Id. at 21.

75 Information Commissioner’s Office (UK), Big Data, Artificial Intelligence, Machine Learning and Data Protection 86 (2017), https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf.

76 Jason Yosinski et al., Understanding Neural Networks through Deep Visualization, Deep Learning Workshop, 31st International Conference on Machine Learning (2015).

8 Regulation of Digital Health Technologies in the European Union: Intended versus Actual Use*

* This chapter has been adapted from an article originally published in BMJ Innovations (Digital Health Technologies Under the New EU Medical Devices Regulation: Monitoring and Governing Intended versus Actual Use, 7 BMJ Innovations 637–41 (2021)).

1 Issued US patent US10096319B1, entitled "Voice-Based Determination of Physical and Emotional Characteristics of Users," https://patents.google.com/patent/US10096319B1/en.

2 See, e.g., Melinda Beeuwkes Buntin et al., The Benefits of Health Information Technology: A Review of the Recent Literature Shows Predominantly Positive Results, 30 Health Aff. 464–71 (2011).

3 See, e.g., Apple Watch Saves Man’s Life after Warning Him of Heart Problems, The Telegraph (Jul. 16, 2019), www.telegraph.co.uk/news/2019/07/16/apple-watch-saves-mans-life-warning-heart-problems/; see also D.C. Ioannidis et al., Wearable Devices: Monitoring the Future?, Oxford Med. Case Reps. 492–4 (2019).

4 See, e.g., Alex Matthews-King, Apple Watch and Fitbits Wrongly Sending Healthy People to Doctors Could Overwhelm NHS, Report Warns, Independent, www.independent.co.uk/news/health/nhs-apple-watch-fitbits-ai-waiting-times-gp-misdiagnosis-a8749876.html; Artificial Intelligence in Healthcare, Acad. Med. Royal Colls., www.aomrc.org.uk/wp-content/uploads/2019/01/Artificial_intelligence_in_healthcare_0119.pdf.

5 See, e.g., D. Lupton, The Digitally Engaged Patient: Self-monitoring and Self-Care in the Digital Health Era, 11 Soc. Theory & Health 256–70 (2013).

6 See, e.g., K.J. Compton-Thweatt, Physicians or Facebook? The Effects of Do-It-Yourself Healthcare on Modern Society, Integrated Studies 171 (2018); A. Robeznieks, 4 Mistakes Your Patients Should Avoid With Wearables, AMA, www.ama-assn.org/practice-management/digital/4-mistakes-your-patients-should-avoid-wearables.

7 Lukasz Piwek et al., The Rise of Consumer Health Wearables: Promises and Barriers, 13 PLoS Med. (2016).

8 M.A. Case et al., Accuracy of Smartphone Applications and Wearable Devices for Tracking Physical Activity Data, 313 JAMA 1011 (2015).

9 European Commission, New EU rules to ensure safety of medical devices, MEMO/17/848 (2017).

10 Regulation 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, 2017 O.J. (L 117) 5.5 (EU).

11 Council Directive 93/42/EEC of 14 June 1993 concerning medical devices, 1993 O.J. (L 169) 12.7.

12 Regulation 2017/745, supra note 10, at art. 2.

13 Id. at p. 19.

14 Id. at art. 2(12); see also Case C‑329/16 Syndicat national de l'industrie des technologies médicales (SNITEM), Philips France v. Premier ministre, Ministre des Affaires sociales et de la Santé Confédération paysanne and Others v. Premier ministre and Ministre de l'Agriculture, de l'Agroalimentaire et de la Forêt [2017] ECLI:EU:C:2017:947; see also T. Minssen et al., When Does Stand-Alone Software Qualify as a Medical Device in the European Union? – The CJEU's Decision in SNITEM and What It Implies for the Next Generation of Medical Devices, 28 Med. L. Rev. 615–24 (2020).

15 A. Heath, Apple’s Tim Cook Declares the End of the PC and Hints at New Medical Product, Telegraph, www.telegraph.co.uk/technology/2016/01/21/apples-tim-cook-declares-the-end-of-the-pc-and-hints-at-new-medi/.

16 Regulation 2017/745, supra note 10, at p. 58, art. 51, Annex VIII.

17 European Commission, DG for Communications Networks, Content and Technology, Smart Wearables: Reflection and Orientation Paper (2016).

18 European Commission, supra note 9, at Annex VIII.

20 S.S. Bhuyan et al., Use of Mobile Health Applications for Health-Seeking Behavior Among US Adults, 40 J. Med. Sys. 153 (2016); see also Sean Day & Megan Zweig, Beyond Wellness for the Healthy: Digital Health Consumer Adoption 2018, Rock Health, https://rockhealth.com/insights/beyond-wellness-for-the-healthy-digital-health-consumer-adoption-2018/.

21 Compton-Thweatt, supra note 6.

22 See, e.g., Casey Erdmier et al., Wearable Device Implications in the Healthcare Industry, 40 J. Med. Eng’g & Tech. 141–8 (2016).

23 See, e.g., B. Bent, Investigating Sources of Inaccuracy in Wearable Optical Heart Rate Sensors, 3 NPJ Digit. Med. 19 (2020).

24 Regulation 2017/745, supra note 10, at Annex VIII.

25 M.B. Hamel et al., FDA Regulation of Mobile Health Technologies, 371 N. Engl. J. Med. 372 (2014).

26 Compton-Thweatt, supra note 6; see also J. Dunn et al., Wearables and the Medical Revolution, 15 Personalized Med. 429–48 (2018).

27 Piwek et al., supra note 7.

29 Day & Zweig, supra note 20.

30 Case et al., supra note 8; see also Sara Chodosh, "FDA approved" Medical Devices Don't Actually Have to Do What They Promise, Popular Science, www.popsci.com/fda-approved-medical-devices/.

31 Matthews-King, supra note 4; see also Dora Allday & Stephen Matthews, Fitbits Are Putting a Strain On Doctors "Because the Exercise Trackers Are Incorrectly Telling Wearers They Are ILL," Daily Mail, www.dailymail.co.uk/health/article-6639305/Hypochondriacs-rely-data-Fitbits-piling-extra-pressure-NHS.html; Emily Clarkson, Is Your Fitness Tracker Helping or Hurting Your Health?, The Manifest, https://themanifest.com/app-development/fitness-tracker-helping-hurting-health.

32 See, e.g., Important Safety and Product Information, Fitbit, www.fitbit.com/dk/legal/safety-instructions.

33 See, e.g., Fitbit Launches Fitbit Care, A Powerful New Enterprise Health Platform for Wellness and Prevention and Disease Management, Fitbit, https://investor.fitbit.com/press-releases/press-release-details/2018/Fitbit-Launches-Fitbit-Care-A-Powerful-New-Enterprise-Health-Platform-for-Wellness-and-Prevention-and-Disease-Management/default.aspx.

34 See, e.g., M. Schukat et al., Unintended Consequences of Wearable Sensor Use in Healthcare, 25 Y.B. Med. Informatics 73–86 (2016); see also A.B. Cohen & K. Safavi, The Oversell and Undersell of Digital Health, Health Aff. Blog 442, 443 (2019) and Jenny McGrath, Lack of Regulation Means Wearables Aren’t Held Accountable for Health Claims, Digit. Trends, www.digitaltrends.com/wearables/wearable-devices-leading-to-over-diagnosis/.

35 Akshay et al., Wearable Healthcare Technology – the Regulatory Perspective, 4 Int’l J. Drug Reg. Aff. 15 (2016).

36 McLellan v. Fitbit, Inc., Case No. 16-cv-36 (N.D. Cal. 2018).

37 US Food & Drug Admin., Medical Product Communications That Are Consistent with the FDA-Required Labeling – Questions and Answers Guidance for Industry (2018).

38 See US Food & Drug Admin., Policy for Device Software Functions and Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff, www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM263366.pdf.

39 US Food & Drug Admin., General Wellness: Policy for Low Risk Devices, www.fda.gov/media/90652/download.

40 See European Commission, Assessing the Impact of Digital Transformation of Health Services (2018); see also World Health Org., Draft Global Strategy on Digital Health 2020–4 (2019).

41 See, e.g., E. Timmerman & B. Reid, The Doctrine of Invited Misuse: A Societal Response to Marketing Promotion, 4 J. Macromarketing 40–8 (1984).

42 W.L. Trombetta & T.L. Wilson, Foreseeability of Misuse and Abnormal Use of Products by the Consumer, 39 J. Marketing 48–55 (1975).

45 Clarkson, supra note 31; see also Brian Fung, Is Your Fitbit Wrong? One Woman Argued Hers Was – and Almost Ended Up in a Legal No-Man’s Land, Washington Post, www.washingtonpost.com/technology/2018/08/02/is-your-fitbit-wrong-one-woman-argued-it-was-almost-ended-up-legal-no-mans-land/.

46 Case C-219/11, Brain Products GmbH v. Biosemi VOF and Others.

47 Regulation 2017/745, supra note 10, at art. 2(60).

48 Id. at arts. 83–6, Annex III.

49 Id. at § 1.1 of Annex III.

50 See, e.g., Josep Pane et al., Evaluating the Safety Profile of Non-active Implantable Medical Devices Compared with Medicines, 40 Drug Safety 37, 37–47 (2017).

51 Regulation 2017/745, supra note 10, at art. 83(1).

52 Id. at art. 85.

53 Id. at Annex III.

54 Id. at art. 88.

55 Id. at art. 51(3).

56 Id. at art. 51.

57 Piwek et al., supra note 7.

58 Dinesh Puppala, Regulatory Standpoint: Wearables and Healthcare, Hopkins Biotech Network, https://hopkinsbio.org/biotechnology/regulatory-standpoint-wearables-healthcare/.

59 Duke-Margolis Center for Health Policy, Determining Real-World Data’s Fitness for Use and the Role of Reliability.

60 H. Yu, Redefining Responsible Research and Innovation for the Advancement of Biobanking and Biomedical Research, 3 J. L. & Biosciences 611–35 (2016).

61 See, e.g., KardiaMobile, which is advertised as “the most clinically-validated personal EKG in the world. Now a medical-grade EKG can become part of your daily routine. Enjoy peace of mind,” www.alivecor.com/kardiamobile. Kardia is FDA cleared. See https://alivecor.zendesk.com/hc/en-us/articles/115015799808-Is-Kardia-FDA-cleared-and-CE-marked.

62 Kenneth R. Foster & John Torous, The Opportunity and Obstacles for Smartwatches and Wearable Sensors, 10 IEEE Pulse 22, 22–5 (2019).

Table 5.1 Health data protection in app store policies. Source: author’s analysis (2020)

Figure 6.1 The processing of health data for developing AI medical devices

Figure 7.1 Example of an ANN