
Part I - AI and Data as Medical Devices


Published online by Cambridge University Press:  31 March 2022

I. Glenn Cohen
Harvard Law School, Massachusetts
Timo Minssen
University of Copenhagen
W. Nicholson Price II
University of Michigan, Ann Arbor
Christopher Robertson
Boston University
Carmel Shachar
Harvard Law School, Massachusetts


The Future of Medical Device Regulation
Innovation and Protection
pp. 11–46
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0

It may seem counterintuitive to open a book on medical devices with chapters on software and data, but these are the frontiers of new medical device regulation and law. Physical devices are still crucial to medicine, but they – and medical practice as a whole – are embedded in and permeated by networks of software and caches of data. Those software systems are often mind-bogglingly complex and largely inscrutable, involving artificial intelligence and machine learning. Ensuring that such software works effectively and safely remains a substantial challenge for regulators and policymakers. Each of the three chapters in this part examines different aspects of how best to meet this challenge, focusing on review by drug regulators and, crucially, what aspects of oversight fall outside that purview.

Kerstin Vokinger, Thomas Hwang, and Aaron Kesselheim tackle the question of how food and drug regulators should oversee AI head-on in “Lifecycle Regulation and Evaluation of Artificial Intelligence and Machine Learning-Based Medical Devices.” A crucial difference between AI-powered software systems and classic devices, including software devices, is that AI-powered systems are frequently plastic: that is, they change more regularly (or at least can), given new data and new information about the world in which they are deployed. Vokinger and colleagues highlight how American and European regulators are fitting such plastic AI approaches into existing frameworks and suggest that accomplishing the regulatory task requires a combination of strong prospective evidence, ongoing oversight after approval, and transparency to agencies and others.

It is to those others that Barbara Evans and Frank Pasquale turn in “Product Liability Suits for FDA-Regulated AI/ML Software.” Regulators are only one part of the oversight picture; tort law lurks in the background to pick up the slack where products result in injury. The relationship between the FDA and tort suits for injuries caused by medical technology is complex, and mostly focused on preemption – when can plaintiffs sue in state court where the products involved are FDA-approved?Footnote 1 Evans and Pasquale focus on another aspect of the relationship: the very fact of FDA regulation for at least some clinical decision support software helps define the involved software as a “product” – neatly resolving the product/service distinction that has bedeviled tort liability for software more generally. Opening the door to product liability suits generates new possibilities for tort law to enforce requirements on AI-powered software systems. Evans and Pasquale explore the potential for novel tort suits brought on this basis, notably to address questions of explainability and the adequacy of training datasets. Here, too, the analysis highlights the boundary-crossing nature of AI-powered software, as these issues could be tackled by tort law, regulators, or both.

Finally, Craig Konnoth broadens the regulatory oversight focus beyond artificial intelligence in “Are Electronic Health Records Medical Devices?,” considering the appropriate regulation of electronic health records (EHRs) more generally. Konnoth asks about the EHRs into which clinical decision support and other software are embedded, and which connect different parts of the health system (sometimes with greater success than others). Such interstitial technologies are a persistently challenging target for agency oversight, where different actors have differing expertise and jurisdiction. Konnoth argues that here, too, the oversight role of the FDA may fruitfully be complemented by another: in this case, the Office of the National Coordinator for Health Information Technology, which could oversee the networking-focused aspects of electronic health records.

Collectively, these chapters demonstrate the challenge of regulating and overseeing the AI- and data-powered software which increasingly shapes medical practice, both behind the scenes and within the examining room. These technologies bring immense potential along with real risk, but present new regulatory challenges due to their opacity, their plasticity, and the speed with which they are being incorporated into the health system. Ensuring the right sort of oversight so that medical devices centered on AI and big data are safe, effective, and deployed in such a way as to actually help the health system demands concerted action from stakeholders across the board.

1 Lifecycle Regulation and Evaluation of Artificial Intelligence and Machine Learning-Based Medical Devices

Kerstin N. Vokinger, Thomas J. Hwang, and Aaron S. Kesselheim
1.1 Introduction

Artificial intelligence- and machine learning (AI/ML)-based technologies aim to improve patient care by uncovering new insights from the vast amount of data generated by an individual patient, and by the collective experience of many patients.Footnote 1

Though there is no unified definition of AI,Footnote 2 a good working definition is that it is a branch of computer science devoted to the performance of tasks that normally require human intelligence.Footnote 3 A major subbranch of this field is ML, in which, based on the US Food and Drug Administration’s (FDA) definition, techniques are applied to design and train software algorithms to learn from and act on data.Footnote 4 When intended to diagnose, treat, or prevent a disease or other conditions, AI/ML-based software is a medical device under the Food, Drug, and Cosmetic Act in the United States as well as the Council Directive 93/42/EEC and Therapeutic Products Act in the European Union and Switzerland, respectively.Footnote 5 Examples of AI/ML-based medical devices include an imaging system that uses algorithms to give diagnostic information for skin cancer or a smart electrocardiogram device that estimates the probability of a heart attack.Footnote 6

Medical devices that are AI/ML-based exist on a spectrum from locked to continuously learning. “Locked” algorithms provide the same result each time the same input is provided.Footnote 7 Such algorithms need manual processes for updates and validation. By contrast, adaptive or continuously learning algorithms change their behavior using defined learning processes. These changes are typically implemented and validated through a well-defined and possibly fully automated process that aims at improving performance based on analysis of new or additional data.Footnote 8
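The locked/adaptive distinction can be illustrated with a short, purely hypothetical sketch. The class names, thresholds, and update rule below are invented for illustration only and do not correspond to any real device or regulatory requirement:

```python
# Hypothetical sketch of "locked" vs. adaptive (continuously learning) algorithms.
# All names and numbers are illustrative, not drawn from any real device.

class LockedClassifier:
    """Same input always yields the same output; any update is a manual release."""

    def __init__(self, threshold):
        self.threshold = threshold  # fixed at validation time

    def predict(self, risk_score):
        return "flag" if risk_score >= self.threshold else "no flag"


class AdaptiveClassifier(LockedClassifier):
    """Adjusts its threshold from new labeled data via a defined learning process."""

    def update(self, scores, labels):
        # Illustrative rule: blend the threshold toward the mean score of
        # confirmed-negative cases, so behavior drifts as new data accumulate.
        negatives = [s for s, y in zip(scores, labels) if y == 0]
        if negatives:
            self.threshold = 0.5 * self.threshold + 0.5 * (sum(negatives) / len(negatives))


locked = LockedClassifier(threshold=0.8)
adaptive = AdaptiveClassifier(threshold=0.8)

# New field data shift the adaptive model; the locked model is untouched.
adaptive.update(scores=[0.5, 0.6], labels=[0, 0])

print(locked.predict(0.7))    # -> no flag (behavior unchanged)
print(adaptive.predict(0.7))  # -> flag (same input, different output after update)
```

The sketch shows why post-market oversight differs between the two: the locked model's output for a given input is stable between releases, while the adaptive model's output for the very same input can change after deployment, which is precisely what lifecycle-based review must account for.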

While AI/ML-based technologies hold promise, they also raise questions about how to ensure their safety and effectiveness.Footnote 9 In April 2019, the FDA published a discussion paper and announced that it was reviewing its regulation of AI/ML-based medical devices.Footnote 10 The distinctive characteristics of AI/ML-based software require a regulatory approach that spans the lifecycle of AI/ML-based technologies, allowing necessary steps to improve treatment while assuring safety outcomes.

In this chapter, we analyze the regulation of the clearance and certification of AI/ML-based software products in the United States and Europe. Because of the distinctive characteristics of these technologies, we argue for a regulatory approach that spans their lifecycle, allowing indicated steps to improve treatment while ensuring safety.Footnote 11 We conclude by reviewing the regulatory implications of this approach.

1.2 Clearance of AI/ML-Based Medical Devices in the United States

There is no separate regulatory pathway for AI/ML-based medical devices. Rather, in the United States, the FDA reviews medical devices based on their risk, primarily through (1) the premarket approval pathway (the most stringent review, for high-risk devices), (2) the 510(k) pathway, or (3) de novo premarket review (for low- and moderate-risk devices).Footnote 12 Additionally, the humanitarian device exemption can apply to medical devices intended to benefit patients in the treatment or diagnosis of diseases or conditions that affect fewer than 8,000 individuals in the United States per year.Footnote 13

Premarket approval (PMA) is the most likely FDA pathway for new Class III medical devices. Class III devices are those that support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential unreasonable risk of illness or injury. The FDA has determined that general and special controls alone are insufficient to guarantee the safety and effectiveness of such devices; thus, they require a PMA application to obtain marketing approval. Premarket approval requires the demonstration of “reasonable assurance” that the medical device is safe and effective and generally includes at least one prospective trial.Footnote 14 Clearance through the 510(k) pathway is intended for devices for which a PMA is not required (Class I, II, and III devices). In contrast to the PMA, the 510(k) pathway only requires “substantial equivalence” to an already marketed device.Footnote 15 The de novo pathway is an alternate pathway to classify novel medical devices that had automatically been placed in Class III after receiving a “not substantially equivalent” (NSE) determination in response to a 510(k) submission. There are two options for de novo classification for novel devices of low to moderate risk. Under the first option, any sponsor that receives an NSE determination may submit a de novo request asking the FDA to make a risk-based evaluation for classification of the device into Class I or II. Under the second option, any sponsor that determines that there is no legally marketed device upon which to base a determination of substantial equivalence may submit a de novo request for the FDA to make a risk-based classification of the device into Class I or II, without first submitting a 510(k) and receiving an NSE determination.Footnote 16 The de novo pathway allows new devices to serve as references or predicates for future 510(k) submissions.Footnote 17

A majority of AI/ML-based medical devices are cleared through the 510(k) pathway.Footnote 18 However, the 510(k) pathway has been criticized for not sufficiently guaranteeing safety and effectiveness. The 510(k) clearance can lead to chains of medical devices that claim substantial equivalence to each other but, over years or even decades, may diverge substantially from the original device.Footnote 19 For example, certain metal-on-metal hip implants were cleared without clinical studies and based on predicate medical devices that did not demonstrate safety and effectiveness or were discontinued.Footnote 20 Indeed, past clearance of AI/ML-based medical devices can be traced back to other devices that do not have an AI/ML component. For example, Arterys Oncology DL, an AI/ML-based medical device cleared in 2018 and indicated to assist with liver and lung cancer diagnosis, can be traced back to cardiac imaging software cleared in 1998, which was itself considered substantially equivalent to devices marketed prior to 1976.Footnote 21 The clearance decision does not provide any information regarding clinical validation, and such testing may not have been done.Footnote 22

Changes or modifications after marketing of a device require additional FDA notification and possibly review, either as a supplement to the premarket approval or as a new 510(k) submission.Footnote 23 Of course, this is a further challenge for AI/ML devices, since adaptive algorithms that enable continuous learning from clinical application and experience may result in outputs that differ from what was initially reviewed prior to regulatory approval.Footnote 24

The FDA publishes summaries of the cleared medical devices’ safety and effectiveness as well as statements. However, only rarely does the device description state whether the medical device contains an AI/ML component.Footnote 25 One example in which this was indicated was BriefCase, a radiological computer-aided triage and notification software that was 510(k) cleared in 2018 and indicated for use in the analysis of nonenhanced head CT images. According to the FDA’s summary, BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected intracranial hemorrhage on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected intracranial hemorrhage findings.Footnote 26 Another example is AiCE (Advanced Intelligent Clear-IQ Engine), an AI/ML-based medical device that was 510(k) cleared in 2020. AiCE is a noise-reduction algorithm that improves image quality and reduces image noise by employing deep convolutional neural network methods for abdomen, pelvis, lung, cardiac, extremities, head, and inner ear applications.Footnote 27 However, the FDA’s summaries and statements do not reveal whether a cleared AI/ML-based medical device contains locked or adaptive algorithms.Footnote 28 For example, Illumeo System, an image management system software used with general purpose computing hardware to acquire, store, distribute, process, and display images and associated data throughout the clinical environment, is promoted as “adaptive” on the manufacturer’s website, but this is not explicitly mentioned in the FDA’s summary.Footnote 29

1.3 CE Marking of AI/ML-based Medical Devices in Europe

In Europe, there is also no specific regulatory pathway for AI/ML-based medical devices.Footnote 30 In contrast to the United States, medical products are not approved by a centralized agency. Apart from the lowest-risk medical devices (Class I), for which conformity assessment can be carried out under the sole responsibility of the manufacturer, initial review of medical devices of higher-risk classes (IIa, IIb, and III) is handled by private, so-called notified bodies.Footnote 31 In vitro diagnostic medical devices (IVDs) are, based on their risks, either marketed on the sole responsibility of the manufacturer or handled by notified bodies.Footnote 32 The EU Member States, EFTA States (Liechtenstein, Iceland, Norway, and Switzerland), and Turkey concluded treaties with regard to the mutual recognition of conformity assessments for medical devices.Footnote 33 For simplicity, we use “Europe” to refer to these countries, unless otherwise denoted. Each of these European countries recognizes certificates (“Conformité Européenne” [CE] marks) issued by accredited private notified bodies in the other European countries, meaning that after a manufacturer obtains a CE mark in one European country, direct distribution is possible across Europe. Country-specific requirements remain valid, such as mandatory notification for new medical devices, requirements regarding the languages in which product information must be provided, and provisions regarding prescription and professional use, advertising, reimbursement by social insurance, and surveillance.Footnote 34

Studies show that medical devices are often certified in Europe prior to approval in the United States.Footnote 35 However, faster access in Europe brings with it important risks that have been well documented. Recent changes to the current European device regulatory system are intended to better safeguard patient safety.Footnote 36 For example, the revised laws (Regulation 2017/745 on Medical Devices [MDR] and Regulation 2017/746 on in vitro diagnostic medical devices [IVDR]) raised the certification threshold for medical products. However, these new laws still do not address AI/ML-based medical devices specifically. Due to the COVID-19 pandemic, the date of implementation of these laws by Member States has been postponed by one year to May 2021 for the MDR and May 2022 for the IVDR.Footnote 37

In contrast to the United States, Europe does not have a publicly accessible, comprehensive database of certified medical devices and summaries of the regulatory decisions. The European Commission’s database on medical devices (Eudamed) is a repository for information on market surveillance exchanged between national competent authorities and the Commission. However, its use is restricted to national competent authorities, the country-specific regulatory authorities for medical devices, such as Swissmedic in Switzerland.Footnote 38 In some European countries, for example, Germany, the United Kingdom, or France,Footnote 39 such authorities have publicly accessible databases of medical devices registered in their country. However, such databases reflect only a fraction of the medical devices CE marked in Europe.

1.4 Implications for Lifecycle Regulation of AI/ML-based Medical Devices

The traditional paradigm of medical device regulation in both the United States and Europe was not designed for (adaptive) AI/ML technologies, which have the potential to adapt and optimize device performance in real time. The iterative and autonomous nature of such AI/ML-based medical devices requires a new lifecycle-based framework aimed at facilitating a rapid cycle of product improvement and allowing such devices to continuously improve while ensuring patient safety.Footnote 40

First, we believe it is important to address the currently limited evidence for safety and effectiveness available at the time of market entry for such products. Both in the United States and in Europe, a majority of the cleared and CE-marked AI/ML-based medical devices have not required new clinical testing.Footnote 41 This can deprive patients and clinicians of important information needed to make informed diagnostic and therapeutic decisions. Ideally, AI/ML-based medical devices that aim to predict, diagnose, or treat should be evaluated in prospective clinical trials using meaningful patient-centered endpoints.Footnote 42 More rigorous premarket assessment of the performance of AI/ML-based medical devices could also facilitate trustworthiness and thus broader and faster access to these new technologies.Footnote 43 Implementation of AI/ML-based medical devices in clinical care will need to meet particularly high standards to satisfy clinicians and patients. Mistakes based on reliance on an AI/ML-based medical device will drive negative perceptions that could reduce overall enthusiasm for the field and slow innovation. This can be seen with another AI-fueled innovation, autonomous and semi-autonomous vehicles. Even though such vehicles may be, on average, safer than human drivers, a pedestrian death due to such a vehicle error caused great alarm.Footnote 44 As pointed out in a prior study, it is also crucial to ensure that new regulations help contribute to an environment in which innovation in the development of new AI/ML-based medical devices can flourish.Footnote 45 Thus, the prerequisites for clinical testing must be aligned with the risks of AI/ML-based medical devices.

Second, to address the postapproval period (“surveillance”), manufacturers and the agencies (FDA in the United States, national authorities in Europe) should work together to generate a list of allowable changes and modifications that AI/ML-based medical devices can use to adapt in real time to new data that would be subject to “safe harbors” and thus not necessarily require premarket review. This is especially crucial for devices with adaptive algorithms. Such a “safe harbor” could, for example, apply to modifications in performance, with no change to the intended use or new input type, provided that the manufacturer agrees that such changes would not cause safety risks to patients.Footnote 46 These modifications should be documented in the manufacturer’s change history and other appropriate records. However, modifications to the AI/ML-based medical device’s intended use (e.g., from an “aid in diagnosis” to a “definitive diagnosis”) could be deemed to fall out of the “safe harbor” scope and require submission of a new review.Footnote 47 Depending on the modification, it may be reasonable that a focus of the review lies on the underlying algorithm changes for a particular AI/ML-based medical device.

Since even anticipated changes may accumulate over time to generate an unanticipated divergence in the AI/ML-based software’s eventual performance, there should be appropriate guardrails as software evolves after its initial regulatory approval. One possibility would be to develop built-in audits at regular intervals using data from ongoing implementation and assessing outcomes prespecified at the time of approval.Footnote 48 Another would be to implement an automatic sunset after a specific number of years, such as five.Footnote 49 This would allow the regulatory agencies to periodically review accumulated modifications and postapproval performance to ensure that the risk-benefit profile for the device remains acceptable.Footnote 50 A stronger focus on the postapproval period is also in line with the FDA’s discussion paper, which proposes, among other things, that manufacturers provide periodic reporting to the FDA on updates to their software.Footnote 51

Lastly, transparency has the potential to improve the usefulness, safety, and quality of clinical research by allowing agencies, regulators, researchers, and companies to learn from successes and failures of products.Footnote 52 It also fosters trust.Footnote 53 Function and modifications of AI/ML-based medical devices are key aspects of their safety, especially for adaptive software, and should therefore be made publicly accessible. Since modifications to AI/ML-based medical devices may be supported by the collection and monitoring of real-world data, manufacturers should also provide information about the data being collected in an annual report. A further approach to enhance transparency and trustworthiness could be that manufacturers actively update the FDA and European agencies, as well as the public (clinicians, patients, general users) with regard to modifications in algorithms, change in inputs, or the updated performance of the AI/ML-based medical devices.Footnote 54

A stronger focus on transparency should also be pursued by the FDA and European agencies. For example, medical devices that contain an AI/ML component should be indicated as such in the FDA’s summaries. The FDA should also clarify in the summaries whether such AI/ML-based medical devices include locked or adaptive algorithms. In Europe, the public does not have access to reviews or summaries of notified bodies or national authorities. National authorities in Europe should adopt the FDA’s approach.

Medical devices that are AI/ML-based present new opportunities and challenges. Current regulations in the United States and in Europe are not designed specifically for AI/ML-based medical devices, and do not fit well with adaptive technologies. We recommend a regulatory approach that spans the lifecycle of these technologies.

2 Product Liability Suits for FDA-Regulated AI/ML Software

Barbara J. Evans and Frank Pasquale

The 21st Century Cures Act confirmed the FDA’s authority to regulate certain categories of software that, increasingly, incorporate artificial intelligence/machine learning (AI/ML) techniques. The agency’s September 27, 2019 draft guidance on Clinical Decision Support Software proposed an approach for regulating CDS software and shed light on plans for regulating genomic bioinformatics software (whether or not it constitutes CDS software). No matter how the FDA’s regulatory approach ultimately evolves, the agency’s involvement in this sphere has an important – and underexamined – implication: FDA-regulated software seemingly has the status of a medical product (as opposed to an informational service), which opens the door to product liability for defects causing patient injury. When a diagnostic or treatment decision relies on FDA-regulated CDS software, will mistakes invite strict liability, as opposed to being judged by the professional or general negligence standards of care that traditionally governed diagnostic and therapeutic errors? This chapter explores the policy rationales for product liability suits and asks whether such suits may have a helpful role to play as an adjunct to FDA oversight in promoting safety, effectiveness, and transparency of CDS software as it moves into wider use in clinical health care settings.

2.1 Introduction

The term “clinical decision support” (CDS) software includes various tools for enhancing clinical decision making and patient care. Examples include systems that provide alerts and reminders to health care providers and patients, or algorithms that offer recommendations about the best diagnosis or treatment for a patient.Footnote 1 The US Food and Drug Administration (FDA) conceives CDS software as data processing systems that combine patient-specific information (such as a patient’s test results or clinical history) with generally applicable medical knowledge (such as clinical practice guidelines, information from drug labeling, or insights gleaned from outcomes observed in other patients) to provide a health care professional with patient-specific recommendations about how to diagnose, treat, or prevent disease in clinical health care settings.Footnote 2

Congress defines an FDA-regulable medical device as an “instrument, apparatus, implement, machine, contrivance … ” or “any component, part, or accessory” thereof which is “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.”Footnote 3 Despite its physical intangibility, CDS software arguably meets this definition. For many years, the FDA has regulated “software in a medical device”Footnote 4 – software embedded in traditional hardware devices like x-ray machines, where the software affects the safety and effectiveness of the device as a whole.Footnote 5 In 2013, as CDS software was growing more common in clinical health care, the FDA worked with medical product regulators in other countries to develop the concept of “software as a medical device” (SaMD): standalone medical software, designed to run on diverse platforms such as smartphones, laptops, or in the cloud, that constitutes a medical device in its own right.Footnote 6 The notion was that when software is intended for use in diagnosing, treating, or preventing disease, then the software is itself a medical device, and its status as a device does not hinge on being incorporated into specific hardware.

Concerned that the FDA might be contemplating broad regulation of standalone medical software, the software industry pressed Congress for clarification. In December 2016, Congress responded in Section 3060 of the 21st Century Cures Act (the “Cures Act”).Footnote 7 Section 3060 includes some (but not all) CDS software in the definition of a device that the FDA can regulate and provides a jurisdictional rule distinguishing which software is – and which is not – a medical device.Footnote 8 In two subsequent draft guidance documents,Footnote 9 the FDA has attempted to clarify this distinction, but key uncertainties remain unresolved for CDS software that incorporates AI/ML techniques.

Whether a piece of software is subject to FDA oversight has important legal impacts apart from the immediate burden and delay of having to comply with the FDA’s regulations. This chapter explains why the FDA’s regulation of medical software could increase the likelihood that state courts would view it as a product that is subject to strict product liability tort regimes. Software liability has long been a contested topic. Courts have shown reluctance to apply product liability to software, whether because its intangible nature seems at odds with the notion of a product, or because software seems better characterized as a service.Footnote 10 If classified as a service, professional malpractice or ordinary negligence regimes would apply to software vendors. If classified as a product, they could face product liability (which encompasses both negligence and strict liability claims). The fact that product vendors face product liability does not prevent plaintiffs from also bringing malpractice suits against physicians and other health care professionals who ordered, prescribed, or used a defective product in the course of treating the patient. Product liability and malpractice coexist in the medical setting, and a single injury can generate both types of suit.

This chapter briefly explains the jurisdictional rule Congress set out in Section 3060 of the Cures Act and identifies key uncertainties after the FDA’s two recent attempts at clarification. The chapter next summarizes some of the policy rationales for product liability and their applicability to CDS software. The chapter then explores two intriguing types of product liability suits that could emerge in connection with FDA-regulated AI/ML CDS software.

2.2 The FDA’s Authority to Regulate CDS Software

The very fact that the FDA regulates a piece of software militates in favor of its classification as a product, as opposed to an informational or professional service, potentially subjecting it to product liability suits. This proposition may strike readers as nonobvious, but it is an artifact of how the FDA’s jurisdiction is defined under the Food, Drug, and Cosmetic Act (FDCA).

A key divide in health law is between FDA regulation of medical products versus state-level licensure directed at health care services such as the practice of medicine. “The scope of FDA’s power is defined almost entirely by the list of product categories over which it has jurisdiction.”Footnote 11 The major exception is that the FDA shares broad powers with the Centers for Disease Control and Prevention to manage the spread of communicable diseases, but those powers arise under a different statute.Footnote 12 Under the FDCA, the FDA’s ability to regulate persons or entities rests on whether they are developing, manufacturing, shipping, storing, importing, or selling an item that fits within one of the product categories that Congress authorizes the FDA to regulate: drugs, devices, biological products, food, animal drugs, et cetera.Footnote 13 The FDA’s regulatory authority under the FDCA extends to products rather than services.Footnote 14 Medical devices, as FDA-regulated products, are routinely subject to product liability suits.Footnote 15

When the FDA asserts that it has jurisdiction to regulate something, the agency is making a determination that that thing fits into one of these congressionally defined product categories, and therefore is not a service. Once the FDA determines that something is a product, it is conceivable that a state court hearing a tort lawsuit might disagree, but this is unlikely. Doing so would amount to a state court finding that the FDA regulated outside of its lawful jurisdiction. Suits challenging the FDA’s jurisdiction pose federal questions to be heard in federal court, not state court. Moreover, the FDA is making scientific/technical determinations when it classifies something as a medical product, and courts (both state and federal) tend to give “super deference” to such decisions.Footnote 16 If the FDA determines that software fits within Congress’s definition of a medical device, and therefore is a product, state courts seem likely to defer.

The jurisdictional rule for CDS software under the Cures Act carefully respects the line between products and services, as has all FDA-related legislation dating back to the 1930s, when the scope of the FDA’s power to regulate medical practice was hotly debated before Congress passed the FDCA.Footnote 17 Congress disclaimed any intent for FDA regulation of medical products to encompass regulation of health care services, a traditional province of the states.Footnote 18 As a policy matter, the FDA seeks to avoid regulating physicians’ activities, even though courts have never found constitutional limits on the FDA’s power to do so.Footnote 19 “There is little doubt under modern law that Congress has ample power to regulate the manufacture, distribution, and use of drugs and medical devices.”Footnote 20 Regulating use is tantamount to regulating health care services when, for many types of devices used in health care facilities, the provider rather than the patient is the user.Footnote 21 This tension surfaces in the 1976 Medical Device Amendments, which authorize the FDA to approve medical devices subject to restrictions on their use,Footnote 22 but expressly forbid the FDA from interfering with physicians’ discretion to use those devices however they see fit in the context of practitioner–patient relationships.Footnote 23

The product/service distinction grows even more strained when the device is CDS software, which by its very design is intended to influence the practice of medicine. The Cures Act traces a line between CDS software that performs device-like functions (which the FDA appropriately can regulate) and CDS software whose functions resemble medical practice (which the FDA should not regulate).Footnote 24 The baseline assumption is that CDS software performs a practice-related function and should be excluded from the FDA’s oversight. Congress recognizes two situations, however, where FDA oversight is appropriate. These are portrayed in Figure 2.1.

Figure 2.1: The FDA’s jurisdiction to regulate CDS software under the Cures Act

At the far left, the FDA can regulate CDS software when its “function is intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system.”Footnote 25 The FDA has, for many years, regulated this type of software which includes, for example, software that enhances mammogram images to highlight areas suspicious for disease.Footnote 26 In one sense, this is CDS software because it helps a human actor – the radiologist – make a diagnosis. Still, another way to view it is that the software is helping a device (the imaging machine) do its job better by transforming outputs into a user-friendly format. By leaving such software under the FDA’s oversight, the Cures Act treats it as mainly enhancing the performance of the device rather than the human using the device. The software is, in effect, a device accessory, and an accessory to a device is itself a device that the FDA can regulate.Footnote 27

The FDA can regulate some, but not all, of the remaining CDS software which more directly aims to bolster human performance. The Cures Act allows the FDA to regulate CDS software if it is not intended to enable the “health care professional to independently review the basis for such recommendations that such software presents” so that there is an intent that the “health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.”Footnote 28 This murky language expresses a rather simple concept: the FDA can regulate CDS software if the developer intends for it to operate as a “black box,” to use Nicholson Price’s phrase.Footnote 29 To use engineering parlance, the FDA can regulate AI/ML CDS software if it is not intended to function as explainable artificial intelligence (XAI).Footnote 30 CDS software that makes recommendations falls under the FDA’s regulatory jurisdiction if those recommendations are not intended to be transparent to the health care professionals using the software. On the other hand, if CDS software is transparent enough that a health care professional would be able to understand its recommendations and challenge them – that is, when it is not a black box – then Congress excludes it from FDA regulatory oversight.

The Cures Act parses the product/practice regulatory distinction as follows: Congress sees it as a medical practice issue (instead of a product regulatory issue) to make sure health care professionals safely apply CDS software recommendations that are amenable to independent professional review. In that situation, safe and effective use of CDS software is best left to clinicians and to their state practice regulators, institutional policies, and the medical profession. When CDS software is not intended to be independently reviewable by the health care provider at the point of care, there is no way for these bodies to police appropriate clinical use of the software. In that situation, the Cures Act tasks the FDA with overseeing its safety and effectiveness. Doing so has the side effect of exposing CDS software developers to a risk of product liability suits. Product liability regimes may provide a useful legal framing for some of the problems CDS software presents.

2.3 Why There May Be a Role for Product Liability

An emerging literature on the limits of AI and big data analytics has raised serious concerns about the safety of these technologies, including in their CDS software applications. Lack of reproducibility may occur because of nonrepresentative datasets, or because vendors and developers refuse to permit others to scrutinize their wares. As Rebecca Robbins reported in 2020, “Some of these AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven’t yet been shown to improve patient outcomes. Some hospitals don’t share data on how well the systems work.”Footnote 31 Narrow validity undermines some models’ applicability in certain health care settings, but overblown claims of accuracy or assistance can lead physicians not to mention that the software is in use, much less to seek patients’ informed consent to it.Footnote 32 Data opacity also creates situations where even those who might be concerned about CDS software cannot adequately complete due diligence or otherwise explore its limits. Dr. Eric Topol summarized many examples of these problems (lack of reproducibility, narrow validity, overblown claims, and nontransparent or hidden data).Footnote 33

There is also concern that the data involved may not merely lack representativeness generally but may be biased in particularly troubling ways. Datasets may inadequately reflect all groups in society,Footnote 34 or may underinclude womenFootnote 35 and overrepresent persons of European ancestry,Footnote 36 causing the software to provide unreliable or unsafe recommendations for the underrepresented groups.Footnote 37

Some observers hope that the FDA or National Institute of Standards and Technology will gradually nudge CDS software vendors toward better practices. However, the current path of development of medical software casts doubt on whether FDA oversight can fulfill this role. The agency’s Digital Innovation Action PlanFootnote 38 and its Digital Health Software Precertification (Pre-Cert) ProgramFootnote 39 acknowledge these concerns:

FDA’s traditional approach to moderate and higher risk hardware-based medical devices is not well suited for the faster iterative design, development, and type of validation used for software-based medical technologies. Traditional implementation of the premarket requirements may impede or delay patient access to critical evolutions of software technology.Footnote 40

In response, the FDA is “reimagining its approach to digital health medical devices,”Footnote 41 but the agency’s policies are still a work in progress. Roiled by long-term trends toward underfunding, politically motivated attacks on its expertise, and flagging public confidence in the wake of the US COVID-19 debacle, the FDA faces a difficult path ahead and may be particularly challenged when it comes to regulating the safety and effectiveness of AI/ML CDS software.Footnote 42 The agency’s priorities may justifiably be elsewhere, and its ability to recruit experts at government salary scales is suspect when AI/ML experts command significantly more than current public compensation levels.

When diagnostic AI ignores problems with inclusivity and bias yet still manages to deliver better results than unaided human observation for many or most patients, the patients who do suffer an injury may not have a tort remedy under a negligence standard – particularly if the standard of care is unaided human observation. Even if standard-setting bodies enunciate standards for database inclusion, many states continue to base negligence liability on customary standards of care.Footnote 43 The next section explores whether failures to use more representative databases might be deemed actionable in strict product liability.

The FDA’s announced approaches for regulating Software as a Medical Device (SaMD) seemingly would not preempt state product liability suits under doctrines announced in prior medical device cases like Medtronic v. LohrFootnote 44 and Riegel v. Medtronic.Footnote 45 This opens the door for product liability suits to help fill the regulatory gaps and help incentivize quality improvement, accountability, and responsibility that an overburdened FDA may be incapable of ensuring.Footnote 46

Other factors suggesting a need for product liability include medical software contract practices that blunt the impact of negligence suits against software developers. It is common for developers to shield themselves from negligence through license terms that shift liability to (or require indemnification from) health care providers that use their software.Footnote 47 Such terms are seen, for example, in vendor contracts for electronic health record (EHR) systems, which may also include alternative dispute resolution procedures and gag clauses that stifle public disclosure of safety problems.Footnote 48 Patients hurt by defective medical software might attempt to sue their health care provider but would face challenges establishing negligence of the software developer. The provider, who might possess facts bearing on the developer’s negligence, cannot pursue claims under terms of the licensing agreement. The result is to channel negligence claims toward providers while the software developer goes unscathed. In contrast, product liability widens opportunities for patients to sue any party in the chain of commerce that resulted in their injuries. Private contracts between software developers and health care providers can foreclose suits between those two signatories but cannot waive the rights of patients to sue developers whose defective software causes medical injuries.

The next section explores two possible product liability causes of action that offer promise for this gap-filling role. The first is manufacturing defect suits for lack of explainability, when software fails to live up to developers’ claims that the algorithm is transparent to physicians tasked with using it. The second is design defect suits when software uses training or operational datasets that are too small, inaccurate, biased, or otherwise inappropriate for the actual patients for whom the software renders recommendations.

2.4 Can Manufacturing Defect Suits Promote Explainability of AI/ML CDS Software?

In its 2017 and 2019 draft guidance documents on CDS software,Footnote 49 the FDA failed to clarify the central jurisdictional enigma in the Cures Act: How will the agency determine whether AI/ML CDS software is intended to enable the health care professional to independently review the basis for the software’s recommendations? This section describes the problem and explores whether product liability suits might help.

The FDA has a clear process for assessing device manufacturers’ intent,Footnote 50 but needs to explain how this process applies to developers of AI/ML software: How, exactly, will the FDA assess whether a software developer intends for its software to be explainable? The agency could, for example, describe algorithmic features that support an inference that CDS software is medical XAI. Alternatively, the agency could prescribe a clinical testing process (such as having physicians use the software and surveying whether they understand the basis for its decisions). The FDA has done neither.

The draft guidance documents both view simple, rule-based CDS systems – those that merely apply existing knowledge that is “publicly available (e.g., clinical practice guidelines, published literature)”Footnote 51 – as meeting the § 360j(o)(1)(E)(iii) “explainability” standard, thus escaping FDA regulation. The 2017 draft guidance did not directly discuss AI/ML systems that derive and apply new insights from real-world evidence. It seemed to presume that all such systems would be subject to FDA regulation – an overly expansive view of the FDA’s authority that ignored the jurisdictional rule in the Cures Act. The 2019 draft guidance acknowledges that the explainability of AI/ML software is a key jurisdictional issue but fails to provide any standards or processes for judging whether software is intended to be explainable.

This default has serious consequences. If the FDA deems all but the simplest CDS systems to be unexplainable, this could have detrimental impacts on innovation and on patients. What incentive will software developers have to invest in making AI/ML medical software more explainable to physicians, if the FDA deems all such software to be unexplainable no matter what they do? Simple, rule-based CDS software would escape FDA oversight. Promising AI/ML software that could enable a learning health care system informed by real clinical evidence might face long regulatory delays.

The FDA’s failure to set standards – or at least a process – for assessing software explainability leaves the agency with no basis to rebut developers’ claims that their software is intended to be explainable and to allow independent review by physicians. Developers seemingly could escape FDA regulation by simply asserting that they intend for software to be explainable (whether or not it actually is) and by labeling the software as “not intended for use without independent review by a healthcare professional” and “not intended to serve as the primary basis for making a clinical diagnosis or treatment decision regarding an individual patient.”Footnote 52 Developers have strong incentives to pursue this strategy. They might escape FDA regulation under the Cures Act’s jurisdictional rule. This in turn would let them argue that their software is a service, rather than an FDA-regulated product subject to product liability. Any physician that relies on the software’s recommendations as the main basis for decision making would be using it off-label, and negligence liability for off-label use rests with the physician, rather than the software developer. Why would a rational software developer not try this strategy?

Strict product liability might provide an answer. Under the Third Restatement of Torts, a plaintiff establishes a manufacturing defect by showing that a product “departs from its intended design even though all possible care was exercised in the preparation and marketing of the product.”Footnote 53 The plaintiff merely needs to show that the product deviated from the intended design when it left the developer’s possession.Footnote 54 If an AI/ML CDS software developer states that it intended for its software to allow independent review by physicians, and perhaps even escaped FDA oversight by making that claim, then that proves that the software was intended to be explainable. If the software later lacks explainability, hindering independent physician review, then the software clearly departs from its intended design and has a manufacturing defect. Plaintiffs seemingly face low evidentiary hurdles to establish the defect: they could call their physician to the witness stand and ask the physician to explain to the jury how the AI/ML software reached its recommendations that affected patient care. If the physician cannot do so, the plaintiff would have proved the defect. In light of the FDA’s ongoing failure to enunciate standards for AI/ML software explainability, manufacturing defect suits are a promising tool to incentivize investment in improved explainability (and frank disclosures when explainability is lacking).

2.5 Can Design Defects Promote the Use of Appropriate Training Datasets?

The FDA’s 2017 draft guidance on CDS software suggested that the Cures Act explainability standard cannot be met unless physician users have access to the data underlying a software product’s recommendations.Footnote 55 The agency backed away from this position in its 2019 draft guidance, possibly reflecting the reality that software developers are deeply opposed to sharing their proprietary training datasets with anyone, whether users or regulators. At most, developers express willingness to share summary statistics, such as the kinds of health conditions, demographics, and number of patients included in the training dataset. The FDA’s oversight of AI/ML training datasets thus seems destined to be cursory.

There have been calls for software developers to have legal duties relating to the accuracy and appropriateness (representativeness) of training datasets, as well as the integrity of all data inputs and the transparency of outputs.Footnote 56 Prospective regulation by the FDA is proving an uncertain legal vehicle for establishing such duties. Can design defect suits address this deficiency?

“Strict” product liability/design defect suits allege that a product is unreasonably dangerous even though it may conform to its intended design. Complex products like CDS software are unsuitable for a consumer expectations test, which applies only if jurors would be able to understand a product’s risks without the aid of expert witnesses. Courts likely would apply a risk-utility test, which usually involves requiring the plaintiff to show that a reasonable alternative design (RAD) existed at the time of sale and distribution. This reliance on reasonability concepts causes strict liability suits for design defects to bear a considerable resemblance to negligence suits, which is why this paragraph put “strict” between quotation marks.

Selection of the training dataset is a central design decision when developing AI/ML software. If the training dataset is too small, inappropriate, inaccurate, or biased and nonrepresentative of patients the software later will analyze, then the software – by its design – cannot provide accurate recommendations for their care. An alternative design seemingly always exists: that is, train the software on a larger, more appropriate, more accurate, less biased dataset that better reflects the intended patient population. However, the “R” in RAD stands for “reasonable,” and it would be left for the trier of fact to decide whether it would have been reasonable for the software developer to have used that alternative, better dataset, in view of the cost, delay, availability, and accessibility of additional data.

Framing the problem as a design defect of the AI/ML software (which in most cases will be an FDA-regulated product) may avert some of the difficulties seen in prior product liability suits alleging defects in information itself. Because information is intangible, some courts struggle with treating it as a product and applying strict liability.Footnote 57 Suits for defective graphic presentation of information occasionally succeed, as in Aetna Casualty & Surety Co. v. Jeppesen & Co.,Footnote 58 involving a deadly air crash after the pilot relied on a Jeppesen instrument approach chart – a product consisting almost entirely of the graphic presentation of information, which the district court found defective. That case is considered anomalous, however, and many courts hesitate to allow design defect suits over deadly information, whether on First Amendment grounds or out of reluctance to hinder free flows of information in our society.Footnote 59 Suits for defective information seem most likely to fail when the information in question resembles expressive content,Footnote 60 which might not be an issue for AI/ML training datasets, which are not expressive. Still, courts have a well-known reluctance to treat information as a “product” that was “defective.” The approach proposed here avoids this problem. The alleged defect is not in the information itself, but in the design of the software product that relied on the information. The information in an AI/ML training dataset is best conceived as a design feature of the software rather than a product in its own right.

2.6 Conclusion

Some commentators express concern that applying product liability to software could have adverse impacts on innovation and might delay diffusion of software.Footnote 61 We agree that these are valid concerns that courts will need to weigh carefully when considering claims by patients injured during the use of AI/ML CDS software. At the same time, however, a vast and growing literature on algorithmic accountability and critical algorithm studies has painstakingly documented that AI/ML software, even if it provides useful results for most people, can harm members of groups that were underrepresented in the datasets on which the software relies.Footnote 62 Such injuries are predictable and need remedies when they do occur. Product liability should not be ruled out. Slowing the diffusion of software might well be justified if the software injures people or entrenches historical disparities in access to high-quality health care.

Other commentators question applying product liability to AI/ML continuous-learning software which can evolve independently of the manufacturer.Footnote 63 To date, the FDA has only cleared or approved software that is “locked” – that is, stops evolving – prior to the FDA’s review, which removes this concern. As continuously learning software does reach the market, the possibility of software evolution underscores the need to program in restraints and checks against problematic forms of evolution.Footnote 64 The FDA has not explained how it will (or whether it can) ensure such restraints. Product liability has long served alongside the FDA’s oversight to promote patient safety.

Some commentators endorse product liability in the CDS context, pointing to the need for courts to recognize and counteract automation bias, which can arise when often overworked professionals seek tools to ease their workload.Footnote 65 More stringent product liability standards are a way of promoting a lower risk level in the health care industry and can ease the difficulties injured patients will face establishing negligence, given the extraordinarily complex and even trade-secret protected methods used to develop CDS software.Footnote 66

The time may be right to reconsider product liability for medical software. The FDA’s foray into this regulatory sphere bestows “product” status on medical software that courts often have tended to view as information services. By doing so, the agency’s regulation of CDS software opens the door to product liability suits. This chapter has suggested two examples that merit further study. Such suits could help nudge software developers to improve the explainability of their software and ensure appropriate training datasets and could promote greater industry transparency about CDS software on which patients’ lives may depend.

3 Are Electronic Health Records Medical Devices?

Craig Konnoth

Are Electronic Health Records (EHRs) medical devices? Answering this question is important. It will determine, in part, which agency will regulate EHRs, and under what paradigms. Either the Food and Drug Administration (FDA) will regulate EHRs as medical devices, or the Office of the National Coordinator of Health Information Technology (ONC), another subagency within HHS that focuses on health data regulation, will provide the framework. This chapter argues that the task should be divided between the two agencies in a way that reflects their respective expertise to produce an optimum outcome. The criterion should be the extent to which the particular function being regulated involves networking with other systems and users. To the degree that it does, the ONC should hold primacy. But for more patient-facing functions that do not involve networking, the FDA should run point. Thus, the ONC should control data-format standardization in EHRs; the FDA might lead clinical decision support (CDS) efforts.

At the outset, some may argue that the question I raise is moot, and my solution is impossible. Section 520(o)(1)(C) of the Food, Drug, and Cosmetic Act (FDCA), inserted by the 21st Century Cures Act of 2016 (Cures Act), seems to shift the balance of power toward the ONC. It provides that EHRs are not medical devices if they were “created, stored, transferred, or reviewed by health care professionals,” “are part of health information technology that is certified” by the ONC, and “such function is not intended to interpret or analyze patient records.”Footnote 1 But at the same time, the HHS Secretary has the authority to undo the exclusion, admittedly subject to notice-and-comment rulemaking and a finding that a particular “software function would be reasonably likely to have serious adverse health consequences.”Footnote 2 If the exclusion of EHRs from FDA jurisdiction does not make sense, then the Secretary could likely take steps to undo or modify the statutory mandate.

The question then is, should they? And the statute provides no answer to that question. On one hand, the statute does exclude EHRs as medical devices. But at the same time, by negative implication, Section 520(o)(1)(C) suggests that but for its exclusion, EHRs would be medical devices – after all, why bother to say a product is not a device if that product would not have been covered by the definition of device anyway?Footnote 3 While the statute quite clearly excludes EHRs as medical devices, neither the statute nor the legislative history is clear on the reasons for doing so. Thus, there is little guidance in the statute as to how the Secretary can and should exercise discretion.

I argue that the key aspect of EHRs that renders them foreign to the FDA’s jurisdiction is their systemwide interconnectedness; they affect, and are affected by, third parties both directly and indirectly. First, a patient’s EHR affects others. EHRs must work in a certain way not just for the safety of the patient but for the integrity of the system as a whole; the data from EHRs is used for both clinical and quality management research, for example. Second, the safety of EHRs involves greater systematic, upstream regulation – of third-party networks, data formats, and other issues that present collective action problems. This goes far beyond the FDA’s mandate, which fails to consider such issues, and the agency lacks jurisdiction over many of the necessary third parties.Footnote 4

While I therefore endorse some aspects of the FDA’s historic reasoning with respect to EHRs, which I describe below, I argue that it should be allowed to regulate only those functions that have a direct and primary effect on the particular patient – such as the quality of a particular algorithm that renders CDS. However, it should not be allowed to regulate aspects of EHRs such as data format and interoperability that present these third-party and systematic considerations.

3.1 Existing Reasons Against Regulating EHRs and Their Shortcomings

The Food, Drug, and Cosmetic Act of 1938 was amended in 1976 to include medical devices within the FDA’s ambit. A device is:

[A]n instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including any component, part, or accessory, which is … intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.Footnote 5

Devices are categorized as Class I through III, depending on the extent to which they support or sustain human life, or present a risk of injury. Class I devices do not support or sustain human life and do not present an unreasonable risk of injury; Class II devices present moderate risk, for which general controls alone are insufficient; Class III devices support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential unreasonable risk of illness or injury.Footnote 6 FDA controls are commensurate with device risk. Class I devices are subject to “general controls” – for example, prohibitions on misbranding. Class II devices are subject to some additional, discretionary controls. Class III devices require premarket approval from the Secretary, though there are methods for obtaining exemptions.Footnote 7

Turning next to EHRs: these consist of software that offers various kinds of functionality, including data entry, storage, retrieval, transmission, and CDS, among others.Footnote 8 In the late 1980s and early 1990s, EHRs began to take on their modern form, offering functions such as computerized order entry, CDS, and medical device interfaces – even though early computer systems were slow, storage capacity was low, and paper remained omnipresent.Footnote 9

On my account, the FDA regulates EHRs in whole or in part when it takes on regulation of these functions, including when these functions appear in an EHR context. Thus, if the FDA declares authority over CDS regulation, it engages in regulation of EHRs to some degree because of the ubiquity of CDS in EHR contexts. Starting in the late 1980s, as EHRs took on their modern form, the FDA offered various reasons both for and against regulating EHRs and EHR-type products.

On one hand, the reasons for regulating EHRs seem obvious – they fall within the medical device definition. It seems that EHRs constitute an “apparatus” or “contrivance” that is “intended for use” in the process of disease diagnosis, as well as in “the cure, mitigation, treatment, or prevention of disease.” Thus, when data from bloodwork is fed into an EHR, and a medical professional looks at the EHR to make a diagnosis, and also looks at the EHR to determine what other medication a patient is taking so that they can determine what should be prescribed, the EHR plays a role in both diagnosis and treatment. Prominent data regulation scholars Sharona Hoffman and Andy Podgurski similarly conclude: “Given that they feature decision support, order entry, and other care delivery and management functions, one might reasonably conclude that EHR systems are as essential to patient care as are many regulated devices. Furthermore, their software can be more complicated than that found in many computer-controlled medical devices that are subject to FDA jurisdiction.”Footnote 10

This understanding appears to have undergirded the thinking of at least one senior FDA official, who, a decade ago, suggested that EHRs should be regulated as medical devices. As he explained, health information technology – here, it would appear from context, specifically EHRs – is vulnerable to various errors that affect patient safety. These include “(1) errors of commission, such as accessing the wrong patient’s record … (2) errors of omission or transmission, such as the loss or corruption of vital patient data (3) errors in data analysis, including medication dosing errors of several orders of magnitude and (4) incompatibility between multi-vendor software applications and systems, which can lead to any of the above.”Footnote 11 Two years later, a dissenting view in an Institute of Medicine report advanced similar reasons for regulating EHRs as medical devices.Footnote 12 EHR “components participate directly in diagnosis, cure, mitigation, treatment, and prevention of specified individual human beings,” squarely falling within the definition of medical devices.Footnote 13 Indeed, for reasons I will not engage here, the dissent argued that EHRs should be regulated as Class III devices.

On the other hand, over the years, regulators have offered various reasons against regulating EHRs as devices, though none seem to overcome the squarely textual reasoning above. First, the FDA has noted that EHR outputs are subject to independent clinical judgment: physicians can use their own experience and knowledge to evaluate an EHR’s output and make their own decisions concerning patients.Footnote 14 Second, “health IT is constantly being upgraded and modified to reflect new evidence and clinical interventions, changing work flows, and new requirements … Constantly evolving systems … don’t lend themselves to discontinuous oversight mechanisms such as those used for medical devices.”Footnote 15 Third, the FDA lacks the “capacity” to regulate;Footnote 16 and fourth, “regulation of health IT [including EHRs] by FDA as a Class III device could have” an impact “on innovation.”Footnote 17

But there are problems with these rationales. The independent review rationale founders because professionals are just as reliant on EHRs as they are on many other devices (and, relatedly, unable to carry out fully independent reviews). Indeed, depending on the error, a professional may be more likely to notice that an x-ray machine has malfunctioned – because the image is fuzzy, perhaps – than that an EHR contains wrong data. Similarly, as Hoffman and Podgurski note, “in the midst of surgery or in the intensive care unit” it would be hard for a provider to reflect on the data that the EHR has provided.Footnote 18 Further, the concept of “intervention” is hard to suss out. Does “intervention” require the practitioner to follow the EHR’s output or recommendation only if it accords with their own assessment, and to ignore it otherwise? If so, the value-add of the EHR is unclear, since the practitioner will stick to their own judgment no matter what.

Similarly, the other rationales also fail. On the second objection, medical devices generally are subject to various kinds of upgrades and “constant[] evol[ution]”; the FDA has offered a preliminary discussion regarding upgrading different kinds of software, with different tracks for “locked” versus continuously evolving algorithms.Footnote 19 As for the third, FDA funding and support can be increased. And the fourth concern goes to the kind of regulation that would be appropriate for EHRs as medical devices – it does not speak to whether EHRs are devices, and whether the FDA should have control.

Thus, while many stakeholders clearly have concerns about giving the FDA full control over EHR regulation, they have not provided strong rationales for those concerns.

3.2 Additional Rationale: Networked versus Nonnetworked Aspects of EHR Use

In this section, I argue that fundamental aspects of EHRs – namely, their systemwide interconnectedness – render at least some important EHR functionalities foreign to FDA expertise. The FDA should thus refrain from regulating aspects of EHRs that directly implicate data networks. That regulation should remain in the hands of the ONC, which has relationships with data networks and EHR developers.Footnote 20 However, FDA regulation may be appropriate where the subject of regulation is the point at which the EHR interacts directly with patient care.

In other work, I have explained that EHR use occurs at two levels: the individual level and a population-based, systemwide level.Footnote 21 At the individual level, a medical professional uses an EHR for the care of a specific patient. They look up the patient’s medical history, past medical conditions, treatments, and the like, and can use the data to treat the patient – for example, by ensuring that there are no adverse drug interactions.

At the systemwide level, many EHRs are connected in ways that allow the data they contain to be pulled together and analyzed across vast populations to draw conclusions about the safety and effectiveness of treatments and procedures, among other purposes. For this purpose, troves of data are cleaned, collated, and analyzed. The goal of a so-called learning health system is to pull together data – much, if not most, of it in the form of EHRs – in real time to figure out which interventions work best based on current knowledge, and to feed data back into the system, where it is used in turn to refine interventions for future patients in an iterative feedback loop.Footnote 22 While not all EHRs can carry out these functions, many do, and the goal is full interconnectivity.

Further, it is not just the uses of EHRs that invite population- and systemwide considerations; so does the work of pulling EHRs together. For example, the data formats and elements that one EHR uses have ramifications for how other, unrelated EHR systems work – if they do not use the same formats and elements, the overall system cannot function properly.Footnote 23 Thus, as one regulatory entity put it: “[i]ndividual health IT components may meet their stated performance requirements, yet the system as a whole may yield unsafe outcomes.”Footnote 24

These questions of population-level data and interconnected networks should determine the bounds of FDA jurisdiction. The operation of an EHR in any given instance is fundamentally interconnected with a broader system. When a doctor deploys an EHR in a particular context, their action draws on data, data formats, users (who may have input the data years ago), and networks. More than that, their engagement with the EHR can have implications for patient care – not just for their own patient but, if the EHR data is aggregated and used elsewhere, for other patients as well.

This is the key difference between most devices and EHRs. As long as other devices are integrated into the relevant medical system of which they are a part, they fulfil their primary function. Safety considerations therefore focus on the particular context in which the device is used – while there may be downstream effects, they are less important. The purpose of EHRs, however, is to record, transmit, aggregate, and use information downstream. At a fundamental level, EHRs must engage with other systems and subsequent patients – or the same patient in subsequent visits.

Because of this interconnected nature, it is harder to tease EHRs and their data away from how they were delivered and sourced, and how they may interact with other systems – unlike with other devices, where the safety of a particular MRI machine does not (within reason) depend on which supplier the provider obtained it from. In regulating EHRs, the FDA would not just have to consider the particular EHR system at hand. It would have to consider how that system works with other EHR systems, formats, and users. It would also have to consider downstream uses of the data thus input, as it may be used for future analyses.

Limiting the FDA’s ability to engage with the third-party and indirect effects of EHRs fits with the broader approach the agency currently takes. In previous work, I have argued that the FDA generally lacks expertise and has limited authority to regulate, inter alia, indirect drug effects and drug effects on third parties. As I explain, an indirect cause is one which is “separated from an effect by an intervening cause. This intervening cause must 1) be independent of the original act, 2) be a voluntary human act or an abnormal natural event, and 3) occur in time between the original act and the effect.”Footnote 25 Thus, the use of birth control may lead (some claim) to a higher incidence of STDs, since individuals may have condomless sex. But such condomless sex is a voluntary, intervening act. Similarly, “[t]hird-party harm occurs when the drug is prescribed for use, and actually used by person A, but person B is harmed by the use either directly or indirectly.”Footnote 26 Such harm includes, for example, secondhand smoke directly inhaled by third parties who do not use cigarettes. Some harms are both indirect and third party, such as those to downstream partners who may contract an STD through condomless sex that some claim occurs due to the availability of birth control.Footnote 27 I explain that while the FDA should sometimes regulate indirect and third-party harm, its expertise and authority are at their nadir when it does so, and its intervention should be limited. That is the situation in which EHRs reside.

Two regulatory entities have recognized – without considering the implications for regulatory control – that EHRs raise questions of indirect and third-party harms. EHRs, they note, are part of “a complex sociotechnical system.”Footnote 28 Yet they do not distinguish the networked and nonnetworked aspects of EHRs. Rather, they focus on the interaction of users with EHRs, and the errors that result. As they emphasize, the interactive nature of EHRs, organizational workflow, and user understanding determine safety.Footnote 29 Scholars such as Sara Gerke and coauthors, writing in the context of artificial intelligence (AI), have similarly argued that AI in health implicates a “system” view, by which they mean the intersection of humans and organizational workflow with technology.Footnote 30

But in the EHR context,Footnote 31 the distinguishing factor is not user-technology interaction. After all, other devices raise concerns regarding user-technology interactions and the errors that result, and the FDA has, at least to some degree, sought to regulate such concerns by reviewing labeling and the like.Footnote 32 There are limits – for example, the FDA cannot “regulate the practice of medicine.”Footnote 33 But user-error concerns arise with respect to all medical devices; they are not unique to EHRs (or, for that matter, to medical software more generally). Rather, the relevant boundary is between networked and nonnetworked EHR functions.

Separating EHR use into these two aspects allows us to determine the bounds of FDA jurisdiction. On one hand, it may make sense for the FDA to regulate certain aspects of the EHR as they pertain to a specific patient – subject to the limits on regulating the practice of medicine. But when networked aspects of the EHR are involved, the FDA should step back. There, the ONC – which has developed relationships with EHR developers and national data networks, and has indeed created a process for establishing a voluntary national health data network – should step in and regulate. How might this play out? Let us consider a taxonomy developed by various HHS agencies and see how the approach works.

3.3 Applying the Framework

The Food and Drug Administration Safety and Innovation Act (FDASIA) required the FDA, ONC, and the Federal Communications Commission (FCC) to develop “a report that contains a proposed strategy and recommendations on an appropriate, risk-based regulatory framework pertaining to health information technology, including mobile medical applications, that promotes innovation, protects patient safety, and avoids regulatory duplication.”Footnote 34

The Report separated health IT into three sets of functions:

1) administrative health IT functions [namely ‘billing and claims processing, practice and inventory management, and scheduling’], 2) health management health IT functions [namely ‘health information and data exchange, data capture and encounter documentation, electronic access to clinical results, most clinical decision support, medication management, electronic communication and coordination, provider order entry, knowledge management, and patient identification and matching’], and 3) medical device health IT functions [namely ‘computer aided detection software, remote display or notification of real-time alarms from bedside monitors, and robotic surgical planning and control’].Footnote 35

The Report suggested that only category 3 functions were subject to FDA regulation. That seems correct, but not for the Report’s reasons.

First, administrative functions – “billing and claims processing, practice and inventory management, and scheduling” – are not patient-facing, and can be separated out on that ground.

Next, health management health IT functions include “health information and data exchange, data capture and encounter documentation, electronic access to clinical results, most clinical decision support, medication management, electronic communication and coordination, provider order entry, knowledge management, and patient identification and matching.”Footnote 36 The Report concluded that the FDA did not have to regulate these functions because they presented a lower risk,Footnote 37 but it cited little evidence for that conclusion. The better reason is that these functions all concern the EHR’s integration with other systems and its interaction with multiple users – the EHR as a networked product, networked with both technology and system users. The FDA, which rarely regulates at a systemwide level and focuses primarily on the interaction between device and patient, is ill-suited for such regulation. Rather, the ONC, which has developed relationships with multiple players in the health data world, should take the lead role.Footnote 38

Finally, medical device health IT functions include “computer aided detection software, remote display or notification of real-time alarms from bedside monitors, and robotic surgical planning and control.”Footnote 39 The Report suggested that these functions are higher risk, and therefore fall within the FDA’s purview. On my account, these functions focus on HIT functionality as it pertains to specific patients rather than on networking aspects. They therefore fall more squarely within FDA expertise.

Similar issues arise in the context of CDS regulation. Section 520(o)(1)(C) of the Food, Drug, and Cosmetic Act, added by the Cures Act, excludes software that is meant to display medical information about a patient and the like, as long as the “health care professional [can] independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.”Footnote 40 In guidance, the FDA explained that whether a professional exercised such judgment depended on “[t]he purpose or intended use of the software function; The intended user (e.g., ultrasound technicians, vascular surgeons); The inputs used to generate the recommendation (e.g., patient age and gender); and [t]he rationale or support for the recommendation. In order for the software function to be excluded from the definition of device, the intended user should be able to reach the same recommendation on his or her own without relying primarily on the software function.”Footnote 41

Commenters responded with confusion.Footnote 42 As Professor Efthimios Parasidis noted,

“The FDA’s statement does not represent a reasonable interpretation of the statute, because it focuses on the physician’s ability to come up with a treatment decision independent of the CDS program, rather than focusing on the ability of the physician to independently review ‘the basis of such recommendation that such software presents.’ It is one thing to be able to diagnose a patient independent of a CDS program, and another to understand and independently review the output of a CDS program. The statute covers the latter, while the FDA’s draft guidance appears to cover the former.”Footnote 43

In 2019, however, the FDA doubled down on this approach.Footnote 44

On my account, CDS should fall within the FDA’s purview to the extent it involves the quality of an algorithm and the outputs it produces. The ONC has little expertise in issues of algorithmic quality, while the FDA encounters such issues in its regulation of other devices apart from EHRs.Footnote 45 However, CDS relies on data collected from a range of different EHRs. To the extent that a CDS problem arises from data quality, transmission, or input from EHRs – that is, from issues relating to the quality of networking across the national health information network – it falls under the ONC’s authority. The HHS Secretary should use their authority to recalibrate the division of responsibility between the two agencies in this way.

3.4 Conclusion

I have argued that the delineation of authority between the FDA and the ONC should not be based on the extent of provider intervention or control over HIT, which involves conceptually hard distinctions. Nor should it be based on how risky a piece of HIT is, as such outcomes are highly context-dependent and still empirically hard to ascertain. Rather, it should be based on whether the aspect of the HIT subject to regulation involves its ability to network with other systems and users. To the degree that it does, the HIT should fall within ONC regulation. But so long as the HIT function at issue does not implicate networking – consider, for example, the quality of algorithmic analysis – FDA jurisdiction is appropriate. The line between the categories can be blurry – after all, the analysis of algorithmic quality might implicate questions of data collection and standardization. But if history is any guide, such blurriness will be the case no matter what standard is adopted, as we move to ever more automated health systems.


3 Are Electronic Health Records Medical Devices?

1 21 U.S.C. § 360j(o)(1)(C).

2 21 U.S.C. § 360j(o)(3)(A)–(B).

3 Cf. Metro. Life Ins. Co. v. Massachusetts, 471 U.S. 724, 741 (1985) (“Unless Congress intended to include laws regulating insurance contracts within the scope of the insurance saving clause, it would have been unnecessary for the deemer clause explicitly to exempt such laws from the saving clause when they are applied directly to benefit plans.”).

4 For an extended treatment of this framework see, generally, Craig J. Konnoth, Drugs’ Other Side Effects, 105 Ia. L. Rev. 171 (2019).

5 21 U.S.C. § 321(h).

6 21 U.S.C.A. § 360c(a)(1).

7 Id.; Sharona Hoffman & Andy Podgurski, Finding a Cure: The Case for Regulation and Oversight of Electronic Health Record Systems, 22 Harv. J. L. & Tech. 103, 137 (2009).

8 R.S. Evans, Electronic Health Records: Then, Now, and In the Future, Yearbook Med. Inform. S48–61 (2016); see also Clinical Decision Support (“The majority of CDS applications operate as components of comprehensive EHR systems”).

9 See Evans, supra note 8.

10 Hoffman & Podgurski, supra note 7, at 130. They raise concerns about FDA authority and would give the oversight to the Centers for Medicare and Medicaid Services, another subagency in HHS. However, the Cures Act allows the Secretary to entrust authority to the FDA. See also Nicolas Terry, Pit Crews with Computers: Can Health Information Technology Fix Fragmented Care?, 14 Hous. J. Health L. & Pol’y 129, 183 (2014) (“In straining to avoid untimely over-regulation, the FDA may have under-regulated. If the agency had asserted its jurisdiction over EMRs rather than backing down to ONC and CMS during MU, maybe better, safer products would have been brought to market (admittedly later).”).

11 Jeffrey Shuren, Dir. of FDA’s Ctr. for Devices and Radiological Health, Testimony at the Health Info. Tech. Policy Comm. Adoption/Certification Workgroup (Feb. 25, 2010) (acknowledging the receipt of 260 reports of malfunctioning EHR systems since 2008).

12 Inst. of Med., Health IT and Patient Safety: Building Safer Systems for Better Care 194 (2012) [hereinafter IOM Report].

14 This remarkably stable rationale has spanned the last thirty years. Compare 52 Fed. Reg. 36,104 (1987) with Bipartisan Policy Center Health Innovation Initiative, An Oversight Framework for Assuring Patient Safety in Health Information Technology 15 (2013); see also infra note 42 (describing recently released guidance pertaining to the Cures Act).

15 Bipartisan Policy Center, supra note 14, at 16.

16 IOM Report, supra note 12, at 154.

18 Hoffman & Podgurski, supra note 7.

19 US Food & Drug Admin., Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML) Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback 3 (2019) [hereinafter SaMD Discussion Paper].

20 See generally Craig Konnoth & Gabriel Scheffler, Can Electronic Health Records Be Saved?, 46 Am. J. L. & Med. 7, 7–19 (2020).

21 Craig Konnoth, Data Collection, EHRs, and Poverty Determinations, 46 J.L., Med. & Ethics 622, 625–6 (2018).

22 Craig Konnoth, Health Information Equity, 165 U. Penn. L. Rev. 1317, 1319 (2017).

23 Craig Konnoth, Regulatory De-arbitrage in Twenty-First Century Cures Act’s Health Information Regulation, 29 Ann. of Health L. & Life Sci. 136, 137 (2020).

24 US Food & Drug Admin., FDASIA Health IT Report: Proposed Strategy and Recommendations for a Risk-Based Framework 10 (Apr. 2014) [hereinafter FDASIA Report].

25 Craig J. Konnoth, Drugs’ Other Side Effects, 105 Ia. L. Rev. 171, 197 (2019).

26 Id. at 200.

27 Richard J. Fehring et al., Influence of Contraception Use on the Reproductive Health of Adolescents and Young Adults, 85 Linacre Q. 167, 167–77 (2018).

28 FDASIA Report, supra note 24, at 10.

29 IOM Report, supra note 12, at 61–2.

30 Sara Gerke et al., The Need for a System View to Regulate Artificial Intelligence/Machine Learning-Based Software as Medical Device, 3 npj Digital Med. (2020).

31 And I emphasize that the scholars above were not considering the EHR context.

33 Gerke et al., supra note 30.

34 FDASIA Report, supra note 24.

35 Id. at 11–12.

36 Id. at 13.

37 Id. at 12.

38 See IOM Report, supra note 12, at 20–1 (describing the ONC’s relationship with stakeholders).

39 FDASIA Report, supra note 24, at 13.

40 21 U.S.C. § 360j(o)(1)(E)(iii).

41 US Food & Drug Admin., Clinical and Patient Decision Support Software: Draft Guidance for Industry and Food and Drug Administration Staff 8 (2017).

42 See Barbara Evans & Pilar Ossorio, The Challenge of Regulating Clinical Decision Support Software After 21st Century Cures, 44 Am. J. L. & Med. 237, 239–40 (2018) (“The Cures Act singles out CDS software that recommends diagnoses or actions to treat or prevent disease. It defines a standard for deciding when such software can be excluded from FDA regulation. Congress excludes CDS software from FDA regulation if the software is intended to enable the ‘health care professional to independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.’ To escape FDA regulation, the software vendor/manufacturer must intend for the software to make it possible for health care professionals to override its recommendations by explaining its rationale in terms that a clinician could understand, interrogate, and possibly reject. Whether CDS software is subject to FDA regulation potentially turns on the software’s ability to answer the quintessential epistemological question: How do we know?”).

43 Efthimios Parasidis, Comment on Clinical and Patient Decision Support Software: Draft Guidance for Industry and Food and Drug Administration Staff, 82 Fed. Reg. 53,987 (Dec. 8, 2017); Am. Med. Informatics Ass’n, Comment on Clinical and Patient Support Software: Draft Guidance for Industry and Food and Drug Administration Staff, 82 Fed. Reg. 53,987 (Dec. 8, 2017).

44 US Food & Drug Admin., Clinical Decision Support Software: Draft Guidance for Industry and Food and Drug Administration Staff 12 (2019).

45 See generally SaMD Discussion Paper, supra note 19.

Figure 2.1: The FDA’s jurisdiction to regulate CDS software under the Cures Act
