The Sentinel System (Sentinel) is a national electronic safety-monitoring system created by the US Food and Drug Administration (“FDA” or the “Agency”) for post-market evaluation of drugs and devices. Sentinel now has the ability to access data on more than 178 million individuals by tapping into existing databases maintained largely by private health care insurers and providers (Health Affairs 2015: 3). Unlike other post-market surveillance tools that primarily rely on third parties to submit reports of adverse events to FDA, Sentinel allows FDA to proactively monitor for safety issues in near real time. Sentinel is one of many initiatives designed to use electronic health records (EHRs) for purposes other than patient care (secondary use), and it may be the most successful domestic example of secondary use (Anonymous interviewee). Janet Woodcock, the director of the FDA’s Center for Drug Evaluation and Research (CDER), has argued that Sentinel could “revolutionize” product safety (Abbott 2013: 239).
Sentinel came about as a result of a congressional mandate in the wake of the controversy over Vioxx’s withdrawal. In September 2007, Congress passed the Food and Drug Administration Amendments Act (FDAAA), which, among other things, called for the Agency to develop an active surveillance system capable of accessing data from 25 million individuals by July 2010, and 100 million individuals by July 2012.1 But FDAAA did not provide the Agency with much guidance in terms of how to build the system, and it did not provide FDA with any funding. Nor did Congress require the primary holders of this data (largely insurers and health care providers) to share data with FDA.
After FDAAA was passed, a small group at FDA (the FDA Team) took ownership of the project and began making key decisions that would dictate the system’s structure and governance (Anonymous interviewee). The group elected to operate Sentinel through contracting and self-regulation rather than through notice-and-comment rulemaking. That meant the Agency had to convince data holders to voluntarily share data. To do that, the FDA Team mounted an extensive and successful campaign to win over data holders by giving them the opportunity to participate in Sentinel’s creation. The end result was that Sentinel was structured as a partially distributed model, meaning that data holders almost entirely maintain control over their own data and only share aggregated results with FDA. However, the Agency did require data holders to translate their data into a common data format to allow FDA to analyze different data sets using common software. The Agency also elected to contract with a private entity to operate Sentinel, and to focus the system exclusively on safety research.
These strategic decisions were likely responsible for Sentinel achieving widespread data holder participation and for meeting (and exceeding) all of FDAAA’s requirements. By that measure, Sentinel has certainly been a success. Until recently the Agency had no meaningful active surveillance tool, and now it has Sentinel – already answering queries from FDA on a regular basis and contributing to regulatory actions. Yet these strategic decisions limited the ability of Sentinel to be used by third parties and for non-safety queries.
This case study followed a modified version of the Institutional Analysis and Development (IAD) framework (Frischmann et al. 2014). First, a literature review was conducted to search for published articles about Sentinel using medical, legal, and sociological databases including JSTOR, LexisNexis, PubMed, Scopus, and Westlaw. Relevant articles were reviewed and used to generate interview questions. Second, a series of semi-structured interviews were conducted with a range of stakeholders involved with the Sentinel Initiative (the larger program in which the Sentinel System is housed). The interviews ranged in length from 30 minutes to 60 minutes, with an average duration of about 45 minutes. Each interview was structured according to the modified IAD framework, using both generic and interviewee-specific interview questions. Information obtained from the interviewees was organized according to the IAD framework.
The following persons were interviewed for this case study:
Richard Platt, Professor and Chair of the Department of Population Medicine, Harvard Pilgrim Health Care Institute. Dr. Platt was the Principal Investigator of the FDA Mini-Sentinel program and is now the Principal Investigator of Sentinel.
Rachel Sherman, Deputy Commissioner for Medical Products and Tobacco. At the time of the interview, Dr. Sherman had recently retired from a 25-year career at FDA. She had been previously responsible for implementing Sentinel as Director of the Center for Drug Evaluation and Research’s (CDER) Office of Medical Policy and Associate Center Director for Medical Policy.
CDR Carlos Bell, USPHS, Senior Program Manager, Office of Medical Policy, FDA. CDR Bell is responsible for the program’s day-to-day management.
Marsha Raebel, Senior Investigator at Kaiser Permanente Colorado. Dr. Raebel leads the Sentinel program at Kaiser Permanente, one of the data partners participating in Sentinel.
Barbara Evans, Alumnae College Professor of Law at the University of Houston. Professor Evans served as a consultant on privacy issues related to Sentinel for FDA.
Additional parties were interviewed and spoke on the condition of anonymity. Several data partners declined to be interviewed for this study.
6.2 Sentinel’s Background Environment
The use of electronic health records (EHRs) is booming, driven by ever-improving information technology, pro-EHR government policies, and the promise of EHR utilization resulting in better patient care with reduced costs (Hsiao et al. 2014). Yet widespread adoption of EHRs has the potential to do more than improve patient care (primary use); it may also be useful for a variety of non-direct patient care functions (secondary use).2 For example, EHRs can be used in medical research for new drugs and devices, to evaluate health care providers, to compare hospital facilities, and to help insurance companies make utilization decisions (Safran et al. 2007). In fact, EHR use is even more prevalent among health insurance companies than health providers.3
Another obvious application for secondary use of EHRs is in the context of post-market drug and device surveillance, which is to say the ongoing evaluation of the safety and efficacy of medical products that are already being sold to the public (Abbott 2014a). Unfortunately, the use of EHRs in post-market surveillance has been slow to develop. In part, this may be due to the challenges of working with EHR data, which often suffers from quality problems, and to the frequent (sometimes deliberate) lack of interoperability among EHR systems, meaning that different systems do not communicate well (Weiskopf et al. 2013).
More importantly, access to EHR data is usually tightly restricted. EHRs are used primarily by insurance companies and health care providers, and these entities have good reason to avoid sharing access to their data. Chief among those reasons are privacy concerns and the risk of civil and criminal liability for unauthorized disclosure of patient data. Patient privacy is largely protected under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), but also under a hodgepodge of state laws and less important federal regulations. Stakeholders are also concerned about the perception of privacy. Providers and insurers want consumers to believe their data is secure.
Stakeholders have reasons other than privacy and the perception of privacy to restrict access to their data. EHR data may have substantial financial value. For example, the data can be used by organizations to monitor their providers and promote or discourage unprofitable practices, by biotechnology companies to identify new uses for their existing products, and by insurance companies to inform contract negotiations. Sharing EHR data may reduce its value by eliminating its comparative advantage to a stakeholder. Data sharing may also be harmful because it may reveal stakeholder deficiencies. For example, a hospital may find that it has higher than average complication rates for a particular procedure or an insurer may find it has a higher percentage of inappropriately denied claims. In sum, while the public health benefits of using EHR data to conduct post-market surveillance are obvious, so are the numerous barriers to data holders actually sharing data. It is not surprising that a meaningful post-market surveillance system based on secondary use of EHR data did not arise before Sentinel.
Some interviewees expressed the opinion that Sentinel’s development was facilitated by the post-market withdrawal of Vioxx (rofecoxib) in 2004, the largest drug withdrawal in history (Health Affairs 2015: 3). The drug’s withdrawal and ensuing controversy resulted in a series of congressional hearings on the safety of FDA-approved drugs, and criticism was leveled at the Agency for not withdrawing Vioxx sooner. In September 2005, the US Department of Health and Human Services (HHS) secretary asked FDA to expand its system for post-market monitoring.4 In part, the secretary asked the Agency to evaluate building on the capabilities of existing data systems and creating a public-private collaboration framework for such an effort.5 FDA also commissioned a report from the Institute of Medicine (IOM) on methods to improve the safety of marketed medicines.6 That report, issued in 2006, made several recommendations for improving post-market surveillance, including recommendations that FDA should “(a) increase their intramural and extramural programs that access and study data from large automated healthcare databases and (b) include in these programs studies on drug utilization patterns and background incident rates for adverse events of interest, and (c) develop and implement active surveillance of specific drugs and diseases as needed in a variety of settings.”7 The IOM report was followed by an FDA workshop with a diverse group of stakeholders to explore the feasibility of creating a national electronic system for monitoring medical product safety.8 The general consensus in that workshop was that FDA could develop such a system by tapping into existing resources, which covered more than 100 million people (Platt interview 2014).
In September 2007, Congress passed the Food and Drug Administration Amendments Act of 2007 (FDAAA), which called for active post-market risk identification and analysis.9 That was “either a great coincidence or someone was paying attention” (Platt interview 2014). Specifically, Section 905 of FDAAA required the HHS secretary to “develop validated methods for the establishment of a post-market risk identification and analysis system to link and analyze safety data from multiple sources.”10 FDAAA set a goal of accessing data from 25 million individuals by July 2010, and 100 million individuals by July 2012.11 The law also required FDA to work closely with partners from public, academic, and private entities.12 Section 905 was, like many of FDA’s mandates, unfunded (Bell interview 2014).
FDAAA was a “vague piece of legislation” (Anonymous interviewee) with few hard requirements for Sentinel. One of those requirements was to create an “active” surveillance system. Most of FDA’s tools for post-market evaluation are passive, meaning that they depend on third parties to recognize and report suspected adverse events in order for the Agency to be aware of potential problems.13 Passive tools include the CDER’s Adverse Event Reporting System (AERS), which is a system that receives reports of suspected adverse drug reactions and medical errors.14 FDA can receive reports through passive systems such as AERS from the drug and device industries (for which reporting is mandatory), as well as from providers and consumers (for whom reporting is voluntary).15 Even before FDAAA, the medical community had been critical of reliance on passive surveillance, which suffers from underreporting of adverse events (it has been estimated that only about 10 percent of adverse events are reported) (Health Affairs 2015: 2). By contrast, active surveillance systems such as Sentinel enable FDA to query data in near real time, using large data sets to evaluate broad swaths of the population.16 Prior to Sentinel, the Agency did some active surveillance, for example, by utilizing claim databases to investigate safety questions, but this was done on an ad hoc basis (Woodcock 2008: 2).17 For such investigations, the Agency had to identify specific systems it wanted data from and then it had to arrange access (Woodcock 2008: 2). Sentinel was the first effort to create a linked, sustainable network to continuously evaluate post-market safety questions in near real time (Woodcock 2008: 2).
6.3 Attributes and Governance
This section explores Sentinel’s structure and function. It begins by discussing how a small team of administrators at FDA (the FDA Team) started the program based on a congressional mandate, and how that team elected to finance Sentinel with general Agency funds. The project’s financial structure limited the resources available for Sentinel and constrained its governance structure, but in ways desirable to the FDA Team. The section then discusses how the FDA Team engaged community members by conducting a protracted stakeholder outreach program. Based on the feedback from stakeholder outreach, FDA made a series of key decisions about Sentinel’s structure designed to make the system useful to the Agency, convince insurers and health care organizations to share their data, and alleviate community concerns (largely related to privacy). As discussed in further detail later in this section, that involved contracting with a private entity for Sentinel’s operations; designing Sentinel as a partially distributed, common data model; financing data partner participation; and focusing exclusively on safety research. Finally, the section provides some illustrative examples of Sentinel’s outcomes and its impact on Agency decision making.
While there had been talk about a Sentinel-like program for some time at FDA, FDAAA was a turning point in actually creating a program. Even though FDAAA did not provide financial support for Sentinel, “Congressional mandates help even if they’re unfunded” (Anonymous interviewee). After FDAAA, the FDA Team took control of the project and began making key decisions. Richard Platt believes the Agency did a “masterful job” of acting on the mandate (Platt interview 2014). He credited Rachel Sherman with being the person at FDA who “owned” Sentinel at a senior level, and who made many of the pivotal decisions (Platt interview 2014).
One of the first questions facing the FDA Team was how to finance the initiative. FDA is chronically underfunded,18 and soon after the act was passed the US economy took a downturn in the fall of 2008. The FDA Team made a conscious decision not to seek dedicated money from Congress or money associated with the Recovery Act (The American Recovery and Reinvestment Act of 2009 (ARRA)) (Anonymous interviewee). Although Congress might have been willing to earmark funds for Sentinel, there was a perception that any money would have come with an expectation for a “quick, splashy outcome” (Anonymous interviewee). Instead, the FDA Team skillfully navigated FDA’s internal budgeting process to finance the project with general funds from the Center for Drug Evaluation and Research (CDER) (Anonymous interviewee). That gave the group the ability to do what it believed was right from a scientific and programmatic perspective with minimal outside interference (Anonymous interviewee). Sentinel stayed out of the limelight, and the FDA Team had the opportunity to “crawl before we walked” (Anonymous interviewee).
The recession may have also facilitated a greater reliance on external collaborators. FDAAA required FDA to work closely with private partners, but there was pushback within the Agency about ceding important functions to private entities (Anonymous interviewee). Those protests dwindled as money grew short and people became more receptive to outsourcing. It would likely have been substantially more expensive to manage Sentinel entirely in-house (and potentially less effective) (Anonymous interviewee). Sentinel remains intramurally funded by FDA.
6.3.2 Community Members
The FDA Team had to engage a number of constituencies – “the biggest challenge was getting people to buy into this” (Bell interview 2014). Sentinel’s primary users are the various FDA Centers (primarily CDER, where the initiative already had strong support), and Sentinel had to be constructed in such a way that it would provide valuable information to the Agency. Critically, the FDA Team also had to convince EHR data holders to participate in the program. This was necessary because Sentinel relies entirely on private entities voluntarily sharing their data – there is no mandated data sharing (Bell interview 2014). The EHR data holders who elect to share data with Sentinel are referred to as “data partners” in the initiative.
For the most part, data partners are insurers and provider organizations (although other data sources can contribute, such as disease and medical device registries) (Brown et al. 2009: 6). Provider databases generally contain more useful data than insurance databases, but most individuals lacked provider EHRs at the time Sentinel was being developed (Brown et al. 2009). By contrast, some large insurers had data on more than a million individuals each (Moore et al. 2008). It was thought that combining several of these insurance databases could yield a data set of more than 100 million individuals (Brown et al. 2009: 6). Also, Sentinel did not require access to all of the data in provider EHRs (Brown et al. 2009: 6). It primarily needs data on exposures (with dates), outcomes, and comorbidities, as well as enrollment and demographic information (Brown et al. 2009: 6). This data can be used to calculate rates of use and complications. It is also helpful to be able to link data sets (cross-sectionally (e.g., different insurers), and longitudinally (e.g., different time points)) (Brown et al. 2009: 6).19
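The core data elements described above can be illustrated with a minimal sketch. The record layouts and field names below are hypothetical, chosen only to show how exposure and outcome data support the rate calculations Sentinel performs; they are not the actual Sentinel Common Data Model specification.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical minimal record types; field names are illustrative,
# not drawn from the actual Sentinel Common Data Model.
@dataclass
class Exposure:
    member_id: str
    drug_code: str
    start: date
    end: date

@dataclass
class Outcome:
    member_id: str
    event_code: str
    event_date: date

def complication_rate(exposures, outcomes, drug_code, event_code):
    """Rate of a given outcome among members exposed to a given drug."""
    exposed = {e.member_id for e in exposures if e.drug_code == drug_code}
    if not exposed:
        return 0.0
    affected = {o.member_id for o in outcomes
                if o.event_code == event_code and o.member_id in exposed}
    return len(affected) / len(exposed)

exposures = [
    Exposure("A", "rofecoxib", date(2003, 1, 1), date(2003, 6, 1)),
    Exposure("B", "rofecoxib", date(2003, 2, 1), date(2003, 8, 1)),
    Exposure("C", "ibuprofen", date(2003, 3, 1), date(2003, 4, 1)),
]
outcomes = [Outcome("A", "MI", date(2003, 5, 15))]

print(complication_rate(exposures, outcomes, "rofecoxib", "MI"))  # 0.5
```

Even this toy example shows why enrollment and demographic data matter: without knowing who was observable and for how long, an exposure-outcome rate cannot be interpreted as a background incidence rate.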
Interviewees also reported that the Agency wanted to get other community members to support Sentinel, including other government agencies, academic researchers, patient advocacy groups, and the public at large. Yet while support from a diverse group of stakeholders was important, support from data partners was critical. Without data partner participation, the Agency would not be able to fulfill FDAAA’s mandate and Sentinel would have limited functionality. Given the imperative to recruit data partners, and the significant barriers to data sharing, FDA focused most of its outreach efforts on courting data holders: “A lot of time went in upfront to make sure everything was well thought out, all of the policy issues, how partners would interact with FDA, what you can and can’t do with the data … it doesn’t usually happen like that … communication was the key” (Bell interview 2014).
FDA conducted a series of stakeholder workshops, meetings, conferences, and webinars about Sentinel, including an annual public meeting at the Brookings Institution. Stakeholders were given numerous opportunities to express their thoughts about the project and preferences for Sentinel’s structure. Major data partner concerns centered on protecting patient privacy (and the perception of privacy), protecting proprietary and valuable data, protecting institutional interests, evaluating opportunity costs, and ensuring appropriate professional rewards for personnel involved in the project: “What will bring them to the table, keep them, or drive them away? … We needed to give them a voice and to understand them” (Anonymous interviewee).
Along with these meetings, FDA issued a series of contracts to generate white papers on issues including governance, privacy, data and infrastructure, scientific operations, and outreach (Woodcock 2008: 17–18).20 Contractors solicited opinions from FDA Centers as well as other agencies, individuals who had attended FDA’s Sentinel workshops and members of the International Society for Pharmacoepidemiology (Brown et al. 2009: 2). They also surveyed academics, data holders, biopharma industry representatives, patient advocates, and private firms (Brown et al. 2009: 3).
According to interviewees, all of these efforts created enthusiasm and thoughtful discussion: “Every organization felt it couldn’t be left behind” (Platt interview 2014). It created a “culture of shared goals and sense of trust that the program will work on the things that are of mutual interest” (Platt interview 2014). The stakeholder feedback helped determine Sentinel’s structure: “Partners are the ones driving this, the system was modeled after feedback from partners” (Bell interview 2014). Stakeholders provided guidance on what structures would encourage data sharing as well as how to make the system useful to FDA from a technical perspective: “This was new territory, we realized it was essential to get good feedback upfront to make this a success” (Bell interview 2014). This level of public engagement was time consuming, but the FDA Team believed it was critical foundational work: “Getting partnerships in place, doing the necessary work on privacy and security was not sexy or exciting, but it had to get done … staying under the radar really made us possible” (Bell interview 2014). FDAAA had given FDA a few years before any deliverables were due.
The stakeholder outreach sessions allowed FDA to identify objectives and potential obstacles to cooperation, to evaluate them, and to proactively design governance solutions. A similar process of stakeholder outreach sessions could be useful in other contexts where similarly diverse groups of constituencies, interests, and resources exist.
Sentinel functions as a collaboration between FDA, Sentinel’s Coordinating Center (Harvard Pilgrim, a nonprofit health insurer based in Massachusetts), and around 18 data partners. It was clear early on that the Agency needed to operate a portion of Sentinel – FDA would never cede its core decision-making position (Anonymous interviewee). But the Agency elected to contract with a private organization in a competitive process to manage most of Sentinel’s operational functions. The Agency released a request for proposal (RFP) for a Coordinating Center, and the resulting contract went to Harvard Pilgrim for a five-year term with Platt as principal investigator. Platt considered it “the opportunity of a lifetime to help FDA develop this new capability.” During FDA’s first five-year contract with Harvard Pilgrim from 2009 to 2014 (a US$120 million pilot project), the Sentinel System was named “Mini-Sentinel.” In 2014, Harvard Pilgrim’s contract was renewed for an additional five years (in a contract for up to US$150 million), and the program was renamed the “Sentinel System.” When Harvard Pilgrim was awarded the initial contract, “there were four of us involved in this activity [at Harvard Pilgrim]; now there are sixty of us” (Platt interview 2014).
Prior to winning the contract to serve as Coordinating Center, Harvard Pilgrim was one of the contractors that worked with the Agency to help design Sentinel’s structure. Creating numerous smaller contracts not only helped develop Sentinel, it was also an important step in “getting this out of government … Once the small contracts had been successful, it built momentum for larger contracts, and everything really took off with the contract to Harvard Pilgrim” (Anonymous interviewee).21
6.3.4 A Partially Distributed, Common Data Model with Centralized Analysis
Sentinel utilizes a partially distributed network and common data model with centralized (single analytic program) analysis (Platt interview 2014). The network is referred to as distributed because the data partners maintain physical and operational control over their own electronic data (Platt interview 2014). In other words, partners maintain and analyze their own data without sharing it with FDA (Platt interview 2014). FDA sends requests to partners for partners to analyze their own data, which partners do behind firewalls (a network security system that protects the resources of a private network from users of other networks). Partners then send the aggregate results of their analysis (e.g., days of exposure) to FDA (Platt interview 2014). Platt estimated that in the vast majority of cases (and thousands of protocols have already been run), partners send only de-identified data to FDA in the form of counts or pooled data (Platt interview 2014).
Data storage is not fully distributed (Platt interview 2014). In some cases, person-level or even personally identifiable information is deliberately shared, although this information can generally be de-identified before it is sent to FDA (Platt interview 2014). Of potential concern, it may be possible to re-identify de-identified data, particularly when a data set is very small or when it involves an uncommon event (Abbott 2013). Still, these data sets fully meet de-identification standards, and the risks of re-identification should be minimal given that Harvard Pilgrim and FDA usually maintain this information internally (Platt interview 2014). As one of the data partners commented, there are “generally enough layers of de-identification that [re-identification] is a theoretical but not a realistic concern” (Raebel interview 2014). On rare occasions, it may be necessary for partners to share actual source-level patient data with FDA, for example, to confirm that an adverse event has occurred (Platt interview 2014). Platt estimated that in the vast majority of cases, only de-identified data is transferred to the Agency. In the protocols in which individual medical record review is necessary, FDA requests only the minimum amount of directly identifiable information necessary (Platt interview 2014). This does make the model “partially” rather than “fully” distributed, even though in most cases the model functions in a fully distributed manner (Platt interview 2014). The decision to make the network partially distributed results in FDA having greater access to patient data, but it comes at the expense of allowing FDA to claim the model is fully distributed (one of the Agency’s goals) (Platt interview 2014).
Even though data partners maintain their own electronic health data and do analysis behind firewalls, they are required to translate their data into a common format. This is largely to ensure that a single program can be used to analyze every partner’s data (Brown et al. 2009: 23). The common data format is thought to be a critical element of Sentinel’s structure, along with the centralized analysis it facilitates (Platt interview 2014). As one interviewee stated, “Why did Baltimore burn at the end of the century? Not because they didn’t send fire trucks, but because the hoses didn’t connect … the common data model allows the hoses to connect” (Anonymous interviewee). For each Sentinel query, expert programmers at the Sentinel Operations Center (at Harvard Pilgrim) write software that they send to data partners for the partners to execute against their databases (Platt interview 2014).22 When partners receive software, the partners (should) already have a copy of the data in the common format (Platt interview 2014). “When all goes well,” the programs run without modification (Platt interview 2014). This is a centralized analysis model (the Sentinel Operations Center develops the analysis software) as opposed to a decentralized analysis model (e.g., each partner develops its own analysis software) (Curtis et al. 2012). A centralized model reduces the potential for inconsistency associated with each data partner implementing FDA’s protocols in their own way and makes it more likely that complex approaches are implemented identically across data sets. In other words, it helps ensure that the results from different partners can be compared apples to apples. A decentralized analysis could result in a lack of compatibility for results, complex programming requirements, and redundant effort (Brown et al. 2009: 23). The downside to requiring a common data model is largely that it is burdensome to data partners to translate their existing data into a new format.
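The distributed-query pattern described above can be sketched in a few lines. This is an assumed illustration, not Sentinel's actual software: one centrally authored analysis function is executed by each data partner against its own common-format data behind its firewall, and only aggregate counts, never patient-level records, are returned for pooling. Partner names and field names are hypothetical.

```python
# Centrally authored analysis (the role played by the Sentinel
# Operations Center): count distinct exposed members in one data set.
def count_exposed(records, drug_code):
    return len({r["member_id"] for r in records if r["drug_code"] == drug_code})

# Each partner holds its own data behind its firewall, already
# translated into the common format (illustrative field names).
partner_databases = {
    "insurer_a": [
        {"member_id": "a1", "drug_code": "rofecoxib"},
        {"member_id": "a2", "drug_code": "ibuprofen"},
    ],
    "insurer_b": [
        {"member_id": "b1", "drug_code": "rofecoxib"},
        {"member_id": "b2", "drug_code": "rofecoxib"},
    ],
}

# The same program is run at every site; each site returns only an
# aggregate result, which can then be pooled centrally.
site_counts = {site: count_exposed(db, "rofecoxib")
               for site, db in partner_databases.items()}
print(site_counts)                # {'insurer_a': 1, 'insurer_b': 2}
print(sum(site_counts.values()))  # 3
```

The common data format is what makes the single `count_exposed` program meaningful across sites: because every partner exposes the same field names and coding conventions, identical code yields comparable aggregates, which is the "hoses connect" property the interviewee described.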
A distributed, common data model was not the only option for Sentinel. In May 2009, Harvard Pilgrim prepared a report commissioned by FDA to evaluate possible database models to implement Sentinel (Brown 2010). That report discussed a number of possible configurations, depending on whether data storage was distributed or centralized, whether a common data model was used, and whether analysis was centralized or decentralized (Brown et al. 2009: 23). Harvard Pilgrim recommended a distributed, common data model with centralized analysis, and this was the model requested by FDA in its RFP for a Coordinating Center (Platt interview 2014).23 From an analytical perspective, there would have been benefits to a centralized data storage model (e.g., at FDA or Harvard Pilgrim), but both the Agency and Harvard Pilgrim endorsed the distributed model because it would be more acceptable to data holders, alleviate privacy concerns, and could in principle accomplish all the same things as a centralized model. A distributed system “allows partners to have more control over their databases and work behind their firewalls” (Bell interview 2014). Requiring partners to share their data might have resulted in less partner participation (Brown et al. 2009: 23). Distributed data storage also makes it easier for FDA to tap into partner expertise with their own data and may result in cost savings (Platt interview 2014). Platt stated that he cannot recall any data partner deferring a query for which it had relevant data (only for queries involving data it did not have access to such as non-formulary drugs) (Platt interview 2014).
Another interviewee added that from his perspective a distributed network had important advantages. As he put it, every data set is “different and quirky,” and the data partners understand their data better than FDA could (Anonymous interviewee). Partner data is crucial to their business, and “taking them out of the equation” would make them feel unimportant and discourage participation (Anonymous interviewee). Early feedback from two of FDA’s contractors on legal issues also suggested that a distributed network would be less appealing to plaintiffs’ attorneys, who might otherwise attempt to use Sentinel to support malpractice claims.
Marsha Raebel reported that for Kaiser a centralized system would have been a nonstarter (Raebel interview 2014). The risks of privacy loss, HIPAA violations, and the perception of privacy loss would have outweighed the benefits of participation (Raebel interview 2014). A distributed model was the only option where the burden of current laws and privacy concerns would allow Kaiser to participate.
The burden on Kaiser to translate its data into the common data model is modest (Raebel interview 2014). As a result of its internal research and data sharing, Kaiser already maintains its data in a standard data model, which allows data to be extracted and translated into alternate formats (Raebel interview 2014). Kaiser has substantial experience with data sharing and associated infrastructure. It has several regions with research institutions and a history of data sharing across these regions and with other collaborating organizations. Kaiser has maintained a “virtual data warehouse” for nearly two decades (Raebel interview 2014). In fact, Sentinel’s common data model has many similarities to Kaiser’s model (Raebel interview 2014). That may be because Harvard Pilgrim, which proposed the common data model, was familiar with Kaiser’s model (Raebel interview 2014). In some respects, Sentinel improved the Kaiser model, and Kaiser subsequently incorporated those improvements into its own model (Raebel interview 2014). There is still a financial burden associated with Kaiser translating and refreshing data for Sentinel, but it is likely more burdensome for other data partners (Raebel interview 2014).
The FDA Team believed that complying with privacy and security laws was relatively simple compared to managing expectations about privacy. Indeed, the Agency is exempt from compliance with HIPAA as a public health entity, and it does not need to comply with state privacy laws. Data partners must remain compliant with HIPAA and state laws, but they are permitted to share information with public health agencies, and they are already knowledgeable about the relevant state privacy laws. Still, early in the process, FDA commissioned a multi-state privacy study, and the Agency worked closely with contracted privacy experts to ensure compliance with any legal requirements related to handling protected health information (Evans 2010).
More important to FDA than privacy as a legal issue was managing stakeholder expectations related to privacy. The most important step the FDA Team took to address those expectations was to structure Sentinel in a distributed fashion. Sentinel’s lack of a centralized data repository reassured data partners that their confidential data would be safe, because the data remains almost exclusively with the original owner. Data partners also directly guard against transmission of re-identifiable data (e.g., data on someone with a rare disease in a remote geographical area). In addition, one interviewee believed that for issues of perception, it was helpful that FDA already had a reputation for strongly protecting privacy (Anonymous interviewee). All of the interviewees believed that FDA had robust protections in place and that Sentinel’s structure adequately addressed privacy concerns. Sentinel has not yet experienced a data breach.
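The distributed pattern can be sketched in a few lines: each partner evaluates a query entirely inside its own firewall and returns only aggregate counts, with small cells withheld to guard against re-identification. The partner names, counts, and suppression threshold below are invented for illustration and are not Sentinel's actual policy values.

```python
# Sketch of a distributed query: each data partner runs the analysis
# locally and returns only aggregate counts, never row-level records.
# Counts below a suppression threshold are withheld to guard against
# re-identification (e.g., a rare disease in a small geographic area).
# Partners, counts, and threshold are invented for illustration.

SMALL_CELL_THRESHOLD = 5  # assumed policy value, not Sentinel's actual rule

def local_count(records, predicate):
    """Runs entirely inside a partner's firewall; only the count leaves."""
    return sum(1 for r in records if predicate(r))

def aggregate(partner_counts):
    """The coordinator combines partner results, suppressing small cells
    in the per-partner breakdown while still reporting the overall total."""
    reported = {}
    for partner, n in partner_counts.items():
        reported[partner] = n if n >= SMALL_CELL_THRESHOLD else None  # suppressed
    total = sum(partner_counts.values())
    return reported, total

# Each partner evaluates the query against its own local data...
partner_a = [{"event": "bleed"}] * 12
partner_b = [{"event": "bleed"}] * 3
counts = {
    "partner_a": local_count(partner_a, lambda r: r["event"] == "bleed"),
    "partner_b": local_count(partner_b, lambda r: r["event"] == "bleed"),
}
# ...and only the aggregated, suppressed results reach the coordinator.
reported, total = aggregate(counts)
print(reported, total)
```

The key property is that row-level data never leaves the partner; the coordinator sees only what `aggregate` chooses to report.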
6.3.6 Safety vs. Effectiveness
At present, Sentinel is only used to evaluate post-market safety issues (Health Affairs 2015: 3).24 But the system could also be used for a variety of other purposes: for example, to aid in translational research (as the National Center for Advancing Translational Sciences (NCATS) is attempting to do with its own resources) or comparative effectiveness research (CER) (as the Patient-Centered Outcomes Research Institute (PCORI) is attempting to do with PCORnet) (www.pcori.org/research-results/pcornet-national-patient-centered-clinical-research-network).25
Safety surveillance was the only use mandated by FDAAA, but FDA’s attorneys expressed the opinion that the Agency could have attempted to use Sentinel for other purposes (Anonymous interviewee). Sentinel’s objective was narrowed for political reasons – FDA wanted to have stakeholder support for the initiative: “Easy to get everyone on board with safety. Easy to start talking about death panels when you start talking about comparative effectiveness research” (Anonymous interviewee). As Platt noted, “Sentinel is 15 percent technical and 85 percent governance. I’m on the governance side where there is a culture of shared goals and sense of trust that the program will work on things that are of mutual interest and not to the disadvantage of parties” (Platt interview 2014). Sentinel could easily be used to compare organizations and to create the impression that some partners are not delivering the highest quality care (Etheredge 2010). That could be used in competitive marketing (Abbott 2014b).
6.3.7 Related Efforts to Pool EHRs
While Sentinel relies entirely on data from privately insured patients, FDA has also worked to develop active surveillance tools in collaboration with other government entities (the Federal Partners’ Collaboration). As part of the Federal Partners’ Collaboration, FDA has interagency agreements with the Centers for Medicare and Medicaid Services (CMS), the Department of Veterans Affairs (VA), and the Department of Defense (DOD) to share data. Like Sentinel, this initiative uses a distributed model, with each partner operating its own unique data infrastructure. Unlike Sentinel, no common data model is utilized (Bell interview 2014). The Federal Partners’ Collaboration is “not nearly” to the scale of Sentinel (Bell interview 2014). FDA also operates the Safe Rx Project in collaboration with CMS. That project was launched in 2008 when Medicare Part D data became available, and it evolved from earlier collaborations between CMS and FDA primarily related to products covered by Medicare Part B.
Some of the stakeholders involved in Sentinel are involved in other efforts to pool EHR data for secondary use. For example, Kaiser does not sell data, but it does share data with numerous initiatives including the Clinical Data Research Networks (CDRNs) and the HMO Research Network. Platt is independently working with PCORI as the principal investigator of the Coordinating Center for PCORnet. PCORI is an organization created by the Affordable Care Act to focus on CER. Among its other initiatives, PCORI is trying to develop PCORnet as a Sentinel-like system focused on CER by pooling data largely from large clinical networks. Platt stated that Sentinel and PCORnet are complementary systems and that they could be used together.
6.3.8 Data Partner Participation
Enlisting data partners was very much a “back and forth process” (Anonymous interviewee). Platt initially identified the potential data partners that he believed might have useful data. In the case of Kaiser, Raebel reported that Platt approached her directly to inquire about Kaiser participating, as the two knew each other professionally (Raebel interview 2014). Kaiser was a particularly attractive potential data partner for Sentinel as one of the largest nonprofit integrated health systems with extensive EHR utilization.
Data partners are paid for their participation; however, the amount is generally only enough to make participation cost neutral. One interviewee reported that price was largely not an obstacle to FDA working with data partners, although the Agency did not work with some potential partners that wanted “a lot of money” without bringing new data to the table (Anonymous interviewee). Sentinel now costs around $10 million a year to operate, with the majority of funding still coming from CDER general funds (Bell interview 2014). FDA has a contract with each data partner requiring a certain level of performance and requiring queries to be returned within a certain amount of time. Yet the Agency did not have enough money to drive participation on a financial basis. Although it was important to provide enough funding to make partner participation cost neutral, the FDA Team believed that funding alone could not overcome partner objections to alternate Sentinel structures. Indeed, none of the interviewees thought that a direct financial benefit was motivating any of the data partners.
Data partners seem to have non-financial motivations for participation. As Platt reported, Janet Woodcock and FDA were masterful in setting the stage for Sentinel such that partners were “lined up to participate.” One interviewee expressed the opinion that data partners largely chose to participate because it was the “right thing to do” (Anonymous interviewee). Other interviewees echoed the sentiment that everyone involved with Sentinel is excited about improving safety (but that some entities are opposed to addressing issues such as effectiveness).
Sentinel was a natural fit for Kaiser because the nonprofit organization is mission-driven to benefit the community. Indeed, revenue generated by Kaiser that exceeds expenses is reinvested into member or community services. After being contacted by Platt, Raebel worked with health plan leaders and research directors across all of Kaiser’s regions to drum up grassroots interest as well as leadership endorsement. Raebel stated that Kaiser participates to improve safety and to help learn how best to avoid adverse events. Kaiser “looks outside the financial for benefits” (Raebel interview 2014). FDA covers Kaiser’s costs for data sharing, but Kaiser does not financially profit from participation. Kaiser budgets based on expected work and invoices based on work actually performed. Ultimately, six of the seven Kaiser regions decided to participate in Sentinel (Raebel interview 2014).
Raebel notes that while Sentinel has focused entirely on safety, she believes that focus is likely to broaden. From Kaiser’s perspective, it is important that participation does not financially burden member premiums, but the organization would be open to other uses of Sentinel that make sense from a member and organizational perspective. Raebel argued that Sentinel has become a “huge resource,” and that it would not be appropriate to limit the system to safety (Raebel interview 2014). As for using Sentinel in CER, each project would have to be evaluated on its merits.
Platt noted that it was also important to provide appropriate professional rewards for individuals working for data partners. That often took the form of coauthorship of academic articles, but sometimes it simply involved providing individualized materials such as a letter from Janet Woodcock stating that a report was helpful, or providing evidence that Sentinel produced useful results for a federal advisory committee meeting.
Sentinel answers queries submitted by the various FDA Centers, and the results provided by Sentinel help the Agency protect public health. FDA also releases some of the results of Sentinel’s analyses and that information may be useful to other entities such as academic researchers or pharmaceutical sales teams.26 Data partners who have not previously translated their data into a common data model may also realize some positive externalities as translated data may facilitate internal statistical analysis or commercialization (e.g., data sale to pharmaceutical companies).
Sentinel has already proved useful to the Agency. The Mini-Sentinel phase alone assessed 137 drugs (Health Affairs 2015: 4). For example, the use of Sentinel contributed to a label change for rotavirus vaccines, contributed to a label change for the antihypertensive drug olmesartan, provided insight into the use of prasugrel (a drug approved in 2009 for acute coronary syndrome but contraindicated in patients with a history of stroke or transient ischemic attack (TIA)), and helped resolve concerns related to dabigatran (Southworth et al. 2013).
Dabigatran (Pradaxa) is an oral anticoagulant (blood thinner) drug developed by Boehringer Ingelheim and approved by FDA on October 19, 2010. The drug was approved to prevent strokes in patients with atrial fibrillation (an abnormal heartbeat that increases the risk of stroke). Dabigatran is used as an alternative to warfarin (Coumadin), which has been used since 1954. While warfarin has been the anticoagulant of choice for most physicians for about 60 years, it requires careful monitoring and maintenance of drug blood levels. Dabigatran does not require the same monitoring. However, both drugs (and for that matter all blood thinners) create a risk of excessive bleeding. The effects of warfarin can be reversed if drug blood levels get too high or if a patient has a bleeding event, but there is no similar way to reverse the effects of dabigatran. The clinical trials submitted to FDA for dabigatran’s approval showed the drug had a similar bleeding risk to warfarin.
After dabigatran was approved, the Agency received an unexpectedly high level of reports of patients on dabigatran experiencing severe bleeding events. Of course, as a known side effect, it was inevitable that some patients would experience severe bleeding events. However, it would have been a serious concern if dabigatran turned out to be riskier than warfarin. The reason for the reports was unclear: the approval studies might have been inaccurate; new (off-label) uses, dosages, or durations of the drug might be occurring; new populations (e.g., children) might be receiving the drug; there might be a reporting bias because of the drug’s novelty or media coverage; physicians might be improperly adjusting for kidney function; and so on. The reports had limited information, but they did not appear to reveal an unrecognized risk factor for bleeding or off-label use. The AERS reports prompted the Agency to issue a drug safety communication in November 2011 regarding dabigatran’s bleeding risk.
FDA used Sentinel (then Mini-Sentinel) to supplement the AERS analysis. Sentinel looked at bleeding rates from the time of the drug’s approval until December 31, 2011, and found that bleeding rates associated with dabigatran were not higher than bleeding rates associated with warfarin (Southworth et al. 2013). Those results were consistent with the results of the drug’s clinical trials. Sentinel’s analysis was limited by the difficulty of adjusting for confounders and the lack of detailed medical record review to confirm that actual bleeding had occurred. Still, the Agency’s analysis with Sentinel helped interpret a safety signal that could have resulted in dabigatran’s withdrawal or a labeling change. A subsequent article in the journal Nature Reviews Drug Discovery heralded that “Mini-Sentinel saves dabigatran” (Nature Reviews Drug Discovery 2013: 333).27
As one interviewee reported, Sentinel has been very effective at rapidly resolving the question of whether an emergency exists (Anonymous interviewee). Potential emergencies are expensive for the Agency because they pull staff off other projects, and most of the time there is no emergency. Before Sentinel, a large number of personnel had to “stop, look, and see … are there 747s of patients falling to the ground?” (Anonymous interviewee). Now Sentinel can prevent a significant disruption to the Agency’s workflow.
The commentary on Sentinel has been mixed (Avorn 2013; Carnahan and Moores 2012; Carnahan et al. 2014). Some observers have praised the initiative, for example, as an “impressive achievement that deserves ongoing support” (Psaty and Breckenridge 2014). Others have argued that most of the data fueling Sentinel was never meant to be used for pharmacovigilance, and that “the biggest danger is that people will get a false reassurance about safety” (Thomas Moore, senior scientist at the Institute for Safe Medication Practices, qtd. in Greenfieldboyce 2014). Critics have also argued that Sentinel’s results “may contradict the gold-standard clinical evidence from [randomized controlled trials]” (Sipahi et al. 2014). A 2015 article in Health Affairs claimed, “Sentinel has not yet become a tool for the rapid assessment of potential drug safety problems, which was one initial vision for the system. That’s in large part because of persistent technical and methodological challenges … And conflicting findings from different databases and studies can still lead to confusion and slow action, despite the power of Sentinel’s database” (Health Affairs 2015: 4). On the whole, however, there has been relatively little praise or criticism of Sentinel. Several interviewees have been surprised by the relative dearth of academic and industry commentary on what one interviewee described as a “paradigm change” (Anonymous interviewee).
One obvious measure of success is with regard to Congress’s mandate to the Agency in FDAAA. On that front FDA has certainly been successful. Sentinel meets all of FDAAA’s requirements; in fact, it exceeds by about 78 million the number of patients FDAAA required. Compliance with FDAAA (and timely compliance) is a substantial accomplishment, particularly given FDAAA’s ambition, a lack of funding, and a requirement to rely on voluntary participation by data holders. The Agency’s success in meeting FDAAA’s goals should not be undervalued.
Indeed, in 2008 when HHS Secretary Mike Leavitt pledged “a national, integrated, electronic system for monitoring medical product safety,” he also said it would be tied to another initiative, the Nationwide Health Information Network (NHIN), which would “connect clinicians across the health care system and enable the sharing of data as necessary with public health agencies” (Health Affairs 2015: 3). Like most efforts at secondary use of EHRs, the NHIN never materialized. Sentinel, by contrast, now has access to 358 million person-years of data, including 4 billion prescriptions, 4.1 billion doctor or lab visits, and 42 million acute inpatient (hospital) stays (Health Affairs 2015: 3).
All of the individuals interviewed believed that Sentinel is an unqualified success (although all of these individuals have been involved in the program). They noted that seven years ago there was no active surveillance system to speak of, and now FDA has Sentinel. The Agency also has “solid, ethical policies” in place and strong privacy protections (Anonymous interviewee). Sentinel has become a valuable and regular resource to FDA, and it is poised to “prevent the next Vioxx” (Anonymous interviewee). To the extent that Sentinel was not used earlier or more prominently, this likely reflects Agency caution and the FDA Team’s deliberate decision to proceed slowly and methodically.
One interviewee argued that FDAAA made bold demands without a roadmap of how to achieve them. Sentinel was the most difficult project he worked on professionally, and every effort was a new challenge. He attributes Sentinel’s success to working methodically; taking care to worry about patient safety, security, and stakeholder engagement; awarding contracts to the best contractors; and never losing sight of the focus that Sentinel was designed as a “new tool for the Agency” (Anonymous interviewee). That interviewee believed that the biggest risk to sustainability is that only FDA has access to Sentinel, and he believes that access should be expanded to other users and uses.
Platt notes that five years into Sentinel, “people are comfortable.” Specifically, and critically, data partners remain engaged. Bell believes that Sentinel is the biggest success story of secondary use, and that FDA is the only entity that has been able to get so many people to stay at the table. “Participation is voluntary; our collaborators participate on their own accord. And that’s a major part of the success of the Sentinel program” (Bell interview 2014).
Raebel stated, “the overall philosophy and approach is totally in the right direction.” She believes the only areas for improvement are small technical issues where greater efficiencies could be built in, and that there is a need to ensure data quality. For example, some critics have argued that claims data from insurers may poorly align with actual patient outcomes (Greenfieldboyce 2014). Raebel also mentioned it was critical to adjust for confounders, and that FDA has previously released unadjusted results, although she added that the Agency often has to act on incomplete information. For example, Sentinel’s analysis for dabigatran found a lower risk of GI bleeding compared to warfarin, while FDA’s subsequent study with Medicare data found an increased risk of bleeding. That may have been because of age differences between the populations examined and an inability to rigorously adjust for confounding variables with Sentinel’s data.28 The issue with age differences was specific to the case of dabigatran, but the inability to rigorously adjust for confounding variables is likely a function of Sentinel’s design.
Sentinel might have been more successful as an analytic tool if it incorporated a centralized data repository. It might also have provided greater public health benefits if non-FDA parties had access, and if Sentinel was used for more than safety research. Those things did not occur for a variety of reasons, in large part because the Agency did not want to alienate data partners. Indeed, it seems likely that at least some data partners would not want to provide data to a centralized data repository, and that others would not want to participate in non-safety queries. Catering to data partners is necessary given that Sentinel operates by contracting and self-regulation rather than through notice-and-comment rule making.
Reliance on voluntary participation was not the only option. For example, England, which has the largest and the oldest single-payer health care system in the world (the National Health Service (NHS)), has a single, comprehensive repository for patient-level health care data known as the Secondary Uses Service (SUS). Although there are restrictions on access, the SUS can be accessed by multiple actors for a variety of purposes, including tracking referral treatment timelines, supporting payment by results (the hospital payment system in England), improving public health, or developing national policy.29 The system de-identifies personal data at the point of entry and refers to such data as “pseudonymized.” SUS has around 10 billion pseudonymized records and it is built to support more than 9,000 concurrent users.30 Single-payer systems in the EU generally have significantly more developed patient databases for longitudinal study.
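De-identification at the point of entry, as SUS's "pseudonymization" describes, can be sketched generically with a keyed hash: the identifier is replaced by a token that is stable (so records for the same patient still link across encounters) but cannot be reversed without the key. This is a minimal illustration of the general technique, not a description of SUS's actual mechanism; the key, identifier, and field names are invented.

```python
import hashlib
import hmac

# Generic sketch of point-of-entry pseudonymization: a keyed hash
# replaces the patient identifier with a stable, non-reversible token.
# This illustrates the general technique only; it is not the Secondary
# Uses Service's actual mechanism, and the key below is illustrative.

SECRET_KEY = b"demo-key-held-by-the-data-controller"

def pseudonymize(identifier: str) -> str:
    """Deterministic: the same identifier always yields the same token,
    so longitudinal records still link, but the token cannot be
    reversed without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# At the point of entry, the raw identifier is dropped and replaced.
record = {"nhs_number": "943 476 5919", "diagnosis": "I48"}
record["pseudonym"] = pseudonymize(record.pop("nhs_number"))
print(record["pseudonym"][:12], record["diagnosis"])
```

The design choice worth noting is determinism: unlike random tokens, a keyed hash lets downstream users study a patient longitudinally without any party outside the key holder being able to recover the identity.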
Congress and the Agency could have elected to mandate data sharing by data partners. Of course, that would have been logistically challenging given that EHR data in the United States is very different from that in England. More importantly, it may not have been possible politically given the resistance to government intervention in health care. Mandating sharing almost certainly would have generated substantial pushback (and legal challenges) from the health care industry. It might also have generated public backlash if there was a perception that such sharing might result in privacy violations. In any case, the decision to rely on contracting and self-regulation reflects a particular governance philosophy that ended up dictating Sentinel’s structure. It may have resulted in a system with less utility and one that requires duplicated effort to share data in other contexts. On the other hand, increased government regulation has not always proved the best solution to a public health challenge, and mandating data sharing might have excluded the expertise of data partners, created new burdens on industry that would harm innovation, and resulted in privacy violations.
That Sentinel exists at all is a testament to the work of its architects in identifying stakeholders, objectives, potential barriers and solutions, and to their ability to navigate complex bureaucracies and political environments. And Sentinel does more than exist – it has been entirely successful at meeting FDAAA’s mandate and at engaging data partners. In the future, Sentinel may remain a distributed network that solely focuses on safety. Non-safety queries may have to occur through other efforts at secondary use of EHRs such as PCORnet.
Yet Sentinel still has the freedom to evolve within its current framework to incorporate a centralized data repository and non-safety queries: “Ideally our vision would be to have the infrastructure be a national resource” (Bell interview 2014). Woodcock has written that it is FDA’s hope to allow other users to access Sentinel for multiple purposes (Woodcock 2014), and such access may help sustain Sentinel. It is possible that a critical mass of existing data partners would continue to participate in the program if it evolves. Alternately, new laws or regulations mandating data partner participation or providing dedicated funding for Sentinel might facilitate its expansion. Now that Sentinel is established, there is likely less risk to the system from accepting earmarks. Legislation mandating data sharing may now be less controversial given that Sentinel’s feasibility has been demonstrated and there have been no privacy violations.
The FDA Team hypothesized that Sentinel could only be successful with an exclusive focus on safety and a distributed data model. To investigate whether this hypothesis is valid with similarly situated medical research knowledge commons, future case studies should investigate whether voluntary health data-sharing projects that are not exclusively focused on safety issues, such as PCORnet, are able to achieve sufficient third-party data contribution to function effectively. Similarly, it will be important to investigate whether voluntary health data-sharing projects that utilize centralized data repositories are able to achieve sufficient contributions.
References and Suggested Additional Reading
Ryan Abbott is Professor of Law and Health Sciences, School of Law, University of Surrey, Guildford, Surrey, UK, and Adjunct Assistant Professor, David Geffen School of Medicine, University of California, Los Angeles.
1 Food and Drug Administration Amendments Act of 2007, Pub. L. No. 110–85, 121 Stat. 823 (2007).
2 “Secondary use of health data can enhance healthcare experiences for individuals, expand knowledge about disease and appropriate treatments, strengthen understanding about the effectiveness and efficiency of our healthcare systems, support public health and security goals, and aid businesses in meeting the needs of their customers” (Safran et al. 2007: 3).
3 This chapter uses the term “electronic health records” to refer to electronic databases of patient information maintained by both providers and insurers, even though the term is not always used to refer to insurance company databases. Insurer databases generally do not contain the same sorts of patient information as provider databases.
4 US Dep’t of Health and Hum. Svcs., The Sentinel Initiative: National Strategy for Monitoring Medical Product Safety (2008), www.fda.gov/downloads/Safety/FDAsSentinelInitiative/UCM124701.pdf
6 See The Future of Drug Safety: Promoting and Protecting the Health of the Public, 17 Inst. of Med. of the Nat’l Acads. 1, 1–2 (2007), http://books.nap.edu/openbook.php?record_id=11750&page=R1
8 US Dep’t of Health and Hum. Svcs. (2008).
9 Food and Drug Administration Amendments Act of 2007.
13 US Dept. of Health and Human Services, Office of Medical Policy, The Sentinel Initiative: Access to Electronic Healthcare Data for More Than 25 Million Lives, Achieving FDAAA Section 905 Goal One 1–3 (2010), www.fda.gov/downloads/Safety/FDAsSentinelInitiative/UCM233360.pdf; see also Health Affairs 2015: 1–2.
14 Ibid. Other passive surveillance systems include FDA’s Center for Biologics Evaluation and Research’s (CBER’s) Vaccine Adverse Event Reporting System (VAERS), a database that captures reports of suspected vaccine-related adverse reactions, and FDA’s Center for Devices and Radiological Health’s (CDRH’s) Manufacturer and User Facility Device Experience (MAUDE) database, which captures reports of suspected medical device–related adverse reactions. In addition to mandatory reporting by industry, FDA receives reports submitted through FDA’s MedWatch program, which enables health care professionals and consumers (i.e., patients, family members, caregivers) to voluntarily report suspected adverse drug reactions and medication errors. Ibid.
17 The FDA also sometimes requires pharmaceutical companies to conduct post-market studies.
18 See, e.g., Ryan Abbott, Big Data and Pharmacovigilance: Using Health Information Exchanges to Revolutionize Drug Safety, 99 Iowa L. Rev. 225, 245–46 (2013).
19 Access to this data on a large enough scale (ideally more than 100 million people) would allow the agency to use Sentinel for augmenting adverse event surveillance and signal strengthening, confirmatory safety studies, monitoring adoption and diffusion, registry information, calculation of background incidence rates, better information regarding appropriate use of electronic data/validity of coding, and so on. Jeffrey Brown, Defining and Evaluating Possible Database Models to Implement the FDA Sentinel Initiative, Second Annual Sentinel Initiative Public Workshop, Dept. of Population Medicine Harvard Medical School and Harvard Pilgrim Health Care 5 (Jan. 11, 2010), www.brookings.edu/~/media/events/2010/1/11-sentinel-workshop/03_brown.pdf
20 These included defining and evaluating possible database models (Harvard Pilgrim); evaluating existing methods for safety signal identification for Sentinel (Group Health Cooperative Center); evaluating the timeliness of medical uptake for surveillance in healthcare databases (IMS Government Solutions); and evaluating potential data sources for the Sentinel Initiative (Booz Allen Hamilton) (Woodcock 2008: 17–18).
21 A list of contracts granted by FDA is available at www.fda.gov/Safety/FDAsSentinelInitiative/ucm149343.htm
22 Raebel did note there have been instances with site-specific questions where programmers at that site have written software instead of programmers at Harvard Pilgrim (Raebel email communication 2009).
23 There were numerous antecedent distributed models for Sentinel to model itself after, including the Electronic Primary Care Research Network (ePCRN), the Informatics for Integrating Biology and the Bedside (i2b2), the Shared Health Research Network (SHRINE), the HMO Research Network Virtual Data Warehouse (HMORN VDW), the Vaccine Safety Datalink (VSD), the Meningococcal Vaccine Study, and the Robert Wood Johnson Foundation Quality and Cost Project (Brown et al. 2009: 19–20).
24 Though as Platt noted, with some safety queries, such as comparing warfarin to newer anticoagulants where complications of interest are related to warfarin’s very narrow therapeutic index, it may be difficult to separate safety from efficacy (Platt interview 2014).
25 However, as Platt noted, concerns about using observational data for safety analysis may be magnified in CER. For example, clinicians may decide between treatments on the basis of data not available in EHRs. There are scientific reasons to potentially limit analysis to safety issues.
26 Press Release Archive, FDA Mini-Sentinel Assessment Reinforces Safety Data of Pradaxa (dabigatran etexilate mesylate), Boehringer Ingelheim (Nov. 2, 2012), http://us.boehringer-ingelheim.com/news_events/press_releases/press_release_archive/2012/11–02-12-FDA-Mini-Sentinel-Assessment-Safety-Data-Pradaxa-dabigatran-etexilate-mesylate.html
27 It should be noted that Sentinel’s dabigatran findings were controversial. Some experts disagreed with the analysis and subsequent studies produced conflicting results (Health Affairs 2015: 4). Yet FDA’s most recent update on dabigatran in May 2014 notes the Agency completed an observational cohort study of Medicare beneficiaries comparing dabigatran and warfarin that largely supported the findings of dabigatran’s approval studies (FDA Drug Safety Communication: FDA Study of Medicare Patients Finds Risks Lower For Stroke and Death But Higher for Gastrointestinal Bleeding with Pradaxa (Dabigatran) Compared to Warfarin, US Food & Drug Admin., www.fda.gov/Drugs/DrugSafety/ucm396470.htm (last visited May 18, 2016)).
28 FDA Drug Safety Communication: FDA Study of Medicare Patients Finds Risks Lower For Stroke and Death But Higher for Gastrointestinal Bleeding with Pradaxa (Dabigatran) Compared to Warfarin, US Food & Drug Admin., www.fda.gov/Drugs/DrugSafety/ucm396470.htm (last visited May 18, 2016).
29 Secondary Uses Service (SUS), Health & Social Care Information Centre, www.hscic.gov.uk/sus (last visited May 18, 2016).
30 NHS Secondary Uses Service: Using the Power of Information to Improve Healthcare, BT.com, www.globalservices.bt.com/uk/en/casestudy/nhs_sus (last visited May 18, 2016).