Financial relationships between physicians and industry are widespread. Highly publicized financial relationships between physicians and industry raised disturbing questions about the trustworthiness of clinical research, practice guidelines, and clinical care decisions. Recent incidents spurred calls for stricter conflict of interest policies and led to new federal laws and NIH regulations. These stricter policies have evoked praise, concerns, and objections. Because these new federal requirements need to be interpreted and implemented, spirited discussions of conflicts of interest in medicine will continue.
Physicians, scholars, and policymakers continue to be concerned about conflicts of interest among health care providers. At least two main types of objections to conflicts of interest exist. Conflicts of interest may be intrinsically troublesome if they violate providers’ fiduciary duties to their patients or contribute to loss of trust in health care professionals and the health care system. Conflicts of interest may also be problematic in practice if they bias the decisions made by providers, adversely impacting patient outcomes or wastefully increasing health care costs. This latter objection may be observed in differences in the prescriptions written, procedures performed, or costs billed by health care professionals who have conflicting interests, when compared to those who do not have such financial relationships.
In policy and practice, conflicts of interest have been reduced to financial conflicts. The National Institutes of Health’s (NIH) new rules for managing conflicts of interest in medical research, the first major change to the regulations in over 15 years, address only financial ties. Although several commentators urged that the regulations also cover non-financial interests, the Department of Health and Human Services declined to do so. Similarly, the Institute of Medicine’s (IOM) influential 2009 Conflict of Interest Report focuses almost exclusively on financial conflicts. Institutional policies at academic medical centers and guidance from professional bodies and medical journals also primarily emphasize financial ties. Even broadly worded rules, such as the regulations that restrict institutional review board (IRB) members with conflicting interests from participating in protocol reviews, are applied more readily to financial ties than to non-financial interests.
The medical profession is under increasing scrutiny. Recent high-profile scandals regarding substantial industry payments to physicians, surgeons, and medical researchers have raised serious concerns over conflicts of interest. Against this background, the public, physicians, and policymakers alike appear to make the same assumption regarding conflicts of interest: that doctors who succumb to influences from industry are making a deliberate choice of self-interest over professionalism, and that these doctors are corrupt. In reality, a large body of evidence from social science indicates that influence from conflicts of interest often operates at a subconscious and unintentional level. This poses an important problem, since such conflicts can steer well-intentioned physicians away from their primary professional goal: to provide the best medical advice and treatment possible.
Physicians and patients rely on medical journals as trusted sources of medical information. Unfortunately, in multiple instances conflicts of interest have undermined the credibility of the medical literature.
The primary sources of conflict of interest at medical journals are authors, reviewers, editors (a category that includes editorial staff, editorial boards, and other advisory groups), and journals (a category that includes owners and publishers). Consider these examples.
In recent years, the government, advocacy organizations, the press, and the public have pressured universities, academic medical centers, and physician-investigators to do more to ensure that their financial interests and relationships do not conflict with their duties to conduct high-quality research and protect the safety and welfare of clinical trial participants. A number of factors underlie the increased focus. First, private sector funding of clinical research has grown both in absolute terms and as a proportion of overall funding. In 2008, the pharmaceutical, medical device, and biotechnology industries’ domestic research and development expenditures constituted approximately 60.9% of funding for biomedical research in the United States; the next largest funder, the National Institutes of Health, funded 27.9%. Private industry spent $58.6 billion on research in 2007, up from $40 billion in 2003, an increase of 25% after adjusting for inflation.
Why do physicians have financial conflicts of interest? They arise because society expects physicians to act in their patients’ interest, while simultaneously, financial incentives encourage physicians to practice medicine in ways that promote their own interests or those of third parties. Because physicians’ clinical choices, referrals, and prescriptions affect the fortunes of third parties (providers, medical facilities, insurers, drug firms, and suppliers of ancillary services), these third parties may offer physicians financial incentives to make income-driven clinical choices. In the past, physicians and scholars typically conceived of conflicts of interest as an ethical issue to be resolved according to individual judgment or professional and organizational norms. However, society can mitigate or eliminate conflicts of interest by changing financial and organizational arrangements in medicine. Conflicts of interest, therefore, are as much matters of public policy and management as of individual choices or social norms.
In one of the televised debates among Republican primary candidates for the 2012 U.S. presidential election, moderator Wolf Blitzer presented this hypothetical case to candidate Ron Paul:
A healthy 30 year old young man has a good job, makes a good living, but decides — you know what — ‘I’m not going to spend 200 or 300 dollars a month for health insurance because I’m healthy, I don’t need it.’ But something terrible happens, all of a sudden he needs it. Who’s going to pay if he goes into a coma?
Paul, known for his libertarian views, initially responded that the patient “should assume responsibility for himself,” and that he should have purchased a major medical policy before he became ill.
These comments take issue with the contention that society has a responsibility to provide its members with any needed health care. In order to deal with this claim, we must first make clear exactly what is meant by the proposition. I take it that those who embrace this view mean considerably more than that each of us has a moral obligation to contribute to those in need of medical attention who are unable, for one reason or another, to afford the necessary care. This is a moral proposition and is traditionally dealt with under the heading of charity. But the contention, as used here, means considerably more, since its main implications are not moral but primarily political.
Markets have long had a whiff of sulphur about them. Plato condemned innkeepers, whose pursuit of profit he believed led them to take advantage of their customers, Aristotle believed that the pursuit of profit was indicative of moral debasement, and Cicero held that retailers are typically dishonest as this was the only path to gain. And even those who are more favorably disposed towards markets in general are frequently inclined to be suspicious of markets in medical goods and services. For example, Margaret Thatcher (to take someone far removed — in many respects! — from Plato, Aristotle, and Cicero) supported the legal prohibition of markets in kidneys despite being arguably the most pro-market Prime Minister the United Kingdom saw in the 20th century.
The intensity of the opposition to health reform in the United States continues to shock and perplex proponents of the Patient Protection and Affordable Care Act (PPACA). The emotion (“Abort Obama”) and the apocalyptic rhetoric (“Save our Country, Protect our Liberty, Repeal Obamacare”) render civil and evidence-based debate over the implications of, and alternatives to, specific provisions in the law difficult if not impossible. The public debate has largely barreled down two non-parallel yet non-intersecting paths: opponents focus on their fear of government expansion in the future if PPACA is implemented now, while proponents focus on the urgency and specifics of our health care market problems and the limited number of tools we have to address them. Frustration on both sides has led opponents to deny the seriousness of our health system’s problems and proponents to ignore the risk of governmental overreach. These non-intersecting lines of argument are not moving us closer to a desired and necessary resolution.
The Patient Protection and Affordable Care Act of 2010 (the Affordable Care Act) is the law of the land. But it faces an uncertain future.
During congressional deliberations on the 2,700-page legislation leading up to its enactment, from February to March 2010, not one major survey recorded majority support for the legislation. Since its enactment, popular opposition to the Affordable Care Act has hardened, and was a significant factor in the 2010 congressional election, in which Democrats lost 63 seats and Republicans regained the majority in the House of Representatives. Ballot initiatives in Missouri and Ohio, showcasing popular opposition to the individual mandate, passed in 2010 with overwhelming majorities. While the United States Supreme Court in National Federation of Independent Business et al. v. Sebelius, 132 S. Ct. 2566 (2012), declared the mandate on the states to expand Medicaid unconstitutionally coercive, the majority of the Justices also upheld the individual mandate as a permissible tax. The new law thus emerged as a central topic in the 2012 election.
Various kinds of consumer-driven reforms have been attempted over the last 20 years in an effort to rein in soaring costs of health care in the United States. Most are based on a theory of moral hazard, which holds that patients will over-utilize health care services unless they pay enough for them. Although this theory is a basic premise of conventional health insurance, it has been discredited by actual experience over the years. While ineffective in containing costs, increased cost-sharing as a key element of consumer-driven health care (CDHC) leads to restricted access to care, underuse of necessary care, lower quality and worse outcomes of care. This paper summarizes the three major problems of U.S. health care urgently requiring reform and shows how cost-sharing fails to meet that goal.
There are many reasons for dissatisfaction with current U.S. health care. One-sixth of the population is uninsured, costs are 150-200% of those in other economically advanced nations, and the quality of care, as measured by disease-specific mortality and morbidity data, is rarely better and often worse than in other nations’ less costly systems. A case for reform can mirror any or all of these concerns: cover more of the population with insurance, control costs, improve the effectiveness of prevention and treatment. I argue that two of these goals — greater population coverage and more disciplined costs — gain a significant part of their justification from moral beliefs about justice and fairness.
The “Father of the United States Constitution,” James Madison, once described justice as “the end” of both government and of civil society. Yet curiously, Madison said little about justice in elaborating the principles of American federalism in The Federalist Papers and elsewhere. His fundamental concerns, to the contrary, were in contriving a system of separated, countervailing powers and in establishing a first federal principle of enumerated powers — in which federal powers “are few, and defined.” This strategy, for Madison, was the most feasible way of checking the innate tendency of political power to accumulate, centralize, and trample on citizens’ liberties.
Consider this hypothetical scenario involving a choice not to vaccinate a child. Ms. S has a niece who is autistic. The girl's parents suspect that there is some relationship between her autism and her Measles, Mumps, and Rubella (MMR) vaccination. They have shared their concerns with Ms. S. She then declines to have her own daughter, Jinny S., vaccinated with the MMR vaccine. To bypass the state's mandatory vaccination requirement, Ms. S claims a state-legislated philosophical exemption, whereby she simply attests to the fact that she is opposed to vaccinating her daughter due to a conscientiously held belief. At the age of four, Jinny goes on a trip by airplane to Germany with her mother. After returning to the United States, she attends daycare despite having some mild cold symptoms. Subsequently, she develops a classic measles rash, at which point her mother brings her to a pediatrician and keeps her home from daycare.
The current ethical norms of genomic biobanking (creating and maintaining large repositories of human DNA and/or associated data for biomedical research) have generated criticism from every angle, at both the practical and theoretical levels. The traditional research model has involved investigators seeking biospecimens for specific purposes that they can describe and disclose to prospective subjects, from whom they can then seek informed consent. In the case of many biobanks, however, the institution that collects and maintains the biospecimens may not itself be directly involved in research, instead banking the biospecimens and associated data for other researchers. Moreover, the future uses of biospecimens may be unknown, if not unknowable, at the time of collection. Biobanking may thus stretch the meanings of “inform” and “consent” to their breaking point: if you cannot inform subjects about what their biospecimens will be used for (because you do not know), what can they consent to? Given that informed consent by individual subjects is the ethical gold standard, the seeming dilution of the concept in the context of biobanking is a profound problem.
Can a country with a free press and a robust political debate provide its citizens with actionable information so that they can protect themselves from a threat to their health or safety? By actionable information, I mean accurate facts and reasonable interpretations of those facts upon which an individual should rely in making reason-based decisions. In the context of public health, this includes information that allows an individual to weigh the risk to one’s self, family, and community before deciding to act in an uncertain environment under threatening conditions. The recent H1N1 pandemic, and in particular the government’s campaign for public acceptance of the vaccine, highlights the challenges in ensuring that individuals receive accurate information and perceive that such information should be the basis on which to make potentially life-saving decisions.