Human and animal research both operate within established standards. In the United States, criticism of the human research environment and recorded abuses of human research subjects served as the impetus for the establishment of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, and the resulting Belmont Report. The Belmont Report established key ethical principles to which human research should adhere: respect for autonomy, obligations to beneficence and justice, and special protections for vulnerable individuals and populations. While current guidelines appropriately aim to protect the individual interests of human participants in research, no similar, comprehensive, and principled effort has addressed the use of (nonhuman) animals in research. Although published policies regarding animal research provide relevant regulatory guidance, the lack of a fundamental effort to explore the ethical issues and principles that should guide decisions about the potential use of animals in research has led to unclear and disparate policies. Here, we explore how the ethical principles outlined in the Belmont Report could be applied consistently to animals. We describe how concepts such as respect for autonomy and obligations to beneficence and justice could be applied to animals, as well as how animals are entitled to special protections as a result of their vulnerability.
In a recent paper in Nature entitled “The Moral Machine Experiment,” Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decision-making capacities of current so-called “autonomous vehicles” and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the Moral Machinists’ argument proceeds in four steps:
1) Find out what “public morality” will prefer to see happen.
2) On the basis of this discovery, claim popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face.
3) Citizen agreement thus characterized is then presumed to deliver moral license for the chosen preferences.
4) This yields “permission” to program vehicles to spare or condemn those outside the vehicles when their deaths will preserve vehicle and occupants.
This paper argues that the Moral Machine Experiment fails dramatically on all four counts.
To what extent, if any, should minors have a say about whether they participate in research that offers them no prospect of direct benefit? This article addresses this question as it pertains to minors who cannot understand enough about what their participation would involve to make an autonomous choice, but who can comprehend enough to have and express opinions about participating. The first aim is to defend David Wendler and Seema Shah’s claim that minors who meet this description should not be offered a choice about whether they participate. The second aim is to show, contra Wendler and Shah, that the principle of nonmaleficence requires more with respect to giving these minors a say than merely respecting their dissent: it also requires that investigators obtain affirmation of their non-dissent. This addresses intuitive concerns about denying children a choice, while steering clear of the problems that arise with allowing them one.
Advance directives entail a refusal expressed by a still-healthy patient. Three consequences stem from that fact: (a) advance refusal is unspecific, since it is impossible to predict what the patient’s condition and the risk-benefit ratio may be in the foreseeable future; (b) such decisions cannot be as well informed as those formulated while the disease is in progress; (c) whereas current consent and refusal can both be revoked as the disease unfolds, up until treatment begins, advance directives take effect only once the patient becomes incapable or unconscious, and such decisions can therefore no longer be revoked at any stage of the disease. Accordingly, advance directives are binding on doctors only at the stage of advance treatment planning, i.e., only if they refer to an illness already in progress.
This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI software, bias in the data used to create algorithms post-GDPR, and changes to the patient–physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information might suggest a positive shift in the patient–physician relationship, the physician’s ‘need to care’ might be irreplaceable, and robot healthcare workers (‘robot carers’) might be seen as contributing to dehumanized healthcare practices.
Any space program involving long-term human missions will have to cope with serious risks to human health and life. Because currently available countermeasures are insufficient in the long term, there is a need for new, more radical solutions. One possibility is a program of human enhancement for future deep space mission astronauts. This paper discusses the challenges that the space environment poses for long-term human missions, opening the possibility of serious consideration of both human enhancement and fully automated space exploration based on highly advanced AI. The author argues that for such projects there are strong reasons to consider human enhancement, including gene editing of germline and somatic cells, as a moral duty.
In a recent paper in the Cambridge Quarterly of Healthcare Ethics on the necessary conditions for morally responsible animal research, David DeGrazia and Jeff Sebo claim that the key requirements for morally responsible animal research are (1) an assertion of sufficient net benefit, (2) a worthwhile-life condition, and (3) a no-unnecessary-harm condition. With regard to the assertion (or expectation) of sufficient net benefit (ASNB), the authors claim that morally responsible research offers unique benefits to humans that outweigh the costs and harms to humans and animals. In this commentary we raise epistemic, practical, and ethical challenges to DeGrazia and Sebo’s emphasis on benefits in the prospective assessment of research studies involving animals. We do not disagree with DeGrazia and Sebo that, at the theoretical level, the benefits of research justify our using animals. Our contribution intends to clarify, at the practical level, how we should understand benefits in the prospective assessment and moral justification of animal research. We argue that ASNB should be understood as an assessment of Expectation of Knowledge Production (EKP) in the prospective assessment and justification of animal research. EKP breaks down into two further claims: (1) that morally responsible research generates knowledge worth having, and (2) that morally responsible research is designed and executed to produce generalizable knowledge. We understand the condition of knowledge worth having as scientists’ testing a hypothesis that, whether verified or falsified, advances an important interest, and the production of generalizable knowledge in terms of scientific integrity. Generalizable knowledge refers to experimental results that generalize to a larger population beyond the animals studied. Generalizable scientific knowledge is reliable, replicable, and accurately descriptive.
In sum, morally responsible research will be designed and carefully executed to successfully test a hypothesis that, whether verified or falsified, advances important interests. Our formulation of EKP, crucially, does not require further showing that an experiment involving animals will produce societal benefits.
The mission and value statements of healthcare organizations serve as the foundational philosophy that informs all aspects of the organization. The ultimate goal is a seamless alignment of values with mission that colors the overall life and culture of the organization. However, full alignment between healthcare organizational values and mission, in a fashion that influences the daily life and culture of healthcare organizations, does not always occur. Grounded in the belief that a lack of organizational alignment with explicit mission and value statements often stems from a failure to develop processes that enable realization of the leadership’s good intentions, the authors propose an organizational ethics dashboard to empower leaders of healthcare organizations to assess the adequacy of the systems in place to support alignment with the stated ethical mission.