  • Print publication year: 2021
  • Online publication date: May 2021

5 - Morality and the Machine

from Part II - Ethics and Engineering Design

Summary

We are moving rapidly towards an era in which machines are in charge of moral decisions, as in the case of self-driving cars. By reviewing the 2018 accident involving an Uber self-driving car in Arizona, this chapter discusses the complexities of assigning responsibility when such an accident results from a joint decision between human and machine, raising the question: Can we ascribe any form of responsibility to the car, or does the responsibility lie solely with the car's designer or manufacturer? There is a tendency among scientists and engineers to emphasize the imperfection of human beings and to argue that computers could be the "moral saints" we humans can never be, because they are not prone to human emotions or to the explicit and implicit biases that accompany them. By reviewing examples from loan approval practices, the chapter shows why this view is incorrect. The chapter reviews the ethics of artificial intelligence (AI), focusing specifically on the problems of agency and bias. It further discusses meaningful human control over autonomous technologies as a powerful way of looking at human–machine interactions from the perspective of active responsibilities.