Published online by Cambridge University Press: 10 June 2021
Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.
1. For the archived story of the family’s settlement, see: https://www.nytimes.com/1983/08/11/us/around-the-nation-jury-awards-10-million-in-killing-by-robot.html (last accessed 4 Dec 2019).
3. See Sharkey, N. Saying “no!” to lethal autonomous targeting. Journal of Military Ethics 2010;9:369–383; Asaro, P. On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross 2012;94:687–709; Wagner, M. Taking humans out of the loop: Implications for international humanitarian law. Journal of Law, Information & Science 2012;21:155–165.
5. See Char, DS, Shah, NH, Magnus, D. Implementing machine learning in healthcare – addressing ethical challenges. New England Journal of Medicine 2018;378:981–983; Sharkey, A. Should we welcome robot teachers? Ethics and Information Technology 2016;18:283–297; Van Wynsberghe, A. Service robots, care ethics, and design. Ethics and Information Technology 2016;18:311–321; Himmelreich, J. Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice 2018;21:669–684.
6. Allen, C, Wallach, W. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press; 2009; Allen, C, Wallach, W. Moral machines: Contradiction in terms or abdication of human responsibility? In: Lin, P, Abney, K, Bekey, GA, eds. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press; 2011:55–68.
8. See Book III of Aristotle’s Nicomachean Ethics.
10. These have recently been dubbed contrastive or “instead of” reasons. See Dorsey, D. Consequentialism, cognitive limitations, and moral theory. In: Timmons, M, ed. Oxford Studies in Normative Ethics 3. Oxford: Oxford University Press; 2013:179–202; Shoemaker, D. Responsibility from the Margins. New York: Oxford University Press; 2015.
14. This condition may be diagnosed as an antisocial personality disorder, such as psychopathy. See the American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Washington, DC; 2013.
15. In this way, those who plead ignorance are attempting to eschew responsibility by dissolving their agency. The common reply—“you should have known”—is, then, a way of restoring agency and proceeding with blame. See Biebel, N. Epistemic justification and the ignorance excuse. Philosophical Studies 2018;175:3005–3028.
16. According to recent work on implicit biases, it seems very few of us are moral agents in the robust sense outlined here. See, e.g., Doris, J. Talking to Our Selves: Reflection, Ignorance, and Agency. Oxford University Press; 2015; Levy, N. Implicit bias and moral responsibility: Probing the data. Philosophy and Phenomenological Research 2017;94:3–26; Vargas, M. Implicit bias, responsibility, and moral ecology. In: Shoemaker, D, ed. Oxford Studies in Agency and Responsibility 4. Oxford University Press; 2017.
19. In Asimov’s “A Boy’s Best Friend,” for example, the child of a family settled on a future lunar colony cares more for his robotic canine companion than for a real-life dog. Thanks to Nathan Emmerich for the pointer.
21. Here I have in mind the Kantian idea that we have indirect duties to non-human animals on the grounds that cruelty towards them translates to cruelty towards humans. See Kant’s Lectures on Ethics 27:459. For related discussion of our treatment of artifacts, see Parthemore, J, Whitby, B. Moral agency, moral responsibility, and artifacts: What existing artifacts fail to achieve (and why), and why they, nevertheless, can (and do!) make moral claims upon us. International Journal of Machine Consciousness 2014;6:141–161.
23. Ibid., at 25–26. See also Nyholm, S. Humans and Robots: Ethics, Agency, and Anthropomorphism. Rowman & Littlefield; 2020.
24. In some ways, I’ve so far echoed the expansion of agency seen in Floridi, L, Sanders, JW. On the morality of artificial agents. Minds and Machines 2004;14:349–379. Differences will emerge, however, as my focus turns to various ways of holding others responsible, rather than expanding agency to encompass artificial entities. Similarities can also be drawn to Coeckelbergh, M. Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents. AI & Society 2009;24:181–189. Still, my account will rely less on AMAs’ appearance and more on human attitudes and interactions within the moral community.
25. See Bastone, N. Google assistant now has a ‘pretty please’ feature to help everybody be more polite. Business Insider 2018 Dec 1; available at: https://www.businessinsider.co.za/google-assistant-pretty-please-now-available-2018-11 (last accessed 12 Mar 2019).
27. For the same reasons, it may be beneficial to design some AI and robotic systems with a degree of ‘social responsiveness.’ See Tigard D, Conradie N, Nagel S. Socially responsive technologies: Toward a co-developmental path. AI & Society 2020;35:885–893.
28. Strawson, PF. Freedom and resentment. Proceedings of the British Academy 1962;48:1–25.
29. Ibid., at 5.
34. The advantages sketched here are persuasively harnessed by the notion of rational sentimentalism, notably in D’Arms, J, Jacobson, D. Sentiment and value. Ethics 2000;110:722–748; D’Arms, J, Jacobson, D. Anthropocentric constraints on human value. In: Shafer-Landau, R, ed. Oxford Studies in Metaethics 1. Oxford: Oxford University Press; 2006:99–126.
37. Ibid., at 66; italics added.
38. Ibid., at 69; italics added. Comparable inconsistencies are seen in Floridi and Sanders 2004 (note 24).
40. Ibid., at 71; italics added.
44. Champagne, M, Tonkens, R. Bridging the responsibility gap in automated warfare. Philosophy and Technology 2015;28:125–137. See also Johnson, DG. Technology with no human responsibility? Journal of Business Ethics 2015;127:707–715. My account will be consistent with Johnson’s view that the “responsibility gap depends on human choices.” However, while Johnson focuses on the design choices in technology itself, the choices that occupy my attention concern how and where we direct our responsibility practices. I’m grateful to an anonymous reviewer for comments here.
46. Ibid., at 72.
48. Consider also that we punish corporations (e.g., by imposing fines) despite the implausibility of such entities displaying the right sort of response, as an anonymous reviewer aptly notes. By contrast, consequential accounts of punishment can be seen as inadequate depictions of moral blame, since they don’t fully explain our attitudes and might not properly distinguish wrongdoers from others. See Wallace, RJ. Responsibility and the Moral Sentiments. Cambridge, MA: Harvard University Press; 1994:52–62. I’m grateful to Sven Nyholm for discussion here.
49. Proponents of the ‘process view’ applied to technology can be said to include Johnson, DG, Miller, KW. Un-making artificial moral agents. Ethics and Information Technology 2008;10:123–133. Despite some similarities to this work, my account does not fit neatly into Johnson and Miller’s Computational Modelers or Computers-in-Society group.
53. Exemptions are contrasted with excuses (and justifications). See, e.g., Watson 2004:224–225 (note 50).
55. However, these sorts of sanctioning mechanisms are less likely to succeed where the target AI system has surpassed humans in general intelligence. See the discussion of ‘incentive methods’ for controlling AI, in Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press; 2014:160–163.
56. Such ‘bottom-up’ moral development in AI is discussed in Allen and Wallach 2009 (note 6). Compare also Hellström, T. On the moral responsibility of military robots. Ethics and Information Technology 2013;15:99–107. Again, for some (e.g. Wallace 1994, in note 48), consequential accounts of responsibility will be unsatisfying. While a fuller discussion isn’t possible here, in short, my goal has been to unearth general mechanisms for holding diverse objects responsible, which admittedly will deviate from the robust sorts of responsibility (and justifications) we ascribe to natural moral agents. Again, I’m here indebted to Sven Nyholm.
57. See, e.g., Ren, F. Affective information processing and recognizing human emotion. Electronic Notes in Theoretical Computer Science 2009;225:39–50. Consider also recent work on Amazon’s Alexa, e.g., in Knight, W. Amazon working on making Alexa recognize your emotions. MIT Technology Review 2016.
58. Similarly, Helen Nissenbaum suggests that although accountability is often undermined by computing, we can and should restore it, namely by promoting an ‘explicit standard of care’ and imposing ‘strict liability and producer responsibility.’ Nissenbaum, H. Computing and accountability. Communications of the ACM 1994;37:72–80; Nissenbaum, H. Accountability in a computerized society. Science and Engineering Ethics 1996;2:25–42. I thank an anonymous reviewer for connecting my account with Nissenbaum’s early work.
61. John Danaher likewise frames the problem in terms of trade-offs, namely increases in efficiency and perhaps well-being, but at the cost of human participation and comprehension. See Danaher, J. The threat of algocracy: Reality, resistance and accommodation. Philosophy and Technology 2016;29:245–268; also Danaher, J. Robots, law and the retribution gap. Ethics and Information Technology 2016;18:299–309.
62. In a follow-up paper, I explain further how pluralistic conceptions of responsibility can address the alleged gap created by emerging technologies. See Tigard D. There is no techno-responsibility gap. Philosophy and Technology 2020; available at: https://doi.org/10.1007/s13347-020-00414-7.