
Unpredictable robots elicit responsibility attributions

Published online by Cambridge University Press: 05 April 2023

Matija Franklin
Affiliation:
Experimental Psychology Department, University College London, London WC1E 6BT, UK. matija.franklin@ucl.ac.uk; https://www.ucl.ac.uk/pals/research/experimental-psychology/person/matija-franklin/
Edmond Awad
Affiliation:
Economics Department, University of Exeter, Exeter EX4 4PU, UK. e.awad@exeter.ac.uk; https://www.edmondawad.me
Hal Ashton
Affiliation:
Computer Science Department, University College London, 66-72 Gower Street, London WC1E 6EA, UK. ucabha5@ucl.ac.uk; https://algointent.com/
David Lagnado
Affiliation:
Experimental Psychology Department, University College London, London WC1E 6BT, UK. d.lagnado@ucl.ac.uk; https://www.ucl.ac.uk/pals/research/experimental-psychology/person/david-lagnado/

Abstract

Do people hold robots responsible for their actions? While Clark and Fischer present a useful framework for interpreting social robots, we argue that they fail to account for people's willingness to assign responsibility to robots in certain contexts, such as when a robot performs actions not predictable by its user or programmer.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Autonomous machines are increasingly used to perform tasks traditionally undertaken by humans. With little or no human oversight, these machines make decisions that significantly impact people's lives. Clark and Fischer (C&F) argue that people conceive of social robots as depictions of social agents. They differentiate between the "base scene," representing the physical materials the robot is made from; the "depictive scene," representing the robot's recognizable form along with an interpretive authority; and the "scene depicted," which either transports people into an imagined world inhabited by the robot or imports the robot's imagined character into the real world. We argue that this framework fails to account for people's willingness to assign responsibility to social robots (and AI more generally). Specifically, we argue that in a range of cases people assign some degree of responsibility to social robots and do not shift all responsibility to the "authority" that uses the robot. These cases include robots that behave in novel ways not predictable by their users or programmers. We also argue that responsibility attribution is not a finite resource; users and robots can therefore be held responsible simultaneously.

Recent work (Tobia, Nielsen, & Stremitzer, 2021) explores the question of who is held responsible for the actions of autonomous machines. Experimental evidence suggests that people are willing to attribute blame or praise to robots as agents in their own right (Ashton, Franklin, & Lagnado, 2022; Awad et al., 2020; Franklin, Awad, & Lagnado, 2021). As agents, autonomous machines are sometimes treated differently from humans. For example, people tend to hold humans accountable for their intentions while holding machines accountable for the outcomes of their actions (Hidalgo, Orghian, Canals, De Almeida, & Martin, 2021). Further, people ascribe more extreme intentions to humans while ascribing only narrow intentions to machines. This is a puzzle for the depiction framework because it shows that people are prepared to attribute responsibility to depictions of agents as well as to the depiction's authority.

C&F argue that attributing responsibility to a depiction's authority is intuitive for the ventriloquist's dummy or a limited social robot like Asimo. However, the examples they list in Table 2 concern depictions whose behavior is largely predictable, at least by the authority. Recent technological advances have produced social robots capable of generating original behavior not conceived even by their creators (Woodworth, Ferrari, Zosa, & Riek, 2018). Using machine learning methods, modern social robots learn human preferences by observing human behavior in various contexts, developing adaptive behavior that is tailored to the user (Wilde, Kulić, & Smith, 2018). The mechanisms by which they reach their decisions are opaque, complex, and not directly encoded by the creator. We propose that such social robots are more likely to elicit responsibility attributions in their own right.

Perceived increases in machine autonomy come with increases in the responsibility attributed to those machines. First, higher machine autonomy is associated with intent inferences toward machines becoming more human-like (Banks, 2019). Accordingly, research shows that when robots are described as autonomous, participants attribute responsibility to them nearly as much as they do to humans (Furlough, Stokes, & Gillan, 2021). Additionally, more autonomous technologies decrease the perceived amount of control that the authority has over them, which in turn decreases the credit the authority receives for positive outcomes (Jörling, Böhm, & Paluch, 2019). Similarly, drivers of manually controlled vehicles are deemed more responsible than drivers of automated vehicles (McManus & Rutchick, 2019).

Furthermore, C&F's assertion that the creator of a depiction is responsible for the interpretation of that depiction relies on the assumption that the depiction's behavior is predictable by the creator. The authors write: "We assume that Michelangelo was responsible not only for carving David, but for its interpretation as the biblical David" (target article, sect. 8, para. 1). But this argument fails for machines that behave unpredictably. When the painting "Edmond de Belamy," generated by a deep learning algorithm, sold at an art auction for $432,500, many credited the machine (Christie's, 2018). This attribution of machine creativity goes beyond anecdotal evidence (Epstein, Levine, Rand, & Rahwan, 2020). Similarly, AlphaGo, in beating world champion Go player Lee Sedol, used novel strategies that human players have since adopted (Chouard, 2016). These novel moves prompted comments worldwide about machine creativity (McFarland, 2016), crediting AlphaGo rather than just the DeepMind team. While the DeepMind team intended AlphaGo to win the match, they did not envisage these novel moves.

Moreover, accounts of responsibility attribution should avoid committing the fixed-pie fallacy (Kaiserman, 2021) – the false assumption that there is a fixed total amount of responsibility to be allocated, or in other words, treating responsibility as a finite resource. The statement "when Ben interacts with Asimo, he would assume that there are authorities responsible for what Asimo actually does…" (target article, sect. 8.1, para. 4) hints at this error. People are willing to attribute responsibility to both autonomous machines and their users (e.g., a self-driving car and its driver; Awad et al., 2020).

There are also strong normative arguments against this fixed-pie fallacy. Some argue that neither the creators nor the operators of autonomous machines should bear sole responsibility (Sparrow, 2007). Others have drawn parallels between artificial intelligence and group agency – usually assigned to large corporations – as both are nonhuman goal-directed actors (List, 2021). Even in the case of recent fatal autonomous car crashes, attributing legal responsibility to the car's manufacturer has not proved as straightforward as C&F's model would predict (De Jong, 2020).

C&F present an insightful framework that covers predictable, pre-programmed social robots. Here we have argued that more intelligent, more autonomous, and thus less predictable social robots already exist. People are willing to attribute responsibility to such robots for their mistakes (Ashton et al., 2022; Awad et al., 2020; Franklin, Ashton, Awad, & Lagnado, 2022). Further, for more anthropomorphized social robots, research suggests that people are even willing to attribute experiential mental states to them (Fiala, Arico, & Nichols, 2014). The framework thus needs to be extended to handle the more intelligent robots currently being produced, and to accommodate normative theories in philosophy and law suggesting that social robots may need to share in social responsibility.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Ashton, H., Franklin, M., & Lagnado, D. (2022). Testing a definition of intent for AI in a legal setting. Unpublished manuscript.
Awad, E., Levine, S., Kleiman-Weiner, M., Dsouza, S., Tenenbaum, J. B., Shariff, A., … Rahwan, I. (2020). Drivers are blamed more than their automated cars when both make mistakes. Nature Human Behaviour, 4(2), 134–143.
Banks, J. (2019). A perceived moral agency scale: Development and validation of a metric for humans and social machines. Computers in Human Behavior, 90, 363–371.
Chouard, T. (2016). The Go files: AI computer clinches victory against Go champion. Nature. https://doi.org/10.1038/nature.2016.19553
Christie's (2018). Is artificial intelligence set to become art's next medium? [Blog post]. Retrieved from https://www.christies.com/features/a-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx
De Jong, R. (2020). The retribution-gap and responsibility-loci related to robots and automated technologies: A reply to Nyholm. Science and Engineering Ethics, 26(2), 727–735.
Epstein, Z., Levine, S., Rand, D. G., & Rahwan, I. (2020). Who gets credit for AI-generated art? iScience, 23(9), 101515.
Fiala, B., Arico, A., & Nichols, S. (2014). You robot. In E. Machery & E. O'Neill (Eds.), Current controversies in experimental philosophy (1st ed., pp. 31–47). Routledge. https://doi.org/10.4324/9780203122884
Franklin, M., Ashton, H., Awad, E., & Lagnado, D. (2022). Causal framework of artificial autonomous agent responsibility. In Proceedings of the 5th AAAI/ACM Conference on AI, Ethics, and Society (AIES '22), Oxford, UK.
Franklin, M., Awad, E., & Lagnado, D. (2021). Blaming automated vehicles in difficult situations. iScience, 24(4), 102252.
Furlough, C., Stokes, T., & Gillan, D. J. (2021). Attributing blame to robots: I. The influence of robot autonomy. Human Factors, 63(4), 592–602.
Hidalgo, C. A., Orghian, D., Canals, J. A., De Almeida, F., & Martin, N. (2021). How humans judge machines. MIT Press.
Jörling, M., Böhm, R., & Paluch, S. (2019). Service robots: Drivers of perceived responsibility for service outcomes. Journal of Service Research, 22(4), 404–420.
Kaiserman, A. (2021). Responsibility and the "pie fallacy". Philosophical Studies, 178(11), 3597–3616.
List, C. (2021). Group agency and artificial intelligence. Philosophy & Technology, 34(4), 1213–1242.
McFarland, M. (2016). What AlphaGo's sly move says about machine creativity. The Washington Post. Retrieved from washingtonpost.com/news/innovations/wp/2016/03/15/what-alphagos-sly-move-says-about-machine-creativity/
McManus, R. M., & Rutchick, A. M. (2019). Autonomous vehicles and the attribution of moral responsibility. Social Psychological and Personality Science, 10(3), 345–352.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Tobia, K., Nielsen, A., & Stremitzer, A. (2021). When does physician use of AI increase liability? Journal of Nuclear Medicine, 62(1), 17–21.
Wilde, N., Kulić, D., & Smith, S. L. (2018). Learning user preferences in robot motion planning through interaction. In 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia (pp. 619–626). IEEE.
Woodworth, B., Ferrari, F., Zosa, T. E., & Riek, L. D. (2018). Preference learning in assistive robotics: Observational repeated inverse reinforcement learning. In Machine Learning for Healthcare Conference, Stanford University, USA (pp. 420–439). PMLR.