Robots and Respect: A Response to Robert Sparrow

Published online by Cambridge University Press:  12 September 2016

Extract

Robert Sparrow recently argued in this journal that several initially plausible arguments in favor of the deployment of autonomous weapon systems (AWS) in warfare are in fact flawed, and that the deployment of AWS faces a serious moral objection. Sparrow's argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they either fail to transmit an attitude of respect for enemy combatants or, worse, they transmit an attitude of disrespect. In this reply we argue that this distinction between AWS and widely accepted weapons is illusory, and therefore cannot ground a moral difference between AWS and existing methods of waging war. We also suggest that if deploying conventional soldiers in a given situation would be permissible, but we could expect to cause fewer civilian casualties by instead deploying AWS, then it would be consistent with an intuitive understanding of respect to deploy AWS in this situation.

Type: Response
Copyright: © Carnegie Council for Ethics in International Affairs 2016

NOTES

1 See Robert Sparrow, "Robots and Respect: Assessing the Case against Autonomous Weapon Systems," Ethics & International Affairs 30, no. 1 (2016), pp. 93–116.

2 Thomas Nagel, "War and Massacre," Philosophy & Public Affairs 1, no. 2 (1972).

3 Duncan Purves, Ryan Jenkins, and Bradley J. Strawser, "Autonomous Machines, Moral Judgment, and Acting for the Right Reasons," Ethical Theory and Moral Practice 18, no. 4 (2015), pp. 851–72.

4 Alastair Norcross, "Off Her Trolley? Frances Kamm and the Metaphysics of Morality," Utilitas 20, no. 1 (2008), p. 65.

5 To be sure, the possibility of metaphysical indeterminacy in targeting decisions seems to be the impetus for the “responsibility gaps” objection, which Sparrow notes. We have addressed this objection elsewhere. See Purves, Jenkins, and Strawser, “Autonomous Machines.”

6 See Paul Slovic, "Perception of Risk," Science 236 (1987), pp. 280–85. See also Chauncey Starr, "Social Benefit Versus Technological Risk," Science 165 (1969), p. 1232.

7 Indeed, Sparrow is willing to entertain this possibility; in fact, we think this outcome is quite likely. Sparrow is worried, and justifiably so, about a machine's ability to understand and appreciate the nature of morality as a meaning-laden and contextual domain of knowledge and behavior. However, recent advances in machine learning, which have been nothing short of staggering, have rendered these concerns about machine "understanding" moot. AlphaGo and Watson have made it clear that machines can outperform humans in domains where we once thought we enjoyed a great privilege and indomitable superiority. And this is true whether or not these machines understand the context in which they are acting, or the meaning and significance of their choices.

8 The fact that we cannot legitimately demand that AWS minimize civilian casualties seems significant only if it generates a "responsibility gap" that renders attributions of responsibility for the actions of AWS difficult or impossible. But this is not a new problem. For discussions of the problem of responsibility attributions, see Andreas Matthias, "The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata," Ethics and Information Technology 6, no. 3 (2004), pp. 175–83; Robert Sparrow, "Killer Robots," Journal of Applied Philosophy 24, no. 1 (2007), pp. 62–77; and Heather Roff, "Killing in War: Responsibility, Liability, and Lethal Autonomous Robots," in Fritz Allhoff, Nicholas G. Evans, and Adam Henschke, eds., Routledge Handbook of Ethics and War: Just War Theory in the Twenty-First Century (Milton Park, Oxon: Routledge, 2013). The inability to make moral demands of machines may ultimately count against deploying human soldiers and in favor of deploying AWS. Michael Robillard and Bradley Strawser ["The Moral Exploitation of Soldiers," Public Affairs Quarterly 30, no. 2 (2016)] have argued that soldiers are often victims of "moral exploitation" by having moral responsibility "outsourced" to them in virtue of their vulnerable position. Replacing human soldiers with AWS holds the potential to resolve this deontological worry about exploitation.

9 Jeff McMahan, Killing in War (New York: Oxford University Press, 2009).

10 Ryan Jenkins, “Cyberwarfare as Ideal War,” in Adam Henschke, Fritz Allhoff, and Bradley Strawser, eds., Binary Bullets: The Ethics of Cyberwarfare (New York: Oxford University Press, 2016).

11 Purves, Jenkins, and Strawser, “Autonomous Machines.”