
The Politics of Robot Autonomy

Published online by Cambridge University Press: 20 January 2017

Thomas Burri*
Affiliation:
International law and European law, University of St. Gallen (HSG)

Abstract

The autonomy robots enjoy is understood in different ways. On the one hand, a technical understanding of autonomy is firmly anchored in the present and concerned with what can be achieved now by means of code and programming; on the other hand, a philosophical understanding of robot autonomy looks into the future and tries to anticipate how robots will evolve in the years to come. The two understandings are at odds at times; occasionally they even clash. However, neither is necessarily truer than the other. Each is driven by certain real-life factors; each rests on its own justification. This article discusses these two "views of robot autonomy" in depth and observes them at work at two of the most significant recent events in robotics: the Darpa Robotics Challenge, which took place in California in June 2015, and the ongoing process to address lethal autonomous weapons in humanitarian Geneva, which is spurred on by a "Campaign to Stop Killer Robots".

Type
Special Issue on the Man and the Machine
Copyright
Copyright © Cambridge University Press 2016


References

1 The details of the incident involving Warner's foot were explained by a member of team Worcester Polytechnic Institute/Carnegie Mellon University, in a personal conversation held at the Meet the Robots booth at DRC, 5 June 2015, ca. 5.00 pm, PST (account on file with the author).

3 Ugo Pagallo, The Law of Robots – Crimes, Contracts, Torts (Dordrecht: Springer, 2013), at p. xiii, notes "recurring stalemates on definitional issues" (such as robot autonomy). While I read this statement as confirming the need for critical deconstruction, Pagallo concluded that, for his book, a pragmatic focus on the law was the right way to go.

4 The main frame of reference used for deconstruction in this article was Balkin, J. M., "Deconstructive Practice and Legal Theory", 96 Yale Law Journal (1987), pp. 743 et sqq.

5 All quotes from <http://theroboticschallenge.org/overview> (last visit: 5 January 2016). At the time this article was submitted, the webpage of the Darpa Robotics Challenge seemed no longer operable. However, the original page has been archived multiple times by Web Archive, so that access via <http://www.web.archive.org> is still possible. The link indicated at the beginning of this footnote, for instance, is available via <http://web.archive.org/web/20160312141837/http://theroboticschallenge.org/overview>.

6 Darpa, "Team MIT Feature Video", 5 June 2015, available on the internet at: <http://www.youtube.com/watch?v=NuH8jsBXoAg>, at 0:17 min.

7 Personal conversation held at the Meet the Robots booth at DRC, 5 June 2015, ca. 5.15 pm, PST (account on file with the author). For a brief explanation based on another task, see Patrick Tucker, "Here's What The Military's Top Roboticist Is Afraid Of (It's Not Killer Robots) [Interview with Gill Pratt]", 30 August 2015, available on the internet at: <http://www.defenseone.com/technology/2015/08/militarys-top-roboticist-afraid-not-killer-robots/119786/?oref=DefenseOneTC>, in the answer to the second question.

8 For a more technical explanation of how the team made Atlas walk, see Scott Kuindersma, Robin Deits, Maurice Fallon, et al., "Optimization-based Locomotion Planning, Estimation, and Control Design for the Atlas Humanoid Robot" (currently under review) (2015), pp. 42 et sqq., available on the internet at: <http://groups.csail.mit.edu/robotics-center/public_papers/Kuindersma14.pdf>. Here is a passage from the introduction (p. 2): "Our approach to walking combines an efficient footstep planner with a simple dynamic model of the robot to efficiently compute desired walking trajectories. To plan a sequence of safe footsteps, we decompose the problem into two steps. First, a LIDAR terrain scan is used to identify obstacles in the vicinity of the robot. Given this obstacle map, we solve a sequence of optimization problems to compute a set of convex safe regions in the configuration space of the foot. Next, a mixed-integer convex optimization problem is solved to find a feasible sequence of footsteps through these regions. Finally, a desired center of pressure trajectory through these steps is computed and input to the controller."
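To make the quoted pipeline concrete for non-roboticists, the following is a minimal, self-contained sketch of the same decomposition in Python. It is emphatically not the MIT team's code: the terrain is reduced to one dimension, the convex safe regions are given as plain intervals, the mixed-integer optimization over region assignments is replaced by brute-force enumeration, and all constants are invented for illustration.

```python
from itertools import product

# Illustrative constants (all invented for this sketch)
SAFE_REGIONS = [(0.0, 0.4), (0.7, 1.1), (1.5, 2.0)]  # convex "safe" intervals, standing in for LIDAR-derived regions
MAX_STEP = 0.5   # assumed maximum step length
N_STEPS = 4      # number of footsteps to plan
GOAL = 1.9       # assumed target foot position

def plan_footsteps(start=0.1):
    """Assign each footstep to a safe region (brute force standing in for the
    mixed-integer program), placing each step as far toward the goal as the
    region and the maximum step length allow."""
    best = None
    for assignment in product(range(len(SAFE_REGIONS)), repeat=N_STEPS):
        steps, pos, feasible = [], start, True
        for region_idx in assignment:
            lo, hi = SAFE_REGIONS[region_idx]
            target = min(hi, pos + MAX_STEP, GOAL)  # step towards the goal within the region
            if target < lo or target < pos:          # region unreachable, or behind the foot
                feasible = False
                break
            steps.append(target)
            pos = target
        if feasible and (best is None or abs(pos - GOAL) < abs(best[-1] - GOAL)):
            best = steps
    return best

print(plan_footsteps())  # with these constants: [0.4, 0.9, 1.1, 1.6]
```

In the real system, the brute-force loop is a mixed-integer convex program and the resulting footstep sequence feeds a center-of-pressure trajectory for the controller; the toy preserves only the decomposition into convex safe regions and a discrete region assignment per step.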

9 Teams other than MIT, obviously, also put supervised autonomy into practice at DRC. See for instance team Tartan Rescue of Carnegie Mellon University and its robot CHIMP: "CHIMP uses a blend of high-level operator commands and low-level autonomy to perform tasks quickly and compensate for communications limitations. Operators give CHIMP general instructions; CHIMP then autonomously plans and carries them out. Input from CHIMP's full, 360° sensor suite feeds into a 3D model of its surroundings. It uses this model to autonomously plan and execute joint and limb movements and grasps. The 3D model also gives CHIMP's operators better situational awareness." (<http://theroboticschallenge.org/finalist/tartan-rescue>, last visit: 16 January 2016; see <http://web.archive.org/web/20160312085530/http://www.theroboticschallenge.org/finalist/tartan-rescue>, supra note 5.) Tartan team leader Tony Stentz put it in the following words in an interview screened at DRC: "Since the trials we've mostly worked on improvements to the software. We have been working on adding more autonomy to the system so that the robot will be faster doing more things itself so that it will be more robust to restrictions on communications." (Darpa, "Team Tartan Rescue Feature Video", 5 June 2015, available on the internet at: <http://www.youtube.com/watch?v=KzJY7OjjynM>, at 0:25 min.) Contrast this with the following two teams. (i) Team RoboSimian of the Jet Propulsion Labs worked less with autonomy, as the following quote from a conversation with a team member illustrates: "RoboSimian is not equipped with so much autonomy, perhaps less than some of the other robots. It was more built to cross rubble safely. Unfortunately, it turned out that this was not so much required in the Finals." (Personal conversation held at the Meet the Robots booth at DRC in the afternoon of 5 June 2015, ca. 5.30 pm, PST; account on file with the author.) (ii) Team NimbRo Rescue of the University of Bonn, Germany, equipped their robot Momaro with "very little autonomy, for this [was] hardly rewarded by the competition" (personal conversation held at the Meet the Robots booth at DRC in the afternoon of 5 June 2015, ca. 5.45 pm, PST; account on file with the author). The story of team NimbRo Rescue as such – how they transported the entire equipment, including the robot, from Germany in the team members' personal luggage, had had hardly any time for practicing driving, taped the robot together minutes before the start, and still managed to finish in first place on the first day, completing 7 out of 8 tasks in a mere 34 minutes – would merit a newspaper article. (Ultimately, team NimbRo Rescue finished fourth, after three teams had managed to complete all tasks in the second run on day 2.)
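The division of labour described in this footnote – high-level operator commands, low-level autonomous execution – can be illustrated with a short, purely hypothetical sketch. It is not any DRC team's software; the command names and motion primitives are invented.

```python
# Hypothetical sketch of "supervised autonomy": the operator supplies high-level
# goals; the robot decomposes and executes them autonomously.

# Invented mapping from high-level commands to low-level motion primitives,
# standing in for planning against a 3-D world model built from sensor input.
MOTION_LIBRARY = {
    "open valve": ["approach valve", "grasp wheel", "turn wheel"],
    "climb stairs": ["align with stairs", "step up", "rebalance"],
}

def autonomous_plan(command):
    """Robot-side planning: the operator controls *what* is done, not *how*."""
    return MOTION_LIBRARY.get(command, ["stop and request operator input"])

def run(operator_commands):
    for command in operator_commands:               # high-level human direction
        for motion in autonomous_plan(command):
            print(f"{command!r} -> executing: {motion}")  # low-level autonomy

run(["open valve", "climb stairs"])
```

The design point the sketch isolates is that communications constraints only have to carry the short high-level commands, not a continuous teleoperation stream.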

10 As Richards, Neil M. and Smart, William D., "How should the law think about robots?", in Calo, Ryan, Froomkin, Michael A., and Kerr, Ian (eds), Robot Law (Cheltenham: Edward Elgar, 2016), pp. 3 et sqq., at p. 21, footnote 52, put it: "Instead, they will give higher-level direction, such as selecting an object to grasp, and rely on lower-level autonomous software to carry out these commands. Thus, although they have good control over what the robot does, they have only loose control over how it does it." (This passage is unrelated to DRC.)

11 Most, though not all, robots competing in DRC resembled human beings to some extent. See the pictures of the robots at <http://theroboticschallenge.org/teams>, last visit: 7 January 2016; see <http://web.archive.org/web/20160313212007/http://www.theroboticschallenge.org/teams>, supra note 5. For more on anthropomorphism, see below IV.4.

12 For pictures and films, see <http://www.robo-team.com>.

13 Presentation given at DRC Expo, First Responder Area, 6 June, 11.40 am, PST (account on file with the author).

14 Personal conversation at Re2's booth no. 8 at DRC expo, 6 June, 2.30 pm, PST (account on file with the author).

15 See <http://biomimetics.mit.edu>. See the video Spiegel Online, “Springender Raubkatzenroboter: Hüpfen auf der Hindernisbahn”, 1 May 2015, available on the internet at: <http://www.spiegel.de/video/roboter-cheetah-springt-autonom-ueber-hindernisse-video-video-1581131.html>.

16 See <http://www.bostondynamics.com>. Boston Dynamics also developed the Atlas robot which six teams used for DRC.

17 Personal conversation at MIT Biomimetic Robotics Lab's booth no. 28 at DRC expo, 6 June, 12.00 pm, PST (account on file with the author). See the video: <http://www.youtube.com/watch?v=_luhn7TLfWU>. As a further demonstration of their work, Sangbae Kim's group showed at their booth at DRC expo how a robot was controlled by a human operator via a sort of exoskeleton which provided feedback. It was thus capable of punching through thick cardboard. (Punching is very hard for robots for various reasons.)

19 Personal conversation held at iRobot's booth no. 4 at DRC expo, 6 June, 2.45 pm, PST (account on file with the author).

20 For the perils of using Roomba, see Justin McCurry, “South Korean woman's hair ‘eaten’ by robot vacuum cleaner as she slept”, 9 February 2015, available on the internet at: <http://www.theguardian.com/world/2015/feb/09/south-korean-womans-hair-eaten-by-robot-vacuum-cleaner-as-she-slept>.

22 Personal conversation held at Softbank's booth no. 10 at DRC expo, 6 June, 2.15 pm, PST (account on file with the author).

24 Personal conversation held with one of VideoRay's representatives at the unmanned underwater vehicles tank, no. 60, at DRC expo, 6 June, 10.45 am, PST (account on file with the author).

25 The Kingfisher robot is no longer available. It has apparently been replaced by Heron USV; see <http://www.clearpathrobotics.com>. The webpage advertising Kingfisher can still be found at <http://web.archive.org/web/20150906183304/http://www.clearpathrobotics.com/kingfisher/>.

26 The company mainly focuses on unmanned land vehicles (personal conversation held at Clearpath's booth no. 19, at DRC expo, 6 June, 12.30 pm, PST; account on file with the author). See also the company's range of products at its webpage, supra note 25.

27 Personal conversation, supra note 26.

29 Personal conversation held at Boston Engineering's booth no. 11, at DRC expo, 6 June, 1.45 pm, PST (account on file with the author).

30 Compare Mark Prigg, “US Navy reveals its latest recruit: ‘Silent Nemo’ robofish can swim into enemy territory undetected – and is designed to look exactly like a tuna”, 9 June 2015, available on the internet at: <http://www.dailymail.co.uk/sciencetech/article-2871907/US-Navy-reveals-latest-recruit-Project-Silent-Nemo-robofish-set-swim-enemy-territory-undetected-designed-look-exactly-like-tuna-fish.html>.

32 Personal conversation held at Ctrl.me's booth no. 64 at DRC expo, 6 June, 10.00 am, PST (account on file with the author).

33 For a text which uses autonomy language with regard to drones see Will Knight, “Sorry, Shoppers: Delivery Drones Might Not Fly for a While”, MIT Technology Review, 30 March 2016, available on the internet at: <http://www.technologyreview.com/s/601117/sorry-shoppers-delivery-drones-might-not-fly-for-a-while/#/set/id/601150/>.

35 Personal conversation held at Flyability's booth no. 62 at DRC expo, 6 June, 9.30 am, PST (account on file with the author).

37 Personal conversation held at Mark Costello's booth no. 25 at DRC expo, 6 June, 12.45 pm, PST (account on file with the author).

38 Personal conversation (account on file with the author; the roboticist preferred not to be named). For some contrast to the quote, see the robo-chef who uses a kitchen to cook meals: BBC World Service, Global News podcast, 15 April 2015 (AM edition); see also "Armar-IIIa bringt ein Glas Saft", Süddeutsche Zeitung, 26 February 2016, at p. 11, on the kitchen robot Armar-IIIa developed at the Karlsruher Institut für Anthropomatik und Robotik.

39 Personal conversation, supra note 14; the representatives of iRobot, Flyability, and, to a more limited extent, Ctrl.me expressed beliefs along the same lines (supra notes 19, 32, and 35). The armed forces, for instance, apparently do not want any autonomy near an improvised explosive device that needs to be disarmed (personal conversation at iRobot's booth, supra note 19).

40 Curtis E.A. Karnow, "The application of traditional tort theory to embodied machine intelligence", in Calo, Froomkin, and Kerr (eds), Robot Law, supra note 10, pp. 51 et sqq., at p. 52, considers only robots which learn on their own as "autonomous". He adds: "Interesting robots […] are those that are not simply autonomous in the sense of not being under real-time control of a human, but autonomous in the sense that the methods selected by the robots to accomplish the human-generated goal are not predictable by the human." (p. 53).

41 The courses Coursera offers serve as an excellent entry point to machine learning, e.g. the course “Machine Learning” offered by Andrew Ng at Stanford in late 2015. Similar courses are offered by Udacity. (See The Economist, “Teaching tomorrow”, 5 September 2015 (Technology Quarterly, print edition), available on the internet at: <http://www.economist.com/news/technology-quarterly/21662654-sebastian-thrun-pioneer-googles-autonomous-cars-wants-teach-people-how>). On machine learning software going open-source, see Eike Kühl, “Künstlich, intelligent und frei”, Zeit online, 14 December 2015, available on the internet at: <http://www.zeit.de/digital/internet/2015-12/maschinelles-lernen-facebook-google-tensorflow/komplettansicht>.

42 Volodymyr Mnih, Koray Kavukcuoglu, David Silver, et al., "Human-level control through deep reinforcement learning", 518 Nature (26 February 2015), pp. 529 et sqq. See also the comment: Schölkopf, Bernhard, "Learning to see and act", 518 Nature (26 February 2015), pp. 486 et sqq. For Go: Silver, David, Huang, Aja, Maddison, Chris J., et al., "Mastering the game of Go with deep neural networks and tree search", 529 Nature (28 January 2016), pp. 484 et sqq.
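For readers unfamiliar with the technique behind the Nature papers cited in this note, the core idea of deep reinforcement learning can be stated compactly: Mnih et al. train a network Q(s, a; θ) to predict the long-term value of taking action a in state s. Roughly, and in slightly simplified notation (an illustrative restatement, not a quotation from the paper), the loss minimized during training is:

$$L(\theta) = \mathbb{E}_{(s,a,r,s')}\left[\Big(r + \gamma \max_{a'} Q(s',a';\theta^{-}) - Q(s,a;\theta)\Big)^{2}\right]$$

where r is the immediate reward, γ the discount factor, and θ⁻ the parameters of a periodically frozen copy of the network, used to stabilize learning.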

43 The Economist describes DeepMind as "a secretive artificial-intelligence company acquired by Google in 2014" (The Economist, 30 January 2016, at p. 6). See <http://www.deepmind.com>.

44 See Dominik Herrmann, "Maschinelles Lernen: Wie lernen Autos fahren?", talk given at the conference "Intelligente Agenten und Recht: Zur Verantwortlichkeit beim Einsatz von Robotern", University of Basel, 15 January 2016, available on the internet at: <http://svs.informatik.uni-hamburg.de/publications/2016/2016-01-15-Herrmann-Machine-Learning-autonomes-Fahrzeug.pdf>. On autonomous vehicles, Ronald Leenes and Federica Lucivero, "Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design", 6 Law, Innovation and Technology (2) (2014), pp. 193 et sqq., is instructive. The learning process has begun for lorries, too: "Freightliner Inspiration: Daimler testet selbstfahrenden Lkw im Verkehr", Spiegel online, 6 May 2015, available on the internet at: <http://www.spiegel.de/auto/aktuell/freightliner-daimler-testet-selbstfahrenden-lkw-in-nevada-a-1032293.html#utm_source=panorama%23utm_medium=medium%23utm_campaign=plista&ref=plista>.

45 Tucker, "Here's What The Military's Top Roboticist Is Afraid Of (It's Not Killer Robots) [Interview with Gill Pratt]", supra note 7: "We're talking about lab prototypes […]. In terms of when you would deploy this thing, it's still many, many years off. First the cost has to come down, the effectiveness has to come up, and the reliability has to come up." (Answer to the fifth question.) On a different note, not all teams developed the robots they used to compete in DRC all by themselves. Boston Dynamics (i.e. Google/Alphabet) notably developed the Atlas robot which six teams used. See Samuel Gibbs, "What is Boston Dynamics and why does Google want robots?", 17 December 2013, available on the internet at: <http://www.theguardian.com/technology/2013/dec/17/google-boston-dynamics-robots-atlas-bigdog-cheetah/print>. It seems, however, that Google is no longer that interested in robotics: Stefan Betschon, "Das vierte Robotergesetz", Neue Zürcher Zeitung, 22 March 2016, at p. 13. Rumour has it that the video about Atlas mentioned infra, in note 112, made Google lose interest. For a principled statement on artificial intelligence coming from Alphabet/Google, see Eric Schmidt and Jared Cohen, "Inventive artificial intelligence will make all of us better", Time, 28 December 2015, at p. 20.

46 See Daniela Rus, "The Robots are Coming", Foreign Affairs (July/August) (2015), pp. 2 et sqq., at p. 4: "[P]roblems remain in three important areas. It still takes too much time to make new robots, today's robots are still quite limited in their ability to perceive and reason about their surroundings, and robotic communication is still quite brittle." (Emphasis added.) See also Gill Pratt's statement supra note 45. In this context consider the very interesting research being conducted in Cambridge and Zurich on a "mother robot" creating a number of "children robots" and improving them: Evan Ackerman, "Mother Robots Build Children Robots to Experiment with Artificial Evolution", 21 July 2015, available on the internet at: <http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/mother-robots-build-children-robots-to-experiment-with-artificial-evolution> (note the anthropomorphic terms). The scientific publication is: Brodbeck, Luzius, Hauser, Simon and Iida, Fumiya, "Morphological Evolution of Physical Robots through Model-Free Phenotype Development", 10 PLoS ONE (6) (2015), available on the internet at: <http://dx.doi.org/10.1371/journal.pone.0128444>.

47 A notable exception is Lev Grossman, "Iron Man", Time, 8 June 2015, at pp. 73-74. (The article was written before DRC, but based on the journalist's visit to one of the labs.)

48 More on the distinction between robotics and artificial intelligence infra IV.5.

49 Mnih, Kavukcuoglu, Silver, et al., "Human-level control through deep reinforcement learning", supra note 42, at p. 529. On this, listen to BBC World Service, Global News podcast, 15 April 2015 (AM edition).

50 For a good introduction to statistics for social scientists see Agresti, Alan and Finlay, Barbara, Statistical Methods for the Social Sciences, 4th ed. (Essex: Pearson, 2014).

51 On how to incorporate decision-making in ethically ambivalent situations, see Anderson, Michael and Anderson, Susan Leigh, "Towards Ensuring Ethical Behaviour from Autonomous Systems: A Case-Supported Principle-Based Paradigm", 42 Industrial Robot: An International Journal (4) (2015), pp. 324 et sqq. See also Arkin, Ronald C., Governing Lethal Behavior in Autonomous Robots (Boca Raton: CRC Press, 2009). For an explanation of the case-based and the principle-based approaches to programming robots to behave ethically, see Deng, Boer, "Machine ethics: The robot's dilemma", 523 Nature (2 July 2015), pp. 24 et sqq. The difficulties of even an ethical assessment of robots and their autonomy become obvious in Robolaw, a project funded by the European Union to the tune of hundreds of thousands of Euros, with limited outcome: Erica Palmerini, Federico Azzarri, Fiorella Battaglia, et al., "Guidelines on Regulating Robotics", 22 September 2014, available on the internet at: <http://www.robolaw.eu>; similarly for a legal assessment: Christophe Leroux, Roberto Labruto, Chiara Boscarato, et al., "Suggestion for a green paper on legal issues in robotics", 31 December 2012, available on the internet at: <http://www.eu-robotics.net/cms/upload/PDF/euRobotics_Deliverable_D.3.2.1_Annex_Suggestion_GreenPaper_ELS_IssuesInRobotics.pdf>. For a brief, accessible piece on ethics in autonomous driving, see Martin Kolmar and Martin Booms, "Keine Algorithmen für ethische Fragen", 26 January 2016, available on the internet at: <http://www.nzz.ch/meinung/kommentare/keine-algorithmen-fuer-ethische-fragen-ld.4483>.

52 The learning process is not just taking place in California, but also in Switzerland: Bundesamt für Strassen ASTRA, "UVEK bewilligt Pilotprojekt für Tests mit autonomem Fahrzeug", 28 April 2015, available on the internet at: <http://www.astra.admin.ch/dokumentation/00109/00113/00491/index.html?lang=de&msg-id=57035>. Going off-road with autonomous vehicles is hard, though: Zackary Canepari, Drea Cooper and Emma Cott, "Building the Autonomous Machine", available on the internet at: <http://www.nytimes.com/video/technology/100000003668539/navy-tests-autonomous-future.html>.

53 Darpa, "The Robots Come to California June 5-6", 2 April 2015, available on the internet at: <http://www.youtube.com/watch?v=eYBV-a7Qmyk>.

54 Darpa, “Darpa Robotics Challenge Day 1 Compilation”, 5 June 2015, available on the internet at: <http://www.youtube.com/watch?v=eZTKLpvSAqo>.

55 Darpa, “Darpa Robotics Challenge Final Event Compilation”, 16 June 2015, available on the internet at: <http://www.youtube.com/watch?v=FRkYOFR7yPA>.

56 Darpa, “A Celebration of Risks (a.k.a. Robots Take a Spill)”, 6 June 2015, available on the internet at: <http://www.youtube.com/watch?v=7A_QPGcjrh0>.

57 Supra note 55, at minutes 0.55 and 2.10, respectively.

58 See video Thomas Burri, “Burri_Darpa_1”, 26 April 2016, available on the internet at: <http://sites.google.com/site/thomasburrihome/home/bad-robot/darpa-videos>. (My apologies for the quality. It is poorer than in the DRC videos published by Darpa.)

59 See videos Thomas Burri, “Burri_Darpa_2” and “Burri_Darpa_3”, 26 April 2016, available via the link supra note 58 – please disregard the author's appearance in the middle of the video “Burri_Darpa_2”. Note instead how nimbly the spectators are walking around in the foreground in the video. See also the video Thomas Burri, “Burri_Darpa_4”, 26 April 2016, also available via the link supra note 58, of Running Man taking the final steps over the rubble. Previously, Running Man took several minutes to take a few steps over the rubble; this is not shown in the video. (For a portrait of Running Man see the Time article, supra note 47).

60 See for instance the somewhat sceptical report ahead of the DRC in Grossman, "Iron Man", supra note 47. It begins as follows: "Let me correct an impression you may have: Robots are pretty much idiots. They can't do very much, and they do it with a slowness that would try the patience of a saint who was also an elephant. Samuel Beckett would have made a good roboticist. It is a science of boredom, disappointment, and despair." (p. 26). Compare with: John Markoff, "Relax, the Terminator Is Far Away", 25 May 2015, available on the internet at: <http://www.nytimes.com/2015/05/26/science/darpa-robotics-challenge-terminator.html?_r=1> (published as "A Reality Check for A.I.", New York Times (New York edition), 26 May 2015, at p. D2, ahead of DRC); and "After the fall", The Economist, 13 June 2015, at pp. 73-74 (published after DRC).

61 Human Rights Watch and Harvard International Human Rights Clinic, Losing Humanity: The Case against Killer Robots (New York, November 2012), available on the internet at: <http://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots>. Three years before that: Singer, Peter W., Wired for War – The Robotics Revolution and Conflict in the Twenty-first Century (New York: Penguin Press, 2009). For an immediate reaction on "Losing Humanity", see Thurnher, Jeffrey S., "The Law That Applies to Autonomous Weapon Systems", 17 American Society of International Law Insights (4) (18 January 2013).

62 Note that some deaths of humans already occur as a consequence of robot-human interaction: Eliana Dockterman, "Robot Kills Man at Volkswagen Plant", 1 July 2015, available on the internet at: <http://time.com/3944181/robot-kills-man-volkswagen-plant/>.

63 Human Rights Watch and Harvard International Human Rights Clinic, Losing Humanity: The Case against Killer Robots, supra note 61, at p. 1.

65 Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, A/HRC/23/47, 9 April 2013, para. 33: "This report is a call for a pause, to allow serious and meaningful international engagement with this issue." The report recommended to the United Nations Human Rights Council "to call on all States to declare and implement national moratoria on at least the testing, production, assembly, transfer, acquisition, deployment and use of L[ethal] A[utonomous] R[obotic]s until such time as an internationally agreed upon framework […] has been established", para. 113. This report had been preceded by an interim report by the previous UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Philip Alston. The interim report diagnosed a lack of discussion in civil society about the employment of robots in warfare (Interim Report by UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Philip Alston, A/65/321, 23 August 2010, at p. 16: "Although robotic or unmanned weapons technology has developed at astonishing rates, the public debate over the legal, ethical and moral issues arising from its use is at a very early stage, and very little consideration has been given to the international legal framework necessary for dealing with the resulting issues.") The interim report in turn had relied on Singer, Wired for War – The Robotics Revolution and Conflict in the Twenty-first Century, supra note 61, which had already broken some ground for a broader discussion about robots in warfare in general. Note that robot autonomy was not the main focus of the interim report. It noted, though, that trials had been held in which helicopters had carried out fully autonomous flights (p. 13, with further reference) and that "[e]ach robot within a swarm would fly autonomously to a designated area, and 'detect' threats and targets through the use of artificial intelligence, sensory information and image processing" (p. 13, with further reference).

66 Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Heyns, supra note 65, para. 35.

68 See <http://www.unog.ch/80256EE600585943/(httpPages)/6CE049BE22EC75A2C1257C8D00513E26?OpenDocument>. A report of the 2015 meeting, submitted by the chairperson, is on file with the author.

69 Advanced version of Final Report, Meeting of the High Contracting Parties to CCW, 2015, para. 35, available on the internet at: <http://www.unog.ch/80256EDD006B8954/(httpAssets)/BCB47D7E5EB64BC9C1257F47004FB59D/$file/AdvancedVersion_FinalDocument_2015MSP.pdf>. The third informal meeting of experts proposed to elevate the informal meeting of experts to a "Group of Governmental Experts" (see Reaching Critical Will, CCW Report vol. 3, no. 5, 15 April 2016, available on the internet at: <http://www.reachingcriticalwill.org/disarmament-fora/ccw/2016/laws/ccwreport>, at p. 1). The International Conference of the Red Cross and the Red Crescent also considers autonomous weapons: International humanitarian law and the challenges of contemporary armed conflicts, 32nd International Conference of the Red Cross and Red Crescent, October 2015, available on the internet at: <http://rcrcconference.org/wp-content/uploads/sites/3/2015/10/32IC-Report-on-IHL-and-challenges-of-armed-conflicts.pdf>, at pp. 44-47.

70 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (with Protocols I, II and III), 1342 UNTS 163 (engl.), 10 October 1980. See Final Report, Meeting of the High Contracting Parties to CCW, CCW/MSP/2013/10, 16 December 2013, para. 32, for the establishment of the first informal meeting of experts on lethal autonomous weapons systems; for the second meeting: Final Report, Meeting of the High Contracting Parties to CCW, CCW/MSP/2014/9, 27 November 2014, para. 36.

71 On unmanned warfare more generally, see Galliott, Jai, Military Robots – Mapping the Moral Landscape (Farnham: Ashgate, 2015); on responsibility for lethal autonomous weapons, see Bothmer, Fredrik von, "Robots in Court – Responsibility for Lethal Autonomous Weapons Systems", in Brändli, Sandra, Harasgama, Rehana, Schister, Roman and Tamò, Aurelia (eds), Mensch und Maschine – Symbiose oder Parasitismus? (Bern: Stämpfli, 2014), pp. 102-124.

73 David Ignatius, “In Munich, a frightening preview of the rise of killer robots”, The Washington Post (online), 16 February 2016, available on the internet at: <http://www.washingtonpost.com/opinions/in-munich-a-frightening-preview-of-the-rise-of-killer-robots/2016/02/16/d6282a50-d4d4-11e5-9823-02b905009f99_story.html>.

74 “Killer robots” have also reached tabloids: Nico Lumma, “Kampfroboter sind schlimmer als das Pannengewehr G36”, 23 April 2014, available on the internet at: <http://www.bild.de/geld/wirtschaft/nico-lumma/kampfroboter-40660582.bild.html>.

75 Human Rights Watch and Harvard International Human Rights Clinic, Losing Humanity: The Case against Killer Robots, supra note 61.

76 Human Rights Watch (Bonnie Docherty), Shaking the Foundations: The Human Rights Implications of Killer Robots (New York, 12 May 2014), available on the internet at <http://www.hrw.org/report/2014/05/12/shaking-foundations/human-rights-implications-killer-robots>.

77 Human Rights Watch (Bonnie Docherty), Mind the Gap – the Lack of Accountability for Killer Robots (New York, 9 April 2015), available on the internet at: <http://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots>.

78 Human Rights Watch (Bonnie Docherty), Precedent for Preemption: The Ban on Blinding Lasers as a Model for a Killer Robots Prohibition (New York, 8 November 2015), available on the internet at: <http://www.hrw.org/news/2015/11/08/precedent-preemption-ban-blinding-lasers-model-killer-robots-prohibition>.

79 For a humorous counter-argument more generally, see Rosa Brooks, "In Defense of Killer Robots", 18 May 2015, available on the internet at: <http://foreignpolicy.com/2015/05/18/in-defense-of-killer-robots/>, who argues that the track record of humans is worse than that of killer robots could ever possibly be. On a more serious note, Ian Kerr and Katie Szilagyi, "Asleep at the switch? How killer robots become a force multiplier of military necessity", in Calo, Froomkin, and Kerr (eds), Robot Law, supra note 10, pp. 334 et sqq., argue that lethal autonomous weapon systems – "killer robots" as they call them – would have an impact on international humanitarian law. They would notably result in a changed understanding of what would be considered militarily necessary. According to the authors, international humanitarian law contributes to this change in the idea of necessity through its neutrality towards new technology. Kenneth Anderson and Matthew C. Waxman, "Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can", Stanford University, The Hoover Institution (Jean Perkins Task Force on National Security and Law Essay Series); American University, WCL Research Paper; Columbia Public Law Research Paper (2013-11 / 13-351) (2013 (April 10)), pp. (33) et sqq., available on the internet at: <http://ssrn.com/abstract=2250126>, argue against a ban of lethal autonomous weapon systems, instead opting for an incremental approach by gradually evolving existing codes of conduct; Michael N. Schmitt and Jeffrey S. Thurnher, "'Out of the Loop': Autonomous Weapon Systems and the Law of Armed Conflict", Harvard National Security Journal (4) (2013 (February 5)), pp. 231 et sqq., available on the internet at: <http://ssrn.com/abstract=2212188>, argue that a ban would be "insupportable as a matter of law, policy, and operational good sense" (at p. 233).

80 Human Rights Watch and Harvard International Human Rights Clinic, Losing Humanity: The Case against Killer Robots, supra note 61, at p. 2.

81 All quotes on p. 2, Human Rights Watch and Harvard International Human Rights Clinic, Losing Humanity: The Case against Killer Robots, supra note 61.

82 Human Rights Watch and Harvard International Human Rights Clinic, Losing Humanity: The Case against Killer Robots, supra note 61, at p. 3.

83 See the “Call to Action”: <http://www.stopkillerrobots.org/call-to-action/>.

84 Interim Report by UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Alston, supra note 65, had previously diagnosed the lack of a uniform definition of robot autonomy. The report went on: “Confusion can result, for example, from differences over whether ‘autonomous’ describes the ability of a machine to act in accordance with moral and ethical reasoning ability, or whether it might simply refer to the ability to take action independent of human control (e.g. a programmed drone that can take off and land without human direction; a thermometer that registers temperatures)” (at para. 32, with further reference).

85 Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Heyns, supra note 65, at pp. 7-8.

86 All quotes in Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Heyns, supra note 65, at p. 8.

87 Report of the 2015 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS) (Draft version, on file with the author), Chairperson of the Informal Meeting of Experts, at paras 19, 33, 35, 36, and 42. Contra, to some extent: Rebecca Crootof, "The Meaning of 'Meaningful Human Control'", 30 Temple International and Comparative Law Journal (2016 (forthcoming)), pp. (9) et sqq., available on the internet at: <http://ssrn.com/abstract=2705560>, who describes the notion of meaningful human control as "immensely popular" (at p. 1).

88 In a similar vein, Crootof, "The Meaning of 'Meaningful Human Control'", supra note 87, at p. 2: "But this broad support comes at a familiar legislative cost: there is no consensus as to what 'meaningful human control' actually requires." Note, however, that a similar argument could be made with regard to the element of "independence" in the definition of autonomy in Crootof, Rebecca, "The Killer Robots Are Here: Legal and Policy Implications", 36 Cardozo Law Review (2015), pp. 1837 et sqq., available on the internet at: <http://ssrn.com/abstract=2534567>, at p. 1854: "An 'autonomous weapon system' is a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets." (All italics removed.)

89 Human Rights Watch (Bonnie Docherty), Mind the Gap – the Lack of Accountability for Killer Robots, supra note 77, at title I., first and third sentence.

90 Human Rights Watch, “Interview: Holding Killer Robots to Account (with Bonnie Docherty)”, 29 April 2015, available on the internet at: <http://www.hrw.org/print/news/2015/04/29/interview-holding-killer-robots-account>.

91 Human Rights Watch (Bonnie Docherty), Precedent for Preemption: The Ban on Blinding Lasers as a Model for a Killer Robots Prohibition, supra note 78, at p. 1.

92 Peter Asaro, "Jus nascendi, robotic weapons and the Martens Clause", in Calo, Froomkin, and Kerr (eds), Robot Law, supra note 10, pp. 367 et sqq., at p. 385, further discusses what "meaningful" means. Thompson Chengeta, "Towards a 'targeted' definition of 'meaningful human control'", available on the internet at: <http://ssrn.com/abstract=2754995>, structures the notion of "meaningful human control": "[T]he term [meaningful human control] should zero in on each actor, producing separate definitions and standards to which the different actors should adhere to. In this regard, separate 'focused' questions should be asked: What does meaningful control of AWS by the state entail? In what ways can a manufacturer, a programmer, or roboticist influence MHC of AWS? What does MHC of AWS by a fighter or combatant mean?" (p. 4). While it is obviously necessary to explore these questions, they also reveal the similarity of the notions of meaningful human control and autonomy.

93 Interestingly, Kersten, Jens, "Menschen und Maschinen – Rechtliche Konturen instrumenteller, symbiotischer und autonomer Konstellationen", 70 Juristen Zeitung (1) (2015), pp. 1 et sqq., at p. 4, sees the symbiosis between robot and human being as standing in contrast to the situation where the robot is autonomous.

94 Karnow, “The application of traditional tort theory to embodied machine intelligence”, supra note 40, at p. 53, calls this “confusion in this area”.

95 See supra note 38.

96 Kaplan, Jerry, Humans Need Not Apply – A Guide to Wealth and Work in the Age of Artificial Intelligence (New Haven: Yale University Press, 2015), at p. 47 [brackets added].

97 Compare Bryant Walker Smith, "Lawyers and engineers should speak the same robot language", in Calo, Froomkin, and Kerr (eds), Robot Law, supra note 10, pp. 78 et sqq., at p. 84, who discusses diverging notions of control over a robot: "An engineer might picture a real-time control loop with sensors and actuators, a lawyer might envision a broad grant of authority from human to machine analogous to a principal-agent relationship and the public might imagine runaway cars and killer robots." (Smith seems to focus more on the persons holding diverging views of a notion, while this article focuses more on the views as such.)

98 Another example is perhaps John MacBride, "A Veteran's Perspective on 'Killer Robots'", Just Security, 28 May 2015, available on the internet at: <http://justsecurity.org/23287/veterans-perspective-killer-robots/>. (The author is a former lieutenant colonel who co-founded the Campaign to Stop Killer Robots.) Examples of authors whom one would expect to adhere to the observers' view but who in fact lean towards the operators' view are less common, but they do exist. See, for instance, Markoff, "Relax, the Terminator Is Far Away", supra note 60.

99 Alain Zucker, "Meet Mr. Robot", Tagesanzeiger (online), 21 January 2016. The article reports on Mr. Jun-Ho Oh, the Korean creator of Hubo, whose team won the Darpa Robotics Challenge. In the account, Mr. Jun-Ho Oh draws a clear distinction between those who are optimistic about full robot autonomy and those who are sceptical about it. An example of a sceptical view is Martin Wolf, "Same as It Ever Was", Foreign Affairs (July/August) (2015), pp. 15 et sqq. He is of the view that the progress currently being made is relatively minor and far less significant than the progress that drove previous industrial revolutions. Wolf, at p. 16 (and others), names Erik Brynjolfsson and Andrew McAfee as optimists (see infra note 103).

100 Human Rights Watch and Harvard International Human Rights Clinic, Losing Humanity: The Case against Killer Robots, supra note 61, at p. 3. See Human Rights Watch (Bonnie Docherty), Shaking the Foundations: The Human Rights Implications of Killer Robots, supra note 76, under title I.: “Fully autonomous weapon systems have yet to be created, but technology is moving rapidly in that direction.” Human Rights Watch (Bonnie Docherty), Mind the Gap – the Lack of Accountability for Killer Robots, supra note 77, under title I.: “Fully autonomous weapons do not yet exist, but technology is moving in their direction, and precursors are already in use or development.” Human Rights Watch (Bonnie Docherty), Precedent for Preemption: The Ban on Blinding Lasers as a Model for a Killer Robots Prohibition, supra note 78, at p. 1: “Although they do not exist yet, the development of precursors and military planning documents indicate that technology is moving rapidly in that direction and is years, not decades, away.” (Footnote omitted.)

101 Bostrom, Nick, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).

102 Lin, Patrick, "Introduction to Robot Ethics", in Lin, Patrick, Abney, Keith and Bekey, George A. (eds), Robot Ethics – The Ethical and Social Implications of Robotics (Cambridge MA: MIT Press, 2012), pp. 3 et sqq., at p. 12, sees "scenarios, in which robots – through the super-artificial intelligence – subjugate humanity" as "highly speculative scenarios that continually overshadow more urgent and plausible issues".

103 Brynjolfsson, Erik and McAfee, Andrew, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (New York: Norton, 2014); see also Brynjolfsson, Erik and McAfee, Andrew, "Will Humans Go the Way of Horses?", Foreign Affairs (July/August) (2015), pp. 8 et sqq.; and Amy Bernstein and Anand Raman, "The Great Decoupling [interview with Erik Brynjolfsson and Andrew McAfee]", Harvard Business Review (June 2015), pp. 66 et sqq. Compare with Carr, Nicholas, The Glass Cage: Automation and Us (New York: Norton, 2014) and Kaplan, Humans Need Not Apply – A Guide to Wealth and Work in the Age of Artificial Intelligence, supra note 96. See also most recently Gordon, Robert J., The Rise and Fall of American Growth: the U.S. Standard of Living Since the Civil War (Princeton: Princeton University Press, 2016).

104 Schwab, Klaus, The Fourth Industrial Revolution (Geneva: World Economic Forum, 2016). Davenport, Thomas H. and Kirby, Julia, "Beyond Automation", Harvard Business Review (June 2015), pp. 59 et sqq., consider how the relationship between robots and human workers might evolve in the future. Broadly speaking, they doubt that robots will replace workers. For a step towards less automated robotics in production and more human workforce, see "Mercedes-Werk in Sindelfingen 'entlässt' Roboter", 26 February 2016, available on the internet at: <http://www.welt.de/wirtschaft/article152696844/Mercedes-Werk-in-Sindelfingen-entlaesst-Roboter.html>.

105 Future of Life Institute, “An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence”, available on the internet at: <http://futureoflife.org/ai-open-letter/>. In a similar vein, though much earlier and therefore a little more generic: Engineering and Physical Sciences Research Council, “Principles of Robotics”, 2010, available on the internet at: <http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/> (focussing on robotics).

106 On Hawking's view, listen to BBC World Service, Global News podcast, 3 December 2014 (AM edition).

107 The research priorities do mention autonomy, but the term somehow seems to creep into the second section without much context, while the text centers on artificial intelligence ("[t]he development of systems that embody significant amounts of intelligence and autonomy leads to important legal and ethical questions […]"); it then re-emerges only with regard to vehicles and weapon systems and in a curious "outburst" on several occasions in the introductory part of section 2.3 on "Computer Science Research and Robust AI" (see pp. 2-3). The rest of the document is silent on autonomy.

108 Right after the DRC, Gill Pratt, who had been in charge of organizing the DRC at Darpa, accepted the position as head of the new research entity created by Toyota: John Markoff, "Toyota Invests $1 Billion in Artificial Intelligence in U.S.", The New York Times (online edition), 6 November 2015 (appeared at p. B3 of the print edition under "Toyota Planning an Artificial Intelligence Research Center in California"). See also Clive Thompson, "Uber Would Like to Buy Your Robotics Department", 11 September 2015, available on the internet at: <http://mobile.nytimes.com/2015/09/13/magazine/uber-would-like-to-buy-your-robotics-department.html?referrer=&_r=0>, and The Economist, "Million-dollar babies", 2 April 2016, available on the internet at: <http://www.economist.com/news/business/21695908-silicon-valley-fights-talent-universities-struggle-hold-their?fsrc=scn/fb/te/pe/ed/milliondollarbabies>.

109 Personal Conversation with Clearpath, supra note 26.

110 Personal conversation with Boston Engineering, supra note 29.

111 See Bekey, George A., Autonomous Robots: From Biological Inspiration to Implementation and Control (Cambridge, Mass.: MIT Press, 2005), at p. 1; reiterated almost identically with reference in George A. Bekey, "Current Trends in Robotics: Technology and Ethics", in Lin, Abney and Bekey (eds), Robot Ethics, supra note 102, pp. 17 et sqq., at p. 18. Compare with Marra, William C. and McNeil, Sonia K., "Understanding 'The Loop': Regulating the Next Generation of War Machines", 36 Harvard Journal of Law & Public Policy (3) (2013), pp. 1139 et sqq., who come up with a detailed conception of autonomy.

112 The uncertainty is compounded when robotics companies themselves project the observers' view for advertisement purposes. The German robotics company Kuka, e.g., published a video in which a Kuka robot played table tennis against the famous German player, Timo Boll. (It lost narrowly.) See Kuka Robotics, "The Duel: Timo Boll vs. Kuka Robot", 10 March 2014, available on the internet at: <http://www.youtube.com/watch?v=tIIJME8-au8>. Note in particular the ambiguous text beneath the video on Youtube. It only emerged later that the video was a fake. No game had taken place. Holger Dambeck, "Japan: Roboter lernt Samurai-Schwertkampf", Spiegel Online, 17 June 2015, available on the internet at: <http://www.spiegel.de/wissenschaft/technik/japan-roboter-lernt-samurai-schwertkampf-a-1037936.html>, explains this (in German, at para. 7) and injects some of the operators' view. In a similar vein, an (in)famous video of Boston Dynamics about its Atlas robot also needs to be carefully interpreted: Martin Robbins, "How real is that Atlas robot video?", 25 February 2016, available on the internet at: <http://www.theguardian.com/science/the-lay-scientist/2016/feb/25/how-real-is-that-atlas-robot-boston-dynamics-video>. The video is also available directly: Boston Dynamics, "Atlas, the Next Generation", 23 February 2016, available on the internet at: <http://www.youtube.com/watch?v=rVlhMGQgDkY&app=desktop>.

113 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, 2056 UNTS 241 (engl.), 18 September 1997.

114 Axworthy, Lloyd, "Towards a New Multilateralism", in Cameron, Maxwell A., Lawson, Robert J. and Tomlin, Brian W. (eds), To Walk Without Fear – The Global Movement to Ban Landmines (Oxford: OUP, 1998), pp. 448 et sqq.; Larrinaga, Miguel de and Sjolander, Claire Turenne, "(Re)presenting Landmines from Protector to Enemy: The Discursive Framing of a New Multilateralism", in ibid., pp. 364 et sqq. In the latter article, two points of interest emerge. On the one hand, according to the authors, the Ottawa process held "no single transformative lesson that [could] be applied to other policy issues" (at p. 382) – a statement which should be of interest to Human Rights Watch and the "Campaign". On the other hand, the authors state the following about landmines: "Machines lie in wait for their human victims, incapable of telling the 'difference between the footfall of a soldier and that of an old woman gathering firewood … Landmines recognize no cease-fire and, long after the fighting has stopped, they can maim or kill the children and grandchildren of the soldiers who laid them.' [Referring to Human Rights Watch Arms Project, Still Killing: Landmines in South Africa, New York, Human Rights Watch, May 1997, at p. 3.] Landmines, thus, are seen to be beyond the control of the soldier or the state – in effect they simply 'are'" (at p. 380). The observers' view would probably apply this statement to autonomous robots without modifying it.

115 Melinda Florina Müller (Lohmann), "Von vermenschlichten Maschinen und maschinisierten Menschen – Bemerkungen zur Wortsemantik in der Robotik", in Brändli, Harasgama, Schister and Tamò (eds), Mensch und Maschine, supra note 71, pp. 125 et sqq., distinguishes, in a similar way to the present article, between a technical and a social-science understanding of autonomy and explores the implications of language about robots and the anthropomorphic effects that consequently occur.

116 See the pictures of the robots available on the internet at: <http://www.theroboticschallenge.org>, last visit: 16 January 2016; see <http://web.archive.org/web/20160313212007/http://www.theroboticschallenge.org/teams>, supra note 5.

117 The observing crowd of spectators at DRC also anthropomorphized the robots. The faces of many spectators and, to some extent, the reactions of the crowd when a robot fell down or suffered some kind of fit clearly revealed this.

118 See the photo in Ignatius, “In Munich, a frightening preview of the rise of killer robots”, supra note 73.

119 See, for instance, US Department of Defence, Directive on Autonomy in Weapon Systems, no. 3000.09, 21 November 2012, available on the internet at: <http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf>.

120 See Noel Sharkey, "Killing Made Easy: From Joysticks to Politics", in Lin, Abney and Bekey (eds), Robot Ethics, supra note 102, pp. 111 et sqq., at p. 121: "The anthropomorphic terms create a more interesting narrative, but they only confuse the important safety issues and create false expectations."

121 For a good introduction to artificial intelligence, see Russell, Stuart and Norvig, Peter, Artificial Intelligence – A Modern Approach, 3rd ed. (Essex: Pearson, 2014); for robotics, Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control, supra note 111, proved very helpful.

122 Kaplan, Humans Need Not Apply – A Guide to Wealth and Work in the Age of Artificial Intelligence, supra note 96, at p. 5.

123 Ibid.

124 Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control, supra note 111, at p. xiii: "Robots are distinguished from software agents in that they are embodied and situated in the real world." (Emphasis in original.)

125 See, for a contrasting view, Pratt, Gill A., "Is a Cambrian Explosion Coming for Robotics?", 29 Journal of Economic Perspectives (3) (Summer 2015), pp. 51 et sqq.; see also Nourbakhsh, Illah Reza, "The Coming Robot Dystopia", Foreign Affairs (July/August) (2015), pp. 23 et sqq., at p. 24: "But the pace of change in robotics is far outstripping the ability of regulators and lawmakers to keep up, especially as large corporations pour massive investments into secretive robotics projects that are nearly invisible to government regulators."

126 Moreover, in the translation from code (artificial intelligence) to the real world (robotics), unexpected behaviour and patterns occasionally emerge. For a wonderful example, see Rubenstein, Michael, Cornejo, Alejandro and Nagpal, Radhika, "Programmable self-assembly in a thousand-robot swarm", 345 Science (6198) (2014), pp. 795 et sqq., at p. 798, who noted "interesting emergent behaviors" in a robot swarm, which "[i]dealized mathematical models of robot swarms d[id] not predict" (p. 799). This, in turn, is interesting for predictability, which is a crucial element of legal responsibility. See, e.g., Wagner, Markus, "The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems", 47 Vanderbilt Journal of Transnational Law (2014), pp. 1371 et sqq., at p. 1403, who refers to "a fundamental aspect of fully autonomous systems – namely that a system's course of action is not necessarily completely predictable for the operator". The producers mentioned in the following video would do well to take into account the emergent behaviour of robot swarms: Spiegel Online, "Automatische Schwarmdrohne: US-Navy testet neues Waffensystem", 15 April 2015, available on the internet at: <http://www.spiegel.de/video/us-navy-testet-automatische-schwarmdrohne-video-video-1570079.html>. In a similar vein, unexpected behaviour occurs in the transition from a simulation of a robot to a real physical robot: "But even a successful simulated robot will only provide you limited insight into how it's going to do when you actually build it: as we've seen, even sophisticated simulations don't necessarily reveal how robots will perform in the real world. This fundamental disconnect between simulation and reality becomes especially problematic when you're dealing with an area of robotics where it's impractical to build physical versions of everything." (Ackerman, "Mother Robots Build Children Robots to Experiment with Artificial Evolution", supra note 46.)

127 Similarly, cyber-warfare and traditional warfare must be distinguished. The former is waged by means of software; the latter additionally requires real-world weaponry. On the consequences of this distinction in international law, see Walter, Christian, "Cyber Security als Herausforderung für das Völkerrecht", 70 Juristen Zeitung (14) (2015), pp. 685 et sqq.

128 Progress in artificial intelligence, moreover, tends to be sectoral. The success of an artificial intelligence in playing Go is not easily transposed to other games, let alone completely different tasks. To put it starkly, an autonomously driving car is a long way off from cooking a meal in a kitchen.

129 See, for instance, for policy in Switzerland: Mathias Reynard, Automatisierung. Risiken und Chancen, Postulat 15.3854 addressed to the National Council, 16 September 2015.

130 Richards and Smart, “How should the law think about robots?”, supra note 10, at p. 5, note that “[f]ew people have seen an actual robot, so they must draw conclusions from the depictions of robots that they have seen.” [footnotes omitted].