
Part VIII - Responsible AI for Security Applications and in Armed Conflict

Published online by Cambridge University Press:  28 October 2022

Silja Voeneky, Albert-Ludwigs-Universität Freiburg, Germany
Philipp Kellmeyer, Medical Center, Albert-Ludwigs-Universität Freiburg, Germany
Oliver Mueller, Albert-Ludwigs-Universität Freiburg, Germany
Wolfram Burgard, Technische Universität Nürnberg


Type: Chapter
In: The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 445–446
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

26 Artificial Intelligence, Law, and National Security

Ebrahim Afsah
I. Introduction: Knowledge Is Power

The conjecture ‘that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’Footnote 1 has motivated scientists for more than half a century, but only recently attracted serious attention from political decision-makers and the general public. This relative lack of attention is perhaps due to the long gestation of the technology necessary for that initial conjecture to become a practical reality. For decades the endeavour was merely an aspiration pursued by a small, highly skilled circle engaged in basic research; in the past few years, however, it has emerged as a dynamic, economically and intellectually vibrant field.

From the beginning, national security needs drove the development of Artificial Intelligence (AI). These security needs were motivated in part by surveillance needs, especially code-breaking, and in part by weapons development, in particular nuclear test simulation. While the utilisation of some machine intelligence has been part of national security for decades, the recent explosive growth in machine capability is likely to transform national and international security, consequently raising important regulatory questions.

Fueled by the confluence of at least five factors – the increase in computational capacity; the availability of data and big data; the revolution in algorithm and software development; the explosion in our knowledge of the human brain; and the existence of an affluent and risk-tolerant technology industry – the initial conjecture is no longer aspirational but has become a reality.Footnote 2 The resulting capabilities cannot be ignored by states in a competitive, anarchic international system.Footnote 3 As AI becomes a practical reality, it affects national defensive and offensive capabilities,Footnote 4 as well as general technological and economic competitiveness.Footnote 5

There is a tendency to describe intelligence in an anthropomorphic fashion that conflates it with emotion, will, conscience, and other human qualities. While this makes for good television, especially in the field of national security,Footnote 6 it is a poor analytical or regulatory guide.Footnote 7 For these purposes, a less anthropocentric definition is preferable, such as the one suggested by Nils Nilsson:

For me, artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment. According to that definition, lots of things – humans, animals, and some machines – are intelligent. Machines, such as ‘smart cameras,’ and many animals are at the primitive end of the extended continuum along which entities with various degrees of intelligence are arrayed. At the other end are humans, who are able to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories. Because ‘functioning appropriately and with foresight’ requires so many different capabilities, depending on the environment, we actually have several continua of intelligences with no particularly sharp discontinuities in any of them. For these reasons, I take a rather generous view of what constitutes AI.Footnote 8

The influential Stanford 100 Year Study on Artificial Intelligence explicitly endorses this broad approach, stressing that human intelligence has been merely the inspiration for an endeavour that is unlikely actually to replicate the brain. Intelligence – whether human, animal, or machineFootnote 9 – appears to differ not in clearly delineated kind, but ultimately in degree: of speed, capability, and adaptability:

Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by – but typically operate quite differently from – the ways people use their nervous systems and bodies to sense, learn, reason, and take action. … According to this view, the difference between an arithmetic calculator and a human brain is not one of kind, but of scale, speed, degree of autonomy, and generality. The same factors can be used to evaluate every other instance of intelligence – speech recognition software, animal brains, cruise-control systems in cars, Go-playing programs, thermostats – and to place them at some appropriate location in the spectrum.Footnote 10

At its most basic, AI means making sense of data, and can thus be differentiated from cyberspace, which primarily concerns the transmission of data. Collecting data is fairly inconsequential without someone to analyse and make sense of it.Footnote 11 If the purpose of a thought or action can be expressed numerically, it can be turned into coded instructions and thereby cause a machine to achieve that purpose. In order to understand the relationship better, it is helpful to differentiate between data, information, knowledge, and intelligence.

Data is raw, unorganised, factual, sensory observation, collected in either analog or digital form, with single data points unrelated to each other. Already in this raw form, data can be used by simple machines to achieve a purpose, for instance temperature or water pressure readings by a thermostat switching a heater on or off, or a torpedo’s depth sensor guiding its steering system. Observed and recorded facts can take many forms, such as statistics, satellite surveillance photographs, or dialed phone numbers. Such data, whether qualitative or quantitative, stands on its own and is not related to external signifiers. In this form, it is not very informative and fairly meaningless. And whereas analog storage is logistically constrained, the recording of observational data in electronic, machine-readable form faces no comparable physical limits.

Information, by contrast, depends on an external mental model through which data acquires meaning, context, and significance. Data becomes information through analysis and categorisation; it acquires significance only through the imposition of order and structure. Information is, therefore, data that has been processed, organised according to meaningful criteria, given context, and thereby made useful towards achieving outcomes according to predetermined needs. This process is dependent on the existence of conceptual models created in response to these needs.Footnote 12 Significance, meaning, and usefulness are, therefore, qualities not inherent in the data, but external impositions used to sift, categorise, and ‘clean’ data of extraneous ‘noise’. Data that has been transformed into information has ‘useless’ elements removed and is given context and significance according to an external yardstick of ‘usefulness’. To follow the earlier example, linking temperature readings in different rooms at different times with occupancy readings and fluctuating electricity prices would allow a ‘smart’ thermostat to make ‘intelligent’ heating choices.
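The thermostat example can be sketched in a few lines of code. This is purely illustrative – the function name, thresholds, and inputs are invented for the example – but it shows how the same raw datum, a temperature reading, yields different actions once an external model imposes context such as occupancy and electricity prices.

```python
def should_heat(temp_c, occupied, price_per_kwh,
                target_c=20.0, max_price=0.40):
    """Decide whether to switch the heater on.

    A plain thermostat uses only the raw datum `temp_c`.
    A 'smart' one imposes context: no heating in empty rooms,
    and none when electricity is unusually expensive.
    """
    if temp_c >= target_c:          # the raw data point alone suffices here
        return False
    if not occupied:                # context: nobody benefits from heating
        return False
    if price_per_kwh > max_price:   # context: heating is too costly right now
        return False
    return True

# Identical readings, different decisions once context is imposed:
print(should_heat(17.0, occupied=True,  price_per_kwh=0.25))  # True
print(should_heat(17.0, occupied=False, price_per_kwh=0.25))  # False
```

The first call heats because the room is cold, occupied, and electricity is cheap; the second refuses, despite an identical temperature reading, because the contextual model deems heating an empty room useless.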

Knowledge means making sense of information: being aware of the limitations of the underlying data and of the theoretical models used to classify it, and being able to place that information into a wider context of meaning, purpose, and dynamic interaction, drawing on experience, prediction, and the malleability of both purpose and model. Knowledge refers to the ability to understand a phenomenon, theoretically or practically, and to use such understanding for a deliberate purpose. It can be defined as ‘justified true belief’.Footnote 13 This process complements available information with inferences from past experience and intuition, and responds to feedback, whether sensory, cognitive, or evaluative.

Intelligence refers to the ability to ‘function appropriately and with foresight’; AI thus presumes that the act of thinking which turns (sensory) data into information, then into knowledge, and finally into purposeful action is not unique to humans or animals. It posits that the underlying computational process is formally describable and can be scientifically studied and replicated in a digital computer. Once this is achieved, all the inherent advantages of the computer come to bear: speed, objectivity (absence of bias, emotion, and preconceptions), scalability, and permanent operation. In the national security field, some have compared this promise to the mythical figure of the Centaur, which combined the intelligence of man with the speed and strength of the horse.Footnote 14

The development of the Internet concerned the distribution of data and information between human and machine users.Footnote 15 AI, by contrast, does not primarily refer to the transmission of raw or processed data, the exchange of ideas, or the remote control of machinery (the Internet of Things, military command and control, etc.), but to the ability to detect patterns in data, process data into information, and classify that information in order to predict outcomes and make decisions. Darrell M. West and John R. Allen suggest three differentiating characteristics of such systems: intentionality, intelligence, and adaptability.Footnote 16

The Internet has already transformed our lives, but the enormous changes portended by AI are only beginning to dawn on us. The difficulty of predicting that change, however, should not serve as an excuse for what James Baker deemed ‘a dangerous nonchalance’ on the part of decision-makers tasked with managing this transformation.Footnote 17 Responsible management of national security requires an adequate and realistic assessment of the threats and opportunities presented by new technological developments, especially their effect on the relative balance of power and on global public goods, such as the mitigation of catastrophic risks, arms races, and societal dislocations. In modern administrative states, such management is inevitably done through law, both nationally and internationally.Footnote 18

In this chapter, I will begin by contrasting the challenge posed by AI to the related but distinct emergence of the cyber domain. I then outline six distinct implications for national security: doomsday scenarios, autonomous weapons, existing military capabilities, reconnaissance, economics, and foreign relations. Legal scholarship often proposes new regulation when faced with novel societal or technological challenges. But it appears unlikely that national actors will forego the potential advantages offered by a highly dynamic field through self-restraint by international convention. Still, even if outright bans and arms control-like arrangements are unlikely, the law serves three important functions when dealing with novel challenges: first, as the repository of essential values guiding action; second, offering essential procedural guidance; and third, by establishing authority, institutional mandates, and necessary boundaries for oversight and accountability.

II. Cyberspace and AI

The purpose of this section is not to survey the large literature applying the principles of general international law, and especially the law of armed conflict, to cyber operations. Rather, it seeks to highlight the distinctive elements of the global communication infrastructure, especially how AI differs from some of the regulatory and operationalFootnote 19 challenges that characterise cybersecurity.Footnote 20 The mental image conjured by early utopian thinkers and later adopted by realist and military policy-makers rests on the geographical metaphor of ‘cyberspace’ as a non-corporeal place of opportunity and risk.Footnote 21 Such a place needs to be defended and thus constitutes an appropriate area of military operations.

As technical barriers fell, the complexity of the network receded behind increasingly sophisticated yet simple-to-operate graphical user interfaces, making networked information-sharing first a mainstream and eventually a ubiquitous phenomenon, affecting almost all aspects of human life almost everywhere. The result has been an exponential increase in the availability of information, much of it of a sensitive nature and often voluntarily relinquished. This has created a three-pronged challenge: data protection, information management, and network security.Footnote 22

Much early civilian, especially academic, thinking focused on the dynamic relationship between technology and culture, stressing the emergence of a new, virtual habitat: ‘A new universe, a parallel universe created and sustained by the world’s computers and communication lines.’Footnote 23 But as the novelty wore off while its importance grew, the Internet became ‘re-territorialised’ as nation-states asserted their jurisdiction, including in the hybrid, multi-stakeholder regulatory fora that had developed initially under American governmental patronage.Footnote 24 Perhaps more importantly, this non-corporeal realm created by connected computers came to be seen not as a parallel universe following its own logic and laws, but as an extension of existing jurisdictions and organisational mandates:

Although it is a man-made domain, cyberspace is now as relevant a domain for DoD [Department of Defence] activities as the naturally occurring domains of land, sea, air, and space. Though the networks and systems that make up cyberspace are man-made, often privately owned, and primarily civilian in use, treating cyberspace as a domain is a critical organizing concept for DoD’s national security missions. This allows DoD to organize, train, and equip for cyberspace as we do in air, land, maritime, and space to support national security interests.Footnote 25

This is reflected in the United States (US) National Security Strategy, which observes: ‘Cybersecurity threats represent one of the most serious national security, public safety, and economic challenges we face as a nation.’Footnote 26 Other countries treat the issue with similar seriousness.Footnote 27

Common to the manner in which diverse nations envisage cybersecurity is the emphasis on information infrastructure, in other words, on the need to keep communication channels operational and protected from unwanted intrusion. This, however, is distinct from the specific challenge of AI, which concerns the creation of actionable knowledge by a machine.

The initial ideas that led to the creation of the Internet sought to solve two distinct problems: the civilian desire to use expensive time-share computing capacity at academic facilities more efficiently by distributing tasks, and the military need to establish secure command and control connections between installations, especially to remote nuclear weapons facilities.Footnote 28 In both cases, it was discovered that existing circuit-switched telephone connections were unreliable. The conceptual breakthrough was the idea of packet-switched communication, which permitted existing physical networks to be joined into a non-hierarchical, decentralised architecture that is resilient, scalable, and open.Footnote 29

The Internet is, therefore, not one network, but a set of protocols specifying data formats and rules of transmission, permitting local, physical networks to communicate along dynamically assigned pathways.Footnote 30 The technology, and the opportunities and vulnerabilities it offered, came to be condensed in the spatial analogy of cyberspace. This ‘foundational metaphor’ was politically consequential because the terminology implied, rather than stated outright, particular understandings of complex issues at the expense of others, thus shaping policy debates and outcomes.Footnote 31 Later dismissed by its own coiner as merely an ‘effective buzzword’ chosen because ‘it seemed evocative and essentially meaningless’, William Gibson’s definition highlights the problematic yet appealing character of this spatial analogy: ‘Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation … A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.’Footnote 32 The term captured the non-physical nature of a world dynamically created by its denizens in their collective imagination, while relying, behind the graphical user interface, on a complex physical infrastructure.Footnote 33 The advantages of open communication have eventually led military and civilian installations in all nations to become accessible through the Internet, creating unique vulnerabilities arising from the costs of communication disruption, physical damage to installations, and interruptions of critical public goods like water or electricity.Footnote 34 What the American military defines as its key challenge in this area applies likewise to most other nations:
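The conceptual breakthrough described above – packets rather than circuits – can be illustrated with a toy sketch. This is not any real protocol; the function names and packet format are invented for illustration. A message is split into independently numbered packets that may traverse the network in any order, and possibly along different paths, yet can still be reassembled at the destination.

```python
import random

def to_packets(message, size=4):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size])
            for i in range(0, len(message), size)]

def reassemble(packets):
    """Restore the message from packets arriving in any order."""
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets("ATTACK AT DAWN")
random.shuffle(packets)         # packets may arrive out of order
print(reassemble(packets))      # prints "ATTACK AT DAWN"
```

Because each packet carries its own sequence number, no central circuit has to stay open end-to-end: any surviving path through the network suffices, which is what makes the architecture resilient and decentralised.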

US and international businesses trade goods and services in cyberspace, moving assets across the globe in seconds. In addition to facilitating trade in other sectors, cyberspace is itself a key sector of the global economy. Cyberspace has become an incubator for new forms of entrepreneurship, advances in technology, the spread of free speech, and new social networks that drive our economy and reflect our principles. The security and effective operation of US critical infrastructure – including energy, banking and finance, transportation, communication, and the Defense Industrial Base – rely on cyberspace, industrial control systems, and information technology that may be vulnerable to disruption or exploitation.Footnote 35

Some have questioned the definitional appropriation of ‘cyberspace’ as a ‘domain’ for military action through ‘linguistic and ideational factors [which] are largely overlooked by the prevailing approach to cybersecurity in IR [international relations], which has productively emphasized technical and strategic aspects’ at the expense of alternative ways of thinking about security in this field.Footnote 36 Without prejudice to the theoretical contributions such investigations could make to political science and international relations,Footnote 37 the legal regulation of defensive and offensive networked operations has, perhaps after a period of initial confusion,Footnote 38 found traditional concepts to be quite adequate, perhaps because the spatial analogy facilitates the application of existing legal concepts.

The central challenges posed by the increasing and unavoidable dependence on open-architecture communication are both civilian and military. They concern primarily three distinct but related operational tasks: preventing interruptions to the flow of information, especially financial transactions; preventing disruptions to critical command and control of civilian and military infrastructure, especially energy, water, and nuclear installations; and preventing unauthorised access to trade and military secrets.Footnote 39 These vulnerabilities are, of course, matched by corresponding opportunities: obtaining strategic information, striking at long distance while maintaining ‘plausible deniability’,Footnote 40 and establishing credible deterrence.Footnote 41 Again, how the American military describes its own mandate applies in equal measure to other nations, not least its chief competitors Russia and China:

American prosperity, liberty, and security depend upon open and reliable access to information. The Internet empowers us and enriches our lives by providing ever-greater access to new knowledge, businesses, and services. Computers and network technologies underpin US military warfighting superiority by enabling the Joint Force to gain the information advantage, strike at long distance, and exercise global command and control.

The arrival of the digital age has also created challenges for the Department of Defense (DoD) and the Nation. The open, transnational, and decentralized nature of the Internet that we seek to protect creates significant vulnerabilities. Competitors deterred from engaging the US and our allies in an armed conflict are using cyberspace operations to steal our technology, disrupt our government and commerce, challenge our democratic processes, and threaten our critical infrastructure.Footnote 42

Crucially important as these vulnerabilities and opportunities are for national security, defensive and offensive operations occurring on transnational communication networks raise important regulatory questions,Footnote 43 including the applicability of the law of armed conflict to so-called cyber-operations.Footnote 44 Yoram Dinstein dismisses the need for a revolution in the law of armed conflict necessitated by the advent of cyber warfare: ‘this is by no means the first time in the history of LOAC that the introduction of a new weapon has created the misleading impression that great legal transmutations are afoot. Let me remind you of what happened upon the introduction of another new weapon, viz., the submarine.’Footnote 45 Dinstein recounts how the introduction of the submarine in World War I led to frantic calls for international legal regulation. But instead of comprehensive new conventional law, states eventually found sufficient a mere restatement that existing rules must also be observed by submarines. He concludes that were an international convention on cyber warfare to be concluded today, ‘it would similarly stipulate in an anodyne fashion that the general rules of LOAC must be conformed with.’Footnote 46 Gary Solis likewise opens the requisite chapter in his magisterial textbook by stating categorically: ‘This discussion is out of date. Cyber warfare policy and strategies evolve so rapidly that it is difficult to stay current.’ But what is changing are technologies, policies, and strategies, not the law: ‘Actually, cyber warfare issues may be resolved in terms of traditional law of war concepts, although there is scant demonstration of its application because, so far, instances of actual cyber warfare have been unusual. Although cyber questions are many, the law of war offers as many answers.’Footnote 47 Concrete answers will depend on facts that are difficult to ascertain, owing to the inherent technical obstacles to forensic analysis in an extremely complex, deliberately heterogeneous network composed of a multitude of actors, private and public, benign and malign. Legal assessments likewise turn on definitional disputes and normative interpretations that reflect shifting, often short-term, policies and strategies. Given vastly divergent national interests and capabilities, no uniform international understanding, let alone treaty regulation, has emerged.Footnote 48

In sum, while AI relies heavily on the same technical infrastructure of an open, global information network, its utilisation in the national security field poses distinct operational and legal challenges not fully encompassed by the law of ‘cyber warfare’.Footnote 49 That area of law presents the lawyer primarily with the challenge of applying traditional legal concepts to novel technical situations, especially the evidentiary challenges of defining and determining an armed attack, establishing attribution, the scope of the right to self-defence and proportionality, as well as thorny questions of the treatment of non-state or quasi-state actors, the classification of conflicts, and not least the threshold of the ‘use of force’.Footnote 50 AI sharpens many of the same regulatory conundra, while creating novel operational risks and opportunities.Footnote 51

III. Catastrophic Risk: Doomsday Machines

In the latest instalment of the popular Star Wars franchise, there is a key scene in which the capabilities of truly terrible robotic fighting machines are presented. The franchise’s new hero, the eponymous Mandalorian, manages to defeat just one of these robots, and only with considerable difficulty, while an entire battalion waits in the wings. The designers of the series have been praised for giving audiences ‘finally an interesting stormtrooper’, that is, a machine capable of instilling fear and respect in the viewer.Footnote 52

Whatever the cinematic value of these stormtroopers, in a remarkable coincidence a real robotics company simultaneously released a promotional video of actual robots that made these supposedly frightening machines, set in a far distant future, look like crude, unsophisticated toys. The dance video released by Boston Dynamics in early 2021, showing several of its tactical robots jumping, dancing, and pirouetting elegantly to music, put everything Hollywood had come up with to shame: these were no prototypes, but robots already deployed to police departmentsFootnote 53 and the military,Footnote 54 doing things that one could previously only have imagined in computer-generated imagery.Footnote 55 Impressive and fearsome as these images are, the robots exhibit motional ‘intelligence’ only in the sense that they can make sense of their surroundings and act purposefully within them; they are not yet able to replicate, let alone compete with, human action.

The impressive, even elegant capabilities of these robots demonstrate that AI has made dramatic strides in recent years, bringing ominous fears to mind. In an early paper written in 1965, Irving John ‘Jack’ Good – one of the British Bletchley Park cryptographers, a pioneering computer scientist, and a friend of Alan Turing – warned that an ‘ultra-intelligent machine’ would be built in the near future and could prove to be mankind’s ‘last invention’, because it would lead to an ‘intelligence explosion’, that is, an exponential increase in self-generating machine intelligence.Footnote 56 While highly agile tactical robots conjure tropes of dangerous machines enslaving humanity, the potential risk posed by the emergence of super-intelligence is unlikely to take either humanoid form or motive but constitutes both incredible opportunity and existential risk, as Good pointed out half a century ago:

The survival of man depends on the early construction of an ultra-intelligent machine. … Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus, the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.Footnote 57

Good would have been pleased to learn that both the promise and premonition of AI are no longer the preserve of science fiction, but taken seriously at the highest level of political decision-making. In a well-reported speech, President Vladimir Putin of Russia declared in 2017 that leadership in AI: ‘is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.’Footnote 58 Very similar statements guide official policy in all great powers, raising the spectre of what has been termed an ‘arms race’ in AI,Footnote 59 as a result of which ‘super-intelligent’ machines (i.e. those with capabilities higher than humans across the board), might endanger mankind.Footnote 60

It is interesting to note that the tone of the debate has changed significantly. Writing in a popular scientific magazine in 2013, Seth Baum asked rhetorically whether his readers should even take the topic seriously: ‘After all, it is essentially never in the news, and most AI researchers don’t even worry. (AGI today is a small branch of the broader AI field.) It’s easy to imagine this to be a fringe issue only taken seriously by a few gullible eccentrics.’Footnote 61 Today, these statements are no longer true. As Artificial General Intelligence, and thus the prospect of super-intelligence, is becoming a prominent research field, worrying about its eventual security implications is no longer the preserve of ‘a few gullible eccentrics’. Baum correctly predicted that the relative lack of public and elite attention did not mean that the issue was unimportant.

Comparing it to climate change, which likewise took several decades to evolve from a specialist concern into an all-consuming danger, he predicted that, given the exponential development of the technology, the issue would soon become headline news. The same point was made at roughly the same time by Huw Price, co-founder of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. Summing up the challenge accurately, Price acknowledged that some of these concerns might seem far-fetched, the stuff of science fiction, which is exactly part of the problem:

The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history. We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones. To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.Footnote 62

There are two basic options: either to design safe AI with appropriate standards of transparency and ethical grounding as inherent design features, or not to design dangerous AI.Footnote 63 Given the attendant opportunities and the competitive international and commercial landscape, this latter option remains unattainable. Consequently, there has been much scientific thinking on devising ethical standards to guide responsible further technological development.Footnote 64 International legal regulation, in contrast, has so far proven elusive, and national efforts remain embryonic.Footnote 65

Some serious thinkers and entrepreneurs argue that the development of super-intelligence must be abandoned due to its inherent, incalculable, and existential risks.Footnote 66 Prudence would indicate that even a remote risk of a catastrophic outcome should keep all of us vigilant. Whatever the merits of these assessments, it appears unlikely that an international ban on such research will materialise. Moreover, as Ryan Calo and others have pointed out, there is a real opportunity cost in focusing too much on such remote but highly imaginative risks.Footnote 67

While the risks of artificial super-intelligence, defined as machine intelligence that surpasses the brightest human minds, are still remote, they are real and may quickly come to threaten human existence by design or indifference. Likewise, general AI or human-level machine intelligence remains largely aspirational, referring to machines that can emulate human beings at a range of tasks, switching fluidly between them, training themselves on data and their own past performance, and re-writing their operating code. In contrast, concrete policy and regulatory challenges need to be addressed now as a result of the exponential development of the less fearsome but very real narrow AI, defined as machines that are as good as or better than humans at particular tasks, such as interpreting X-ray or satellite images.

These more mundane systems are already operational and rapidly increasing in importance, especially in the military field. Here, perhaps even more than in purely civilian domains, Pedro Domingos’ often-quoted adage seems fitting: ‘People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.’Footnote 68 Without belittling the risk of artificial general or super-intelligence, Calo is thus correct to stress that focusing too much attention on this remote risk will divert necessary attention from pressing societal needs and thereby risk ‘an AI Policy Winter’ in which necessary regulation limps behind rapid technical development.Footnote 69

IV. Autonomous Weapons Systems

Automated weapons have been in use for a long time; how long depends largely on the degree of automation informing one’s definition. A broad definition of a robot, under which we can subsume autonomous weapons systems, is a physical system that senses, processes, and acts upon the world. We can thus differentiate between ‘disembodied AI’, which collects, processes, and outputs data and information but whose effect in the physical world is mediated; and robotics, which leverage AI to act physically upon the world itself.Footnote 70

In order to ascertain the likely impact of AI on autonomous weapons systems, it is helpful to conceive of them, and of the regulatory challenges they pose, as a spectrum of capabilities rather than sharply differentiated categories, with booby traps and mines on one end; improvised explosive devices (IEDs), torpedoes, and self-guided rockets somewhere in the middle; drones and loitering munitions further towards the other end; and automated air defence and strategic nuclear control systems at or beyond the far end. Two qualitative elements appear crucial: the degree of processing undertaken by the system,Footnote 71 and the amount of human involvement before the system acts.Footnote 72

It follows that the definition of ‘autonomous’ is not clear-cut, nor is it likely to become so. Analytically, one can distinguish four distinct levels of autonomy: human operated, human delegated, human supervised, and fully autonomous.Footnote 73 These classifications, however, erroneously ‘imply that there are discrete levels of intelligence and autonomous systems’,Footnote 74 downplaying the importance of human–machine collaboration.Footnote 75 Many militaries, most prominently that of the US, insist that a human operator must remain involved, including ‘fail safe’ security precautions:

Semi-autonomous weapons systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator. It is DoD policy that … autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.Footnote 76

In contrast to the assumptions underlying the discussion in the previous section, even fully autonomous systems currently always involve a human being who ‘makes, approves, or overrides a fire/don’t fire decision’.Footnote 77 Furthermore, such systems have been designed by humans, who have programmed them within specified parameters, which include the need to observe the existing law of armed conflict.Footnote 78 These systems are deployed into battle by human operators and their commanders,Footnote 79 who thus carry command responsibility,Footnote 80 including the possible application of strict liability standards known from civil law.Footnote 81

Given the apparent military benefits of increased automation and an extremely dynamic, easily transferable civilian field, outright bans of autonomous weapon systems, robotics, and unmanned vehicles appear ‘insupportable as a matter of law, policy, and operational good sense’.Footnote 82 To be sure, some claim that the principles of distinction, proportionality, military necessity, and the avoidance of unnecessary suffering, which form the basis of the law of armed conflict,Footnote 83 in conjunction with general human rights law,Footnote 84 somehow impose a ‘duty upon individuals and states in peacetime, as well as combatants, military organizations, and states in armed conflict situations, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy in each and every case.’Footnote 85 Without restating the copious literature on this topic, it is respectfully suggested that such a duty of human determination cannot be found in existing international law, and only occasionally in national law.Footnote 86 Solis’ textbook begins its discussion of the war crime liability of autonomous weapons by stating the obvious: ‘Any lawful weapon can be employed unlawfully.’ He proceeds to devise a number of hypothetical scenarios in which autonomous weapons could indeed be used or deliberately designed unlawfully, to conclude:

The likelihood of an autonomous weapon system being unlawful in and of itself is very remote; it would not meet Article 36 testing requirements and thus would not be put into use. And the foregoing four scenarios involving possible unlawful acts by operators or manufacturers are so unlikely, so phantasmagorical, that they are easily lampooned. … While acts such as described in the four scenarios are unlikely, they are possible.Footnote 87

As stated, Article 36 of the 1977 Additional Protocol I to the Geneva Conventions imposes on the contracting parties the obligation to determine prior to the deployment of any new weapon that it conforms with the existing law of armed conflict and ‘any other rule of international law applicable’. For states developing new weapons, this obligation entails a continuous review process from conception and design, through its technological development and prototyping, to production and deployment.Footnote 88

Given the complexity and rapid continuous development of autonomous weapons systems, especially those relying on increasingly sophisticated AI, such a legally mandatory review will have to be continuous and rigorous, and will have to overcome inherent technical difficulties arising from the large number of sub-systems sourced from many different providers. Such complexity notwithstanding, autonomous weapons, including those relying on AI, are not unlawful in and of themselves.

In principle, the underlying ethical conundra and the proportional balancing of competing values that need to inform responsible robotics generallyFootnote 89 also need to inform the conception, design, deployment, and use of autonomous weapons systems, whether or not powered by AI: ‘I reject the idea that IHL [international humanitarian law] is inadequate to regulate autonomous weapons. … However far we go into the future and no matter how artificial intelligence will work, there will always be a human being at the starting point … This human being is bound by the law.’Footnote 90 The most likely use scenarios encompass so-called narrow AI, where machines have already surpassed human capabilities. The superior ability to detect patterns in vast amounts of unstructured (sensory) data has for many years proven indispensable for certain advanced automated weapons systems. Anti-missile defence systems, like the American maritime Aegis and land-based Patriot, the Russian S300 and S400, or the Israeli ‘Iron Dome’, all rely on the collection and processing of large amounts of radar and similar sensor data, and on the ability to respond independently and automatically. This has created unique vulnerabilities: their susceptibility to cyber-attacks ‘blinding’ them,Footnote 91 the dramatic shortening of warning and reaction time even where human operators remain ‘in the loop’,Footnote 92 and the possibility of rendering these expensive, highly sophisticated systems economically unviable by targeting them with unconventional countermeasures, such as very cheap, fairly simple commercial drones.Footnote 93

V. Existing Military Capabilities

Irrespective of the legal and ethical questions raised, AI is having a transformative effect on the operational and economic viability of many sophisticated weapons systems. The existing military technologies perhaps most immediately affected by the rise of AI are unmanned vehicles of various kinds, so-called drones and ‘loitering munitions’.Footnote 94 Currently relying on remote guidance by human operators or relatively ‘dumb’ automation, their importance and power are likely to increase enormously if combined with AI. Simultaneously, certain important legacy systems, for instance large surface ships such as aircraft carriers, can become vulnerable and perhaps obsolete due to neurally linked and (narrowly) artificially intelligent ‘swarms’ of very small robots.Footnote 95

The ready availability of capable and affordable remotely operated vehicles, together with commercial satellite imagery and similar information sources, has put long-range power-projection capabilities in the hands of a far larger group of state and non-state actors. This equalisation of relative power is further accelerated by new technology rendering existing weapon systems vulnerable or ineffective. Important examples include distributed, swarm-like attacks on ships or the penetration of expensive air defence systems by cheap, easily replaceable commercial drones.Footnote 96

The recent war over Nagorno-Karabakh exposed some of these general vulnerabilities, not least the inability of both Armenia’s and Azerbaijan’s short-range air defence (SHORAD) arsenals, admittedly limited in size and quality, to protect effectively against sophisticated drones. While major powers like the US, China, and Russia are developing and deploying their own drone countermeasures,Footnote 97 certain existing systems, for instance aircraft carriers, have become vulnerable. This portends potential realignments in relative power where large numbers of low-cost expendable machines can be used to overwhelm an otherwise superior adversary.Footnote 98

There has been much academic speculation about the perceived novelty of drone technology and the suggested need to update existing legal regulations.Footnote 99 It needs to be stated from the outset that remotely piloted land, air, or sea craft have been used since the 1920s,Footnote 100 and thus cannot be considered either new or unanticipated by the existing law of armed conflict.Footnote 101 Likewise, it is difficult to draw a sharp technical distinction between certain drones and some self-guided missiles, which belong to a well-established area of military operations and regulation.Footnote 102

The novelty lies less in the legal or ethical assessment than in the operational challenge posed by the dispersal of a previously highly exclusive military capability. The US has twice before responded to such a loss of its superior competitive edge by embarking on an ‘offset’ strategy meant to avoid having to match capabilities, instead seeking to regain superiority through an asymmetric technological advantage.Footnote 103

The ‘First Offset’ strategy successfully sought to counter Soviet conventional superiority through the development and deployment of, especially tactical, nuclear weapons.Footnote 104 The ‘Second Offset’ strategy was begun towards the end of the Vietnam War and reached its successful conclusion during the Iraq War of 1991. It was meant to counter the quantitative equalisation of conventional assets, especially airpower, not by increasing the number of assets but by improving their quality. Mustering American socio-economic advantages in technological sophistication, the strategy hinged on the development of previously unimaginable strike precision. As with any other military technology, it was anticipated that the opponent would eventually catch up, at some point neutralising this advantage. Given the economic near-collapse of the Soviet Union and its successor Russia, the slow rise of China, and the relative absence of other serious competitors, the technological superiority the US had achieved in precision strike capability surprisingly endured far longer than anticipated:

Perhaps the most striking feature of the evolution of non-nuclear (or conventional) precision strike since the Cold War ended in 1991 has been what has not happened. In the early 1990s, there was growing anticipation that for major powers such as the United States and Russia, ‘long-range precision strike’ would become ‘the dominant operational approach.’ The rate at which this transformation might occur was anyone’s guess but many American observers presumed that this emerging form of warfare would proliferate rather quickly. Not widely foreseen in the mid-1990s was that nearly two decades later long-range precision strike would still be a virtual monopoly of the US military.Footnote 105

Written in 2013, this assessment is no longer accurate. Today, a number of states have caught up and dramatically improved both the precision and range of their power projection. The gradual loss of its relative monopoly with respect to precision strike capability, remote sensing, and stealth, at a time when exclusive assets like aircraft carrier groups are becoming vulnerable, ineffective, or fiscally unsustainable,Footnote 106 led the US to declare its intention to respond with a ‘Third Offset’ strategy. It announced in 2014 that it would counter potential adversaries asymmetrically, rather than system by system:

Trying to counter emerging threats symmetrically with active defenses or competing ‘fighter for fighter’ is both impractical and unaffordable over the long run. A third offset strategy, however, could offset adversarial investments in A2/AD [anti-access/area denial] capabilities in general – and ever-expanding missile inventories in particular – by leveraging US core competencies in unmanned systems and automation, extended-range and low-observable air operations, undersea warfare, and complex system engineering and integration. A GSS [global surveillance and strike] network could take advantage of the interrelationships among these areas of enduring advantage to provide a balanced, resilient, globally responsive power projection capability.Footnote 107

The underlying developments have been apparent for some time: ‘disruptive technologies and destructive weapons once solely possessed by advanced nations’ have proliferated and are now easily and cheaply available to a large number of state and non-state opponents, threatening the effectiveness of many extremely expensive weapon systems on which power-projection by advanced nations, especially the US, had relied.Footnote 108 One of these disruptive technologies has been unmanned vehicles, especially airborne ‘drones’. While these have been used for a century and have been militarily effective for half a century,Footnote 109 the explosion in surveillance and reconnaissance capability afforded by AI, and the dramatic miniaturisation and commercialisation of many of the underlying key components, have transformed the global security landscape by making these capabilities far more accessible.Footnote 110

Drones have proven their transformative battlefield impact since the 1973 Yom Kippur War and the 1982 Israeli invasion of Lebanon.Footnote 111 Whatever their many operational and strategic benefits, unmanned aircraft were initially not cheaper to operate than conventional ones: ‘higher costs for personnel needed to monitor and analyze data streams that do not exist on manned platforms, as well as the costs for hardware and software that go into the sensor packages,’Footnote 112 to say nothing of the considerable expense of training their pilots,Footnote 113 left drones and the long-range precision targeting capability they conferred out of the reach of most armies, owing to economic cost, skilled manpower shortages, and technological complexity.

The recent conflict between Azerbaijan and Armenia has decisively shown that these conditions no longer hold. Both are relatively poor nations with fairly unsophisticated armed forces, and the crucial suppliers were the medium powers of Turkey and Israel. This highlighted the dramatic availability and affordability of such technology,Footnote 114 much of it off-the-shelf and available through a number of new entrants in the market, raising important questions of export controls and procurement.Footnote 115 Drone technology and its transformational impact on the battlefield are no longer the prerogative of rich industrial nations. While AI does not appear to have played a large role in this conflict yet,Footnote 116 the decisiveness of the precision afforded by long-range loitering munitions, unmanned vehicles, and drastically better reconnaissanceFootnote 117 has not been lost on more traditional great powers.Footnote 118

This proliferation of precision long-range weaponry portends the end of the enormous advantages enjoyed by the US as a result of its ‘Second Offset’ strategy. Following the Vietnam War, the US successfully sought to counteract the perceivedFootnote 119 numerical superiority of the Soviet UnionFootnote 120 in air and missile power by investing in superior high-precision weaponry, harnessing the country’s broad technological edge.Footnote 121 These investments paid off and conferred a surprisingly long-lasting dominance. The loss of its main adversary and the inability of other adversaries to match its technological capabilities meant that the unique advantages conferred on the US – primarily the ability to essentially eliminate risk to one’s own personnel by striking remotely and to reduce political risk from ‘collateral damage’ by striking precisely – created an enduring willingness to deploy relatively unopposed in a vast number of unconventional conflict scenarios, sometimes dubbed a ‘New American Way of War’.Footnote 122

In principle, ‘combat drones and their weapons systems are lawful weapons’.Footnote 123 Moreover, inherent technical differences, especially their drastically higher loitering ability, lack of risk to personnel, and higher precision, mean that drones can actually improve observance of the law of armed conflict by making it easier to distinguish targets and reduce ‘collateral damage’,Footnote 124 which has led some to claim that not to use drones would actually be unethical.Footnote 125 Given vastly better target reconnaissance and the possibility of much more deliberate strike decisions, convincing arguments can be made that remotely operated combat vehicles are not only perfectly lawful weapons but have the potential to increase compliance with humanitarian objectives: ‘While you can make mistakes with drones, you can make bigger mistakes with big bombers, which can take out whole neighborhoods. A B-2 [manned bomber] pilot has no idea who he is hitting; a drone pilot should know exactly who he is targeting.’Footnote 126 These very characteristics – the absence of risk to military personnel and vastly better information about battlefield conditions – have also made drone warfare controversial, aspects that are heightened but not created by the addition of AI. The relative absence of operational and political risk has led to a greater willingness to use armed force as a tool of statecraft, in the process bending or breaking traditional notions of international law and territorial integrity.Footnote 127 Some have argued that remote warfare with little to no risk to the operator of the weapon is somehow unethical, incompatible with the warrior code of honour, concerns that should, if anything, apply even more forcefully to machines killing autonomously.Footnote 128 Whatever the merits of the notion of fairness underlying such arguments, such ‘romantic and unrealistic views of modern warfare’ do not reflect a legal obligation to expose oneself to risk.Footnote 129

There is a legal obligation, however, to adequately balance the pursuit of military advantage, which includes reducing the exposure of service-members to risk, against the principle of distinction meant to protect innocent civilians. Many years ago, Stanley Hoffmann denounced the perverse doctrine of ‘combatant immunity’ in the context of high-altitude bombing by manned aircraft staying above the range of air defences, despite the obvious costs in precision, and thus in civilian casualties, this entailed.Footnote 130 In some respects, the concerns Hoffmann expressed have been addressed by unmanned aircraft, which today permit unprecedented levels of precision, deliberation, and thus observance of the principle of distinction:

Drones are superior to manned aircraft, or artillery, in several ways. Drones can gather photographic intelligence from geographic areas too dangerous for manned aircraft. Drones carry no risk of friendly personnel death or capture. Drones have an operational reach greater than that of aircraft, allowing them to project force from afar in targets far in excess of manned aircraft. The accuracy of drone-fired munitions is greater than that of most manned aircraft, and that accuracy allows them to employ munitions with a kinetic energy far less than artillery or close air support require, thus reducing collateral damage.Footnote 131

At the same time, however, the complete removal of risk to one’s own personnel has reduced traditional inhibitions against engaging in violence abroad,Footnote 132 including controversial policies of ‘targeted killings’.Footnote 133 Many of the ethical and legal conundra, as well as the operational advantages, that ensued are heightened if the capability of remotely operated vehicles is married with AI, which can improve independent or pre-authorised targeting by machines.Footnote 134

VI. Reconnaissance

The previous section showed that the rapid development of AI is transforming existing military capabilities, leading to considerable adjustments in relative strength. As in the civilian field, the main driver is the removal of a key resource constraint, namely the substitution of skilled, and thus expensive and often scarce, manpower by machines unconstrained by time, availability, emotions, loyalty, or alertness. The area where these inherent advantages are having the largest national security impact is reconnaissance and intelligence collection.Footnote 135

It is not always easy to distinguish these activities clearly from the electronic espionage, sabotage, and intellectual property theft discussed above, but it is apparent that the capabilities conferred by automated analysis and interpretation of vast amounts of sensor data are raising important regulatory questions related to privacy, territorial integrity, and the interpretation of the classical ius in bello principles of distinction, proportionality, and military necessity.

The advantages of drones outlined just aboveFootnote 136 have conferred unprecedented abilities to pierce the ‘fog of war’ by giving the entire chain of command, from platoon to commander in chief, access to information of breathtaking accuracy, granularity, and timeliness.Footnote 137 Such drone-supplied information is supplemented by enormous advances in ‘signal and electronic intelligence’, that is, eavesdropping on communication networks to obtain information relevant for tactical operations and to make strategic threat assessments. But all this available information would be meaningless without someone to make sense of it. Just as in civilian surveillance,Footnote 138 the limiting factor has long been the human being needed to watch and interpret the video or data feed.Footnote 139 As this limiting factor is increasingly removed by computing power and algorithms, real-time surveillance at hitherto impractical levels becomes possible.Footnote 140

Whether the raw data is battlefield reconnaissance, satellite surveillance, signal intelligence, or similar sensor data, the functional challenge, regulatory difficulty, and corresponding strategic opportunity are the same: mere observation is relatively inconsequential – from both a regulatory and operational point of view – unless the information is recorded, classified, interpreted, and thereby made ‘useful’.Footnote 141 This reflects a basic insight made already some forty years ago by Herbert Simon:

in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.Footnote 142

In systems design, whether military or civilian, the main design problem is often seen as acquiring and presenting more information, following the traditional mental model that information scarcity is the chief constraint. As Simon and others correctly pointed out, however, these design parameters fundamentally mistake the underlying transformation brought about by technological change: the ever-decreasing cost of collecting and transmitting data, leading to the potential for ‘information overload’. In other words, the real limiting factor was attention, defined as ‘focused mental engagement on a particular item of information. Items come into our awareness, we attend to a particular item, and then we decide whether to act.’Footnote 143

The true distinguishing, competitive ability is, therefore, to design systems that filter out irrelevant or unimportant information and identify, among a vast amount of data, those patterns likely to require action. AI is able to automate this difficult, taxing, and time-consuming process by spotting patterns of activity in raw data and bringing them to the attention of humans. The key to understanding the transformation wrought by AI, especially machine learning, is the revolutionary reversal of the role of information. For most of human history, information was a scarce resource, which had to be obtained and transmitted at great material and human cost. Technological advances during the latter half of the twentieth century reversed that historic trajectory, making information suddenly over-abundant. Today, the limiting factor is no longer the availability of information as such, but our ability to make sense of its sheer amount. The ability to use computing power to sift through that sudden information abundance thus becomes a chief competitive ability, in business just as on the battlefield: ‘Data mining is correctly defined as the nontrivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data.’Footnote 144 The key to performance, whether military or economic, is to derive knowledge from data, that is, the ability to search for answers in complex and dynamic environments, to spot patterns of sensitive activity among often unrelated, seemingly innocuous information, and to bring them to the attention of human decision-makers or initiate automated responses. Drastic advances in AI, made possible by the triple collapse in the price of sensor data collection, data storage, and processing power,Footnote 145 finally seem to offer a solution to the problem of information over-abundance by substituting machine attention for increasingly scarce human mental energy.

These long-gestating technological capabilities have suddenly aligned to bring about the maturation of AI. As we saw with respect to unmanned vehicles, one of their key structural advantages consists in their ability to deliver large amounts of sensor data, just like signal intelligence. Traditionally, one of the key constraints was the highly skilled, thus rare and expensive, manpower necessary to make sense of that data: interpreting photographic intelligence, listening in on air control communications in foreign languages, and so forth.Footnote 146 Most of these tasks can already be carried out successfully by narrow AI, offering three game-changing advantages: first, the complete removal of the manpower constraint in classifying and interpreting data, detecting patterns, and predicting outcomes; second, machine intelligence is quicker than humans, does not tire, is not biased,Footnote 147 and, perhaps most importantly, can detect patterns humans would not be able to see; and third, AI allows disparate data to be fused, permitting otherwise invisible security-relevant connections to be identified.Footnote 148

VII. Foreign Relations

Perhaps more important than the ability to lift the ‘fog of war’ through better reconnaissance might be the transformation of the role of information and trust in the conduct of foreign relations. Again, this aspect of AI overlaps with but is distinct from the Internet. To highlight the enormity of the challenges posed by AI, it might be useful to recall the early years of the Internet. The first time I surfed the web was in the autumn of 1995. Email was known to exist, but it was not used by anyone I knew; my own first email was only sent two years later, in graduate school. That autumn, I had to call and book a time-slot at the central library of the University of London; the websites I managed to find were crude, took a god-awful time to load, and one had to know their addresses or look them up in a physical, printed book.Footnote 149

My conclusion after that initial experience seemed clear: this thing would not catch on. I did not use it again for several years. After all, who would want to read a newspaper on a computer, waiting forever and scrambling through terrible layout? In a now-hilarious appearance on an American late-night show that year, the Microsoft founder Bill Gates responded to the host’s thinly-disguised dismissal by giving a fairly enduring definition of that ‘internet thing’: ‘Well, it’s becoming a place where people are publishing information. … It is the big new thing.’Footnote 150 Obviously, Gates was more clairvoyant than me. Indeed, the Internet would be the new big thing, but he understood that it would take some time until normal people like me could see its value.Footnote 151

Even after search engines made the increasingly graphical web far more user-friendly, by 2000 the Internet was still not mainstream, and some journalists wondered whether it was ‘just a passing fad’.Footnote 152 Like many new cultural phenomena driven by technological innovation, those ‘in the know’ enjoyed their avant-garde status, as the editor of one of the early magazines serving this new demographic stressed: ‘Internet Underground was this celebration of this relatively lawless, boundless network of ideas we call the Internet. It assumed two things about its audience: 1) You were a fan [and] 2) you knew how to use it. Otherwise, the magazine wouldn’t have made much sense to you.’Footnote 153 The removal of physical, temporal, and pecuniary barriers to the sharing of information indeed created a ‘network of ideas’, opening new vistas for collective action, new interpretations of established civil liberties, and new conceptions of geography.Footnote 154 Early generations of technophiles ‘in the know’ conjured this non-corporeal geography as a utopia of unfettered information-sharing, non-hierarchical self-regulation, and self-realisation through knowledge. Then-prevailing conceptions of ‘cyberspace’ were characterised by scepticism of both government power and commercial interests, often espousing anarchist or libertarian attitudes towards community, seeing information as a commodity for self-realisation, not profit.Footnote 155

Early utopians stressed the opportunities created by this new, non-hierarchical ‘network of ideas’, which many perceived to be some kind of ‘samizdat on steroids’, subversive to authoritarian power and its attempts to control truth:Footnote 156 ‘The design of the original Internet was biased in favor of decentralization of power and freedom to act. As a result, we benefited from an explosion of decentralized entrepreneurial activity and expressive individual work, as well as extensive participatory activity. But the design characteristics that underwrote these gains also supported cybercrime, spam, and malice.’Footnote 157 Civilian internet pioneers extrapolated from these core characteristics of decentralisation and unsupervised individual agency a libertarian utopia in the true meaning of the word, a non-place or ‘virtual reality’ consisting of and existing entirely within a ‘network of ideas’. Here, humans could express themselves freely, assume new identities and interests. Unfettered by traditional territorial regimes, new norms and social mores would govern their activities towards personal growth and non-hierarchical self-organisation. Early mainstream descriptions of the Internet compared the novelty to foreign travel, highlighting emotional, cultural, and linguistic barriers to understanding:

The Internet is the virtual equivalent of New York and Paris. It is a wondrous place full of great art and artists, stimulating coffee houses and salons, towers of commerce, screams and whispers, romantic hideaways, dangerous alleys, great libraries, chaotic traffic, rioting students and a population that is rarely characterized as warm and friendly. … First-time visitors may discover that finding the way around is an ordeal, especially if they do not speak the language.Footnote 158

As the Internet became mainstream and eventually ubiquitous, many did, in fact, learn to ‘speak its language’, however imperfectly.Footnote 159 The advent of AI can be expected to bring changes of similar magnitude, requiring individuals and our governing institutions to again ‘learn its language’. AI is altering established notions of verification and perceptions of truth. The ability to obtain actionable intelligence despite formidable cultural and organisational obstaclesFootnote 160 is accompanied by the ability to automatically generate realistic photographs, video, and text, enabling information warfare of hitherto unprecedented scale, sophistication, and deniability.Footnote 161 Interference in the electoral and other domestic processes of competing nations is not new, but the advent of increasingly sophisticated AI is permitting ‘social engineering’ in novel ways.

First, it has become possible to attack large numbers of individuals with highly tailored misinformation through automated ‘chatbots’ and similar approaches. Second, ‘deep fakes’ generated by sophisticated AI are of increasingly high quality, able to deceive even aware and skilled individuals and professional gatekeepers.Footnote 162 Third, the well-known ‘Eliza effect’, whereby human beings endow inanimate objects like computer interfaces with human emotions, that is, imbue machines with ‘social’ characteristics, permits the deployment of apparently responsive agents at scale, offering unprecedented opportunities and corresponding risks not only for ‘phishing’ and ‘honey trap’ operations,Footnote 163 but especially to circumvent an enemy government by directly targeting its population.Footnote 164

A distinct problem fuelled by similar technological advances is the ability to impersonate representatives of governments, thereby undermining trust and creating cover for competing narratives to develop.Footnote 165 Just as with any other technology, it is reasonable to expect that corresponding technological advances will eventually make it possible to detect and defuse artificially created fraudulent information.Footnote 166 It is furthermore reasonable to expect that social systems will likewise adapt, creating more sophisticated consumers of such information better able to resist misinformation. Such measures have been devised during wars and ideological conflicts in the past, and it is therefore correct to state that ‘deep fakes don’t create new problems so much as make existing problems worse’.Footnote 167 Jessica Silbey and Woodrow Hartzog are, of course, correct that the cure to the weaponisation of misinformation lies in strengthening and creating institutions tasked with ‘gatekeeping’ and validation:

We need to find a vaccine to the deep fake, and that will start with understanding that authentication is a social process sustained by resilient and inclusive social institutions. … it should be our choice and mandate to establish standards and institutions that are resilient to the con. Transforming our education, journalism, and elections to focus on building these standards subject to collective norms of accuracy, dignity, and democracy will be a critical first step to understanding the upside of deep fakes.Footnote 168

The manner in which this is to be achieved goes beyond the scope of this chapter, but it is important to keep in mind that both accurate information and misinformation have long been part of violent and ideological conflict.Footnote 169 Their transformation by the advent of AI must, therefore, be taken into account for a holistic assessment of its impact on national security and its legal regulation. This is particularly pertinent due to the rise of legal argumentation not only as a corollary of armed conflict but as its, often asymmetric, substitute in the form of ‘lawfare’,Footnote 170 as well as the evident importance of legal standards for such societal ‘inoculation’ to be successful.Footnote 171

VIII. Economics

National security is affected by economic competitiveness, which supplies the fiscal and material needs of military defence. The impact of the ongoing revolution in AI on existing labour markets and productive patterns is likely to be transformational.Footnote 172 The current debate is reminiscent of earlier debates about the advent of robotics and automation in production. Where that earlier debate focused on the impact on the bargaining power and medium-term earning potential of blue-collar workers, AI is also threatening white-collar workers, who hitherto seemed relatively secure from cross-border wage arbitrage as well as automation.Footnote 173 In a competitive arena, whether capitalism for individual firms or anarchy for nations, the spread of innovation is not optional but a logical consequence of the ‘socialising effect’ of any competitive system:Footnote 174 ‘Machine learning is a cool new technology, but that’s not why businesses embrace it. They embrace it because they have no choice.’Footnote 175

This embrace of AI has at least three important national security implications, with corresponding regulatory challenges and opportunities. First, dislocations resulting from the substitution of machines for human labour have destabilising effects on social cohesion and political stability, both domestic and international.Footnote 176 These dislocations have to be managed, including through the use of proactive regulation meant to further positive effects while buffering negative consequences.Footnote 177 The implications of mass unemployment resulting from this new wave of automation are potentially different from earlier cycles of technological disruption because it could render large sectors of the population permanently unemployable, uncompetitive at any price. This could spell a form of automation-induced ‘resource curse’ affecting technologically advanced economies,Footnote 178 which would suddenly suffer from the socio-economic-regulatory failings historically associated with underdeveloped extractive economies.Footnote 179

Second, the mastery of AI has been identified by all major economic powers as central to maintaining their relative competitive posture.Footnote 180 Consequently, the protection of intellectual property and the creation of a conducive regulatory, scientific, and investment climate to nurture the sector have themselves increasingly become key areas of competition between nations and trading blocs.Footnote 181

Third, given the large overlap between civilian and military sectors, capabilities in AI developed in one are likely to affect the nation’s position in the other.Footnote 182 Given inherent technological characteristics, especially scalability and the drastic reduction of marginal costs, and the highly disruptive effect AI can have on traditional military capabilities, the technology has the potential to drastically affect the relative military standing of nations quite independent of conventional measures such as size, population, hardware, etc.: ‘Small countries that develop a significant edge in AI technology will punch far above their weight.’Footnote 183

IX. Conclusion

Like many previous innovations, the transformational potential of AI has long been ‘hyped’ by members of the epistemic communities directly involved in its technical development. There is a tendency among such early pioneers to overstate potential, minimise risk, and alienate those not ‘in the know’ by elitist attitudes, incomprehensible jargon, and unrealistic postulations. As the comparison with cyberspace has shown, it is difficult to predict with accuracy what the likely impact of AI will be. Whatever its concrete form, AI is almost certain to transform many aspects of our lives, including national security.

This transformation will affect existing relative balances of power and modes of fighting, and thereby call into question the existing normative acquis, especially regarding international humanitarian law. Given the enormous potential benefits and the highly dynamic current stage of technological innovation and intense national competition, the prospects for international regulation, let alone outright bans, are slim. This might appear to be more consequential than it is, because much of the transformation will occur in operational, tactical, and strategic areas that can be subsumed under an existing normative framework that is sufficiently adaptable and broadly adequate.

The risk of existential danger posed by the emergence of super-intelligence is real but perhaps overdrawn. It should not detract from the laborious task of applying existing international and constitutional principles to the concrete regulation of more mundane narrow AI in the national security field.

27 Morally Repugnant Weaponry? Ethical Responses to the Prospect of Autonomous Weapons

Alex Leveringhaus
I. Introduction

In 2019, the United Nations (UN) Secretary General Antonio Guterres labelled lethal autonomous weapons as ‘politically unacceptable and morally repulsive’.Footnote 1 ‘Machines’, Guterres opined, ‘with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law’.Footnote 2 The Secretary General’s statement seems problematic. That something is morally repugnant does not entail that it should be banned by law. Further, it is not clear what exactly renders autonomous weapons systems (hereinafter AWS) morally abhorrent.Footnote 3 The great danger is that statements such as the Secretary General’s merely rely on the supposed ‘Yuck’ factor of AWS.Footnote 4 But Yuck factors are notoriously unreliable guides to ethics. While individuals might find things ‘yucky’ that are morally unproblematic, they might not be repulsed by things that pose genuine moral problems.

In response to the Secretary General’s statement, the purpose of this chapter is twofold. First, it seeks to critically survey different ethical arguments against AWS. Because it is beyond the scope of this chapter to survey every ethical argument in this context, it outlines three prominent ones: (1) that AWS create so-called responsibility gaps; (2) that the use of lethal force by an AWS is incompatible with human dignity; and (3) that AWS replace human agency with artificial agency. The chapter contends that none of these arguments is sufficient to show that AWS are morally repugnant. Second, drawing upon a more realistic interpretation of the technological capacities of AWS, the chapter outlines three alternative arguments as to why AWS are morally problematic, as opposed to morally repugnant.

In the second part of the chapter, I discuss definitional issues in the debate on AWS. In the third part, I critically analyse, respectively, the notion of a responsibility gap, the relationship between AWS and human dignity, and the role of human agency in war. In the fourth part, I outline a brief alternative account of why AWS might be morally problematic and explain how this intersects with other key issues in contemporary armed conflict.

Before I do so, I need to raise three general points. First, the chapter does not discuss the legal status of AWS. The focus of this chapter is on ethical issues only. The question whether, as suggested by the Secretary General, the alleged moral repugnancy of AWS justifies their legal prohibition is best left for a different occasion. Second, the chapter approaches AWS from the perspective of contemporary just war theory as it has developed since the publication of Michael Walzer’s seminal ‘Just and Unjust Wars: A Moral Argument with Historical Illustrations’ in 1977.Footnote 5 Central to Walzer’s work, and much of just war theory after it, is the distinction between the normative frameworks of jus ad bellum (justice in the declaration of war) and jus in bello (justice in the conduct of war). As we shall see, the ethical debate on AWS has mainly been concerned with the latter, as it has tended to focus on the use of (lethal) force by AWS during armed conflict. Third, in addition to the distinction between jus ad bellum and jus in bello, Walzer, in Just and Unjust Wars, defends the distinction between combatants (who may be intentionally killed) and non-combatants (who may not be intentionally killed) during armed conflict. The former are typically soldiers, whereas the latter tend to be civilians, though he acknowledges the existence of grey zones between these categories.Footnote 6 In recent years, this distinction has come increasingly under pressure, with some theorists seeking to replace it with a different one.Footnote 7 For the sake of convenience and because these terms are widely recognised, the chapter follows Walzer in distinguishing between combatants and non-combatants. However, many of the issues highlighted in the following sections will also arise for theories that are critical of Walzer’s distinction.

II. What Is an Autonomous Weapon?

Here, I offer a fourfold attempt to define AWS. First, it is self-evident that AWS are weapons. In this sense, they differ from other forms of (military) technology that are not classifiable as weapons. The following analysis assumes that weapons have the following characteristics: (1) they were specifically designed in order to (2) inflict harm on another party.Footnote 8 Usually, the harm is achieved via a weapon’s kinetic effect. The harmful kinetic effect is not random or merely a by-product of the weapon’s operation. Rather, weapons have been intentionally designed to produce a harmful effect. Non-weapons can be used as weapons – you could stab me with a butterknife – but they have not been deliberately designed to inflict harm.

Second, as stated by Secretary General Guterres, the crucial feature of AWS, accounting for their alleged moral repugnancy, is that their kinetic and potentially lethal effect is created by the weapon without human involvement.Footnote 9 However, AWS will require initial mission programming by a human programmer. Hence, there will be human involvement in the deployment of an AWS. The point, though, is that once an AWS has been programmed with its mission parameters, the weapon is capable of operating without any further guidance and supervision by a human individual. Crucially, it can create a harmful and potentially lethal kinetic effect by delivering a payload without direct or real-time human involvement. The technical term for such a weapon is an out-of-the-loop system. Unlike in-the-loop systems, in which the decision to apply kinetic force to a target is made by the weapon’s operator in real time, or on-the-loop systems, where the operator remains on stand-by and can override the weapon, a genuine out-of-the-loop system will not involve an operator once deployed.Footnote 10

Third, the notion of out-of-the-loop systems could be equally applied to automated and autonomous systems. Indeed, the literature is far from clear where the difference between the two lies, and any boundaries between automated and autonomous machine behaviour might be fluid. As a rule of thumb, autonomous systems are more flexible in their response to their operating environment than automated ones.Footnote 11 They could learn from their prior experiences in order to optimise their (future) performance, for example. They might also have greater leeway in translating the orders given via their programming into action. What this means in practice is that, compared to an automated system, any autonomous system (and not just weapons) is less predictable in its behaviour. That said, AWS would be constrained by particular targeting categories. That is, their programming would only allow them to attack targets that fall within a particular category. To illustrate the point, an AWS programmed to search and destroy enemy tanks would be restricted to attacking entities that fall into this category. Yet, compared to an automated weapon, it would be hard to predict where, when, and which enemy tank it would attack.

Fourth, as the quote from Secretary General Guterres suggests, AWS can produce a lethal kinetic effect without any human intervention post-programming. Here, the question is whether the alleged moral repugnancy of AWS only refers to AWS that would be deliberately programmed to attack human individuals. If so, this would potentially leave scope for the development and deployment of AWS that are not used for this purpose, such as the one mentioned in the ‘enemy tank’ example above. Moreover, it is noteworthy that any weapon can kill in two ways: (1) as an intended effect of its operation, and (2) as a side-effect of its operation. Presumably, the earlier quote by Secretary General Guterres refers to (1), where a programmer would intentionally programme an AWS in order to attack human individuals, most likely enemy combatants.

The focus on this issue is problematic, for two reasons. First, it neglects lethal harm that might arise as a side effect of the operation of an AWS. As I shall show later, this category of harm is, in the context of AWS, more morally problematic than intended harm. Second, it is doubtful whether the intentional targeting of individuals through AWS is legally and morally permissible. To explain, as was noted in the introduction to this chapter, at the level of jus in bello, contemporary just war theory post-Walzer rests on the distinction between combatants and non-combatants. True, given advances in machine vision, an AWS could, with great reliability, distinguish between human individuals and non-human objects and entities. Yet, what it cannot do, at the present state of technological development at least, is to accurately determine whether an individual is a legitimate target (a combatant) or an illegitimate target (a non-combatant). It is, in fact, hard to see how a machine’s capacity for such a qualitative judgement could ever be technologically achieved. As a result, the deployment of an AWS to deliberately kill human individuals would not be permissible under jus in bello.

If the above observation is true, it has two immediate repercussions for the debate on AWS. First, militaries might not be particularly interested in developing systems whose purpose is the autonomous targeting of human individuals, knowing that such systems would fall foul of jus in bello. Still, militaries may seek to develop AWS that can be programmed to attack more easily identifiable targets – for example, a tank, a missile, or a submarine. In this case, I contend that the ethical debate on AWS misses much of the actual technological development and restricts its own scope unnecessarily. Second, as I have argued elsewhere,Footnote 12 in order to assess whether programming an AWS to kill human individuals is morally repugnant, it is necessary to assume that AWS do not fall down at the normative hurdle of accurately identifying human individuals as legitimate or illegitimate targets. This assumption is a necessary philosophical abstraction and technological idealisation of AWS that may not reflect their actual development and potential uses. Bearing this in mind, the chapter continues by analysing whether it is morally repugnant to deliberately programme an AWS to kill human individuals in war.

III. Programmed to Kill: Three Ethical Responses

The main ethical argument in favour of AWS is essentially humanitarian in nature.Footnote 13 More precisely, the claim is that AWS (1) ensure stricter compliance with jus in bello, and (2) reduce human suffering and casualties as a result.Footnote 14 Interestingly, the ethical counterarguments do not engage with this humanitarian claim directly. Rather, they immediately attack the notion of autonomous uses of force via an AWS. In this part of the chapter, I look at three ethical responses to the prospect of AWS being intentionally programmed to take human lives: (1) the argument that AWS create so-called responsibility gaps, (2) the claim that the intentional use of AWS to kill is incompatible with human dignity, and (3) the argument (made by this author) that, by replacing human agency with artificial agency at the point of force delivery, AWS render humans incapable of revising a decision to kill. As indicated above, the three arguments rely on a technologically idealised view of AWS.

1. Responsibility Gaps

One of the earliest contributions to the ethical debate on AWS is the argument that these weapons undermine a commitment to responsibility. Put simply, the claim is that, in certain cases, it is not possible to assign (moral) responsibility to a human individual for an event caused by an AWS. This is especially problematic if the event constitutes a violation of jus in bello. In such cases, neither the manufacturer of the AWS, nor its programmer, nor the AWS itself (of course) can be held responsible for the event, resulting in a responsibility gap.Footnote 15 This gap arises from the inherent unpredictability of autonomous machine behaviour. No human programmer, it is claimed, could foresee every facet of emergent machine behaviour. Hence, it is inappropriate, the argument goes, to hold the programmer – let alone the manufacturer – responsible for an unforeseen event caused by an AWS. In a moral sense, no one can be praised or blamed, or even punished, for the event. Why should this pose a moral problem? Here, the claim is that for killing in war to be morally permissible, someone needs to be held responsible for the use of force. Responsibility gaps, thus, undermine the moral justification for killing in war.

Admittedly, the idea of a responsibility gap is powerful. But it can be debunked relatively easily. First, moral responsibility can be backward-looking and forward-looking. The responsibility gap arises from a backward-looking understanding of responsibility, where it is impossible to hold a human agent responsible for an event caused by an AWS in the past. The argument has nothing to say about the forward-looking sense of responsibility, where an agent would be assigned responsibility for supervising, controlling, or caring for someone or something in the future. In the present context, the forward-looking sense of responsibility lends itself to an on-the-loop system, rather than an out-of-the-loop system. Either way, it is not clear whether a gap in backward-looking responsibility is sufficient for the existence of a responsibility gap, or whether there also needs to be a gap in forward-looking responsibility. A backward-looking gap may be a necessary condition here, but not a sufficient one.

Second, it is contested whether killing in war is prima facie permissible if, and only if, someone can be held responsible for the use of lethal force. There are, roughly, two traditions in contemporary moral philosophy for thinking about the issue.Footnote 16 The first, derived from Thomism, is agent-centric in that it focuses on the intentions of the agent using lethal force. The second tradition is target-centric in that it focuses on the moral status of the target of lethal force. That is to say, the permissibility centres on the question whether the target has become liable to attack because it is morally and/or causally responsible for a (unjust) threat. On the target-centric approach, an agent who could not be held responsible for the use of lethal force may be allowed to kill if the target was liable to attack. In short, then, if the link between (agent) responsibility and the moral permission to use force is far weaker than assumed, the idea of a responsibility gap loses its normative force.

Third, the idea of a responsibility gap lets those who deployed an AWS off the hook far too easily.Footnote 17 True, given that autonomous systems tend to show unpredictable emergent behaviours, the individual (or group of individuals) who deploys an AWS by programming it with its mission parameters cannot know in advance that, at t5, the AWS is going to do x. Still, the programmer and those in the chain of command above him know that the AWS they deploy is likely to exhibit unforeseen behaviour, which might, in the most extreme circumstances, result in the misapplication of force. Notwithstanding that risk, they choose to deploy the weapon. In doing so, they impose a significant risk on those who might come into contact with the AWS in its area of operation, not least non-combatants. Of course, the imposition of that risk may either be reasonable and permissible under the circumstances or unreasonable and reckless – more on this shortly. But generally, the claim that those deploying an AWS are not responsible for any unforeseen damage resulting from its operation appears counterintuitive.

Finally, even if it is hard to hold individuals responsible for the deployment of an AWS, it is worthwhile remembering that armed conflicts are (usually) fought by states. In the end, the buck stops there. Needless to say, this raises all sorts of difficult issues which the chapter cannot go into. For now, it suffices to note that states have made reparations for the (wrongful) damage they caused in armed conflict. Most recently, for instance, the United States (US) compensated Afghan civilians for the deaths of (civilian) family members in the course of US military operations in the country as part of the so-called War on Terror.Footnote 18 The most notorious case is that of Staff Sergeant Robert Bales who, after leaving his base without authorisation, went on a shooting rampage and was later charged with the murder of seventeen Afghan civilians, as well as causing injury to a number of others. The US paid compensation to those affected by Sergeant Bales’ actions, even though Sergeant Bales acted of his own volition and outside the chain of command.Footnote 19

In sum, the notion of a responsibility gap does not prove that AWS are morally repugnant. Either the existence of a (backward-looking) responsibility gap is insufficient to show that the deployment of AWS would be morally unjustifiable or there is no responsibility gap as such. Yet, there are elements of the responsibility gap that could be salvaged. The argument that it is necessary to be able to hold someone responsible for the use of force is motivated by a concern for human dignity or respect for individuals. It might, therefore, be useful to focus on the relationship between AWS and human dignity. That is the purpose of the next section.

2. Dignity

Are AWS morally repugnant because, as has been suggested by some contributors to the debate, they are an affront to human dignity?Footnote 20 This question is difficult to answer because just war theorists have tended to eschew the concept of human dignity. Perhaps for good reason. Appeals to dignity often do not seem to decisively resolve difficult moral issues. For instance, the case for, as well as against, physician-assisted suicide could be made with reference to the concept of dignity. That said, the concept enters into contemporary just war thinking, albeit in an indirect way. This has to do with the aforementioned distinction between combatants and non-combatants. The former group is seen as a legitimate target in armed conflict, which means that combatants lack a moral claim against other belligerent parties not to intentionally kill them. Non-combatants, by contrast, are immune to intentional attack, which means that they hold a negative moral claim against combatants not to intentionally kill them. However, jus in bello does not grant non-combatants immunity against harm that would be unintentionally inflicted. Here, the Doctrine of Double Effect and its conceptual and normative distinction between intended and foreseen harm comes into play. In his classic discussion of non-combatant immunity, Walzer argues that it is permissible to kill or harm non-combatants if, and only if, the harm inflicted on them is (1) not intended, (2) merely foreseen (by the belligerent), (3) not used as a (bad) means to a good effect, (4) proportionate (not excessive to the good achieved), and (5) consistent with a belligerent’s obligations of ‘due care’.Footnote 21

Granted, but why should the distinction between intended and foreseen harm have any normative significance? According to the Kantian view, the Doctrine of Double Effect protects the dignity of innocent individuals by ensuring that belligerents comply with the second formulation of Kant’s categorical imperative, which obliges them to treat (innocent) individuals not merely as means to an end but always also as ends-in-themselves.Footnote 22 To illustrate the point, if Tim intentionally bombs non-combatants in order to scare the enemy into surrender, Tim violates their status as ends-in-themselves, instrumentalising their deaths in order to achieve a particular goal (the end of the war). By contrast, if Tom bombs a munitions factory and unintentionally kills non-combatants located in its vicinity as a foreseen side-effect of his otherwise permissible (and proportionate) military act, Tom does not instrumentalise their deaths for his purposes. Counterfactually, Tom could destroy the munitions factory, even if no non-combatant was harmed. Unlike Tim, Tom does not need to kill non-combatants to achieve his goals. Tom’s actions would not violate the ends-not-means principle – or so one might argue.

According to the Kantian View of the Doctrine of Double Effect, then, if Tam intentionally programmed an AWS to kill non-combatants, he would violate their dignity. Note, though, that there is no moral difference between Tam’s and Tim’s actions. The only difference is the means they use to kill non-combatants. As a result, this example does not show that AWS pose a unique threat to human dignity. Any weapon could be abused in the way Tam abuses the AWS. Hence, in the example, the use of the AWS is morally repugnant, not the weapon as such.

What about combatants? If Tam intentionally programmed an AWS to kill enemy combatants, would he violate their dignity? That question is hard to answer conclusively. First, because combatants lack a moral claim not to be killed, Tam does not violate their moral rights by deploying an AWS against them. Second, unlike non-combatants, it is usually morally permissible and necessary to instrumentalise combatants. One does not need to go quite as far as Napoleon, who remarked that ‘soldiers are made to be killed’.Footnote 23 But Walzer is right when he observes that combatants are the human instruments of the state.Footnote 24 As a result, combatants enjoy far lower levels of protection against instrumentalisation than non-combatants. In a nutshell, it needs to be shown that, although combatants (1) lack a moral claim not to be intentionally attacked [during combat], and (2) do not enjoy the same level of protection against instrumentalisation as non-combatants, the use of an AWS in order to kill them would violate their dignity.

The dignity of combatants, critics of AWS may argue, is violated because a machine should not be left to decide who lives or dies. At the macro-level of programming the argument is certainly wrong. Tam, the programmer in the above example, makes the decision to programme an AWS to detect and eliminate enemy combatants. In this sense, the machine Tam deploys does not make a decision to take life. Tam does. At the micro-level of actual operations, though, the argument has some validity. Here, the machine has some leeway in translating Tam’s instructions into actions. Within the target category of enemy combatants, it could ‘decide’ to attack Combatant1 rather than Combatant2 or Combatant3. It might, further, not be possible to ascertain why the machine chose to attack Combatant1 over Combatant2 and Combatant3. The resulting question is whether the machine’s micro-choice, rather than Tam’s macro-choice, violates Combatant1’s dignity.

Arguably not. This is because killing in war tends to be impersonal and to some extent morally arbitrary. Why did a particular combatant die? Often, the answer will be that he was a combatant. Armed conflict, as Walzer observes, is not a personal relationship. Combatants are not enemies in a personal sense, such that personal animosity would explain the choices made on the battlefield. They are the human instruments of the state. They kill and die because they are combatants. And often because they are in the wrong place at the wrong time. That is the brutal reality of warfare. Consider a case where an artillery operator fires a mortar shell in the direction of enemy positions. Any or no enemy combatant located in the vicinity might die as a result. We might never know why a particular enemy combatant died. We only know that the artillery operator carried out his orders to fire the mortar shell. By analogy, the reason for an AWS’s micro-choice to target Combatant1 over Combatant2 and Combatant3 is, ultimately, that Combatant1 is a combatant. Combatant1 was simply in the wrong place at the wrong time. It is not clear why this micro-choice should be morally different from the artillery operator’s decision to fire the mortar shell. Just as the dignity of those combatants who were unlucky enough to be killed by the artillery operator’s mortar shell is not violated by his actions, Combatant1’s dignity is not violated because a machine carried out its pre-programmed orders by micro-choosing him over another combatant. So, the argument that human dignity is violated if a machine makes a micro-choice over life and death seems morally dubious.

But perhaps critics of AWS may concede that the micro-choice as such is not the problem. To be sure, killing in war, even under orders, is to some extent random. The issue, they could reply, is that the artillery operator and those whom he targets have equal skin in the game, while the AWS that kills Combatant1 does not. In other words, the artillery operator has an appreciation of the value of (his own) life, which a machine clearly lacks. He is aware of the deadly effects of his actions, whereas a machine is clearly not. Perhaps this explains the indignity of being killed as a result of a machine’s micro-choice.

This argument takes us back to the Thomistic or agent-centric tradition in the ethics of killing outlined previously. Here, the internal states of the agent using force, rather than the moral status of the target, determine the permissibility of killing. To be allowed to kill in war, a combatant needs to have an appreciation of the value of life or at least be in a similar situation to those whom he targets. Naturally, if one rejects an agent-centric approach to the ethics of killing, this argument does not hold much sway.

More generally, it is unclear whether such a demanding condition – that an individual recognises the value of life – could be met in contemporary armed conflict. Consider the case of high-altitude bombing during NATO’s war in Kosovo. At the time, Michael Ignatieff observed that NATO was fighting a ‘virtual war’ in which NATO did the fighting while most of the Serbs ‘did the dying’.Footnote 25 It is hard to imagine that NATO’s bomber pilots, flying at 15,000 ft and never seeing their targets, would have had the value of human life at the forefront of their minds, or would have even thought of themselves as being in the same boat as those they targeted. The pilots received certain target coordinates, released their payloads once they had reached their destination, and then returned to their base. In short, modern combat technology, in many cases, has allowed combatants to distance themselves from active theatres, as well as the effects of their actions, to an almost unprecedented degree. These considerations show that the inability of a machine to appreciate the value of life does not pose a distinctive threat to human dignity. The reality of warfare has already moved on.

But there may be one last argument available to those who seek to invoke human dignity against AWS. To be sure, combatants, they could concede, do not hold a moral claim against other belligerents not to attack them. Nor, as instruments of the state, do they enjoy the same level of protection against instrumentalisation as non-combatants. Still, unless one adopts Napoleonic cynicism, there must be some moral limits on what may permissibly be done to combatants on the battlefield. There must be some appreciation that human life matters, and that humans are not merely a resource that can be disposed of in whatever way necessary. Otherwise, why would certain weapons, such as blinding lasers and chemical and biological weapons, be banned under international law?

Part of the answer is that these weapons are likely to have an indiscriminate and disproportionate effect on non-combatants. But intuitively, as the case of blinding lasers illustrates, there is a sense that combatants deserve some protection. Are there certain ways of killing that are somehow cruel and excessive, even when aimed at legitimate human targets? And if so, would AWS fall into this category?

There is a comparative and a non-comparative element to these questions. Regarding the comparative element, as macabre as it sounds, it would certainly be excessive to burn a combatant to death with a flamethrower if a simple shot with a gun would eliminate the threat he poses. That is common sense. With regard to the non-comparative element, the issue is whether there are ways of killing which are intrinsically wrong, regardless of how they compare to alternative means of killing. That question is harder to answer. Perhaps it is intrinsically wrong to use a biological weapon in order to kill someone with a virus. That said, it is hard to entirely avoid comparative judgements. Given the damage that even legitimate weapons can do, it is not clear that their effects are always morally more desirable than those of illegitimate weapons. One wonders if it is really less ‘cruel’ for someone to bleed to death after being shot or to have a leg blown off by an explosive than to be poisoned. Armed conflict is brutal and modern weapons technology is shockingly effective, notwithstanding the moral (and legal) limits placed on both.

Although, within the scope of this chapter, it is impossible to resolve the issues arising from the non-comparative element, the above discussion provides two main insights for the debate on AWS. First, if AWS are equipped with payloads whose effects were either comparatively or non-comparatively excessive or cruel, they would certainly violate relevant moral prohibitions against causing excessive harm. For example, an autonomous robot with a flamethrower that would incinerate its targets or an autonomous aerial vehicle that would spray target areas with a banned chemical substance would indeed be morally repugnant. Second, it is hard to gauge whether the autonomous delivery of a legitimate – that is, not disproportionately harmful – payload constitutes a cruel or excessive form of killing. Here, it seems that the analysis is increasingly going in circles. For, as I argued above, many accepted forms of killing in war can be seen as analogous to, or even morally on a par with, autonomous killing. Either all of these forms of killing are a threat to dignity, which would lend succour to ethical arguments for pacifism, or none are.

To sum up, AWS would pose a threat to human dignity if they were deliberately used to kill non-combatants or equipped with payloads that caused excessive or otherwise cruel harm. However, even in such cases, AWS would not pose a distinctive threat. This is because some of the features of autonomous killing can also be found in established forms of killing. The moral issues AWS raise with regard to dignity are not unprecedented. In fact, the debate on AWS might provide a useful lens through which to scrutinise established forms of killing in war.

3. Human and Artificial Agency

If the earlier arguments are correct, the lack of direct human involvement in the operation of an AWS, once programmed, is not a unique threat to human dignity. Yet, intuitively, there is something morally significant about letting AWS kill without direct human supervision. This author has sought to capture this intuition via the Argument from Human Agency.Footnote 26 I argue that AWS have artificial agency because they interact with their operating environment, causing changes within it. According to the Argument from Human Agency, the difference between human and artificial agency is as follows. Human agency includes the capacity to refuse to carry out an order. As history shows, soldiers have often not engaged the enemy, even when under orders to do so. An AWS, by contrast, will kill once it has ‘micro-chosen’ a human target. We might not know when, where, and whom it will kill, but it will carry out its programming. In a nutshell, by removing human agents from the point of payload delivery, out-of-the-loop systems make it impossible to revise a decision to kill.

While the Argument from Human Agency captures intuitions about autonomous forms of killing, it faces three challenges. First, as was observed above, combatants do not hold a moral claim not to be killed against other belligerent parties and enjoy lower levels of protection against instrumentalization than non-combatants. Why, then, critics of the Argument from Human Agency might wonder, should combatants sometimes not be killed? The answer is that rights do not always tell the whole moral story. Pity, empathy, or mercy are sometimes strong motivators not to kill. Sometimes (human) agents might be permitted to kill, but it might still be morally desirable for them not to do so. This argument does not depend on an account of human dignity. Rather, it articulates the common-sense view that killing is rarely morally desirable even if it is morally permissible. This is especially true during armed conflict where the designation of combatant status is sufficient to establish liability to attack. Often, as noted above, combatants are killed simply because they are in the wrong place at the wrong time, without having done anything.

The second challenge to the Argument from Human Agency is that it delivers too little too late. As the example of high-altitude bombing discussed earlier showed, modern combat technology has already distanced individuals from theatres in ways that make revising a decision to kill difficult. The difference, though, between more established weapons and out-of-the-loop systems is that the latter systems remove human agency entirely once the system has been deployed. Even in the case of high-altitude bombing, the operator has to decide whether to ‘push the button’. Or, in the case of an on-the-loop system, the operator can override the system’s attack on a target. Granted, in reality an operator’s ability to override an on-the-loop system might be vanishingly small. If that is the case, there might be, as the Argument from Human Agency would concede, fewer reasons to think that AWS are morally unique. Rather, from the perspective of the Argument from Human Agency, many established forms of combat technology are more morally problematic than commonly assumed.

The third challenge is a more technical one for moral philosophy. According to the Argument from Human Agency, refraining from killing is not strictly morally required, since killing an enemy combatant via an AWS does not violate any moral obligations owed to that combatant. If so, there could be strong reasons in favour of overriding the Argument from Human Agency. This would especially be the case if the deployment of AWS, as their defenders claim, led to significant reductions in casualties. Here, the Argument from Human Agency is weaker than dignity-based objections to AWS. In non-consequentialist or deontological moral theory, any trade-off between beneficial aggregate consequences and dignity would be impermissible. The Argument from Human Agency, though, does not frame the issue in terms of human dignity. There might, thus, be some permissible trade-offs between human agency (the deployment of human soldiers), on the one hand, and the aggregate number of lives saved via the deployment of AWS, on the other. Still, the Argument from Human Agency illustrates that something is lost when human agency is replaced with artificial agency. And that loss must clear a high justificatory bar. Here, the burden of proof falls on defenders of AWS.

To conclude, while the Argument from Human Agency captures intuitions about autonomous killing, it is not sufficient to show that it is categorically impermissible to replace human with artificial agency. It merely raises the justificatory bar for AWS. The humanitarian gains from AWS must be high for the replacement of human agency with artificial agency to be morally legitimate. More generally, none of the three positions examined above – the responsibility gap, human dignity, and human agency – serves as a knockdown argument against AWS. This is partly because, upon closer inspection, AWS are not more (or less) morally repugnant than established, and more accepted, weapons and associated forms of killing in war. In this light, it makes sense to shift the focus from the highly idealised scenario of AWS being deliberately programmed to attack human targets to different, and arguably more realistic, scenarios. Perhaps these alternative scenarios provide a clue as to why AWS might be morally problematic. The fourth and final part of the chapter looks at these scenarios in detail.

IV. Three Emerging Ethical Problems with AWS

As was emphasised earlier, for technological reasons, it is hard to see that the intentional programming of AWS in order to target combatants could be morally (or legally) permissible. As a result, the intended killing of combatants via AWS is not the main ethical challenge in the real world of AWS. Rather, AWS will be programmed to attack targets that are more easily and reliably identifiable by a machine. It is not far-fetched, for instance, to imagine an autonomous submarine that hunts other submarines, or an autonomous stealth plane programmed to fly into enemy territory and destroy radar stations, or a robot that can detect and eliminate enemy tanks. While these types of AWS are not deliberately programmed to attack human individuals, they still raise important ethical issues. In what follows, I focus on three of these.

First, the availability of AWS, some critics argue, has the potential to lead to more wars. Surely, in light of the destruction and loss of life that armed conflicts entail, this is a reason against AWS. If anything, we surely want fewer wars, not more. Yet, in the absence of counterfactuals, it is difficult to ascertain whether a particular form of weapons technology necessarily leads to more wars. If, for instance, the Soviet Union and US had not had access to nuclear weapons, would they have gone to war after 1945? It is impossible to tell. Moreover, it is noteworthy that a mere increase in armed conflict does not tell us anything about the justness of the resulting conflicts. Of course, if the availability of AWS increased the willingness of states to violate jus ad bellum by pursuing unjust wars, then these weapons are not normatively desirable. If, by contrast, the effect of AWS on the frequency of just or unjust wars was neutral, or if they increased the likelihood of just wars, they would, ceteris paribus, not necessarily be morally undesirable.

Yet, while it is not self-evident that AWS lead to an increase in unjust wars, their availability potentially lends itself to more covert and small-scale uses of force. Since the US’s targeted killing campaign against suspected terrorists in the late 2000s, just war theorists have increasingly been concerned with uses of force that fall below the threshold for war and thus outside the regulatory frameworks provided by jus ad bellum and jus in bello. Using the US-led War on Terror as a template, force is often used covertly and on an ad hoc basis, be it through the deployment of special forces or the targeting of alleged terrorists via remote-controlled aerial vehicles (‘drones’), with few opportunities for public scrutiny and accountability. AWS might be ideal for missions that are intended to fall, literally, under the radar. Once deployed, an AWS in stealth mode, without the need for further communication with a human operator, could enter enemy territory undetected and destroy a particular target, such as a military installation, a research facility, or even dual-use infrastructure. Although AWS should not be treated differently from other means used in covert operations, they may reinforce trends towards them.

Second, there is an unnerving analogy between AWS, landmines, and unexploded munitions, which often cause horrific damage in post-war environments. As we just saw, AWS can operate stealthily and without human oversight. Without direct human control, it is unclear how AWS can be deactivated after hostilities have concluded. Rather unsettlingly, compared to landmines and unexploded munitions, AWS could retain a much higher level of combat readiness. The moral issue is simple to state yet serious: does the very autonomy of an out-of-the-loop weapon make it difficult to switch off? In other words, the central question is how, once human control over a weapon is ceded, it can be reasserted. How, for example, can a human operator re-establish control over an autonomous submarine operating in an undisclosed area of the high seas? There might eventually be technological answers to this question. Until then, the worry is that AWS remain a deadly legacy of armed conflict.

Third, while just war theorists have invested considerable energy into sharpening the distinction between intended harm and unintended but foreseen harm, unintended and unforeseen harms, emanating from accidents and other misapplications of force, have received less attention. These harms are more widespread than assumed, leading to significant losses of life among non-combatants. Naturally, the fact that harm is unintended and unforeseen does not render it morally unproblematic. To the contrary, it raises questions about negligence and recklessness in armed conflict. One hypothesis, for instance, is that precision-weaponry has engendered reckless behaviour among belligerents.Footnote 27 Because these weapons are seen as precise, belligerents deploy them in high-risk theatres where accidents and misapplications of force are bound to happen. Here, abstention or the use of non-military alternatives seems more appropriate. For example, the use of military-grade weaponry, even if it is precise, over densely populated urban areas is arguably so risky that it is morally reckless. Belligerents know the risks but go ahead anyway because they trust the technology.

The conceptual relationship between precision-weaponry and AWS is not straightforward, but the question of recklessness is especially pertinent in the case of AWS.Footnote 28 After all, AWS not only create a significant kinetic effect, but they are unpredictable in doing so. As the saying goes, accidents are waiting to happen. True, in some cases, it might not be reckless to deploy AWS – for example, in extremely remote environments. But in many instances, and especially in the kinds of environments in which states have been conducting military operations over the last twenty-five years, it is morally reckless to deploy an inherently unpredictable weapon. Even if such a weapon is not deliberately programmed to directly attack human individuals, the threat it poses to human life is all too real. Can it really be guaranteed that an autonomous tank will not run over a civilian when speeding towards its target? What assurances can be given that an autonomous submarine does not mistake a boat carrying refugees for an enemy vessel? How can we be certain that a learning mechanism in a robotic weapon’s governing software does not ‘learn’ that because a child once threw a rock at the robot during a military occupation, children in general constitute threats and should therefore be targeted? These worries are compounded by the previous point about re-establishing control over an AWS. After control is ceded, it is not clear how it can be re-established, especially when it becomes apparent that the system does not operate in the way it should.

Advocates of AWS could mount two replies here. First, eventually there will be technological solutions that reduce the risk of accidents. Ultimately, this necessitates a technological assessment that ethicists cannot provide. The burden of proof, though, lies with technologists. Second, humans, defenders of AWS could point out, are also unpredictable, as the occurrence of war crimes or reckless behaviour during armed conflict attests. But this second reply has three flaws. The first is that AWS will not be capable of offering a like-for-like replacement for human soldiers in armed conflict, especially when it comes to operations where the targets are enemy combatants (who would need to be differentiated from non-combatants). In this sense, the scope for human error, as well as wrongdoing, in armed conflict remains unchanged. The second flaw is that, although human individuals are unquestionably error-prone and unpredictable, AWS are unlikely, at the present stage of technological development, to perform any better than humans. The final flaw is that, in the end, a fully armed weapons system has the capacity to do far more damage than any single soldier. For this reason alone, the deployment of AWS is, with few exceptions, morally reckless.

Taking stock, even if one turns from the highly abstract debate on AWS in contemporary philosophy to a more realistic appreciation of these weapons, moral problems and challenges do not magically disappear. Far from it: AWS potentially reinforce normatively undesirable dynamics in contemporary armed conflict, not least the push towards increasingly covert operations without public scrutiny, as well as the tendency for high-tech armies to (sometimes) take unreasonable, if not reckless, risks during combat operations. The key question of how control can be re-established over an out-of-the-loop system has not been satisfactorily answered, either. While these observations may not render AWS morally distinctive, they illustrate their prima facie undesirability.

V. Conclusion

Perhaps more than any other form of emerging weapons technology, AWS have been met with moral condemnation. As the analysis in this chapter shows, it is hard to pin down why they should be ‘morally repugnant’. Some of the central ethical arguments against AWS do not withstand critical scrutiny. In particular, they fail to show that AWS are morally different from more established weapons and methods of warfighting. Still, the chapter concludes that AWS are morally problematic, though not necessarily morally repugnant. The main point here is that, for the foreseeable future, AWS are not safe enough to operate in what is often a complex and chaotic combat environment. This is not to say that their technological limitations might not eventually be overcome. But for now, the deployment of a weapon whose behaviour is to some extent unpredictable, without sufficient and ongoing human oversight and the ability to rapidly re-establish operator control over it, seems morally reckless. True, other types of weapons can be used recklessly in armed conflict, too. The difference is that, with AWS, it is the underlying technology itself that remains inherently unpredictable, not just the use of these weapons. Furthermore, while AWS do not appear to raise fundamentally new issues in armed conflict, they seem to reinforce problematic dynamics in the use of force towards ever more covert missions. AWS might make it considerably easier for governments to avoid public scrutiny over their uses of force. Hence, for democratic reasons, and not just ethical ones, the arrival of AWS and the prospect of autonomous warfighting should be deeply troubling.

28 On ‘Responsible AI’ in War: Exploring Preconditions for Respecting International Law in Armed Conflict

Dustin A. Lewis
I. Introduction

In this chapter, I seek to help strengthen cross-disciplinary linkages in discourse concerning ‘responsible Artificial Intelligence (AI)’. To do so, I explore certain aspects of international law pertaining to uses of AI-related tools and techniques in situations of armed conflict.

At least five factors compel increasingly urgent consideration of these issues by governments, scientists, engineers, ethicists, and lawyers, among many others. The first concerns the nature and the growing complexity of the socio-technical systems through which these technologies are configured. A second factor relates to the potential for more frequent – and possibly extensive – use of these technologies in armed conflicts. Those applications may span such areas as warfighting, detention, humanitarian services, maritime systems, and logistics. A third concerns potential challenges and opportunities in applying international law to employments of AI-related tools and techniques in armed conflicts. A fourth relates to debates over whether the existing international legal framework applicable to armed conflicts sufficiently addresses the ethical concerns and normative commitments implicated by AI – and, if it does not, how the framework ought to be adjusted. A fifth concerns a potential ‘double black box’, in which technical opacity is itself encased in military secrecy.

One way to seek to help identify and address potential issues and concerns in this area is to go ‘back to the basics’ by elaborating some key elements underpinning legal compliance, responsibility, and agency in armed conflict. In this chapter, I aim to help illuminate some of the preconditions arguably necessary for respecting international law with regard to employments of AI-related tools and techniques in armed conflicts. By respecting international law, I principally mean two things: (1) applying and observing international law with regard to relevant conduct and (2) facilitating incurrence of responsibility for violations arising in connection with relevant conduct. (The latter might be seen either as an integral element or a corollary of the former.) Underlying my exploration is the argument that there may be descriptive and normative value in framing part of the discussion related to ‘responsible AI’ in terms of discerning and instantiating the preconditions necessary for respecting international law.

I proceed as follows. In Section II, I frame some contextual aspects of my inquiry. In Section III, I sketch a brief primer on international law applicable to armed conflict. In Section IV, I set out some of the preconditions arguably necessary to respect international law. In Section V, I briefly conclude.

Two caveats ought to be borne in mind. The first caveat is that the bulk of the research underlying this chapter drew primarily on English-language materials. The absence of a broader examination of legal materials, scholarship, and other resources in other languages narrows the study’s scope. The second caveat is that this chapter seeks to set forth, in broad-brush strokes, some of the preconditions arguably underpinning respect for international law.Footnote 1 Therefore, the analysis and the identification of potential issues and concerns are far from comprehensive. Analysis in respect of particular actors, armed conflicts, or AI-related tools and techniques may uncover (perhaps numerous) additional preconditions.

II. Framing

In this section, I frame some contextual aspects of my inquiry. In particular, I briefly outline some elements concerning definitions of AI. I also enumerate some existing and anticipated uses for AI in armed conflict. Next, I sketch the status of international discussions on certain military applications of possibly related technologies. And, finally, I highlight issues around technical opacity combined with military secrecy.

1. Definitional Parameters

Terminological inflation may give rise to characterizations of various technologies as ‘AI’ even where those technologies do not fall into recognized definitions of AI. Potentially complicating matters further is that there is no agreed definition of AI expressly laid down in an international legal instrument applicable to armed conflict.

For this chapter, I will assume a relatively expansive definition of AI, one drawn from my understanding – as a non-scientific-expert – of AI science broadly conceived.Footnote 2 It may be argued that AI science pertains in part to the development of computationally-based understandings of intelligent behaviour, typically through two interrelated steps. One step relates to the determination of cognitive structures and processes and the corresponding design of ways to represent and reason effectively. The other step concerns developing (a combination of) theories, models, data, equations, algorithms, or systems that ‘embody’ that understanding. Under this approach, AI systems are sometimes conceived as incorporating techniques or using tools that enable systems to ‘reason’ more or less ‘intelligently’ and to ‘act’ more or less ‘autonomously.’ The systems might do so by, for example, interpreting natural languages and visual scenes; ‘learning’ (in the sense of training); drawing inferences; or making ‘decisions’ and taking action on those ‘decisions’. The techniques and tools might be rooted in one or more of the following methods: those rooted in logical reasoning broadly conceived, which are sometimes also referred to as ‘symbolic AI’ (as a form of model-based methods); those rooted in probability (also as a form of model-based methods); or those rooted in statistical reasoning and data (as a form of data-dependent or data-driven methods).

2. Diversity of Applications

Certain armed forces have long used AI-related tools and techniques. For example, in relation to the Gulf War of 1990–91, the United States employed a program called the Dynamic Analysis and Replanning Tool (DART), which increased efficiencies in scheduling and making logistical arrangements for the transportation of supplies and personnel.Footnote 3

Today, existing and contemplated applications of AI-related tools and techniques related to warfighting range widely.Footnote 4 With the caveat concerning terminological inflation noted above in mind, certain States are making efforts to (further) automate targeting-related communications support,Footnote 5 air-to-air combat,Footnote 6 anti-unmanned-aerial-vehicle countermeasures,Footnote 7 so-called loitering-attack munitions,Footnote 8 target recognition,Footnote 9 and analysis of intelligence, reconnaissance, and surveillance sources.Footnote 10 Armed forces are developing machine-learning techniques to generate targeting data.Footnote 11 Prototypes of automated target-recognition heads-up displays are also under development.Footnote 12 Rationales underlying these efforts are often rooted in military doctrines and security strategies that place a premium on enhancing speed and agility in decision-making and tasks and preserving operational capabilities in restricted environments.Footnote 13

In the naval context, recent technological developments – including those related to AI – afford uninhabited military maritime systems, whether on or below the surface, capabilities to navigate and explore with less direct ongoing human supervision and interaction than before. Reportedly, for example, China is developing a surface system called the JARI that, while remotely controlled, purports to use AI to autonomously navigate and undertake combat missions once it receives commands.Footnote 14

The likelihood seems to be increasing that AI-related tools and techniques may be used to help make factual determinations as well as related evaluative decisions and normative judgements around detention in armed conflict.Footnote 15 Possible antecedent technologies include algorithmic filtering of data and statistically-based risk assessments initially created for domestic policing and criminal-law settings. Potential applications in armed conflict might include prioritizing military patrols, assessing levels and kinds of threats purportedly posed by individuals or groups, and determining who should be held and when someone should be released. For example, authorities in Israel have reportedly used algorithms as part of attempts to obviate anticipated attacks by Palestinians through a process that involves the filtering of social-media data, resulting in over 200 arrests.Footnote 16 (It is not clear whether or not the technologies used in that context may be characterized as AI.)

It does not seem to strain credulity to anticipate that the provision of humanitarian services in war – both protection and relief activitiesFootnote 17 – may rely in some contexts on AI-related tools and techniques.Footnote 18 Applications that might be characterized as relying on possible technical antecedents to AI-related tools and techniques include predictive-mapping technologies used to inform populations of outbreaks of violence, track movements of armed actors, predict population movements, and prioritize response resources.Footnote 19

3. International Debates on ‘Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’

Perhaps especially since 2013, increased attention has been given at the international level to issues around autonomous weapons. Such weapons may or may not involve AI-related tools or techniques. A significant aspect of the debate appears to have reached a kind of normative deadlock.Footnote 20 That impasse has arisen in the primary recent venue for intergovernmental discourse: the Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (GGE), which was established under the Convention on Certain Conventional Weapons (CCW)Footnote 21 in 2016.

GGE debates on the law most frequently fall under three general categories: international humanitarian law/law of armed conflict (IHL/LOAC) rules on the conduct of hostilities, especially on distinction, proportionality, and precautions in attacks; reviews of weapons, means, and methods of warfare;Footnote 22 and individual and State responsibility.Footnote 23 (The primary field of international law developed by States to apply to conduct undertaken in relation to armed conflict is now often called IHL/LOAC; this field is sometimes known as the jus in bello or the laws of war.)

Perhaps the most pivotal axis of the current debate concerns the desirability (or not) of developing and instantiating a concept of ‘meaningful human control’ or a similar formulation over the use of force, including autonomy in configuring, nominating, prioritizing, and applying force to targets.Footnote 24 A close reading of States’ views expressed in the GGE suggests that governments hold seemingly irreconcilable positions beyond some generically formulated principles, at least so far, on whether existing law is fit for purpose or new law is warranted.Footnote 25 That said, there might be a large enough contingent to pursue legal reform, perhaps outside of the CCW.

4. Technical Opacity Coupled with Military Secrecy

Both inside and outside of the GGE, armed forces continue to be deeply reluctant to disclose how they configure sensors, algorithms, data, and machines, including as part of their attempts to satisfy legal rules applicable in relation to war. In a nutshell, a kind of ‘double black box’ may emerge where human agents encase technical opacity in military secrecy.Footnote 26

The specific conduct of war as well as military-technological capabilities are rarely revealed publicly by States and non-state parties to armed conflicts. Partly because of that, it is difficult for people outside of armed forces to reliably discern whether new technological affordances create or exacerbate challenges (as critics allege) or generate or amplify opportunities (as proponents assert) for greater respect for the law and more purportedly ‘humanitarian’ outcomes.Footnote 27 It is difficult to discern, for example, how and to what extent the human agents composing a party to an armed conflict in practice construct and correlate proxies for legally relevant characteristics – for example, those concerning direct participation in hostilities as a basis for targetingFootnote 28 or imperative reasons of security as a ground for detentionFootnote 29 – involved in the collection of data and the operation of algorithms. Nor do parties routinely divulge what specific dependencies exist within and between the computational components that their human agents adopt regarding a particular form of warfare. Instead, by and large, parties – at most – merely reaffirm in generic terms that their human agents strictly respect the rules.

III. Overview of International Law Applicable to Armed Conflict

International law is the only binding framework agreed to by States to regulate acts and omissions related to armed conflict. In this respect, international law is distinguishable from national legal frameworks, corporate codes of conduct, and ethics policies.

The sources, or origins, of international law applicable in relation to armed conflict include treaties, customary international law, and general principles of law. Several fields of international law may lay down binding rules applicable to a particular armed conflict. As mentioned earlier, the primary field developed by States to apply to conduct undertaken in relation to armed conflict is IHL/LOAC. Other potentially relevant fields may include the area of international law regulating the threat or use of force in international relations (also known as the jus ad bellum or the jus contra bellum), international human rights law, international criminal law, international refugee law, the law of State responsibility, and the law of responsibility of international organizations. In international law, an international organization (IO) is often defined as an organization established by a treaty or other instrument governed by international law and possessing its own international legal personality.Footnote 30 Examples of IOs include the United Nations Organization (UN) and the North Atlantic Treaty Organization (NATO), among many others.

Under contemporary IHL/LOAC, there are two generally recognized classifications, or categories, of armed conflicts.Footnote 31 One is an international armed conflict, and the other is a non-international armed conflict. The nature of the parties most often distinguishes these categories. International armed conflicts are typically considered to involve two or more States as adversaries. Non-international armed conflicts generally involve one or more States fighting together against one or more non-state parties or two or more non-state parties fighting against each other.

What amounts to a breach of IHL/LOAC depends on the content of the underlying obligation applicable to a particular human or legal entity. Depending on the specific armed conflict, potentially relevant legal entities may include one or more States, IOs, or non-state parties. IHL/LOAC structures and lays down legal provisions concerning such thematic areas as the conduct of hostilities, detention, and humanitarian services, among many others.

For example, under certain IHL/LOAC instruments, some weapons are expressly prohibited, such as poisoned weapons,Footnote 32 chemical weapons,Footnote 33 and weapons that injure by fragments that escape detection by X-rays in the human body.Footnote 34 The use of weapons that are not expressly prohibited may be tolerated under IHL/LOAC at least insofar as the use of the weapon comports with applicable provisions. For instance, depending on the specific circumstances of use and the relevant actors, those provisions may include:

  - the obligation for parties to distinguish between the civilian population and combatants and between civilian objects and military objectives and to direct their operations only against military objectives;Footnote 35

  - the prohibition on attacks which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated;Footnote 36

  - the obligation to take constant care to spare the civilian population, civilians, and civilian objects in military operations;Footnote 37 and

  - obligations to take certain precautions concerning attacks.Footnote 38

International law sets out particular standard assumptions of responsibility for the conduct of States and IOs. It is on the basis of those assumptions that specific IHL/LOAC provisions exist and are applied.Footnote 39 In other words, international law pertaining to armed conflict exists and is applied in respect of States and IOs based on the interrelationships between the ‘primary’ substantive IHL/LOAC provisions and the ‘secondary’ responsibility institutions. Regarding both State responsibility and IO responsibility, standard assumptions of responsibility are rooted in underlying concepts of attribution, breach, circumstances precluding wrongfulness, and consequences.Footnote 40 Those assumptions are general in character and apply unless excluded, for example through an individual treaty or rule.Footnote 41

A use in an armed conflict of an AI-related tool or technique may (also or separately) give rise to individual criminal responsibility under international law. Such personal criminal responsibility may arise where the conduct that forms the application of an AI-related tool or technique constitutes, or otherwise sufficiently contributes to, an international crime. For example, under the Rome Statute of the International Criminal Court (ICC), the court has jurisdiction over the crime of genocide, crimes against humanity, war crimes, and the crime of aggression.Footnote 42 A use of an AI-related tool or technique may form part or all of the conduct underlying one or more of the crimes prohibited under the ICC Statute.

Concerning imposition of individual criminal responsibility, it may be argued that standard assumptions of responsibility are based (at least under the ICC Statute) on certain underlying concepts.Footnote 43 Those concepts may arguably include jurisdiction;Footnote 44 ascription (that is, attribution of conduct to a natural person);Footnote 45 material elements (in the sense of the prohibited conduct forming the crime);Footnote 46 mental elements (including the requisite intent and knowledge);Footnote 47 modes of responsibility (such as aiding and abetting or command responsibility);Footnote 48 grounds for excluding responsibility;Footnote 49 trial;Footnote 50 penalties (including imprisonment of the responsible person);Footnote 51 and appeal and revision.Footnote 52 It may be argued that it is on the basis of the assumptions related to those concepts that the provisions of the ICC Statute exist and are applied.

IV. Preconditions Arguably Necessary to Respect International Law

In this section, I outline some preconditions underlying elements that are arguably necessary for international law to be respected in relation to a use in an armed conflict of an AI-related tool or technique. I assume that the employment of the technology is governed (at least in part) by international law. By respecting international law, I mean the bringing of a binding norm, principle, rule, or standard to bear in relation to a particular employment of an AI-related tool or technique in a manner that accords with the object and purpose of the relevant provision, that facilitates observance of the provision, and that facilitates incurrence of responsibility in case of breach of the provision.

At least three categories of actors may be involved in respecting international law in relation to a use in an armed conflict of an AI-related tool or technique. Each category is arguably made up, first and foremost, of human agents. In addition to those human agents, the entities to which those humans are attached or through which they otherwise (seek to) implement international law may also be relevant.

The first category is made up in part of the humans who are involved in relevant acts or omissions (or both) that form the employment of an AI-related tool or technique attributable to a State or an IO. This first category of actors also includes the entity or entities – such as the State or the IO or some combination of State(s) and IO(s) – to which the employment is attributable. The human agents may include, for example, software engineers, operators, commanders, and legal advisers engaging in conduct on behalf of the State or the IO.

The second category of actors is made up in part of humans not involved in the employment in an armed conflict of an AI-related tool or technique attributable to a State or an IO but who may nevertheless (seek to) ensure respect for international law in relation to that conduct. This second category of actors also includes entities – such as (other) States, (other) IOs, international courts, and the like – that may attempt, functionally through the humans who compose them, to ensure respect for international law in relation to the conduct.

The third category of actors is made up in part of humans who (seek to) apply international law – especially international law on international crimes – to relevant conduct of a natural person. These humans may include, for example, prosecutors, defense counsel, and judges. This third category of actors also includes entities (mostly, but not exclusively, international or domestic criminal tribunals) that may seek, functionally through the humans who compose them, to apply international law to natural persons.

In the rest of this section, I seek to elaborate some preconditions regarding each of these three respective categories of actors.

1. Preconditions Concerning Respect for International Law by Human Agents Acting on Behalf of a State or an International Organization

In this sub-section, I focus on employments in armed conflicts of AI-related tools or techniques attributable to one or more States, IOs, or some combination thereof. In particular, I seek to outline some preconditions underlying elements that are arguably necessary for the State or the IO to respect international law in relation to such an employment.

Precondition #1: Humans Are Legal Agents of States and International Organizations

The first precondition is that humans are arguably the agents for the exercise and implementation of international law applicable to States and IOs. This precondition is premised on the notion that existing international law presupposes that the functional exercise and implementation of international law by a State or an IO in relation to the conduct of that State or that IO is reserved solely to humans.Footnote 53 According to this approach, this primary exercise and implementation of international law may not be partly or wholly reposed in non-human (artificial) agents.Footnote 54

Precondition #2: Human Agents of the State or the International Organization Sufficiently Understand the Performance and Effects of the Employment

The second precondition is that human agents of the State or the IO that engages in conduct that forms an employment in an armed conflict of an AI-related tool or technique arguably need to sufficiently understand the technical performance and effects of the employed tool or technique in respect of the specific circumstances of the employment and in relation to the socio-technical system through which the tool or technique is employed.Footnote 55 For this precondition to be instantiated, the understanding arguably needs to encompass (among other things) comprehension of the dependencies underlying the socio-technical system, the specific circumstances and conditions of the employment, and the interactions between those dependencies, circumstances, and conditions.

Precondition #3: Human Agents of the State or the International Organization Discern the Law Applicable to the Employment

The third precondition is that human agents of the State or the IO that engages in conduct that forms an employment in an armed conflict of an AI-related tool or technique arguably need to discern the law applicable to the State or the IO in relation to the employment. The applicable law may vary based on (among other things) the specific legal provisions applicable to the State or the IO through different sources, or origins, of international law. (As noted above, those sources may include treaty law, customary international law, and general principles of international law, among others.)

Precondition #4: Human Agents of the State or the International Organization Assess the Legality of the Anticipated Employment Before the Employment

The fourth precondition is that human agents of the State or the IO that engages in conduct that forms an employment in an armed conflict of an AI-related tool or technique assess – before the employment is initiated – whether the anticipated employment would conform with applicable law in relation to the anticipated specific circumstances and conditions of the employment.Footnote 56 In line with this precondition, only employments that pass this legality assessment may be initiated, and only under the circumstances and subject to the conditions on which that assessment was premised.

Precondition #5: Human Agents of the State or the International Organization Impose Legally Mandated Parameters Before and During the Employment

The fifth precondition is that human agents of the State or the IO that engages in conduct that forms an employment in an armed conflict of an AI-related tool or technique need to impose – before and during the employment – limitations or prohibitions or both as required by applicable law in respect of the employment. To instantiate this precondition, human agents of the State or the IO need to discern and configure the particular limitations or prohibitions by interpreting and applying international law in respect of the employment. Factors that the human agents might need to consider could include (among many others) interactions between the socio-technical system’s dependencies and the specific circumstances and conditions of the employment.Footnote 57

Suppose those dependencies, circumstances, or conditions (or some combination thereof) materially change after the employment is initiated. In that case, the human agents of the State or the IO arguably need to discern and configure the limitations or prohibitions (or both) in light of those changes.

To the extent, if any, required by the law applicable in relation to a specific employment or generally, human agents of the State or the IO may need to facilitate at least partial interaction by one or more humans with the system during the employment. Such interactions may take such forms (among others) as monitoring, suspension, or cancellation of some or all of the employment.Footnote 58

Precondition #6: Human Agents of the State or the International Organization Assess (Il)Legality after the Employment

The sixth precondition is that human agents of the State or the IO that engages in conduct that forms an employment in an armed conflict of an AI-related tool or technique arguably need to assess, after employment, whether or not the employment complied with applicable law. To instantiate this precondition, those human agents need to discern (among other things) which humans engaged in which elements of relevant conduct, the circumstances and conditions pertaining to that conduct, and whether the anticipated and actual performance and effects of the socio-technical system underlying the employment conformed with the legally mandated parameters.

Precondition #7: Human Agents of the State or the International Organization Assess Potential Responsibility for Violations Arising in Connection with the Employment

The seventh precondition concerns suspected violations that may arise in relation to an employment in an armed conflict of an AI-related tool or technique by or on behalf of a State or an IO. The precondition is that human agents of the State or the IO that undertook the conduct assess whether or not the conduct constitutes a violation – and, if they assess a violation occurred, human agents of the State or the IO (also) evaluate whether the international legal responsibility of the State or the IO is engaged. To make the assessment required by this precondition, human agents of the State or the IO need to discern, first, whether or not the conduct that forms the employment is attributable to the State or the IO (or to some combination of one or more State(s) or IO(s) or both).Footnote 59 If attribution is established, human agents of the State or the IO need to discern whether a breach occurred. This exercise entails assessing the conduct against applicable law. Finally, if the occurrence of a breach is established, human agents of the State or the IO evaluate whether or not the circumstances preclude the wrongfulness of the breach.Footnote 60

Precondition #8: Human Agents of the State or the International Organization Facilitate Incurrence of Responsibility

The eighth precondition concerns situations in which a breach – the wrongfulness of which is not precluded by the circumstances – is established. The precondition is that, where such a breach is established, human agents of the State or the IO arguably need to facilitate incurrence of responsibility of the State or the IO concerning the breach. As part of the process to facilitate such incurrence of responsibility, human agents of the State or the IO may arguably need to impose relevant consequences on the State or the IO. Those consequences may relate, for example, to cessation or reparation (or both) by the State or the IO.Footnote 61

Summary

Suppose that the various premises underlying the above-elaborated preconditions are valid. In that case, the absence of one or more of the following conditions may be preclusive of an element integral to respect for international law by the State or the IO:

  1. An exercise and implementation of international law by human agents of the State or the IO in relation to the conduct that forms an employment in an armed conflict of an AI-related tool or technique;

  2. A sufficient understanding by human agents of the State or the IO of the technical performance and effects of the employed AI-related tool or technique in relation to the circumstances of use and the socio-technical system through which the tools or techniques are employed;

  3. Discernment by human agents of the State or the IO of the law applicable to the State or the IO in relation to the employment;

  4. An assessment by human agents of the State or the IO whether the anticipated employment would conform with applicable law in relation to the anticipated specific circumstances and conditions of the employment;

  5. Imposition by human agents of the State or the IO of limitations or prohibitions (or both) as required by applicable law in respect of the employment;

  6. An assessment by human agents of the State or the IO after employment as to whether or not the employment complied with applicable law;

  7. An assessment by human agents of the State or the IO as to whether or not the conduct constitutes a violation, and, if so, (also) an evaluation by human agents of the State or the IO as to whether or not the international legal responsibility of the State or the IO is engaged; or

  8. Facilitation by human agents of the State or the IO of the incurrence of responsibility – including imposition of relevant consequences on the State or the IO – where such responsibility is established.

2. Preconditions Concerning Non-Involved Humans and Entities Related to Respect for International Law by a State or an International Organization

In this sub-section, I seek to outline some preconditions underlying elements that are arguably necessary for non-involved humans and related entities to (help) ensure respect for international law by a State or an international organization whose conduct forms an employment in an armed conflict of an AI-related tool or technique. Such non-involved people might include, for example, legal advisers from another State or another IO or judges on an international court seized of proceedings instituted by one State against another State.

Precondition #1: Humans Are Legal Agents

As with the previous sub-section, the first precondition here is that humans are arguably the agents for the exercise and implementation of international law applicable to the State or the IO whose conduct forms an employment of an AI-related tool or technique.Footnote 62 This precondition is premised on the notion that existing international law presupposes that the functional exercise and implementation of international law to a State or an IO by a human (and by an entity to which that human is connected) not involved in relevant conduct is reserved solely to humans. According to this approach, that primary exercise and implementation of international law may not be partly or wholly reposed in non-human (artificial) agents.

Precondition #2: Humans Discern the Existence of Conduct that Forms an Employment of an AI-Related Tool or Technique

The second precondition is that humans not involved in the conduct of the State or the IO arguably need to discern the existence of the conduct that forms an employment in an armed conflict of an AI-related tool or technique attributable to the State or the IO. To instantiate this precondition, the conduct must be susceptible to being discerned by (non-involved) humans.

Precondition #3: Humans Attribute Relevant Conduct of One or More States or International Organizations to the Relevant Entity or Entities

The third precondition is that humans not involved in the conduct of the State or the IO arguably need to attribute the conduct that forms an employment in an armed conflict of an AI-related tool or technique by or on behalf of the State or the IO to that State or that IO (or to some combination of State(s) or IO(s) or both). To instantiate this precondition, the conduct undertaken by or on behalf of the State or the IO must be susceptible to being attributed by (non-involved) humans to the State or the IO.

Precondition #4: Humans Discern the Law Applicable to Relevant Conduct

The fourth precondition is that humans not involved in the conduct of the State or the IO arguably need to discern the law applicable to the conduct that forms an employment in an armed conflict of an AI-related tool or technique attributable to the State or the IO. To instantiate this precondition, the legal provisions applicable to the State or the IO to which the relevant conduct is attributable must be susceptible to being discerned by (non-involved) humans. For example, where an employment of an AI-related tool or technique by a State occurs in connection with an armed conflict to which the State is a party, humans not involved in the conduct may need to discern whether the State has become party to a particular treaty and, if not, whether a possibly relevant rule reflected in that treaty is otherwise binding on the State, for example through customary international law.

Precondition #5: Humans Assess Potential Violations

The fifth precondition is that humans not involved in the conduct that forms an employment in an armed conflict of an AI-related tool or technique attributable to the State or the IO arguably need to assess possible violations by the State or the IO concerning that conduct.

To make that assessment, (non-involved) humans need to discern, first, whether or not the relevant conduct is attributable to the State or the IO. To instantiate this aspect of the fifth precondition, the conduct forming the employment in an armed conflict of an AI-related tool or technique must be susceptible to being attributed by (non-involved) humans to the State or the IO.

If attribution to the State or the IO is established, (non-involved) humans need to discern whether or not a breach occurred. To instantiate this aspect of the fifth precondition, the conduct forming the employment in an armed conflict of an AI-related tool or technique by the State or the IO must be susceptible to being evaluated by (non-involved) humans as to whether or not the conduct constitutes a breach.

If the existence of a breach is established, (non-involved) humans need to assess whether or not the circumstances preclude the wrongfulness of the violation. To instantiate this aspect of the fifth precondition, the conduct forming the employment in an armed conflict of an AI-related tool or technique must be susceptible to being evaluated by (non-involved) humans as to whether or not the specific circumstances preclude the wrongfulness of the breach.

Precondition #6: Humans (and an Entity or Entities) Facilitate Incurrence of Responsibility

The sixth precondition is that humans (and an entity or entities) not involved in the conduct that forms an employment in an armed conflict of an AI-related tool or technique attributable to the State or the IO arguably need to facilitate incurrence of responsibility for a breach the wrongfulness of which is not precluded by the circumstances. In practice, responsibility may be incurred through relatively more formal channels (such as through the institution of State-vs.-State legal proceedings) or less formal modalities (such as through non-public communications between States).

As part of the process to facilitate incurrence of responsibility, (non-involved) humans arguably need to impose relevant consequences on the responsible State or IO. Typically, those humans do so by acting through a legal entity to which they are attached or through which they otherwise (seek to) ensure respect for international law – for example, consider legal advisers of another State, another IO, or a judge on an international court. The consequences may relate to (among other things) cessation and reparations.

Regarding cessation, the responsible State or IO is obliged to cease the act, if it is continuing, and to offer appropriate assurances and guarantees of non-repetition, if circumstances so require.Footnote 63 To instantiate this aspect of the sixth precondition, the conduct forming the employment in an armed conflict of an AI-related tool or technique must be susceptible to being evaluated by (non-involved) humans as to whether or not the conduct is continuing; furthermore, the conduct must (also) be susceptible to being subject to an offer of appropriate assurances and guarantees of non-repetition, if circumstances so require.

Regarding reparation, the responsible State or IO is obliged to make full reparation for the injury caused by the internationally wrongful act.Footnote 64 To instantiate this aspect of the sixth precondition, the conduct forming the employment in an armed conflict of an AI-related tool or technique must be susceptible both to a determination by (non-involved) humans of the injury caused and to the making of full reparations in respect of the injury.

Summary

Suppose that the various premises underlying the above-elaborated preconditions are valid. In that case, the absence of one or more of the following conditions may be preclusive of an element integral to (non-involved) humans and entities helping to ensure respect for international law by a State or an IO where the latter’s conduct forms an employment in an armed conflict of an AI-related tool or technique:

  1. An exercise and implementation by (non-involved) humans of international law applicable to the State or IO in relation to the conduct;

  2. Discernment by (non-involved) humans of the existence of the relevant conduct attributable to the State or the IO;

  3. An attribution by (non-involved) humans of the relevant conduct undertaken by or on behalf of the State or the IO;

  4. Discernment by (non-involved) humans of the law applicable to the relevant conduct attributable to the State or the IO;

  5. An assessment by (non-involved) humans of possible violations committed by the State or the IO in connection with the relevant conduct; or

  6. Facilitation by (non-involved) humans of an incurrence of responsibility of the responsible State or the responsible IO for a breach the wrongfulness of which is not precluded by the circumstances.

3. Preconditions Concerning Respect for the ICC Statute

In the above sub-sections, I focused on respect for international law concerning employments in armed conflicts of AI-related tools and techniques by or on behalf of a State or an IO, whether the issue concerns respect for international law by those involved in the conduct (IV 1) or whether it concerns those not involved in the conduct (IV 2). In this sub-section, I seek to outline some preconditions underlying elements that are arguably necessary for respect for the ICC Statute. As noted previously, under the ICC Statute, individual criminal responsibility may arise for certain international crimes, and an employment in an armed conflict of an AI-related tool or technique may constitute, or otherwise contribute to, such a crime. In this section, I use the phrase ‘ICC-related human agents’ to mean humans who exercise and implement international law in relation to an application of the ICC Statute. Such human agents may include (among others) the court’s prosecutors, defence counsel, registrar, and judges.

Precondition #1: Humans Are Legal Agents

The first precondition is that humans are arguably the agents for the exercise and implementation of international law applicable in relation to international crimes – including under the ICC Statute – arising from conduct that forms an employment in an armed conflict of an AI-related tool or technique.Footnote 65 (Of the four categories of crimes under the ICC Statute, strictly speaking only war crimes must by definition be committed in connection with an armed conflict; nonetheless, the other three categories may also be committed in connection with an armed conflict.) This precondition is premised on the notion that existing international law presupposes that the functional exercise and implementation of international law in relation to the conduct of a natural person is reserved solely to humans (and, through them, to the entity or entities, such as an international criminal tribunal, to which those humans are attached). According to this approach, this primary exercise and implementation of international law may not be partly or wholly reposed in non-human (artificial) agents.

Precondition #2: Humans Discern the Existence of Potentially Relevant Conduct

The second precondition is that ICC-related human agents arguably need to discern the existence of conduct that forms an employment in an armed conflict of an AI-related tool or technique ascribable to a natural person. For this precondition to be instantiated, such conduct must be susceptible to being discerned by relevant ICC-related human agents.

Precondition #3: Humans Determine Whether the ICC May Exercise Jurisdiction

The third precondition is that ICC-related human agents arguably need to determine whether or not the court may exercise jurisdiction in relation to an employment in an armed conflict of an AI-related tool or technique ascribable to a natural person. The court may exercise jurisdiction only over natural persons.Footnote 66 Furthermore, the ICC may exercise jurisdiction only where the relevant elements of jurisdiction are satisfied.Footnote 67 To instantiate the third precondition, conduct that forms an employment in an armed conflict of an AI-related tool or technique ascribable to a natural person must be susceptible to being evaluated by relevant ICC-related human agents as to whether or not the conduct is attributable to one or more natural persons over whom the court may exercise jurisdiction.

Precondition #4: Humans Adjudicate Individual Criminal Responsibility

The fourth precondition is that ICC-related human agents arguably need to adjudicate whether or not an employment in an armed conflict of an AI-related tool or technique ascribable to a natural person subject to the jurisdiction of the court constitutes, or otherwise contributes to, an international crime over which the court has jurisdiction. For the fourth precondition to be instantiated, such conduct must be susceptible to being evaluated by relevant ICC-related human agents – in pre-trial proceedings, trial proceedings, and appeals-and-revision proceedings – as to whether or not (among other things) the conduct satisfies the ‘material’Footnote 68 and ‘mental’Footnote 69 elements of one or more crimes and whether the conduct was undertaken through a recognized mode of responsibility.Footnote 70

Precondition #5: Humans Facilitate the Incurrence of Individual Criminal Responsibility

The fifth precondition is that ICC-related human agents arguably need to facilitate incurrence of individual criminal responsibility for an international crime where such responsibility is established. As part of the process to facilitate the incurrence of such responsibility, relevant ICC-related humans need to (among other things) facilitate the imposition of penalties on the responsible natural person(s).Footnote 71 For the fifth precondition to be instantiated, the conduct underlying the establishment of individual criminal responsibility needs to be susceptible to being subject to the imposition of penalties on the responsible natural person(s).

Summary

Suppose that the various premises underlying the above-elaborated preconditions are valid. In that case, the absence of one or more of the following conditions – in relation to an employment in an armed conflict of an AI-related tool or technique that constitutes, or otherwise contributes to, an international crime – may be preclusive of respect for the ICC Statute:

  1. An exercise and implementation of international law by one or more relevant ICC-related human agents concerning the conduct;

  2. Discernment by one or more relevant ICC-related human agents of the conduct that forms an employment in an armed conflict of an AI-related tool or technique ascribable to a natural person;

  3. A determination by one or more relevant ICC-related human agents whether or not the court may exercise jurisdiction in respect of an employment in an armed conflict of an AI-related tool or technique ascribable to a natural person;

  4. An adjudication by relevant ICC-related human agents whether or not an employment in an armed conflict of an AI-related tool or technique ascribable to a natural person subject to the jurisdiction of the court constitutes, or otherwise contributes to, an international crime over which the court has jurisdiction; or

  5. Facilitation by one or more relevant ICC-related human agents of an incurrence of individual criminal responsibility – including the imposition of applicable penalties on the responsible natural person(s) – where such responsibility is established.

V. Conclusion

An employment in an armed conflict of an AI-related tool or technique that is attributable to a State, an IO, or a natural person (or some combination thereof) is governed at least in part by international law. It is well established that international law sets out standard assumptions of responsibility for the conduct of States and IOs. It is also well established that it is on the basis of those assumptions that specific legal provisions exist and are applied in respect of those entities. International law also arguably sets out particular standard assumptions of criminal responsibility for the conduct of natural persons. It may be contended that it is on the basis of those assumptions that the ICC Statute exists and is applied.

Concerning the use of AI in armed conflicts, at least three categories of human agents may be involved in seeking to ensure that States, IOs, or natural persons respect applicable law. Those categories are the human agents acting on behalf of the State or the IO engaging in relevant conduct; human agents not involved in such conduct but who nevertheless (seek to) ensure respect for international law in relation to that conduct; and human agents who (seek to) ensure respect for the ICC Statute. Each of those human agents may seek to respect or ensure respect for international law in connection with a legal entity to which they are attached or through which they otherwise act.

‘Responsible AI’ is not a term of art in international law, at least not yet. It may be argued that the preconditions arguably necessary to respect international law – principally in the sense of applying and observing international law and facilitating incurrence of responsibility for violations – ought to be taken into account in formulating notions of ‘responsible AI’ pertaining to relevant conduct connected with armed conflict. Regarding those preconditions, it may be argued that, under existing law, humans are the (at least primary) legal agents for the exercise and implementation of international law applicable to an armed conflict. It may also be submitted that, under existing law, an employment in an armed conflict of an AI-related tool or technique needs to be susceptible to being (among other things) administered, discerned, attributed, understood, and assessed by one or more human agents.Footnote 72

Whether – and, if so, the extent to which – international actors will commit in practice to instantiating the preconditions arguably necessary for respecting international law pertaining to an employment in an armed conflict of an AI-related tool or technique will depend on factors that I have not expressly addressed in this chapter but that warrant extensive consideration.

Footnotes

26 Artificial Intelligence, Law, and National Security

1 As succinctly put in the project proposal for the 1956 Dartmouth Conference; J McCarthy and others, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955’ (2006) 27 AI Magazine 12.

2 NJ Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (2010) (hereafter Nilsson, The Quest for Artificial Intelligence).

3 The literature is extremely copious; a good point of departure is H Bull, The Anarchical Society: A Study of Order in World Politics (1977); KA Oye, ‘Explaining Cooperation under Anarchy: Hypotheses and Strategies’ (1985) 38 World Politics 226. Professor Oye was the convener of the talk by Judge James Baker at MIT on 6 March 2018 that initially got me interested in AI; my intellectual debt to his work is gratefully acknowledged. See JE Baker, ‘Artificial Intelligence and National Security Law: A Dangerous Nonchalance’ (2018) 18-01 MIT Starr Forum Report (hereafter Baker, ‘Artificial Intelligence and National Security Law’).

4 M Karlin, ‘The Implications of Artificial Intelligence for National Security Strategy’ in A Blueprint for the Future of AI (Brookings, 1 November 2018) www.brookings.edu/series/a-blueprint-for-the-future-of-ai/; A Polyakova, ‘Weapons of the Weak! Russia and AI-Driven Asymmetric Warfare’ in A Blueprint for the Future of AI (Brookings, 15 November 2018) www.brookings.edu/series/a-blueprint-for-the-future-of-ai/; M O’Hanlon, ‘The Role of AI in Future Warfare’ in A Blueprint for the Future of AI (Brookings, 29 November 2018) www.brookings.edu/series/a-blueprint-for-the-future-of-ai/.

5 Much current attention is given to China’s single-minded pursuit of attaining technological competitiveness by 2025 and leadership by 2035, including in the field of AI. The State Council published in July 2017 a ‘New Generation Artificial Intelligence Development Plan’ that built on the May 2015 ‘Made in China 2025’ plan, which had already listed ‘new information technology’ as the first of ten strategic fields. The two plans are accessible at https://flia.org/notice-state-council-issuing-new-generation-artificial-intelligence-development-plan/ and http://english.www.gov.cn/2016special/madeinchina2025/. For a discussion see inter alia ‘AI in China’ (OECD, 21 February 2020) https://oecd.ai/dashboards/countries/China; ‘AI Policy China’ (Future of Life Institute, February 2020) https://futureoflife.org/ai-policy-china/; P Mozur and SL Myers, ‘Xi’s Gambit: China Plans for a World without American Technology’ New York Times (11 March 2021) www.nytimes.com/2021/03/10/business/china-us-tech-rivalry.html (hereafter Mozur and Myers, ‘Xi’s Gambit’); X Yu and J Meng, ‘China Aims to Outspend the World in Artificial Intelligence, and Xi Jinping Just Green Lit the Plan’ South China Morning Post (18 October 2017) www.scmp.com/business/china-business/article/2115935/chinas-xi-jinping-highlights-ai-big-data-and-shared-economy.

6 Perhaps most enduringly in the 1983 movie ‘WarGames’, where a recently commissioned intelligent central computer is hacked into by a teenager, who inadvertently almost causes nuclear Armageddon. This is only averted when the computer learns, after playing Tic-Tac-Toe with the teenager, that nuclear war cannot have a winner, causing it to rescind the launch command and to comment: ‘A strange game. The only winning move is not to play.’ There are obvious allusions to the doomsday machine scenario discussed further below. Interestingly, simultaneous to the film but unbeknownst to most until much later, the automated early warning system of the Soviet Union on 26 September 1983, at a time of extreme tension between the two countries, falsely indicated an American nuclear attack, almost triggering a catastrophic retaliatory strike. This was stopped by Lieutenant Colonel Stanislav Petrov, who disobeyed orders because he intuited that it was a false alarm; M Tegmark, ‘A Posthumous Honor for the Man Who Saved the World’ (Bulletin of the Atomic Scientists, 26 September 2018) https://thebulletin.org/2018/09/a-posthumous-honor-for-the-man-who-saved-the-world/.

7 A Chayes, ‘Cyber Attacks and Cyber Warfare: Framing the Issues’ in A Chayes (ed), Borderless Wars: Civil Military Disorder and Legal Uncertainty (2015) (hereafter Chayes, ‘Cyber Attacks and Cyber Warfare’); L DeNardis, ‘The Emerging Field of Internet Governance’ (2010) Yale Information Society Project Working Paper Series https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1678343 (hereafter DeNardis, ‘The Emerging Field of Internet Governance’).

8 Nilsson, The Quest for Artificial Intelligence (Footnote n 2) xiii.

9 Human denial of both intelligence and consciousness in other creatures seems ultimately to be a fairly straightforward case of cognitive dissonance: ‘To me, consciousness is the thing that feels like something,’ said Carl Safina, an ecologist. ‘We’re learning that a lot of animals – dogs, elephants, other primates – have it. … I think it’s because it’s easier to hurt them if you think of them as dumb brutes. Not long ago, I was on a boat with some nice people who spear swordfish for a living. They sneak up to swordfish sleeping near the surface of the water and harpoon them, and then the fish just go crazy and kind of explode. When I asked, ‘Do the fish feel pain?’ the answer was, ‘They don’t feel anything.’ Now, it’s been proven experimentally that fish feel pain. I think they feel, at least panic. They clearly are not having a good time when they are hooked. But if you think of yourself as a good person, you don’t want to believe you’re causing suffering. It’s easier to believe that there’s no pain.’ C Dreifus, ‘Carl Safina Is Certain Your Dog Loves You’ New York Times (21 October 2019) www.nytimes.com/2019/10/21/science/carl-safina-animal-cognition.html.

10 ‘Artificial Intelligence and Life in 2030 – One Hundred Year Study on Artificial Intelligence, Report of the 2015 Study Panel’ (Stanford University, September 2016) 4, 12 https://ai100.stanford.edu/2016-report.

11 T Zarsky, ‘“Mine Your Own Business!”: Making the Case for the Implications of the Data Mining of Personal Information in the Forum of Public Opinion’ (2003) 5 Yale J L & Tech 1, 4 et seq (hereafter Zarsky, ‘Mine Your Own Business!’).

12 On this point, see generally E Derman, Models Behaving Badly, Why Confusing Illusion with Reality Can Lead to Disaster, on Wall Street and in Life (2011); I Hacking, Representing and Intervening, Introductory Topics in the Philosophy of Natural Science (1983).

13 JJ Ichikawa and M Steup, ‘The Analysis of Knowledge’ in EN Zalta (ed), The Stanford Encyclopedia of Philosophy (Summer 2021 ed.) https://plato.stanford.edu/entries/knowledge-analysis/.

14 JE Baker, The Centaur’s Dilemma – National Security Law for the Coming AI Revolution (2021) (hereafter Baker, The Centaur’s Dilemma).

15 See generally, BM Leiner and others, Brief History of the Internet (1997) (hereafter Leiner and others, Brief History of the Internet); M Waldrop, ‘DARPA and the Internet Revolution’ (DARPA, 2015) www.darpa.mil/about-us/timeline/modern-internet (hereafter Waldrop, ‘DARPA and the Internet Revolution’).

16 DM West and JR Allen, ‘How Artificial Intelligence Is Transforming the World’ in A Blueprint for the Future of AI (Brookings, 24 April 2018) www.brookings.edu/series/a-blueprint-for-the-future-of-ai/.

18 The literature on the administrative state is too copious to list; disparate discussions that helped guide my own thinking on this matter include S Cassese, ‘Administrative Law without the State? The Challenge of Global Regulation’ (2005) 37 NYU Journal of International Law & Politics 663; PD Feaver, ‘The Civil-Military Problematique: Huntington, Janowitz, and the Question of Civilian Control’ (1996) 23 Armed Forces & Society 149; SJ Kaufman, ‘The Fragmentation and Consolidation of International Systems’ (1997) 51 IO 173; A Chayes, ‘An Inquiry into the Workings of Arms Control Agreements’ (1972) 85 Harvard Law Review 905; AH Chayes and A Chayes, ‘From Law Enforcement to Dispute Settlement: A New Approach to Arms Control Verification and Compliance’ (1990) 14 IS 147.

19 Good overviews can be found in GD Brown, ‘Commentary on the Law of Cyber Operations and the DoD Law of War Manual’ in MA Newton (ed), The United States Department of Defense Law of War Manual (2019); WH Boothby, ‘Cyber Capabilities’ in WH Boothby (ed), New Technologies and the Law in War and Peace (2018) (hereafter Boothby, ‘Cyber Capabilities’); MN Schmitt (ed), Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (2017) 401 et seq.; JC Woltag, Cyber Warfare: Military Cross-Border Computer Network Operations under International Law (2014) (hereafter Woltag, Cyber Warfare); C Droege, ‘Get Off My Cloud: Cyber Warfare, International Humanitarian Law, and the Protection of Civilians’ (2012) 94 Int’l Rev of the Red Cross 533.

20 With respect to cyber warfare, see also Chayes, ‘Cyber Attacks and Cyber Warfare’ (Footnote n 7); M Finnemore and DB Hollis, ‘Constructing Norms for Global Cybersecurity’ (2016) 110 AJIL 425. Regarding the specific impact of AI, see Baker, The Centaur’s Dilemma (Footnote n 14).

21 See further J Branch, ‘What’s in a Name? Metaphors and Cybersecurity’ (2021) 75 IO 39 (hereafter Branch, ‘Metaphors and Cybersecurity’).

22 See generally GD Solis, The Law of Armed Conflict. International Humanitarian Law in War (2016) 673–709 (hereafter Solis, The Law of Armed Conflict).

23 ML Benedikt, Cyberspace: First Steps (1991) 1.

24 See inter alia U Kohl, The Net and the Nation State: Multidisciplinary Perspectives on Internet Governance (2017) (hereafter Kohl, The Net and the Nation State); J Nocetti, ‘Contest and Conquest: Russia and Global Internet Governance’ (2015) 91 Int’l Aff 111; DeNardis, ‘The Emerging Field of Internet Governance’ (Footnote n 7); ML Mueller, Networks and States: The Global Politics of Internet Governance (2010).

25 Department of Defence, ‘Strategy for Operating in Cyberspace’ (July 2011) https://csrc.nist.gov/CSRC/media/Projects/ISPAB/documents/DOD-Strategy-for-Operating-in-Cyberspace.pdf 5, referring to the 2010 Quadrennial Defence Review. Outer space has been an area of great power competition since the Sputnik satellite, but it has received added impetus in recent years with the creation of dedicated Space Commands in the US and other countries, see WJ Broad, ‘How Space Became the Next ‘Great Power’ Contest between the US and China’ New York Times (24 January 2021) www.nytimes.com/2021/01/24/us/politics/trump-biden-pentagon-space-missiles-satellite.html.

26 Department of Defence, ‘Strategy for Operating in Cyberspace’ (July 2011) https://csrc.nist.gov/CSRC/media/Projects/ISPAB/documents/DOD-Strategy-for-Operating-in-Cyberspace.pdf 1, referring to the 2010 National Security Strategy. Similar language can be found in previous and subsequent national security strategies, both American and others, including the current 2021 interim one issued by the Biden Administration.

27 E Afsah, ‘Country Report Denmark’ in M Kilching and C Sabine (eds), Economic and Industrial Espionage in Germany and Europe (2016).

28 The need to ensure reliable communication after sustaining a devastating first strike was a key ingredient of credible nuclear deterrence. The Soviet ‘Dead Hand’ system (Mertvaya Ruka, officially: Systema Perimetr) was an alternative, ‘fail-deadly’ method of solving that practical problem: meant as a backup to the Kazbek communication system, Perimetr was to fully automatically trigger nuclear retaliation if it detected an attack, even if command structures and human personnel had been destroyed. US Defence Intelligence Agency, ‘Russia Military Power: Building a Military to Support Great Power Aspirations’ (2017) https://www.hsdl.org/?view&did=801968 26–28; N Thompson, ‘Inside the Apocalyptic Soviet Doomsday Machine’ Wired (21 September 2009) www.wired.com/2009/09/mf-deadhand/; WJ Broad, ‘Russia Has ‘Doomsday’ Machine, US Expert Says’ New York Times (8 October 1993) www.nytimes.com/1993/10/08/world/russia-has-doomsday-machine-us-expert-says.html.

29 This means that data to be transmitted will be split into several packets, based on various criteria including size. The packets will be sent independently from each other, usually along different pathways, and re-assembled at the destination. They contain the actual data to be sent, destination and source address, and other information necessary for reliable transmission. The idea was simultaneously but independently developed at MIT in Cambridge, Massachusetts (1961–1967), RAND in Santa Monica, California (1962–1965) and the British National Physical Laboratory (NPL) in London (1964–1967). This genesis is well described by several of its key protagonists themselves in Leiner and others, Brief History of the Internet (Footnote n 15); Waldrop, ‘DARPA and the Internet Revolution’ (Footnote n 15).

30 This reliance on a conceptual, rather than physical, architecture is reflected in the definition laid down in US law: ‘The term “Internet” means collectively the myriad of computer and telecommunications facilities, including equipment and operating software, which comprise the interconnected world-wide network of networks that employ the Transmission Control Protocol/Internet Protocol, or any predecessor or successor protocols to such protocol, to communicate information of all kinds by wire or radio.’ 15 USC § 6501(6), www.law.cornell.edu/uscode/text/15/6501#6.

See also Woltag, Cyber Warfare (Footnote n 19) 9.

31 Branch, ‘Metaphors and Cybersecurity’ (Footnote n 21).

32 W Gibson, Neuromancer (1984) 69, emphasis added. Gibson makes the disparaging remarks about his term in the documentary film M Neale, ‘No Maps for these Territories’ (2000).

33 ‘Gibson’s networked artificial environment anticipated the globally internetworked technoculture (and its surveillance) in which we now find ourselves. The term has gone on to revolutionize popular culture and popular science, heralding the power and ubiquity of the information age we now regard as common as iPhones. Since its invention, ‘cyberspace’ has come to represent everything from computers and information technology to the Internet and “consensual hallucinations” as different as The Matrix, Total Information Awareness, and reality TV.’ ‘March 17, 1948: William Gibson, Father of Cyberspace’ Wired (16 March 2009) www.wired.com/2009/03/march-17-1948-william-gibson-father-of-cyberspace-2/.

34 DE Sanger, ‘China Appears to Warn India: Push Too Hard and the Lights Could Go Out’ New York Times (28 February 2021) www.nytimes.com/2021/02/28/us/politics/china-india-hacking-electricity.html.

35 US Department of Defence, ‘Strategy for Operating in Cyberspace’ (July 2011) 1, https://csrc.nist.gov/CSRC/media/Projects/ISPAB/documents/DOD-Strategy-for-Operating-in-Cyberspace.pdf.

36 Branch, ‘Metaphors and Cybersecurity’ (Footnote n 21) 41.

37 See for instance M Finnemore and DB Hollis, ‘Constructing Norms for Global Cybersecurity’ (2016) 110 AJIL 425.

38 A Chayes, ‘Implications for Civil-Military Relations in Cyber Attacks and Cyber Warfare’ in A Chayes (ed), Borderless Wars: Civil Military Disorder and Legal Uncertainty (2015).

39 Politiets Efterretningstjeneste, ‘Trusler mod Danmark: Spionage’ (2015), https://pet.dk/spionage; JUO Nielsen, ‘Erhvervshemmelighedsværnet i Norden og EU’ (2014) Erhvervsjuridisk Tidsskrift 1.

40 See further L Arimatsu, ‘The Law of State Responsibility in Relation to Border Crossings: An Ignored Legal Paradigm’ (2013) 89 Int’l L Stud 21; P Margulies, ‘Networks in Non-International Armed Conflicts: Crossing Borders and Defining “Organized Armed Groups”’ (2013) 89 Int’l L Stud 54.

41 Y Benkler, ‘Degrees of Freedom, Dimensions of Power’ (2016) Daedalus 18 (hereafter Benkler, ‘Degrees of Freedom’). Unlike in classical military spheres, it is important to note that in the cyber-domain effective repulsion and deterrence does not necessarily have to be assumed by the military, see Forsvarsministeriet, ‘Center for Cybersikkerhed’ (18 September 2020) https://www.fmn.dk/da/arbejdsomraader/cybersikkerhed/center-for-cybersikkerhed/.

42 Department of Defence, ‘Cyber Strategy 2018 – Summary’ (2018) https://media.defense.gov/2018/Sep/18/2002041658/-1/-1/1/CYBER_STRATEGY_SUMMARY_FINAL.PDF 1.

43 Kohl, The Net and the Nation State (Footnote n 24); DeNardis, ‘The Emerging Field of Internet Governance’ (Footnote n 7).

44 Boothby, ‘Cyber Capabilities’ (Footnote n 19); WH Boothby, ‘Methods and Means of Cyber Warfare’ (2013) 89 Int’l L Stud 387.

45 RN Chesney, ‘Computer Network Operations and US Domestic Law: An Overview’ (2013) 87 International Law Studies 218, 286.

46 Ibid, 287.

47 Solis, The Law of Armed Conflict (Footnote n 22) 673.

48 But note the highly representative Tallinn Manual, see W Heintschel von Heinegg, ‘Chapter 1: The Tallinn Manual and International Cyber Security Law’ (2012) 15 YBIHL 3.

49 An excellent overview is provided by Solis, The Law of Armed Conflict (Footnote n 22) 673–709.

50 MN Schmitt, ‘The Law of Cyber Warfare: Quo Vadis?’ (2014) 25 Stanford Law & Policy Review 269, 279.

51 See Baker, The Centaur’s Dilemma (Footnote n 14) 69–94.

52 J Hellerman, ‘“The Mandalorian” Finally Gives Us an Interesting Stormtrooper’ (No Film School Blog, 18 December 2020) https://nofilmschool.com/storm-troopers-dumb.

53 A Olla, ‘A Dystopian Robo-Dog Now Patrols New York City. That’s the Last Thing We Need’ The Guardian (2 March 2021) www.theguardian.com/commentisfree/2021/mar/02/nypd-police-robodog-patrols.

54 The humanoid Russian FEDOR tactical robot has already been deployed to the International Space Station, L Grush, ‘Russia’s Humanoid Robot Skybot Is on Its Way Home After a Two-Week Stay in Space’ (The Verge, 6 September 2019) www.theverge.com/2019/9/6/20852602/russia-skybot-fedor-robot-international-space-station-soyuz.

55 The video carried a note that these were not digital images but real footage of actual robots. See also E Ackerman, ‘How Boston Dynamics Taught Its Robots to Dance’ (IEEE Spectrum, 7 January 2021) https://spectrum.ieee.org/automaton/robotics/humanoids/how-boston-dynamics-taught-its-robots-to-dance; B Gilbert, ‘Watch a Rare Video of Robots Jumping and Dancing Inside One of America’s Leading Robotics Firms’ Business Insider (29 March 2021) www.businessinsider.com/video-robots-jumping-and-dancing-inside-boston-dynamics-2021-3.

56 IJ Good, ‘Speculations Concerning the First Ultraintelligent Machine’ (1966) 6 Advances in Computers 31.

57 Ibid, 31, 33, references omitted.

58 J Vincent, ‘Putin says the nation that leads in AI “will be the ruler of the world”’, The Verge (4 September 2017) https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world.

59 The comprehensive study commissioned by the European Parliament on this topic lists existential risk only as the last item of twelve ‘ethical harms and concerns’ currently tackled by national and international regulatory efforts; E Bird and others, ‘The Ethics of Artificial Intelligence: Issues and Initiatives’ (European Parliament, March 2020) www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf 42–43 (hereafter Bird and others, ‘The Ethics of Artificial Intelligence’).

60 See also the section on ‘Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)’ in M Bourgon and R Mallah, ‘Ethically Aligned Design – A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, (1st ed.)’ (IEEE, 2019) https://ethicsinaction.ieee.org (hereafter Bourgon and Mallah, ‘Ethically Aligned Design’).

61 S Baum, ‘Our Final Invention: Is AI the Defining Issue for Humanity?’ Scientific American (11 October 2013) https://blogs.scientificamerican.com/guest-blog/our-final-invention-is-ai-the-defining-issue-for-humanity/.

62 F Lewsey, ‘Humanity’s Last Invention and Our Uncertain Future’ (University of Cambridge, 25 November 2012) www.cam.ac.uk/research/news/humanitys-last-invention-and-our-uncertain-future.

63 To some extent, this debate is already moot because automated strategic nuclear defence systems have existed – and likely remain operational – in both Russia and the United States; see Footnote n 28.

64 The evolving scientific, industry, and governmental consensus about the principles necessary to ensure responsible and safe AI has been outlined inter alia in Bourgon and Mallah, ‘Ethically Aligned Design’ (Footnote n 60); and in ‘Asilomar Principles on Intelligent Machines and Smart Policies – Research Issues, Ethics and Values, Longer-Term Values’ (Future of Life Institute, 2017) futureoflife.org/ai-principles.

65 For an overview of national efforts, see Bird and others, ‘The Ethics of Artificial Intelligence’ (Footnote n 59).

66 For an approving summary of these arguments, see J Barrat, Our Final Invention (2013).

67 R Calo, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51 University of California Davis Law Review 399, 435 (hereafter Calo, ‘Artificial Intelligence Policy’).

68 P Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (2015) 286.

69 Calo, ‘Artificial Intelligence Policy’ (Footnote n 67) 435.

70 Calo, ‘Artificial Intelligence Policy’ (Footnote n 67) 407. Calo argues that the respective legal assessment is likely to be different; see also HY Liu, ‘Refining Responsibility: Differentiating Two Types of Responsibility Issues Raised by Autonomous Weapons Systems’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016) (hereafter Liu, ‘Refining Responsibility’).

71 See further HY Liu, ‘Categorization and Legality of Autonomous and Remote Weapons Systems’ (2012) 94 Int’l Rev of the Red Cross 627; G Sartor and O Andrea, ‘The Autonomy of Technological Systems and Responsibilities for their Use’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016).

72 See further N Sharkey, ‘Staying in the Loop: Human Supervisory Control of Weapons’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016) (hereafter Sharkey, ‘Staying in the Loop’); GS Corn, ‘Autonomous Weapons Systems: Managing the Inevitability of ‘Taking the Man Out of the Loop’’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016) (hereafter Corn, ‘Autonomous Weapons Systems’); D Saxon, ‘A Human Touch: Autonomous Weapons, DoD Directive 3000.09 and the Interpretation of ‘Appropriate Levels of Human Judgment over the Use of Force’’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016) (hereafter Saxon, ‘A Human Touch’).

73 G Galdorisi, ‘Keeping Humans in the Loop’ (2015) 141/2/1,344 US Naval Institute Proceedings 36, 38.

74 Department of Defense, Defense Science Board, ‘Task Force Report: The Role of Autonomy in DoD Systems’ (US Department of Defense, July 2012) 4 https://fas.org/irp/agency/dod/dsb/autonomy.pdf.

75 G Galdorisi, ‘Keeping Humans in the Loop’ (2015) 141/2/1,344 US Naval Institute Proceedings 36; Sharkey, ‘Staying in the Loop’ (Footnote n 72).

76 Directive 3000.09: Autonomy in Weapons Systems, Unmanned Systems Integrated Roadmap, FY 2013–2038, US Department of Defense, Washington D.C. (21 November 2012); Solis, The Law of Armed Conflict (Footnote n 22) 537, my emphasis.

77 Solis, The Law of Armed Conflict (Footnote n 22) 537.

78 P Asaro, ‘On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making’ (2012) 94 Int’l Rev of the Red Cross 687, 691 (hereafter Asaro, ‘On Banning Autonomous Weapon Systems’).

79 MN Schmitt and JS Thurnher, ‘‘Out of the Loop’: Autonomous Weapons Systems and the Law of Armed Conflict’ (2013) 4 Harvard National Security Journal 231, 235 (hereafter Schmitt and Thurnher, ‘Out of the Loop’).

80 Sharkey, ‘Staying in the Loop’ (Footnote n 72); Liu, ‘Refining Responsibility’ (Footnote n 70).

81 Calo, ‘Artificial Intelligence Policy’ (Footnote n 67) 418; R Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 California Law Review 513, 538–545 (hereafter Calo, ‘Robotics and the Lessons of Cyberlaw’).

82 Schmitt and Thurnher, ‘Out of the Loop’ (Footnote n 79) 233.

83 Solis, The Law of Armed Conflict (Footnote n 22) 268–327, 539–541, 551–552.

84 M Milanovic, ‘The Lost Origins of Lex Specialis: Rethinking the Relationship between Human Rights and International Humanitarian Law’ in JD Ohlin (ed), Theoretical Boundaries of Armed Conflict and Human Rights (2015); G Pinzauti, ‘Good Time for a Change: Recognizing Individuals’ Rights under the Rules of International Humanitarian Law on the Conduct of Hostilities’ in A Cassese (ed), Realizing Utopia: The Future of International Law (2012); T Meron, ‘On the Inadequate Reach of Humanitarian and Human Rights Law and the Need for a New Instrument’ (1983) 77 AJIL 589–606; T Meron, Human Rights and Humanitarian Norms as Customary Law (1991).

85 Asaro, ‘On Banning Autonomous Weapon Systems’ (Footnote n 78) 687; E Lieblich and B Eyal, ‘The Obligation to Exercise Discretion in Warfare: Why Autonomous Weapons Systems Are Unlawful’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016).

86 The American military, it is remembered, formally maintains that it is bound by such a duty, as a matter of internal policy. Whether this amounts to a legal obligation under domestic law remains a matter of some dispute; see further Department of Defense, Directive 3000.09: Autonomy in Weapons Systems (Footnote n 76); Saxon, ‘A Human Touch’ (Footnote n 72).

87 Solis, The Law of Armed Conflict (Footnote n 22) 543.

88 International Committee of the Red Cross (ICRC), A Guide to the Legal Review of New Weapons, Means and Methods of Warfare (2006) 23.

89 See generally Calo, ‘Robotics and the Lessons of Cyberlaw’ (Footnote n 81).

90 See further M Sassòli, ‘Autonomous Weapons and International Law: Advantages, Open Technical Questions and Legal Issues to be Clarified’ (2014) 90 International Law Studies 308, 323; likewise, Schmitt and Thurnher, ‘Out of the Loop’ (Footnote n 79) 277.

91 Note for instance the Israeli electronic disabling of Syria’s expensive, Russian-made air defence system prior to their bombing of a half-constructed nuclear power reactor in 2007, discussed in Solis, The Law of Armed Conflict (Footnote n 22) 677.

92 This has been the main excuse offered by the Captain of the US warship Vincennes for shooting down an Iranian civilian airliner in 1988. With AI, this problem is likely to become much more acute. For a discussion of the former, see Footnote ibid, 563–566. For the latter, see Baker, ‘Artificial Intelligence and National Security Law’ (Footnote n 3).

93 Note for instance the successful use by Houthi militias in Yemen and by Hamas in Gaza of very cheap commercial drones as deliberate targets for very expensive Israeli, Emirati, and Saudi Patriot air defence systems; see A Kurth Cronin, Power to the People: How Drones, Data and Dynamite Empower and Imperil Our Security (2019) 213.

94 These are sometimes called ‘suicide drones.’ For an excellent technical overview, see D Gettinger and HM Arthur, ‘Loitering Munitions’ (CSD Bard, 2017) https://dronecenter.bard.edu/files/2017/02/CSD-Loitering-Munitions.pdf.

95 T McMullan, ‘How Swarming Drones Will Change Warfare’ (BBC News, 16 March 2019) www.bbc.com/news/technology-47555588 (hereafter McMullan, ‘How Swarming Drones Will Change Warfare’); SM Williams, ‘Swarm Weapons: Demonstrating a Swarm Intelligent Algorithm for Parallel Attack’ (2018) https://apps.dtic.mil/sti/pdfs/AD1071535.pdf (hereafter Williams, ‘Swarm Weapons’).

96 R Martinage, ‘Toward a New Offset Strategy – Exploiting US Long-Term Advantages to Restore US Global Power Projection Capability’ (CSBA, 2014) https://csbaonline.org/uploads/documents/Offset-Strategy-Web.pdf 23–28 (hereafter Martinage, ‘Toward a New Offset Strategy’).

97 S Shaikh and R Wes, ‘The Air and Missile War in Nagorno-Karabakh: Lessons for the Future of Strike and Defense’ (CSIS, 8 December 2020) www.csis.org/analysis/air-and-missile-war-nagorno-karabakh-lessons-future-strike-and-defense (hereafter Shaikh and Wes, ‘Lessons for the Future of Strike and Defense’).

98 McMullan, ‘How Swarming Drones Will Change Warfare’ (Footnote n 95); Williams, ‘Swarm Weapons’ (Footnote n 95).

99 For an overview, see PL Bergen and D Rothenberg (eds), Drone Wars: Transforming Conflict, Law, and Policy (2015) (hereafter Bergen and Rothenberg, Drone Wars).

100 K Kakaes, ‘From Orville Wright to September 11: What the History of Drone Technology Says about Its Future’ in Bergen and Rothenberg, Drone Wars: Transforming Conflict, Law, and Policy (2015) (hereafter Kakaes, ‘From Orville Wright to September 11’).

101 For a good overview, see Solis, The Law of Armed Conflict (Footnote n 22) 545–554. The claim that the existing law of armed conflict is inadequate for the actual conflict at hand is probably as old as the truism that this body of law is ‘always one war behind.’ While there is some truth in the latter observation, the first is usually little more than exculpatory. Both discussions are as old as humanitarian law itself and it is unlikely that the rise of either drone technology or AI will do much to affect its basic parameters, namely the basic adequacy of existing legal principles. For the debate as such, see inter alia T Meron, ‘Customary Humanitarian Law Today: From the Academy to the Courtroom’ in A Clapham and P Gaeta (eds), The Oxford Handbook of International Law in Armed Conflict (2014); MN Schmitt and S Watts, ‘State Opinio Juris and International Humanitarian Law Pluralism’ (2015) 91 International Law Studies 171–215; G Best, Humanity in Warfare: The Modern History of the International Law of Armed Conflicts (1980); T Meron, ‘Humanization of Humanitarian Law’ (2000) 94 AJIL 239–278.

102 See generally Y Dinstein, ‘International Humanitarian Law Research Initiative: IHL in Air and Missile Warfare’ (2006) www.ihlresearch.org/amw/; Y Dinstein, ‘The Laws of Air, Missile and Nuclear Warfare’ (1997) 27 Isr Y B Hum Rts 116.

103 O Manea and RO Work, ‘The Role of Offset Strategies in Restoring Conventional Deterrence’ (2018) Small Wars Journal https://smallwarsjournal.com/jrnl/art/role-offset-strategies-restoring-conventional-deterrence (hereafter Manea and Work, ‘The Role of Offset Strategies’); RR Tomes, ‘The Cold War Offset Strategy: Assault Breaker and the Beginning of the RSTA Revolution’ (War on the Rocks, 20 November 2014) https://warontherocks.com/2014/11/the-cold-war-offset-strategy-assault-breaker-and-the-beginning-of-the-rsta-revolution/ (hereafter Tomes, ‘The Cold War Offset Strategy’).

104 ‘Since we cannot keep the United States an armed camp or a garrison state, we must make plans to use the atom bomb if we become involved in a war.’ President Eisenhower in 1953, quoted in Martinage, ‘Toward a New Offset Strategy’ (Footnote n 96) 8. I have provided a brief history of the dynamic development of US nuclear strategy in E Afsah, ‘Creed, Cabal, or Conspiracy: Origins of the Current Neo-Conservative Revolution in US Strategic Thinking’ (2003) GLJ 902, 907–910; a fuller, accessible account can be found in DM Lawson and DB Kunsman, ‘A Primer on US Strategic Nuclear Policy’ (OSTI, 1 January 2001) www.osti.gov/servlets/purl/776355/ (hereafter Lawson and Kunsman, ‘US Strategic Nuclear Policy’).

105 BD Watts, ‘The Evolution of Precision Strike’ (CSBA, 2013) https://csbaonline.org/uploads/documents/Evolution-of-Precision-Strike-final-v15.pdf 1–2, references omitted.

106 Martinage, ‘Toward a New Offset Strategy’ (Footnote n 96) 17–20, 72.

107 Martinage, ‘Toward a New Offset Strategy’ (Footnote n 96) 72.

108 US Defence Secretary Chuck Hagel outlined these threats in a programmatic speech on 3 September 2014, which explicitly drew an analogy to Eisenhower’s ‘first offset’ strategy and committed the country to invest in asymmetric, high-technology counter-measures, including AI, see inter alia Martinage, ‘Toward a New Offset Strategy’ (Footnote n 96) i.

109 Their history is well summarised in Kakaes, ‘From Orville Wright to September 11’ (Footnote n 100).

110 See generally Bergen and Rothenberg, Drone Wars (Footnote n 99).

111 Kakaes, ‘From Orville Wright to September 11’ (Footnote n 100) 375.

112 J Abizaid and R Brooks, ‘Recommendations and Report of the Task Force on US Drone Policy’ (Stimson, April 2015) www.stimson.org/wp-content/files/file-attachments/recommendations_and_report_of_the_task_force_on_us_drone_policy_second_edition.pdf 23.

113 Since 2009, the US Air Force has trained more drone than conventional pilots and the US Navy has announced in 2015 that the current F-35 will be the last manned strike fighter aircraft they will buy and operate, discussed in Solis, The Law of Armed Conflict (Footnote n 22) 547.

114 The Turkish Bayraktar TB2 drone relies heavily on commercial civilian components, such as generic Garmin navigation systems. The UK defence minister remarked with respect to Turkey’s new role as a supplier of weaponry, training, and intelligence that ‘other countries are now leading the way’ and that, therefore, the UK would itself begin to invest in such new, much cheaper drone technology; D Sabbagh, ‘UK Wants New Drones in Wake of Azerbaijan Military Success’ The Guardian (29 December 2020) www.theguardian.com/world/2020/dec/29/uk-defence-secretary-hails-azerbaijans-use-of-drones-in-conflict (hereafter Sabbagh, ‘UK Wants New Drones’).

115 J Detsch, ‘The US Army Goes to School on Nagorno-Karabakh Conflict – Off-the-Shelf Air Power Changes the Battlefield of the Future’ Foreign Policy (30 March 2021) https://foreignpolicy.com/2021/03/30/army-pentagon-nagorno-karabakh-drones/.

117 Shaikh and Wes, ‘Lessons for the Future of Strike and Defense’ (Footnote n 97).

118 Sabbagh, ‘UK Wants New Drones’ (Footnote n 114).

119 There is good reason to doubt that this perceived inferiority actually existed, see Martinage, ‘Toward a New Offset Strategy’ (Footnote n 96) 11 et seq; Lawson and Kunsman, ‘US Strategic Nuclear Policy’ (Footnote n 104) 51–64.

120 Manea and Work, ‘The Role of Offset Strategies’ (Footnote n 103).

121 R Grant, ‘The Second Offset’ Air Force Magazine (24 June 2016) www.airforcemag.com/article/the-second-offset/; Tomes, ‘The Cold War Offset Strategy’ (Footnote n 103).

122 RR Tomes, US Defence Strategy from Vietnam to Operation Iraqi Freedom: Military Innovation and the New American Way of War, 1973–2003 (2006).

123 Solis, The Law of Armed Conflict (Footnote n 22) 551.

124 Footnote Ibid, 551–553.

125 B Wittes, ‘Drones and Democracy: A Response to Firmin DeBrabander’ (Lawfare Blog, 15 September 2014) www.lawfareblog.com/drones-and-democracy-response-firmin-debrabander.

126 DE Sanger, Confront and Conceal: Obama’s Secret Wars and Surprising Use of American Power (2012) 257 (hereafter Sanger, Confront and Conceal), quoted in Solis, The Law of Armed Conflict (Footnote n 22) 554.

127 From the copious literature, see inter alia Y Dinstein, ‘Concluding Observations: The Influence of the Conflict in Iraq on International Law’ in RA Pedrozo (ed), The War in Iraq: A Legal Analysis (2010); M Sassòli, ‘Ius ad Bellum and Ius in Bello: The Separation between the Legality of the Use of Force and Humanitarian Rules to be Respected in Warfare: Crucial or Outdated’ in MN Schmitt and J Pejic (eds), International Law and Armed Conflict: Exploring the Faultlines (2007).

128 See generally C Heyns, ‘Autonomous Weapons Systems: Living a Dignified Life and Dying a Dignified Death’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016); GS Corn, ‘Autonomous Weapons Systems’ (Footnote n 72).

129 Solis, The Law of Armed Conflict (Footnote n 22) 553.

130 S Hoffmann, ‘The Politics and Ethics of Military Intervention’ (1995) 37 Survival 29.

131 Solis, The Law of Armed Conflict (Footnote n 22) 550.

132 Sanger, Confront and Conceal (Footnote n 126).

133 A Barak, ‘International Humanitarian Law and the Israeli Supreme Court’ (2014) Isr L Rev 181; N Melzer, Targeted Killing in International Law (2008); J Ulrich, ‘The Gloves Were Never On: Defining the President’s Authority to Order Targeted Killing in the War against Terrorism’ (2005) Va J Int’l L 1029; D Kretzmer, ‘Targeted Killings of Suspected Terrorists: Extra-Judicial Execution or Legitimate Means of Defence?’ (2005) 16 EJIL 171.

134 P Kalmanovitz, ‘Judgment, Liability and the Risks of Riskless Warfare’ in C Kreß and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (2016); Saxon, ‘A Human Touch’ (Footnote n 72).

135 See also G Allen and T Chan, ‘Artificial Intelligence and National Security’ (Belfer Center, July 2017) www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf 27–35 (hereafter Allen and Chan, ‘Artificial Intelligence and National Security’).

136 Solis, The Law of Armed Conflict (Footnote n 22) 550.

137 See further S Smagh, ‘Intelligence, Surveillance, and Reconnaissance Design for Great Power Competition’ (Congressional Research Service, 4 June 2020) https://crsreports.congress.gov/product/pdf/R/R46389.

138 The human factor is not only expensive and rare, it is also susceptible to bias, emotional attachment, and similar factors, which limit systemic reliability as a whole. The enormous human cost in both effort and emotional distortion in classical surveillance has been described with great artistic sensibility in the film The Lives of Others about the East German surveillance system. The film’s great impact and merit lay in its humanisation of those charged with actually listening to the data feed; C Dueck, ‘The Humanization of the Stasi in ‘Das Leben der Anderen’’ (2008) German Studies Review 599; S Schmeidl, ‘The Lives of Others: Living Under East Germany’s “Big Brother” or the Quest for Good Men (Das Leben der Anderen) (review)’ (2009) HRQ 557.

139 ‘The Intelligence Agencies of the United States each day collect more raw intelligence data than their entire workforce could effectively analyze in their combined lifetimes.’ Allen and Chan, ‘Artificial Intelligence and National Security’ (Footnote n 135) 27, referring to P Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (2015) 19.

140 This early realisation was made by Joseph Weizenbaum, the creator of ELIZA, one of the earliest natural language processing softwares. It ran on ordinary personal computers and, despite its simplicity, yielded important insights about computers themselves as social objects. The insight about surveillance was expressed in J Weizenbaum, Computer Power and Human Reason: From Calculation to Judgment (1976) 272.

141 R Calo, ‘Peeping HALs: Making Sense of Artificial Intelligence and Privacy’ (2010) European Journal of Legal Studies 168, 171–174.

142 HA Simon, Designing Organizations for an Information-Rich World (1971) 40–41.

143 T Davenport and J Beck, The Attention Economy: Understanding the New Currency of Business (2001) 20.

144 Zarsky, ‘Mine Your Own Business!’ (Footnote n 11) 4, 6.

145 Allen and Chan, ‘Artificial Intelligence and National Security’ (Footnote n 135) 14.

146 During my graduate training at the Kennedy School of Government’s specialisation in international security, my tutorial group consisted largely of seconded military officers, many of whom had been trained to do precisely these very difficult, very taxing, and fairly boring intelligence tasks. Especially the need to do this in difficult foreign languages was a very serious limiting factor. The promise of AI and especially machine learning in voice recognition etc. here is apparent.

147 The issue of bias in the underlying algorithms is itself a field of intense scrutiny, see inter alia OA Osoba and W Welser IV, An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence (2017).

148 The ability of disparate, seemingly innocuous information to reveal striking and strikingly-accurate predictions has been described in a seminal newspaper article about early commercial algorithmic prediction, the principles of which have direct national security implications, see C Duhigg, ‘How Companies Learn Your Secrets’ New York Times (16 February 2012) www.nytimes.com/2012/02/19/magazine/shopping-habits.html.

149 E Smith, ‘The Internet on Dead Trees’ (Tedium, 29 June 2017) https://tedium.co/2017/06/29/90s-internet-books-history/ (hereafter Smith, ‘The Internet on Dead Trees’).

150 ‘What Is Internet? Explained by Bill Gates 1995, David Letterman Show’ (17 November 2019) https://youtu.be/gipL_CEw-fk, emphasis added.

151 For the purposes of this chapter, we can ignore that he himself turned out to have misjudged how much ordinary people would see value in that Internet thing.

152 J Chapman, ‘Internet “May Be Just a Passing Fad as Millions Give Up On It”’ (5 December 2000) Daily Mail.

153 Rob Bernstein quoted in Smith, ‘The Internet on Dead Trees’ (Footnote n 149).

154 The work of the Electronic Frontier Foundation illustrates the spatial metaphor and combines all three aspects, that is, the perceived need to defend old and establish necessary new rights through joint political advocacy on the frontier between traditional physical political communities and the non-corporeal space created through electronic communication, https://www.eff.org/de.

155 S Binkley, ‘The Seers of Menlo Park: The Discourse of Heroic Consumption in the ‘Whole Earth Catalog’’ (2003) Journal of Consumer Culture 283; L Dembart, ‘“Whole Earth Catalog” Recycled as “Epilog”’ New York Times (8 November 1974) https://www.nytimes.com/1974/11/08/archives/-whole-earth-cataog-recycled-as-epilog-new-group-to-serve.html.

156 Samizdat describes the analog distribution of unauthorised, critical literature throughout the former Communist countries using mimeographs, photocopiers, often simply re-typed carbon-copies or audio-cassettes for music or poetry readings. The effect of such underground criticism on the stability and legitimacy of the Soviet system has been devastating. Islamists used similar methods during the Iranian revolution. The advent of hard-to-monitor electronic communication portended highly destabilising times for local autocrats, but these hopes did not materialise. On the former aspect, see T Glanc, Samizdat Past & Present (2019); L Aron, ‘Samizdat in the 21st Century’ (2009) Foreign Pol’y 131; on the role of audio-cassettes and radio in the Iranian revolution, see BBC Persian Service, ‘The History of the Revolution [انقلاب داستان]’ (n.d.), www.bbc.com/persian/revolution; E Abrahamian, ‘The Crowd in the Iranian Revolution’ (2009) Radical History Review 13–38; on the role of the Internet in post-Communist politics, see S Kulikova and DD Perlmutter, ‘Blogging Down the Dictator? The Kyrgyz Revolution and Samizdat Websites’ (2007) International Communication Gazette 29–50; L Tsui, ‘The Panopticon as the Antithesis of a Space of Freedom: Control and Regulation of the Internet in China’ (2003) China Information 65; on the political space created by electronic communication generally, see JM Balkin, ‘Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society’ (2004) NYU Law Review 1; O Tkacheva and others, Internet Freedom and Political Space (2013); D Joyce, ‘Internet Freedom and Human Rights’ (2015) 26 EJIL 493.

157 Benkler, ‘Degrees of Freedom’ (Footnote n 42) 18, 19.

158 PH Lewis, ‘Personal Computers: First-Time Tourists Need a Pocket Guide to Downtown Internet’ New York Times (5 April 1994) www.nytimes.com/1994/04/05/science/personal-computers-first-time-tourists-need-a-pocket-guide-to-downtown-internet.html; Lewis’ reference to Paris and New York was probably not a coincidence, given the somewhat fearsome reputation the inhabitants of these two cities have earned, because he goes on to warn: ‘Newcomers to the Internet are warned repeatedly to avoid annoying the general population with their questions.’

159 Y Benkler, R Faris, and H Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (2018) (hereafter Benkler, Faris, and Roberts, Network Propaganda).

160 B Hubbard, F Fassihi, and R Bergman, ‘Iran Rattled as Israel Repeatedly Strikes Key Targets’ New York Times (20 April 2021) www.nytimes.com/2021/04/20/world/middleeast/iran-israeli-attacks.html.

161 Allen and Chan, ‘Artificial Intelligence and National Security’ (Footnote n 135) 29–34.

162 KM Sayler and LA Harris, ‘Deep Fakes and National Security’ (26 August 2020) Congressional Research Service https://apps.dtic.mil/sti/pdfs/AD1117081.pdf; DK Citron and R Chesney, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) California Law Review 1753.

163 Forsvarsministeriet, ‘Center for Cybersikkerhed’ (18 September 2020) https://www.fmn.dk/da/arbejdsomraader/cybersikkerhed/center-for-cybersikkerhed/.

164 On such ‘information attacks,’ see generally MJ Blitz, ‘Lies, Line Drawing, and (Deep) Fake News’ (2018) 72 Okla L Rev 59; Benkler, Faris, and Roberts, Network Propaganda (Footnote n 159).

165 S Agarwal and others, ‘Protecting World Leaders against Deep Fakes’ (2019) IEEE Xplore 38.

166 For an account of the technology involved, see for instance S Agarwal and others, ‘Detecting Deep-Fake Videos from Appearance and Behavior’ (2020) IEEE International 1.

167 J Silbey and W Hartzog, ‘The Upside of Deep Fakes’ (2019) 78 Maryland Law Review 960, 960.

168 Footnote Ibid, 966.

169 R Darnton, ‘The True History of Fake News’ The New York Review (13 February 2017) www.nybooks.com/daily/2017/02/13/the-true-history-of-fake-news/.

170 The term has been suggested by General Charles Dunlap who offered the following definition: ‘the strategy of using – or misusing – law as a substitute for traditional military means to achieve a warfighting objective.’ CJ Dunlap, ‘Lawfare Today: A Perspective’ (2008) Yale J Int’l L 146. See also D Stephens, ‘The Age of Lawfare’, in RA Pedrozo and DP Wollschlaeger (eds), International Law and the Changing Character of War (2011); CJ Dunlap, ‘Lawfare Today … and Tomorrow’, in RA Pedrozo and DP Wollschlaeger (eds), International Law and the Changing Character of War (2011).

171 See inter alia Chapter 13 “What Can Men Do against Such Reckless Hate?” in Benkler, Faris, and Roberts, Network Propaganda (Footnote n 159) 351–380.

172 European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ (European Commission, 26 April 2021) https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence.

173 K Roose, ‘The Robots Are Coming for Phil in Accounting’ New York Times (6 March 2021) www.nytimes.com/2021/03/06/business/the-robots-are-coming-for-phil-in-accounting.html.

174 KN Waltz, Theory of International Politics (1979) 129.

175 P Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (2015) 13.

176 Allen and Chan, ‘Artificial Intelligence and National Security’ (Footnote n 135) 36–39.

177 Denmark and the other Scandinavian economies have a long history of seeking productivity gains in both the public and private sector as a way to keep their costly welfare systems fiscally sustainable and labour markets globally competitive. See inter alia Forsvarsministeriet, ‘National strategi for cyber- og informationssikkerhed. Øget professionalisering og mere viden’ (December 2014); C Greve and N Ejersbo, Moderniseringen af den offentlige sektor (3rd ed. 2014); J Hoff, Danmark som Informationssamfund. Muligheder og Barrierer for Politik og Demokrati (2004); PA Hall, ‘Danish Capitalism in Comparative Perspective’, in JL Campbell, JA Hall, and OK Pedersen (eds), National Identity and the Varieties of Capitalism: The Danish Experience (2006).

178 Allen and Chan, ‘Artificial Intelligence and National Security’ (Footnote n 135) 37.

179 See inter alia G Luciani, ‘Allocation v Production States: A Theoretical Framework’ in G Luciani and B Hazem (eds), The Rentier State (2015).

180 Mozur and Myers, ‘Xi’s Gambit’ (Footnote n 5); R Doshi and others, ‘China as a “Cyber Great Power” – Beijing’s Two Voices in Telecommunications’ (Brookings, April 2021) www.brookings.edu/wp-content/uploads/2021/04/FP_20210405_china_cyber_power.pdf.

181 Bird and others, ‘The Ethics of Artificial Intelligence’ (Footnote n 59).

182 Allen and Chan, ‘Artificial Intelligence and National Security’ (Footnote n 135) 35–41.

183 Footnote Ibid, 3 and 58–59.

27 Morally Repugnant Weaponry? Ethical Responses to the Prospect of Autonomous Weapons

1 N Werkhauser, ‘UN Impasse Could Mean Killer Robots Escape Regulation’ DW News (20 August 2018) www.dw.com/en/un-impasse-could-mean-killer-robots-escape-regulation/a-50103038 (hereafter Werkhauser, ‘Killer Robots’).

2 Secretary-General, Machines Capable of Taking Lives without Human Involvement are Unacceptable, Secretary-General Tells Experts on Autonomous Weapons Systems (United Nations Press Briefing, 25 March 2019), www.un.org/press/en/2019/sgsm19512.doc.htm.

3 To avoid any misunderstanding at the outset, autonomy, in the debate on AWS, is not understood in the same way as in moral philosophy. Autonomy, in a moral sense, means to act for one’s own reasons. This is clearly not the case in the context of AWS. These systems, as I point out shortly, require programming by a human individual. In quasi-Kantian parlance, then, AWS are heteronomous, rather than autonomous, in that they do not act for their own reasons. As I shall explain later, in the context of the debate on AWS, autonomy essentially describes a machine’s capacity, once it has been programmed, to carry out tasks independently of, and without further guidance from, a human individual. This is, of course, not sufficient for moral autonomy in a meaningful sense. In the chapter, I use the term autonomy according to its technological meaning, rather than its moral one.

4 The term Yuck factor describes a strong emotional reaction of revulsion and disgust towards certain activities, things, or states of affairs. The question is whether such visceral emotional responses are a reliable guide to ethics. Some activities or things – for example, in vitro meat or a human ear grown on a mouse for transplantation – might seem disgusting to some people, and sometimes this can indeed have normative significance. That being said, the feeling of disgust does not always explain why something is ethically undesirable. One problem is that our emotional responses are often shaped by social, economic, and political factors that can cloud our ethical judgement. Especially in the context of emerging technologies, the danger is that the Yuck factor might prevent the adoption of technologies that might be genuinely beneficial.

5 M Walzer, Just and Unjust Wars: A Moral Argument with Historical Illustrations (5th ed. 2015) (hereafter Walzer, Just and Unjust Wars).

7 See J McMahan, Killing in War (2009).

8 J Forge, Designed to Kill: The Case against Weapons Research (2013).

9 Werkhauser, ‘Killer Robots’ (Footnote n 1).

10 P Scharre, Army of None: Autonomous Weapons and the Future of War (2019).

11 A Leveringhaus, Ethics and Autonomous Weapons (2016) 46 et seq (hereafter Leveringhaus, Ethics and Autonomous Weapons).

12 Leveringhaus, Ethics and Autonomous Weapons (Footnote n 11).

13 Footnote Ibid, 62–63.

14 R Arkin, ‘The Case for Ethical Autonomy in Unmanned Systems’ (2010) 9(4) Journal of Military Ethics, 332–341.

15 R Sparrow, ‘Killer Robots’ (2007) 24(1) Journal of Applied Philosophy, 62–77.

16 S Uniacke, Permissible Killing: The Self-Defence Justification of Homicide (1994).

17 Leveringhaus, Ethics and Autonomous Weapons (Footnote n 11) 76–86.

18 See M Gluck, ‘Examination of US Military Payments to Civilians Harmed during Conflict in Afghanistan and Iraq’ (Lawfare, 8 October 2020) www.lawfareblog.com/examination-us-military-payments-civilians-harmed-during-conflict-afghanistan-and-iraq.

19 Associated Press, ‘US Compensation for Afghanistan Shooting Spree’ (The Guardian, 25 March 2012) www.theguardian.com/world/2012/mar/25/us-compensation-afghanistan-shooting-spree.

20 See A Pop, ‘Autonomous Weapon Systems: A Threat to Human Dignity?’ (International Committee of the Red Cross, Humanitarian Law & Policy, 10 April 2018) https://blogs.icrc.org/law-and-policy/2018/04/10/autonomous-weapon-systems-a-threat-to-human-dignity/.

21 Walzer, Just and Unjust Wars (Footnote n 5) 153–154.

22 TA Cavanaugh, Double Effect Reasoning: Doing Good and Avoiding Evil (2006).

23 Walzer, Just and Unjust Wars (Footnote n 5) 136.

24 Footnote Ibid, 36–45.

25 M Ignatieff, Virtual War (2000).

26 Leveringhaus, Ethics and Autonomous Weapons (Footnote n 11) 89–117.

27 B Cronin, Bugsplat: The Politics of Collateral Damage in Western Armed Conflict (2018).

28 A Leveringhaus, ‘Autonomous Weapons and the Future of Armed Conflict’, in J Gailliot, D McIntosh, and JD Ohlin (eds), Lethal Autonomous Weapons: Re-examining the Law and Ethics of Robotic Warfare (2021) 175.

28 On ‘Responsible AI’ in War: Exploring Preconditions for Respecting International Law in Armed Conflict

1 My analysis in this chapter – and especially Section IV – draws heavily on, and reproduces certain text from, DA Lewis, ‘Preconditions for Applying International Law to Autonomous Cyber Capabilities’, in R Liivoja and A Väljataga (eds), Autonomous Cyber Capabilities under International Law (NATO Cooperative Cyber Defence Centre of Excellence, 2021). Both the current chapter and that piece draw on the work of a research project at the Harvard Law School Program on International Law and Armed Conflict titled ‘International Legal and Policy Dimensions of War Algorithms: Enduring and Emerging Concerns’ (Harvard Law School Program on International Law and Armed Conflict, ‘Project on International Legal and Policy Dimensions of War Algorithms: Enduring and Emerging Concerns’ (November 2019) https://pilac.law.harvard.edu/international-legal-and-policy-dimensions-of-war-algorithms). That project seeks to strengthen international debate and inform policy-making on the ways that AI and complex computer algorithms are transforming, and have the potential to reshape, war.

2 This paragraph draws extensively on DA Lewis, ‘Legal Reviews of Weapons, Means and Methods of Warfare Involving Artificial Intelligence: 16 Elements to Consider’ (ICRC Humanitarian Law and Policy Blog, 21 March 2019) https://blogs.icrc.org/law-and-policy/2019/03/21/legal-reviews-weapons-means-methods-warfare-artificial-intelligence-16-elements-consider/ (hereafter Lewis, ‘Legal Reviews’); see also W Burgard, Chapter 1, in this volume.

3 See M Bienkowski, ‘Demonstrating the Operational Feasibility of New Technologies: the ARPI IFDs’ (1995) 10(1) IEEE Expert 27, 28–29.

4 See, e.g., MAC Ekelhof and G Paoli, ‘The Human Element in Decisions about the Use of Force’ (UN Institute for Disarmament Research, 2020) https://unidir.org/publication/human-element-decisions-about-use-force; E Kania, ‘“AI Weapons” in China’s Military Innovation’ (Brookings Institution, April 2020) www.brookings.edu/wp-content/uploads/2020/04/FP_20200427_ai_weapons_kania_v2.pdf; MAC Ekelhof and GP Paoli, ‘Swarm Robotics: Technical and Operational Overview of the Next Generation of Autonomous Systems’ (2020) UN Institute for Disarmament Research https://unidir.org/sites/default/files/2020-04/UNIDIR_Swarms_SinglePages_web.pdf; MAC Ekelhof, ‘The Distributed Conduct of War: Reframing Debates on Autonomous Weapons, Human Control and Legal Compliance in Targeting’ (PhD Dissertation, Vrije Universiteit 2019); KM Sayler, ‘Artificial Intelligence and National Security’ (21 November 2019) Congressional Research Service Report No R45178 https://fas.org/sgp/crs/natsec/R45178.pdf; International Committee of the Red Cross, ‘Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control’ (ICRC Report, August 2019) www.icrc.org/en/download/file/102852/autonomy_artificial_intelligence_and_robotics.pdf; United Nations Institute for Disarmament Research, ‘The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence – A Primer for CCW delegates’ (2018) UNIDIR Paper No 8 https://unidir.org/publication/weaponization-increasingly-autonomous-technologies-artificial-intelligence; MAC Ekelhof, ‘Lifting the Fog of Targeting: “Autonomous Weapons” and Human Control through the Lens of Military Targeting’ (2018) 73 Nav War Coll Rev 61; P Scharre, Army of None (2018) 27–56; V Boulanin and M Verbruggen, ‘Mapping the Development of Autonomy in Weapons Systems’ (Stockholm International Peace Research Institute, 2017) www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf.

5 ‘DOD Official Briefs Reporters on Artificial Intelligence Developments’ (Transcript of Nand Mulchandani, 8 July 2020) www.defense.gov/Newsroom/Transcripts/Transcript/Article/2270329/dod-official-briefs-reporters-on-artificial-intelligence-developments/.

6 K Reichmann, ‘Can Artificial Intelligence Improve Aerial Dogfighting?’ (C4ISRNET, 7 June 2019) www.c4isrnet.com/artificial-intelligence/2019/06/07/can-artificial-intelligence-improve-aerial-dogfighting/.

7 See Industry News Release, ‘Air Force to Deploy Citadel Defense Titan CUAS Solutions to Defeat Drone Swarms’ Defense Media Network (17 September 2019) www.defensemedianetwork.com/stories/air-force-to-deploy-citadel-defense-titan-cuas-solutions-to-defeat-drone-swarms/.

8 See, e.g., D Gettinger and AH Michel, ‘Loitering Munitions’ (Center for the Study of the Drone, 2 February 2017) https://dronecenter.bard.edu/files/2017/02/CSD-Loitering-Munitions.pdf.

9 On legal aspects of automatic target recognition systems involving ‘deep learning’ methods, see JG Hughes, ‘The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods’ (2018) 21 YBIHL 99.

10 See, e.g., N Strout, ‘Inside the Army’s Futuristic Test of Its Battlefield Artificial Intelligence in the Desert’ (C4ISRNET, 25 September 2020) www.c4isrnet.com/artificial-intelligence/2020/09/25/the-army-just-conducted-a-massive-test-of-its-battlefield-artificial-intelligence-in-the-desert/.

11 See N Strout, ‘How the Army Plans to Use Space and Artificial Intelligence to Hit Deep Targets Quickly’ Defense News (5 August 2020) www.defensenews.com/digital-show-dailies/smd/2020/08/05/how-the-army-plans-to-use-space-and-artificial-intelligence-to-hit-deep-targets-quickly/.

12 See J Keller, ‘The Army’s Futuristic Heads-Up Display Is Coming Sooner than You Think’ (Task & Purpose, 20 November 2019) https://taskandpurpose.com/military-tech/army-integrated-visual-augmentation-system-fielding-date.

13 See CP Trumbull IV, ‘Autonomous Weapons: How Existing Law Can Regulate Future Weapons’ (2020) 34 EmoryILR 533, 544–550.

14 See L Xuanzun, ‘China Launches World-Leading Unmanned Warship’ Global Times (22 August 2019) www.globaltimes.cn/content/1162320.shtml.

15 See DA Lewis, ‘AI and Machine Learning Symposium: Why Detention, Humanitarian Services, Maritime Systems, and Legal Advice Merit Greater Attention’ (Opinio Juris, 28 April 2020) http://opiniojuris.org/2020/04/28/ai-and-machine-learning-symposium-ai-in-armed-conflict-why-detention-humanitarian-services-maritime-systems-and-legal-advice-merit-greater-attention/ (hereafter Lewis, ‘AI and Machine Learning’); T Bridgeman, ‘The Viability of Data-Reliant Predictive Systems in Armed Conflict Detention’ (ICRC Humanitarian Law and Policy Blog, 8 April 2019) https://blogs.icrc.org/law-and-policy/2019/04/08/viability-data-reliant-predictive-systems-armed-conflict-detention/; A Deeks, ‘Detaining by Algorithm’ (ICRC Humanitarian Law and Policy Blog, 25 March 2019) https://blogs.icrc.org/law-and-policy/2019/03/25/detaining-by-algorithm/; A Deeks, ‘Predicting Enemies’ (2018) 104 Virginia LR 1529.

16 CBS News, ‘Israel Claims 200 Attacks Predicted, Prevented with Data Tech’ CBS News (12 June 2018) www.cbsnews.com/news/israel-data-algorithms-predict-terrorism-palestinians-privacy-civil-liberties/.

17 See ICRC, Commentary on the First Geneva Convention: Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field (2nd ed. 2016) paras 807–821 https://ihl-databases.icrc.org/ihl/full/GCI-commentary (hereafter ICRC, Commentary).

18 See Lewis, ‘AI and Machine Learning’ (n 15).

19 See UNHCR, ‘The Jetson Story’ (UN High Commissioner for Refugees Innovation Service) http://jetson.unhcr.org/story.html; N Manning, ‘Keeping the Peace: The UN Department of Field Service’s and Peacekeeping Operations Use of Ushahidi’ (Ushahidi Blog, 8 August 2018) www.ushahidi.com/blog/2018/08/08/keeping-the-peace-the-un-department-of-field-services-and-peacekeeping-operations-use-of-ushahidi. See also A Duursma and J Karlsrud, ‘Predictive Peacekeeping: Strengthening Predictive Analysis in UN Peace Operations’ (2019) 8 Stability IJ Sec & Dev 1.

20 This section draws heavily on DA Lewis, ‘An Enduring Impasse on Autonomous Weapons’ (Just Security, 28 September 2020) www.justsecurity.org/72610/an-enduring-impasse-on-autonomous-weapons/ (hereafter Lewis, ‘An Enduring Impasse’).

21 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (with Protocols I, II, and III) (signed 10 October 1980, entry into force 2 December 1983) 1342 UNTS 137.

22 See GGE, ‘Questionnaire on the Legal Review Mechanisms of New Weapons, Means and Methods of Warfare’ (29 March 2019) Working Paper by Argentina to the Group of Governmental Experts on Lethal Autonomous Weapons Systems CCW/GGE.1/2019/WP.6; GGE, ‘The Australian Article 36 Review Process’ (30 August 2018) Working Paper by Australia to the Group of Governmental Experts on Lethal Autonomous Weapons Systems CCW/GGE.2/2018/WP6; GGE, ‘Strengthening of the Review Mechanisms of a New Weapon, Means or Methods of Warfare’ (4 April 2018) Working Paper by Argentina to the Group of Governmental Experts on Lethal Autonomous Weapons Systems CCW/GGE.1/2018/WP2; GGE, ‘Weapons Review Mechanisms’ (7 November 2017) Working Paper by the Netherlands and Switzerland to the Group of Governmental Experts on Lethal Autonomous Weapons Systems CCW/GGE.1/2017/WP5; German Defense Ministry, ‘Statement on the Implementation of Weapons Reviews under Article 36 Additional Protocol I by Germany’ (The Convention on Certain Conventional Weapons (CCW) Third Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 11–15 April 2016) https://perma.cc/4EFG-LCEM; M Meier, ‘US Delegation Statement on “Weapon Reviews”’ (The Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 13 April 2016) www.reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2016/meeting-experts-laws/statements/13April_US.pdf.

23 M Brenneke, ‘Lethal Autonomous Weapon Systems and Their Compatibility with International Humanitarian Law: A Primer on the Debate’ (2018) 21 YBIHL 59.

24 See M Wareham, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’ (Human Rights Watch, August 2020) www.hrw.org/sites/default/files/media_2020/08/arms0820_web.pdf; AM Eklund, ‘Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law’ (Swedish Defence Research Agency FOI, February 2020) www.fcas-forum.eu/publications/Meaningful-Human-Control-of-Autonomous-Weapon-Systems-Eklund.pdf; V Boulanin and others, ‘Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control’ (Stockholm International Peace Research Institute and International Committee of the Red Cross, June 2020) www.sipri.org/sites/default/files/2020-06/2006_limits_of_autonomy_0.pdf (hereafter Boulanin and others, ‘Limits on Autonomy’); ICRC, ‘Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach’ (International Committee of the Red Cross, 6 June 2019) www.icrc.org/en/document/artificial-intelligence-and-machine-learning-armed-conflict-human-centred-approach; T Singer, Dehumanisierung der Kriegführung: Herausforderungen für das Völkerrecht und die Frage nach der Notwendigkeit menschlicher Kontrolle (2019); Advisory Council on International Affairs and Advisory Committee on Issues of Public International Law, Autonomous Weapon Systems: The Need for Meaningful Human Control (No. 97 AIV/ No. 26 CAVV, October 2015) (views adopted by Government) www.advisorycouncilinternationalaffairs.nl/documents/publications/2015/10/02/autonomous-weapon-systems; Working Paper by Austria, ‘The Concept of “Meaningful Human Control”’ (The Convention on Certain Conventional Weapons (CCW) Second Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 13–18 April 2015) https://perma.cc/D35A-RP7G.

25 See, e.g., Lewis, ‘An Enduring Impasse’ (n 20).

26 See generally AH Michel, ‘The Black Box, Unlocked: Predictability and Understandability in Military AI’ (UN Institute for Disarmament Research, 2020) https://unidir.org/publication/black-box-unlocked (hereafter Michel, ‘The Black Box, Unlocked’).

27 See, e.g., Lewis, ‘An Enduring Impasse’ (n 20).

28 See Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) (signed 8 June 1977, entered into force 7 December 1978) 1125 UNTS 3 (Additional Protocol I) Art 51(3) (hereafter AP I); Protocol Additional to the Geneva Conventions of 12 August 1949 and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II) (signed 8 June 1977, entered into force 7 December 1978) 1125 UNTS 609 (Additional Protocol II) Article 13(3) (hereafter AP II).

29 See Geneva Convention relative to the Protection of Civilian Persons in Time of War (signed 12 August 1949, entry into force 21 October 1950) 75 UNTS 287 (GC IV) Article 78, first para.

30 See Draft Articles on Responsibility of International Organizations with Commentary (Report of the Commission to the General Assembly on the Work of Its Sixty-Third Session, 2011) Ybk Intl L Comm, Volume II (Part 2) A/CN.4/SER.A/2011/Add 1 (Part 2), Article 2(a) (hereafter (D)ARIO).

31 See ICRC, Commentary (n 17) paras 201–342, 384–502.

32 Regulations Respecting the Laws and Customs of War on Land, Annex to Convention (IV) Respecting the Laws and Customs of War on Land (signed 18 October 1907, entered into force 26 January 1910) 36 Stat 2295, Article 23(a).

33 Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction (signed 3 September 1992, entered into force 29 April 1997) 1975 UNTS 45, Article I(1).

34 Protocol on Non-detectable Fragments (Protocol I) to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (signed 10 October 1980, entered into force 2 December 1983) 1342 UNTS 147.

35 AP I (n 28) Article 48.

36 Ibid Article 51(5)(b).

37 Ibid Article 57(1).

38 Ibid Article 57(2).

39 See JR Crawford, ‘State Responsibility’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2006) (hereafter Crawford, ‘State Responsibility’).

40 Ibid; Draft Articles on Responsibility of States for Internationally Wrongful Acts, with Commentary (Report of the Commission to the General Assembly on the Work of its Fifty-Third Session, 2001) Ybk Intl L Comm, Volume II (Part Two) A/CN.4/SER.A/2001/Add 1 (Part 2) (hereafter (D)ARSIWA); (D)ARIO (n 30).

41 Crawford, ‘State Responsibility’ (n 39).

42 Rome Statute of the International Criminal Court (signed 17 July 1998, entered into force 1 July 2002) 2187 UNTS 3 (ICC Statute), Articles 5, 10–19.

43 See DA Lewis, ‘International Legal Regulation of the Employment of Artificial-Intelligence-Related Technologies in Armed Conflict’ (2020) 2 Moscow JIL 53, 61–63.

44 See ICC Statute, Articles 5–19.

45 See ICC Statute, Articles 25–26.

46 See ICC Statute, Articles 6–8 bis.

47 See ICC Statute, Article 30.

48 See ICC Statute, Articles 25, 28.

49 See ICC Statute, Articles 31–33.

50 See ICC Statute, Articles 62–76.

51 See ICC Statute, Article 77.

52 ICC Statute, Articles 81–84.

53 See Informal Working Paper by Switzerland (30 March 2016), ‘Towards a “Compliance-Based” Approach to LAWS [Lethal Autonomous Weapons Systems]’ (Informal Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 11–15 April 2016) https://perma.cc/WRJ6-CCMS (expressing the position that ‘[t]he Geneva Conventions of 1949 and the Additional Protocols of 1977 were undoubtedly conceived with States and individual humans as agents for the exercise and implementation of the resulting rights and obligations in mind.’) (hereafter Switzerland, ‘Towards a “Compliance-Based” Approach’); see also Office of the General Counsel of the Department of Defense (US), Department of Defense Law of War Manual [June 2015, updated Dec. 2016], s 6.5.9.3, p 354 (expressing the position that law-of-war obligations apply to persons rather than to weapons, including that ‘it is persons who must comply with the law of war’) (hereafter US DoD OGC, Law of War Manual).

54 For an argument that algorithmic forms of warfare – which may apparently include certain employments of AI-related tools or techniques – cannot be subject to law writ large, see G Noll, ‘War by Algorithm: The End of Law?’, in M Liljefors, G Noll, and D Steuer (eds), War and Algorithm (2019).

55 See generally L Suchman, ‘Configuration’ in C Lury and N Wakeford (eds), Inventive Methods (2012). For an analysis of the ‘technical layer’, the ‘socio-technical layer’, and the ‘governance layer’ pertaining to autonomous weapons systems, see I Verdiesen, F Santoni de Sio, and V Dignum, ‘Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight’ (2020) Minds and Machines https://doi.org/10.1007/s11023-020-09532-9. For an analysis of US ‘drone operations’ (albeit admittedly not pertaining to AI as such) informed in part by methods relevant to socio-technical configurations, see MC Elish, ‘Remote Split: A History of US Drone Operations and the Distributed Labor of War’ (2017) 42(6) Science, Technology, & Human Values 1100. On certain issues related to predicting and understanding military applications of artificial intelligence, see Michel, ‘The Black Box, Unlocked’ (n 26). With respect to machine-learning algorithms more broadly, see J Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (January–June 2016) Big Data & Society 1–12. For recent arguments concerning limits on autonomy in weapons systems in particular, see Boulanin and others, ‘Limits on Autonomy’ (n 24).

56 See N Goussac, ‘Safety Net or Tangled Web: Legal Reviews of AI in Weapons and War-Fighting’ (ICRC Humanitarian Law and Policy Blog, 18 April 2019) https://blogs.icrc.org/law-and-policy/2019/04/18/safety-net-tangled-web-legal-reviews-ai-weapons-war-fighting/; Lewis, ‘Legal Reviews’ (n 2).

57 For broader critiques and concerns – including some informed by socio-technical perspectives – related to (over-)reliance on algorithmic systems, see, among others, R Benjamin, Race after Technology (2019); SU Noble, Algorithms of Oppression (2018); BD Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’ (July–December 2016) Big Data & Society 1–21; C O’Neil, Weapons of Math Destruction (2016).

58 See, e.g., with respect to precautions in attacks in situations of armed conflict, AP I (n 28) Article 57(2)(b).

59 For an exploration of certain legal aspects of attribution in relation to ‘cyber operations’ (which may or may not involve AI-related tools or techniques), see HG Dederer and T Singer, ‘Adverse Cyber Operations: Causality, Attribution, Evidence, and Due Diligence’ (2019) 95 ILS 430, 435–466.

60 See (D)ARSIWA (n 40) ch V; (D)ARIO (n 30) ch V.

61 See (D)ARSIWA (n 40), Articles 30–31; (D)ARIO (n 30), Articles 30–31.

62 See Switzerland, ‘Towards a “Compliance-Based” Approach’, above (n 53); US DoD OGC, Law of War Manual, above (n 53).

63 See (D)ARSIWA (n 40) Article 30; (D)ARIO (n 30) Article 30.

64 See (D)ARSIWA (n 40) Article 31; (D)ARIO (n 30) Article 31.

65 See Switzerland, ‘Towards a “Compliance-Based” Approach’, above (n 53); US DoD OGC, Law of War Manual, above (n 53).

66 ICC Statute, Article 25(1).

67 See ICC Statute, Articles 5–19.

68 See ICC Statute, Articles 6–8 bis.

69 See ICC Statute, Article 30.

70 See ICC Statute, Articles 25, 28.

71 ICC Statute, Article 77.

72 See DA Lewis, ‘Three Pathways to Secure Greater Respect for International Law Concerning War Algorithms’ (Harvard Law School Program on International Law and Armed Conflict, December 2020) https://dash.harvard.edu/bitstream/handle/1/37367712/Three-Pathways-to-Secure-Greater-Respect.pdf?sequence=1&isAllowed=y; V Boulanin, L Bruun, and N Goussac, ‘Autonomous Weapon Systems and International Humanitarian Law: Identifying Limits and the Required Type and Degree of Human–Machine Interaction’ (Stockholm International Peace Research Institute, 2021) www.sipri.org/sites/default/files/2021-06/2106_aws_and_ihl_0.pdf.
