The answers that each political community finds to the law reform questions posed by AI may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction – indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, but different national approaches to regulation will pose barriers to effective regulation exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons as well as moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires active involvement of states. To co-ordinate those activities and enforce global ‘red lines’, this chapter posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy, while deterring or containing its weaponization and other harmful effects.
The increasing autonomy of AI systems is exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention is given to what is meant by ‘autonomy’ and its relationship to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programmes in the private or public sector. This chapter develops a novel typology that distinguishes three lenses through which to view the regulatory issues raised by autonomy: the practical difficulties of managing risk associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap that is created when public authorities delegate their powers to algorithms.
The rule of law is the epitome of anthropocentrism: humans are the primary subject and object of norms that are created, interpreted, and enforced by humans – made manifest in government of the people, by the people, for the people. Though legal constructs such as corporations may have rights and obligations, these in turn are traceable back to human agency in their acts of creation, their daily conduct overseen to varying degrees by human agents. Even international law, which governs relations among states, begins its foundational text with the words ‘We the peoples…’. The emergence of fast, autonomous, and opaque AI systems forces us to question this assumption of our own centrality, though it is not yet time to relinquish it.
As AI systems operate with greater autonomy, the idea that they might themselves be held responsible has gained credence. On its face, the idea of giving those systems a form of independent legal personality may seem attractive. Yet this chapter argues that this is both too simple and too complex. It is simplistic in that it lumps a wide range of technologies together in a single, ill-suited legal category; it is overly complex in that it implicitly or explicitly embraces the anthropomorphic fallacy that AI systems will eventually assume full legal personality in the manner of the ‘robot consciousness’ arguments mentioned earlier in the book. Though the emergence of general AI is a conceivable future scenario – and one worth taking precautions against – it is not a sound basis for regulation today.
This chapter turns to the possibility that the AI systems challenging the legal order may also offer at least part of the solution. Here China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or whether it must remain a site of contestation, of politics, inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.
Transparency has been embraced as a means of limiting the risks associated with AI. This chapter considers the manner in which transparency and the related concept of ‘explainability’ are being elaborated, notably the ‘right to explanation’ in the European Union and a move towards explainable AI (XAI) among developers. These are more promising than the arguments for legal personality, but the limits of transparency are already beginning to show as AI systems demonstrate abilities that even their programmers struggle to understand. That is leading regulators to cede ground and settle for explanations of adverse decisions rather than transparency of decision-making processes themselves. Such a backward-looking approach relies on individuals knowing that they have been harmed – which will not always be the case – and should be supplemented with forward-looking mechanisms like impact assessments, audits, and an ombudsperson.
As computer programs become ever more complex, the ability of non-specialists to understand them diminishes. Opacity may also be built into programs by companies seeking to protect proprietary interests. Both such systems are capable of being explained, albeit with recourse to experts or an order to reveal their internal workings. Yet a third kind of system may be naturally opaque: some machine learning techniques are difficult or impossible to explain in a manner that humans can comprehend. This raises concerns when the process by which a decision is made is as important as the decision itself. For example, a sentencing algorithm might produce a ‘just’ outcome for a class of convicted persons. Unless the justness of that outcome for an individual defendant can be explained in court, however, it is, quite rightly, subject to legal challenge. Separate concerns are raised by the prospect that AI systems may mask or reify discriminatory practices or outcomes.
Though worries about the impact of new technology have accompanied many inventions, AI is unusual in that some of the starkest recent warnings have come from those most knowledgeable about the field. Many of these concerns are linked to ‘general’ or ‘strong’ AI, meaning the creation of a system that is capable of performing any intellectual task that a human could – and raising complex questions about the nature of consciousness and self-awareness in a non-biological entity. The possibility that such an entity might put its own priorities above those of humans is non-trivial, but this book focuses on the more immediate challenges raised by ‘narrow’ AI – meaning systems that can apply cognitive functions to specific tasks typically undertaken by a human. The book is organized around the following sets of problems: How should we understand the challenges to regulation posed by the technologies loosely described here as ‘AI systems’? What regulatory tools exist to deal with those challenges and what are their limitations? And what more is needed – rules, institutions, actors – to enable us to reap the benefits offered by AI while minimizing avoidable harm?