
Part II - Regulation and Policy

Published online by Cambridge University Press:  01 November 2021

Hans-W. Micklitz
Affiliation:
European University Institute, Florence
Oreste Pollicino
Affiliation:
Bocconi University
Amnon Reichman
Affiliation:
University of California, Berkeley
Andrea Simoncini
Affiliation:
University of Florence
Giovanni Sartor
Affiliation:
European University Institute, Florence
Giovanni De Gregorio
Affiliation:
University of Oxford

Publisher: Cambridge University Press
Print publication year: 2021
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

8 Algorithms and Regulation

Amnon Reichman and Giovanni Sartor
8.1 Setting Up the Field

Algorithms – generally understood as sequences of precise instructions unambiguously specifying how to execute a task or solve a problem – are such a natural ingredient of regulation that some may wonder whether regulation could even be understood without recognising its algorithmic features, and without realising that algorithms are a prime subject for regulation. In terms of the algorithmic features of regulation, somewhat simplistically and without suggesting in any way that the algorithmic language captures regulation in its entirety – far from it – algorithms are relevant to the three dimensions of regulation: the regulatory process, the modalities of regulation, and the regulatory approaches (or attitudes). By the regulatory process, we refer to the process that, in stylised form, commences with political and economic pressures to find a solution to a certain problem and continues with the formation of policy goals, data gathering, and the mapping of possible regulatory responses to achieve these goals (which ought to include the sub-process of regulatory impact assessment upon choosing the preferred measure). The chosen measures are translated into regulatory norms and implemented (or enforced), resulting, if all went well, in some improvement of the conditions related to the initial social problem (as can be analysed by a back-end regulatory impact assessment). By regulatory modalities, we mean the set of regulatory measures available to the state (or, more accurately, to state agencies acting as regulators): regulation through (and of) information, permits and licensing, civil, administrative and criminal liability, taxes and subsidies, or insurance schemes. By regulatory approaches, or attitudes, we mean the top-down command-and-control attitude, performance-based regulation, and the managerial approach, with the latter two also including co-regulation or private regulation.

Algorithms are relevant to all three dimensions of regulation, as they may assist most, if not all, stages of the regulatory process, may inform or even be a component of the regulatory modalities, and may similarly inform and be integrated into the regulatory attitudes. Conversely, algorithms may be the subject matter of regulation. Their development and deployment may be considered (as part of) the social problem triggering the regulatory process; regulators may then enlist one or more of the regulatory modalities to address the structure of incentives that generates harmful uses of algorithms; and algorithms stand at the focal point of the policy question regarding which regulatory attitude is best suited to address the host of risks associated with algorithms, and in particular with machine learning and AI.

In the following section, we will first introduce a general concept of an algorithm, which then can be applied both to human action and to computer systems. On this basis, we shall consider the jurisprudential debate on prospects and limits of ‘algorithmicisation’ or ‘mechanisation’ of law and government.

We shall then address computer algorithms and consider the manner in which they have progressively entered government. We shall focus on artificial intelligence (AI) and machine learning, and address the advantages of such technologies, but also the concerns their adoption raises. The motivation of this analysis is to shed light on the relationship between the state and AI, and on the need to consider regulating the state’s recourse to algorithms (including via attention to the technology itself, usually referred to as ‘regulation by design’).

8.2 Algorithmic Law before Computers

An algorithm, in the most general sense, is a sequence of instructions (a plan of action, or a recipe) that univocally specifies the steps to be accomplished to achieve a goal, as well as the order of such steps.Footnote 1 It must be directed to executors that are able to exactly perform each of the steps indicated in the algorithm, in their prescribed order. The order may include structures such as sequence (first do A, then B), conditional forks (if A is true then do B, otherwise do C), or repetitions (continue doing B until A is true).

The execution of an algorithm should not require a fresh cognitive effort by the executor, when the latter is provided with a suitable input: every action prescribed by the algorithm should either be a basic action in the repertoire of the executor (such as pushing a button or adding two digits) or consist of the implementation of an algorithm already available to the executor. Algorithms, in this very broad sense, may be directed to humans as well as to automated systems.

Precise and univocal instructions to use hardware or software devices, install appliances, get to locations, or make mathematical calculations can be viewed as algorithms. There is, however, a special connection between algorithms and computation. The term ‘algorithm’ in fact derives from the name of a Persian scholar, Muhammad ibn Mūsā al-Khwārizmī, who published in the 9th century a foundational text of algebra, providing rules for solving equations, with practical applications, in particular in the division of inheritances. The idea of a mathematical algorithm, however, is much older. For instance, the Greek mathematician Euclid is credited with having invented, in the 4th century BC, an algorithm for finding the greatest common divisor of two integer numbers.
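
Euclid’s procedure offers a convenient illustration of the notion of an algorithm just introduced. A minimal rendering in Python (the language is chosen here purely for readability; any exact executor would do) shows the combination of sequence, conditional testing, and repetition described above:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b)
    until the second number is zero; the first number is then the answer."""
    while b != 0:          # repetition: keep going until the condition is met
        a, b = b, a % b    # a precise, unambiguous step requiring no discretion
    return a

print(gcd(252, 105))  # -> 21
```

Given the same input, the procedure always yields the same output, whoever (or whatever) executes it – the determinism and repeatability discussed in the next paragraph.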

In any case, algorithms, as plans meant to have a ‘mechanical’ implementation (i.e., whose execution requires neither a fresh cognitive effort nor the exercise of discretion), should always lead to the same outcome for the same input, whenever they are entrusted to a competent executor. This idea is often expressed by saying that algorithms are deterministic or repeatable (though, as we shall see, some algorithms go beyond this idea; i.e., they also include elements of randomness).

The idea that at least some state activities could be governed by algorithms in a broad sense – unambiguous and repeatable impersonal procedures, leading to predictable decisions according to precise rules – was viewed as a characteristic feature of modern bureaucracies by the social theorist Max Weber, according to whom: ‘The modern capitalist enterprise rests primarily on calculation and presupposes a legal and administrative system, whose functioning can be rationally predicted, at least in principle, by virtue of its fixed general norms, just like the expected performance of a machine.’Footnote 2

The same Weber, however, also observed an opposite tendency in contemporary administration and adjudication, namely, the pressure toward ‘material justice’, which evades air-tight codification because it is concerned with the effective pursuit of interests and values. Approaching the exercise of administrative and judicial power as a goal-directed activity, meant to satisfy certain interests or values rather than merely to apply rules exactly, involves, to some extent, an original cognitive effort by decision makers. Some discretion in the identification of the interests or values to be pursued, as well as choices regarding the means to achieve them, cannot be avoided. This cognitive presence, in turn, is a site of agency, representing substantive, or material, moral reasoning (and, it seems, not only rationality but also empathy, and perhaps other virtuous sensibilities and emotions). We will return to this matter when we further discuss machine-generated (i.e., learnt) algorithms (sometimes referred to as AI).

Focusing on adjudication – a key function of the state in exercising its official power – the ideal of a mechanical (or algorithmic, as we say today) approach has most often been the target of critique. Adjudication in many cases cannot, and indeed should not, be reduced to the application of precisely defined rules. The very term ‘mechanical jurisprudence’ was introduced, more than a century ago, by US legal theorist Roscoe Pound,Footnote 3 in a critical essay where he argued that judicial decision-making should not consist of the ‘automatic’ application of precedents’ rulings, legislative rules, and legal conceptions. Pound stated that such an approach, to the extent that it is viable, would have the law depart from shared ideas of correctness and fair play, as understood by citizens, and would lead to the law being ‘petrified’ and, more generally, unable to meet new challenges emerging in society and to ‘respond to the needs of present-day life’.

A similar criticism of a ‘mechanical’ application of the law can be found in the writings of a famous US justice of the time, Oliver Wendell Holmes, who made two related, somewhat polemical, claims: the claim that ‘general propositions do not decide concrete cases’Footnote 4 and the claim that ‘the life of the law has not been logic: it has been experience’.Footnote 5 These two claims clarify that Holmes is attacking the view that the application of the law is a mere matter of deductive inference, namely, a reasoning process that only derives, relative to the facts of a case, what is entailed by pre-existing general premises and concepts. Holmes argued that, on the contrary, the application of law should be geared toward the social good, which requires officers, and in particular judges, ‘to consider and weigh the ends of legislation, the means of attaining them, and the cost’.Footnote 6 Considered more carefully, however, Holmes’s perspective, while rejecting the algorithmic application of the law (premised on mechanical jurisprudence) because it requires decision makers to obtain knowledge that is not included in legal sources, still adopts a restrictive approach to legal decision-making (premised on optimising a given objective, based on past practice). Following this idea, the interpretation and application of the law only require fresh knowledge of social facts – that is, a better understanding (data and analysis) of experience, a clear formulation of the ends of legislation, and a good formula for assessing the costs of applying the means towards these ends. It does not involve a creative and critical normative assessment of the goals being pursued and the side-effects of their pursuit, in the given social contexts.

A number of different currents in legal thinking have developed, providing descriptive and prescriptive arguments that judges do not, and indeed should not, apply the law mechanically; rather, they do, and should, aim to achieve values pertaining to the parties of a case and to society at large. We cannot here do justice to such approaches; we can just mention, as relevant examples, the following: legal realism, sociological jurisprudence, interest-jurisprudence, value jurisprudence, free law, critical legal studies, and so forth. In some of these approaches, the objections against rigid or static approaches to the law have gone beyond the advocacy of teleological reasoning as opposed to the application of given rules and concepts. Rather, it has been argued that legal problem solving, properly understood, goes beyond optimising the achievement of given goals, especially when such goals are limited to a single purpose such as economic efficiency or even welfare.Footnote 7 On the contrary, legal reasoning also includes the reflective assessment and balancing of multiple social and individual values, which often presuppose moral or political evaluations and processes of communication and justification, inspired by deliberative ideas of integrity and meaningful belonging in a community.Footnote 8

The view that the application of the law is not algorithmic or deductive has also been endorsed by authors who argued that the (private) law should not serve political aims, but rather focus on its ‘forms’, namely, on the internal coherence of its concepts, and its ability to reflect the nature of legal relations and the underlying theory of justice.Footnote 9

A criticism of mechanical approaches to adjudication (and administrative decision-making) can also be found in the work of analytical legal theorists. Hans Kelsen made the radical claim that legal norms never determine a single outcome for individual cases: they only provide a frame for particular decisions; their application requires discretion since ‘every law-applying act is only partly determined by law and partly undetermined’.Footnote 10 For Kelsen, the relationship between a rule and its application in a particular case is always a site for judgment. More cautiously, H. L. A. Hart affirmed that it is impossible to make ‘rules the application of which to particular cases never calls for a further choice’. Enacted laws are meant to address the prototypical cases that the legislator had envisaged; un-envisaged cases may require a different solution that has to be found outside of the legislative ‘algorithm’, by exercising choice or discretion, that is, by ‘choosing between the competing interests in the way which best satisfies us’.Footnote 11 For Hart, then, cases that fall in greyer areas (relative to the core paradigmatic cases envisioned by the norm-giver) are sites of greater discretion. The question then becomes how to differentiate between the core and the penumbra – whether based solely on a conventional understanding of the words used by the rule, or also based on the purpose of the rule. A teleological approach may be needed since legal rules are performative (i.e., require action by those governed by the rules), so that the purpose of a rule may inform its meaning. In the latter case, applying the rule requires discretion regarding which application would further the purpose, and whether exceptions exist (either because the conventional meaning may disrupt the purpose or because a non-conventional meaning would further the purpose better).

This brief survey of leading approaches to jurisprudence demonstrates that the application of law is not merely algorithmic, but rather relies upon the discretion of the decision maker, whenever the norms (embedded in legislation or case-law) do not dictate a single outcome to a decisional problem. It is true that some authors have strongly reiterated the view that in order to effectively direct and coordinate the action and the expectations of citizens and officers, the law should provide clear if-then rules specifying the link between operative facts and corresponding rights and obligations (and other legal effects).Footnote 12 However, there is an apparent consensus that legal decision-making cannot be fully driven by rules (or algorithms) alone; it calls for teleological and value-based reasoning and for the assessment of uncertain factual situations, with regard to the specific cases at stake.Footnote 13 Other authors have observed that even when the application of a general norm to given facts is needed, matching the general terms in the norm to the features of specific factual situations involves a ‘concretisation’ of the norm itself, namely, it requires enriching the indeterminate content of such terms, as needed to determine whether or not they apply to the given facts.Footnote 14 Applying the law, therefore, requires that the decision maker engage in a genuine cognitive effort. This effort may involve interlinked epistemic and practical inquiries: determining the relevant facts and the correlations between them, assessing accordingly the impacts that alternative choices may have on relevant interests and values, and determining accordingly which choice is preferable, all things considered. Discretion may also include honing the contours of the values or interests to be pursued, as well as their relative importance. This broad idea of discretion also includes proportionality assessments under constitutional law, which aim to determine whether an infringement of constitutional rights is justified by pursuing non-inferior advantages with regard to other constitutional rights and values, and by ensuring that no less-infringing choice provides a better trade-off.Footnote 15

So far, we have focused on algorithmic approaches to judicial decision-making, which usually involves disputes about the facts of a case or about the interpretation of the applicable legal norms, so that reasoned choices are needed to come to a definite outcome. But legal decisions, on a daily basis, are entered not only – in fact, not predominantly – by judges. Rather, public agencies (sometimes referred to as ‘administrative’ or ‘bureaucratic’ agencies) apply the law routinely, on a large scale. In some domains, such as tax and social security, a complex set of rules, often involving calculations, is designed to minimise discretion and therefore appears to be amenable to ‘algorithmic’ application (even before the computerisation of public administration). Even though controversies are not to be excluded in the application of such regulations, often the facts (i.e., data) are available to the agency for each case (usually as a result of rather precise rules governing the submission of such data), to which precise rules can then be applied, to provide definite outcomes that in standard cases will withstand challenge (if the rules are applied correctly).

In these domains too, however, fully eliminating discretion may undermine the purpose of the scheme and thus not only be counter-productive but also potentially raise legal validity concerns, to the extent that the legal system includes more general legal principles according to which particular rules incompatible with the purpose of the statutes (or the values of the constitution) are subject to challenge. More specifically, a tension may emerge on occasion between the strict application of rules and a call, based on the purposes of the empowering statute (or on more general legal principles and values), to take into account unenumerated particular circumstances of individual cases. For instance, in social security, there may be a tension between taking into account the conditions of need of benefit claimants and applying a law that appears prima facie not to include such claimants.

More generally, we may observe that algorithms – whether computerised or not – are less applicable when the legal terrain is not paved fully by rules but is interspersed with standards, which by definition are more abstract and thus less amenable to codification based on the clear meaning of the language (more on that in Section 8.10). Moreover, analytically, algorithms are less applicable when more than one norm applies (without a clear binary rule on which norm trumps in case of potential clashes). This is often the case, as various rules on different levels of abstraction (including, as mentioned, standards) may apply to a given situation. Lastly, it should be noted that the debate on mechanical application of the law has thus far assumed a rather clear distinction between the application of a legal norm and the generation (or enactment) of the norm. At least in common law jurisdictions, this distinction collapses, as application of norms (precedents or statutes) is premised on interpretation, which may lead to refining the existing doctrine or establishing a novel doctrine. Norm-generation is even less amenable to algorithmicising, as it is difficult (for humans) to design rules that optimise this process, given the value-laden nature of generating legal norms.

The general conclusion we can derive from this debate is that the application of the law by humans is governed by algorithmic instructions only to a limited extent. Instructions given to humans concern the substance of the activities to be performed (e.g., the legal and other rules to be complied with and implemented, the quantities to be calculated, the goals to be aimed at, in a certain judicial or administrative context). They do not address the general cognitive functions that have to be deployed in executing such activities, such as understanding and generating language, visualising objects and situations, determining natural and social correlations and causes, and understanding social meaning. In particular, the formation and application of the law requires engaging with facts, norms, and values in multiple ways that evade capture by human-directed algorithmic instructions. Consider the following: determining what facts have happened on the basis of evidence and narratives; ascribing psychological attitudes, interests, and motivations to individuals and groups on the basis of behavioural clues; matching facts and states of mind against abstract rules; assessing the impacts of alternative interpretations/applications of such rules; making analogies; choosing means to achieve goals and values in new settings; determining the contours of such goals and values; quantifying the extent to which they may be promoted or demoted by alternative choices; assessing possible trade-offs. Even when officers are provided with plans to achieve a task, such plans include high-level instructions, the implementation of which by the competent officers requires human cognitive activities, such as those listed previously, which are not performed by implementing handed-down algorithmic commands. Such activities pertain to the natural endowment of the human mind, enhanced through education and experience, and complemented with the intelligent use of various techniques for analysis and calculations (e.g., methods for general and legal argumentation, statistics, cost-benefit analysis, multicriteria decision-making, optimisation, etc.). They result from the unconscious working of the neural circuitry of our brain, rather than from the implementation of a pre-existing set of algorithmic instructions, though qualitative and quantitative models can also be used in combination with intuition, to analyse data, direct performance, detect mistakes, and so forth.

But the question remains: does the problem lie with algorithms, in the sense that algorithms are inherently unsuited for tasks involving learning or creativity, or with humans, in the sense that the human condition (the way we acquire and process information, based on our natural endowment) is incompatible with engaging in such tasks by following algorithmic instructions? Put differently: is it the case that no set of algorithmic instructions, for any kind of executor, can specify how to execute such tasks, or rather that humans are unable to engage with such tasks by diligently executing algorithmic specifications given to them, rather than by relying on their cognitive competence?

A useful indication in this regard comes from the psychologist Daniel Kahneman, who distinguishes two aspects of the human mind:

  • System 1 operates automatically (i.e., without the need of a conscious choice and control) and quickly, with little or no effort and no sense of voluntary control.

  • System 2 allocates attention to the effortful mental activities that demand it, including complex computations.Footnote 16

If following algorithmic instructions requires humans to exploit the limited capacities of system 2 (or, in any case, the limited human capacity to learn, store, and execute algorithms), then the human capacity for following algorithmic instructions is easily overloaded, and performance tends to degrade, even with regard to tasks that can be effortlessly performed when delegated to system 1. Therefore, some of the tasks that system 1 does automatically – those tasks that involve perception, creativity, and choice – cannot be performed, at the human level, by implementing algorithmic instructions handed to a human executor. However, this does not mean, in principle, that such instructions cannot be provided for execution to a machine, or to a set of high-speed interconnected machines.Footnote 17

As we shall see in the following sections, machines can indeed be provided with algorithmic specifications (computer programs), the execution of which enables such machines to learn, in particular by extracting knowledge from vast data sets. This learned knowledge is then embedded in algorithmic models that are then used for predictions (and even decisions). As machines, unlike humans, can learn by implementing algorithmic instructions, the algorithmic performance of state functions through machines could expand beyond what is algorithmically possible for humans. Algorithms for learning can provide machines with the ability to adapt their algorithmic models to complex and dynamic circumstances, predict the outcome of alternative courses of action, adjust such predictions based on new evidence, and act accordingly.

Nevertheless, this does not mean that all tasks requiring a fresh cognitive effort by their executors can be successfully performed in this way today or in the near (or even mid-range) future; some can, and others cannot. We will address such issues in the following sections, as we turn our attention to state activities and the possible integration of algorithms into the apparatus of state agencies.

8.3 Computer Algorithms before AI

In the previous section, we considered the possibility of adopting an ‘algorithmic approach’ toward human activities concerned with the formation and application of the law, and more generally to state functions concerned with the administration of official functions. We have observed that such an algorithmic approach to decision-making within government existed much before the introduction of computers, but that it had a limited application. In this section, we consider the changes that have taken place following the automation of the execution of algorithms within government with the assistance of computer systems. Before moving into that, we need to discuss the nature of computer algorithms. Computer algorithms correspond to the general notion of an algorithm introduced previously, with the proviso that since such algorithms are directed to a computer system, the basic actions they include must consist of instructions that can be executed by such a system.

To make an algorithm executable by a computer, it must be expressed in a programming language, namely, in a language that provides for a repertoire of exactly defined basic actions – each of which has a clear and univocal operational meaning – and for a precise syntax to combine such actions. Different programming languages exist, which have been used at different times and are still used for different purposes. In every case, however, the instructions of all such languages are translated into operations to be performed by the computer hardware, namely, arithmetical operations over binary numbers. This translation is performed by software programs called compilers or interpreters. The automated execution of algorithms has much in common with the human execution of algorithms, when seen at a micro-level (i.e., at the level of single steps and combinations of them). This analogy, however, becomes more and more tenuous when we move to the macro level of complex algorithms, executed at super-high speed and interacting with one another.

The variety of algorithms (computer programs) which are and have been used within public administrations for different functions is amazingly vast. However, it may be possible to distinguish three key phases: a computer revolution, an Internet revolution, and finally an AI revolution, each of which has brought about a qualitative change in state activities.

The computer revolution consisted in the use of computers to perform what could be taken as routine tasks within existing state procedures, typically making mathematical calculations and storing, retrieving, and processing data. The history of computing is indeed, from its very beginning, part of the history of the modern state. Many of the first computers or proto-computers were built in connection with public activities, in particular in relation to warfare, such as decoding encrypted messages (e.g., the Colossus, developed in the UK in 1943) and computing ballistic trajectories (e.g., the Harvard Mark I and ENIAC in the US). Other state tasks conferred on computers were concerned with censuses (IBM was born out of the company that automated the processing of population data before computers were available) and the related statistics, as well as with scientific and military research (e.g., for space missions).

However, it was the use of computers for keeping vast sets of data (databases), and for retrieving and processing those data, that really made a difference in more common governmental operations. Databases were created in all domains of public action (population, taxation, industries, health, criminal data, etc.), and these data sets and the calculations based on them were used to support the corresponding administrative activities. This led to a deep change in governmental information systems, namely, in those socio-technical structures – comprising human agents, technologies, and organisational norms – that are tasked with providing information to governments. The ongoing collecting, storing, and processing of data were thus integrated into the operational logic of the modern state (characterised by providing basic services and regulating industry as well as the provision of these services). In a few decades, states have moved from relying on human information systems, based on paper records created and processed by humans, to hybrid information systems in which humans interact with computer systems. Multiple computer systems have been deployed in the public sphere to support an increasing range of administrative tasks, from taxation, to social security, to accounting, to the management of contracts, to the administration of courts and the management of proceedings.Footnote 18 As of the 1980s, personal computers entered all public administrations, providing very popular and widespread functions such as text processing and spreadsheets, which increased productivity and facilitated digitisation. However, this technological advance did not, in and of itself, change the fundamental division of tasks between humans and automated devices, computers being limited to routine tasks supporting human action (and providing data to humans).Footnote 19

The emergence of networks, culminating with the Internet (but comprising other networks as well), brought a fundamental change in the existing framework, as it integrated computational power with high-speed communications, enabling an unprecedented flow of electronic data. Such flow takes place between different government sections and agencies, but also between government and citizens and private organisations (and of course within the private sphere itself). Even though the private sector was the driving force in the development of the Internet, it would be a mistake to ignore the significant role of the government and the deep impact of digitised networks on the manner in which public institutions go about their business. Recall that the initial thrust for the Internet was generated by the Defense Advanced Research Projects Agency (DARPA) of the US government. The security establishment has not withdrawn from this realm ever since (although its activities remain mostly behind the scenes, until revealed by whistle-blowers, such as Snowden). Focusing on the civil and administrative facets of modern governments, and in particular on the tools of government in the digital era, Hood and Margetts observed that all the different modalities through which the government may exercise influence on society were deeply modified by the use of computers and telecommunications. They distinguish four basic resources which the government can use to obtain information from and make an impact on the world: nodality (being at the centre of societal communication channels), authority (having legal powers), treasure (having money and other exchangeable properties), and organisation (having administrative structures at their service). They note that in the Internet era, the flow of information from government to society has increased due to the ease of communications and the availability of platforms for posting mass amounts of information online.

Moreover, and perhaps more importantly, the provision of public services through computer systems has enabled the automated collection of digital information as well as the generation of automated messages (e.g., pre-compiled tax forms, notices about sanctions, deadlines, or the availability of benefits) in response to queries. The exercise of authority has also changed in the Internet age, as the increased possession of digital information about citizens enables states to automatically detect certain unlawful or potentially unlawful behaviour (e.g., tax or traffic violations) and trigger corresponding responses. Tools to collect and filter information offline and online enable new forms of surveillance and control. Regarding treasure, payments to and by the government have increasingly moved to electronic transfers. Moreover, the availability of electronic data and the automation of related computation has facilitated the determination of entitlements (e.g., to tax credits or benefits) and has allowed for automated distinctions in ticketing (e.g., automatically sanctioning traffic violations, or charging transportation fees according to the time of day or the age of the passenger).

Finally, the way in which governmental organisations work has also evolved. Not only does the internal functioning of such organisations rely on networked and computerised infrastructures, but digital technologies are also widely used by governmental agencies and services to collect and process information posted online (e.g., to intercept telecommunications and analyse Internet content), as well as to deploy other networked sensors (e.g., street cameras, satellites, and other tools to monitor borders, the environment, and transfers of goods and funds).

To sum up this point, we may say that in the Internet era the internal operation of the state machinery (in particular, the bureaucracy), and the relation between government and civil society is often mediated by algorithms. However, this major development, in which considerable segments of the daily activities of the government are exercised through computer networks (i.e., algorithms), is primarily confined to routine activities, often involving calculations (e.g., the determination of taxes and benefits, given all the relevant data). This idea is challenged by the third wave of algorithmic government, still underway: the emergence of AI, to which we now turn.

8.4 Algorithms and AI

The concept of AI covers a diverse set of technologies that are able to perform tasks that require intelligence (without committing to the idea that machine intelligence is ‘real’ intelligence), or at least tasks that ‘require intelligence if performed by people’.Footnote 20 AI systems include and possibly integrate different aspects of cognition, such as perception, communication (language), reasoning, learning, and the ability to move and act in physical and virtual environments.

While AI has been around for a few decades – in 1950 Alan Turing pioneered the idea of machine intelligence,Footnote 21 and in 1956 a foundational conference took place at Dartmouth, with the participation of leading scientistsFootnote 22 – only recently has AI risen to play a dominant role in governments, following and complementing AI successes in the private sector. In fact, an array of successful AI applications has been built; these have already entered the economy and are used by corporations and governments alike: voice, image, and face recognition; automated translation; document analysis; question-answering; high-speed trading; industrial robotics; management of logistics and utilities; and so forth. AI-based simulators are often deployed as part of training exercises. The security establishment, it has been reported, has also developed AI systems for analysing threats, following the 9/11 attacks. We are now witnessing the emergence of autonomous vehicles, and soon autonomous unmanned flying vehicles may join them. In fact, there are very few sectors in which AI does not play a role, whether as a component of the provision of services or of the regulation of society, in the application and enforcement segments or at the norm-generation stages.

The huge success of AI in recent years is linked to a change in the leading paradigm in AI research and development. Until a few decades ago, it was generally assumed that in order to develop an intelligent system, humans had to provide a formal representation of the relevant knowledge (usually expressed through a combination of rules and concepts), coupled with algorithms making inferences out of such knowledge. Different logical formalisms (rule languages, classical logic, modal and descriptive logics, formal argumentation, etc.) and computable models for inferential processes (deductive, defeasible, inductive, probabilistic, case-based, etc.) have been developed and applied automatically.Footnote 23 Expert systems – that is, computer systems including vast domain-specific knowledge bases, for example in medicine, law, or engineering, coupled with inferential engines – gave rise to high expectations about their ability to reason and answer users’ queries. The structure of expert systems is represented in Figure 8.1. Note that humans appear both as users of the system and as creators of the system’s knowledge base (experts, possibly helped by knowledge engineers).

Figure 8.1 Basic structure of expert systems

Unfortunately, such systems were often unsuccessful or only limitedly successful: they could only provide incomplete answers, were unable to address the peculiarities of individual cases, and required persistent and costly efforts to broaden and update their knowledge bases. In particular, expert-system developers had to face the so-called knowledge representation bottleneck: in order to build a successful application, the required information – including tacit and common-sense knowledge – had to be represented in advance using formalised languages. This proved to be very difficult and, in many cases, impractical or impossible.

In general, only in some restricted domains have logical models led to successful applications. In the legal domain, logical models of great theoretical interest have been developed – dealing, for example, with arguments,Footnote 24 norms, and precedentsFootnote 25 – and some expert systems have been successful in legal and administrative practice, in particular in dealing with tax and social security regulations. However, these studies and applications have not fundamentally transformed the legal system and the application of the law. In the application of legal norms, and more generally within governmental activity, the use of expert systems has remained confined to those routine tasks where other computer tools were already in use.

It may be useful to consider the connection between algorithms and expert systems. The ‘algorithm’, in a broad sense, of such systems includes two components: the inferential engine and the knowledge base. Both have to be created, in all their details, by humans, and may be changed only by human intervention, usually to correct or expand the knowledge base. Thus the capacity of such systems to adequately address any new cases or issues depends on how well their human creators have been able to capture all relevant information, and to anticipate how it might be used in possible cases. It is true that such systems can store many more rules than a human can remember and can process them at high speed, but humans must still not only provide all such rules but also be able to understand their interactions, so as to maintain coherence in the system.
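
To make the division of labour concrete, the toy sketch below (written in Python, with entirely hypothetical eligibility rules) separates the two components just described: a human-authored knowledge base of if-then rules and a simple inference engine that chains them. Everything the system can conclude is fixed in advance by its human creators.

```python
# Toy rule-based "expert system": hypothetical benefit-eligibility rules.
# Knowledge base: human-authored if-then rules (condition -> conclusion).
RULES = [
    (lambda f: f["income"] < 20000, "low_income"),
    (lambda f: f["dependants"] >= 2, "family_need"),
    (lambda f: "low_income" in f["derived"] and "family_need" in f["derived"],
     "eligible_for_benefit"),
]

def infer(facts):
    """Inference engine: forward-chain the rules until no new conclusion is added."""
    facts = {**facts, "derived": set()}
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in facts["derived"] and condition(facts):
                facts["derived"].add(conclusion)
                changed = True
    return facts["derived"]

print(infer({"income": 18000, "dependants": 3}))
# derived conclusions: 'low_income', 'family_need', 'eligible_for_benefit'
```

The example also illustrates the knowledge representation bottleneck mentioned earlier: any circumstance the rule authors did not anticipate simply falls outside what the system can handle.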

AI has made an impressive leap forward since it began to focus on the application of machine learning to mass amounts of data. This has led to a number of successful applications in many sectors – ranging from automated translation to industrial optimisation, marketing, robotic vision, movement control, and so forth – and some of these applications already have substantial economic and social impacts. In machine learning approaches, machines are provided with learning methods, rather than (or in addition to) formalised knowledge. Using such methods, computers can automatically learn how to accomplish their tasks effectively by extracting or inferring relevant information from their input data, in order to reach an optimised end.

More precisely, in approaches based on machine learning, the input data provided to the system are used to build a predictive model. This model embeds knowledge extracted from the input data – that is, it consists of a structure that embeds generalisations over the data, so that it can be used to provide responses to new cases. As we shall see, such responses are usually called ‘predictions’. Different approaches exist to construct such a model. For instance, the model may consist of one or more decision trees (i.e., combinations of choices), based on the features that a case may possess, leading to corresponding responses. Alternatively, it can consist of a set of rules, obtained through induction, which express connections between combinations of features and related responses. Or it can consist of a neural network, which captures the relation between case features and responses through a set of nodes (called neurons) and weighted connections between them. Under some approaches, the system’s responses can be evaluated, and based on this evaluation the system can self-update. By going through this process again (and again), optimisation is approximated.
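
As a minimal sketch of this pipeline – using a hypothetical, made-up loan-application data set and the widely used scikit-learn library, assumed to be installed – the learning algorithm below builds a decision-tree model from past cases, and the resulting model is then queried for a new case:

```python
# Minimal supervised-learning sketch (hypothetical data; scikit-learn assumed available).
from sklearn.tree import DecisionTreeClassifier

# Training set: each past case is a pair (features, correct response).
# Features (illustrative only): [applicant_income_k, loan_amount_k, prior_defaults]
X_train = [
    [55, 10, 0],
    [20, 15, 2],
    [75, 30, 0],
    [18,  8, 1],
    [60, 25, 0],
    [22, 20, 3],
]
y_train = ["approve", "reject", "approve", "reject", "approve", "reject"]

# The learning algorithm extracts a model (here, a decision tree) from the data.
model = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# The predicting algorithm applies the model to a new, unseen case.
new_case = [[50, 12, 0]]
print(model.predict(new_case))  # e.g. ['approve']
```

The point of the sketch is the division of roles: humans supply example cases and their outcomes, the learning algorithm constructs the model, and the model – not a human-written rule – produces the response to the new case.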

8.5 Approaches to Machine Learning

Three main approaches to machine learning are usually distinguished: supervised learning, reinforcement learning, and unsupervised learning.

Supervised learning is currently the most popular approach. In this case, the machine learns through ‘supervision’ or ‘teaching’: it is given in advance a training set (i.e., a large set of answers that are assumed to be correct in achieving the task at hand). More precisely, the system is provided with a set of pairs, each linking the description of a case, in terms of a combination of features, to the correct response (prediction) for that case. Here are some examples: in systems designed to recognise objects (e.g., animals) in pictures, each picture in the training set is tagged with the name of the kind of object it contains (e.g., cat, dog, rabbit); in systems for automated translation, each document (or fragment of a document) in the source language is linked to its translation in the target language; in systems for personnel selection, the description of each past applicant (age, experience, studies, etc.) is linked to whether the application was successful (or to an indicator of the work performance of appointed candidates); in clinical decision support systems, each patient’s symptoms and diagnostic tests are linked to the patient’s pathologies; in recommendation systems, each consumer’s features and behaviour are linked to the objects purchased; in systems for assessing loan applications, each record of a previous application is linked to whether the application was accepted (or, for successful applications, to the compliant or non-compliant behaviour of the borrower). And in our context, a system may be given a set of past cases by a certain state agency, each of which links the features of a case to the decision made by the agency. As these examples show, the training of a system does not always require a human teacher tasked with providing correct answers to the system. In many cases, the training set can be the side product of human activities (purchasing, hiring, lending, tagging, deciding, etc.), as it is obtained by recording the human choices pertaining to such activities. In some cases, the training set can even be gathered ‘from the wild’, consisting of data available on the open web. For instance, manually tagged images or faces, available on social networks, can be scraped and used for training automated classifiers.

Figure 8.2 Kinds of learning

The learning algorithm of the system (its trainer) uses the training set to build a model meant to capture the relevant knowledge originally embedded in the training set, namely the correlations between cases and responses. This model is then used by the system – by its predicting algorithm – to provide hopefully correct responses to new cases, by mimicking the correlations in the training set. If the examples in the training set that come closest to a new case (with regard to relevant features) are linked to a certain answer, the same answer will be proposed for the new case. For instance, if the pictures that are most similar to a new input were tagged as cats, the new input will also be tagged in the same way; if past applicants whose characteristics best match those of a new applicant were linked to rejection, the system will propose to reject the new applicant as well; if the past workers who come closest to a new applicant performed well (or poorly), the system will predict that the new applicant will perform likewise; if the past people most similar to a convicted person turned out to be recidivists, the system will predict that the new convict will also re-offend.
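
The ‘closest past case’ logic described above can be sketched in a few lines of Python; the feature vectors and past hiring decisions below are invented purely for illustration, and real systems use far richer notions of similarity:

```python
# Sketch of the "closest past case" idea (invented data, illustration only).
def closest_answer(new_case, training_set):
    """Return the response attached to the past case most similar to the new one;
    similarity here is simply the squared distance between feature vectors
    (smaller distance means more similar)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _features, answer = min(training_set, key=lambda pair: distance(pair[0], new_case))
    return answer

# Each pair: ([age, years_of_experience], past hiring decision) - hypothetical records.
past_applicants = [
    ([25, 1], "rejected"),
    ([32, 8], "hired"),
    ([41, 15], "hired"),
    ([23, 0], "rejected"),
]
print(closest_answer([30, 7], past_applicants))  # -> 'hired'
```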

Figure 8.3 Supervised learning

Reinforcement learning is similar to supervised learning, as both involve training by way of examples. However, in the case of reinforcement learning the system also learns from the outcomes of its own actions, namely, through the rewards or penalties (e.g., points gained or lost) that are linked to such outcomes. For instance, in the case of a system learning how to play a game, rewards may be linked to victories and penalties to defeats; in a system learning to make investments, to financial gains and penalties to losses; in a system learning to target ads effectively, to users’ clicks; and so forth. In all these cases, the system observes the outcomes of its actions, and it self-administers the corresponding rewards or penalties in order to optimise the relationship between the response and the goal. Being geared towards maximising its score (its utility), the system will learn to achieve outcomes leading to rewards (victories, gains, clicks), and to prevent outcomes leading to penalties. Note that learning from one’s successes and failures may require some exploration (experimentation): under appropriate circumstances, the system may experiment with randomly chosen actions, rather than performing the action that it predicts to be best according to its past experience, to see if something even better can come up. Also note that reinforcement learning must include, at least to an extent, a predefined notion of what counts as a ‘success’.
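
The reward-driven loop can be sketched with a toy ‘ad selection’ task (the click probabilities below are invented, and the epsilon-greedy strategy is only one of many possible choices); the agent mostly exploits the action it currently estimates to be best, but occasionally explores others, mirroring the exploration point made above:

```python
# Toy reinforcement-learning loop (epsilon-greedy bandit) for choosing among ads.
# The click probabilities are invented purely for illustration.
import random

TRUE_CLICK_RATE = {"ad_A": 0.05, "ad_B": 0.12, "ad_C": 0.08}   # unknown to the agent
estimates = {ad: 0.0 for ad in TRUE_CLICK_RATE}
counts = {ad: 0 for ad in TRUE_CLICK_RATE}
EPSILON = 0.1   # fraction of the time spent exploring

for step in range(10000):
    if random.random() < EPSILON:                    # explore: try a random ad
        ad = random.choice(list(TRUE_CLICK_RATE))
    else:                                            # exploit: pick the best estimate so far
        ad = max(estimates, key=estimates.get)
    reward = 1 if random.random() < TRUE_CLICK_RATE[ad] else 0   # 1 = the user clicked
    counts[ad] += 1
    estimates[ad] += (reward - estimates[ad]) / counts[ad]       # running-average update

print(max(estimates, key=estimates.get))   # usually 'ad_B', the ad with the highest click rate
```

Note that the notion of success – a click, counted as a reward of 1 – is fixed in advance, which is precisely the predefined criterion of ‘success’ mentioned at the end of the paragraph above.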

Finally, in unsupervised learning, AI systems learn without receiving external instructions, either in advance or as feedback, about what is right or wrong. The techniques of unsupervised learning are used, in particular, for clustering – that is, for grouping items that present relevant similarities or connections (e.g., documents that pertain to the same topic, people sharing relevant characteristics, or terms playing the same conceptual roles in texts). For instance, in a set of cases concerning bail or parole, we may observe that injuries are usually connected with drugs (not with weapons, as one might have expected), or that people with a prior record are those associated with weapons. These clusters might turn out to be informative in grounding bail or parole policies.
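
A minimal clustering sketch, with entirely invented case records and the scikit-learn library assumed available, shows the key difference from the previous approaches: no labels are supplied, and the algorithm simply groups similar cases, leaving their interpretation to human analysts:

```python
# Unsupervised learning sketch: clustering hypothetical case records with k-means.
from sklearn.cluster import KMeans

# Each row is an invented case: [involves_drugs, involves_weapons, prior_record, injuries]
cases = [
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cases).labels_
print(labels)   # e.g. [0 0 1 1 0 1]: cases grouped by similarity, with no labels provided
```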

8.6 AI Systems as Prediction Machines

Machine-learning systems are still based on the execution of algorithmic instructions, conveyed through software programs, as any computer is. In the end, such programs govern the functioning of a digital computer, and their execution is reduced to the simple operations of binary arithmetic performed by one or more processors. However, such algorithms are different, in an important way, from the non-learning algorithms we have described previously, including algorithms meant to govern the behaviour of humans (see Section 8.2) and algorithms directed to machines (see Sections 8.3 and 8.4).

As noted previously, the difference is that to create a non-learning algorithm, humans have to provide in advance all knowledge that is needed to address the task that the algorithm is meant to solve. Thus the use of such algorithms is restricted to the cases in which it is possible, for humans, to give in advance all such information. A further restriction comes from the extent to which a human is able to process this information (in the case of algorithms directed to humans) or to which a human is able to grasp connections and impose coherence over the information (in the case of algorithms directed to computers).

With regard to learning algorithms, we enter a different domain. Once given a training set (in supervised learning), relevant feedback (in reinforcement learning), or just a set of data (in unsupervised learning), the learning algorithm produces a predictive model (i.e., a set of rules, or decision trees, or a neural network) which embeds information extracted from the training set. This information basically consists of correlations between certain data on objects or events (i.e., the predictors to be used) and other data concerning the same objects or events (i.e., the targets that the system is meant to determine), based on the predictors. Thus, for instance, in a system dealing with recidivism, the model might embed the correlations between features of offenders (age, criminal record, socio-economic conditions, or any other factors) and the crimes they are expected to commit after being released.Footnote 26 In a system dealing with case law, the model may embed correlations between the textual content of the judge’s opinions (plus, possibly, further codified information on the case, or other information regarding social, political, or economic events) and the corresponding decisions. We can consider the predictive model itself (in combination with the software that activates it) as a complex algorithm, an algorithm that is not constructed by humans (who may only specify some of its parameters and features), but by the learning algorithm. The predictive model can be applied to a new object or event, given the values of the predictors for that object or event, and asked to assign corresponding values for the target. It can evolve by being further modified by the learning algorithm, so as to improve its performance. Moreover, to the extent that the learning process is given access to a very large (and ever-increasing) data set, it can find within this data set statistical patterns that predict given outcomes in ways that were difficult to foresee when the algorithm was first launched.

Thus, machine learning systems can be viewed as ‘prediction machines’.Footnote 27 To understand their impact on public activities, we need to clarify this notion of prediction. Within machine learning, predicting a target datum based on a set of input data (predictors) just means to suggest what the target datum is likely to be, on account of its correlation with such input data; it consists in ‘filling the missing information’ based on the information we have.Footnote 28 Prediction in this sense does not always, though it does often, refer to future events. As examples of prediction focused on the present, consider an image recognition system that labels pictures (as dogs, cats, humans, etc.), face recognition systems that label faces (with people’s names), or a diagnostic system that labels radiographies with possible pathologies. For predictions focused on the future, consider a system that predicts the likelihood that a person will have a certain health issue, or that a certain student admitted to a university will do well, that an applicant for parole will escape or engage in criminal activities, that a traffic jam will happen, or that crimes are likely to take place in a certain area of a city under certain circumstances.

Having systems that can make predictions, in a cheap and effective way, has three distinct implications:

  • Predictions currently made by humans will, partially or completely, be delegated to machines, or in any case machine predictions will be integrated with human ones.

  • A much larger number of predictions will be executed, in a broader set of domains.

  • A much larger set of data will be collected to enable automated predictions.

Moreover, the learning process may reveal factors that we have not yet realised to be relevant to the ‘correct’ outcome or may even suggest a different outcome as a correct outcome, if such an outcome correlates better with other outcomes identified as preferable.

8.7 From Prediction to Action

Automated predictions may empower decision makers by enabling them to better assess the situation at stake and take consequential actions. Alternatively, such actions too may be entrusted to an automated system. In certain cases, a system’s prediction may be subject to human control (‘human in the loop’, or ‘human over the loop’); in other cases, it may not be challenged by humans. For instance, the prediction that a patient suffers from a pathology, based on the automated analysis of his or her radiographs, is, to date, subject to endorsement by the doctor in order for it to become the basis of subsequent treatment. Similarly, a prediction of recidivism has to be endorsed by a judge before it becomes the basis for a judgment. On the other hand, the prediction that there is a pedestrian in the middle of the road, for obvious reasons of time, will lead directly to the action of an autonomous car (without necessarily removing human intervention from the autonomous car altogether).

The link between prediction and decision may take place in different ways. A human may have the task of deciding what to do based on the prediction – that is, of determining whether to grant bail, or whether to approve a loan (and at which rate), after the system has predicted the likelihood that the convict will escape or recommit a crime or the likelihood of default on the loan. The choice to entrust a certain decision to a human, even when prediction is delegated to a machine, is ultimately a normative choice. When decisions – including legal decisions by judicial or administrative bodies – involve selecting one course of action among alternatives, based on the way in which the selected alternative promotes or demotes the values (individual rights, public interests) at stake, the process often entails evaluating the comparative importance of these values. To date, no machine has the ability to make such an assessment, but this does not mean that such choices can never be delegated to a machine.

First, a hard-coded automated rule may specify that, given a prediction, a certain decision is to be taken by the system (e.g., that a loan application has to be rejected if the applicant is predicted to default with a likelihood that is above a given threshold); similarly, an online filtering system may reject a message given the likelihood that it is unlawful or inappropriate.Footnote 29 This ex-ante choice (i.e., the decision rule specifying what the system should do, given its prediction), of course, is where the normative work is being done, and hence we would expect it to be rendered by humans.
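
Such a hard-coded rule can be stated in a few lines; in the sketch below both the threshold and the predicted probability are hypothetical, and the point is simply that the normative choice sits in the human-set threshold rather than in the predictive model:

```python
# Hypothetical hard-coded decision rule linking a prediction to an action.
DEFAULT_RISK_THRESHOLD = 0.30   # the normative choice, set in advance by humans

def decide_loan(predicted_default_probability: float) -> str:
    """Apply the ex-ante rule: reject whenever predicted risk exceeds the threshold."""
    if predicted_default_probability > DEFAULT_RISK_THRESHOLD:
        return "reject"
    return "approve"

print(decide_loan(0.42))   # -> 'reject'
print(decide_loan(0.10))   # -> 'approve'
```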

Where no hard-coded rules are available for linking predictions to choices, but the goals to be achieved, as well as their relative importance, are clear (again, in the sense that humans have made a prior decision regarding these goals), the system may also be entrusted with learning the best way to achieve such goals under the predicted circumstances, and with implementing it. For instance, in the case of online advertising, a system may learn what kind of messages are most likely to trigger a higher response by certain kinds of users (the maximisation of users’ clicks or purchases being the only goal being pursued) and act accordingly. As this example shows, a problem arises from the fact that, in order to delegate a choice to a machine, the multiple values that are at stake (profit of the supplier, interests of the consumers, overall fairness, etc.) are substituted by a single proxy (e.g., the number of clicks or purchases) that is blindly pursued.

When even the goals are not clear, the system may still be delegated the task of suggesting or even taking actions, after it has acquired the ability to predict how a human would have acted under the given circumstances: the action to be taken is simply the action that the system predicts a human would have taken, after training on a relevant data set that captures the inputs humans receive and their subsequent decisions. For instance, a system may learn – on the basis of human-made translations, documents, or paintings – how to translate a text, write a document, or draw a painting, by predicting (after adequate training) how humans would translate the text, write the document, or draw the painting. Similarly, a system may forecast or suggest administrative or judicial decisions, after having been trained on data sets of such decisions, by predicting how a human administrator or judge would decide under the given circumstances. This is what is aimed at in the domain of predictive justice.
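A minimal sketch of this imitation-based approach, assuming an entirely invented data set of past bail decisions and using scikit-learn only for illustration, might look as follows; the ‘suggested’ decision for a new case is nothing more than the decision the model predicts a human would have taken:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row encodes features of a past case,
# each label records the decision the human decision maker actually took.
past_cases = [
    [2, 0, 1],   # e.g., prior offences, employment status, community ties
    [0, 1, 3],
    [5, 0, 0],
    [1, 1, 2],
]
past_decisions = ["deny_bail", "grant_bail", "deny_bail", "grant_bail"]

model = LogisticRegression().fit(past_cases, past_decisions)

# The system's "suggestion" for a new case is simply its prediction of what
# a human decision maker would have decided in similar circumstances.
print(model.predict([[1, 0, 2]]))
```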

A learning process needs to be calibrated on the manner in which a human would make a decision whenever hard facts, ground truths, or clear consequences distinguishing a correct decision from a faulty one are hard to come by. Contrast, for instance, medical and legal decision-making. In medical decision-making, the evolution of the physical condition of a patient may tell whether a diagnosis was right or wrong, or whether a therapy was effective or not; in the law, matters are more complicated. Whereas we may have facts regarding recidivism or ‘jumping bail’ (which, however, may reflect societal inequities or biases in and of themselves), it is much more difficult to devise a factual benchmark with which to evaluate whether a correct decision has been entered regarding the validity of a contract or a will, or whether a certain interpretation of a statute is more correct than another. This methodological reliance on learning by mimicking human decision makers is obviously a double-edged sword: the AI system will learn to replicate the virtues and successes of humans, but also their biases and failures.

On this basis, we may wonder to what extent AI-based predicting machines do and may contribute to state activity. As prediction is key to most if not all decision-making, it appears that a vast domain of possibilities exists. A learning system can provide indications that pertain to different domains that are relevant to the government. For instance, such a system may predict the chances that a person is going to re-commit an offence (i.e., has certain recidivist tendencies) or violate certain obligations, and on this basis it can suggest measures to be adopted. It can predict where and at what time crimes are most likely to take place, so that appropriate measures can be taken. Or it may predict the occurrence of traffic jams, and possibly suggest how to direct the traffic in such a way that jams are avoided. Or it may predict the possibility of environmental issues, and possible responses to them. It may predict the spread of a disease and the effectiveness of measures to counter it. More generally, it may predict where social issues are going to emerge, and how to mitigate them. The context of the system’s use often determines whether its proposals are interpreted as forecasts or rather as suggestions. For instance, a system’s ‘prediction’ that a person’s application for bail or parole will be accepted can be viewed by the defendant (and his lawyer) as a prediction of what the judge will do, and by the judge as a suggestion for her decision (assuming that she prefers not to depart from previous practice). The same applies to a system’s prediction that a loan or a social entitlement will be granted. Depending on the context and on the technology used, such predictions can be associated (or not) with a probability score. In any case, such predictions are uncertain, being grounded on the data in the input set provided to the system, and on the statistical correlations between such data.

However, we must not forget that the fact that a machine is able to make predictions at a human and even at a superhuman level does not mean that the machine knows what it is doing. For instance, a system for automated translation does not know the meaning of the text in the input language, nor the meaning of the output in the target language; it has no idea of what the terms in the two languages refer to in the physical or social world. It just blindly applies the correlations – learned from previous translations – between textual expressions in the source and target languages. It has indeed been argued that the success of automated translation does not show that machines today understand human language, since it rather consists of ‘bypassing or circumventing the act of understanding language’.Footnote 30 Similarly, a machine predicting appeal decisions – based on the text of the decision under appeal and the arguments by the parties – does not know what the case is about. It is just blindly applying correlations linking textual patterns (and other data) to possible outcomes; it is suggesting legal outcomes by bypassing or circumventing the act of understanding laws and facts.Footnote 31

It is true that the impacts of a choice on the real world may be fed back to, and taken into account by, a learning machine, but only to the extent that such impacts are linked to quantities that the machine can maximise. This may be the case for investment decisions, where a quantification of the financial return of the investment may be fed back, or even directly captured by the machine (e.g., in the stock market); the situation is more difficult in most instances of administrative and judicial decision-making, where the multiple goals, values, and interests at stake have to be taken into account. Completely delegating decisions to the ‘blind’ assessment of the machine may involve a violation of the rule of law (as will be further discussed in Section 8.9, where we will address other concerns the recourse to AI raises).

8.8 Algorithmic Machine Learning as a Regulatory and Policy-Formation Instrument

In this section, we will consider how algorithms can assist governmental agencies in exercising executive functions, focusing first on algorithms as part of the administrative and regulatory apparatus, rather than as a subject for regulation. The state, it should be recalled, acts in three capacities: it is an operator, or an actor (when, for example, it goes to war or uses other forms of direct action); it is an administrative entity (when administering, or implementing, a regulatory scheme, for example, when providing services to citizens and residents); and it also has legislative powers (primary and secondary) to devise a policy and then enact a regulatory regime (which may apply to the state or to the industry). Algorithms can play a part in all three prongs.

First, as a direct actor, or operator, the state may harness AI for its war powers (autonomous or semi-autonomous weapons)Footnote 32 or police powers (when it resorts to AI in the law enforcement context for deploying its forces)Footnote 33 or other operational decisions, including logistics and human resources. In the policing domain, with surveillance sensors expanding to include online cameras, neural network technologies can be used for facial recognition,Footnote 34 and access to law enforcement agencies’ databases may provide real-time predictive policing, assisting officers in making operational decisions in response to, or in anticipation of, risks. More specifically, predictive policing systems are used to determine the locations and times in which different kinds of criminal activities are more likely to take place, so that a timely preventive action can be undertaken by police forces.

The police power of the state also encompasses the second prong of state power – the administration of a regulatory regime designed to achieve certain regulatory purposes. In that respect, predictive policing is not different from other types of predictive tools, designed to give implementing agencies more efficient capacities. To the extent that algorithmic instructions reach the desired outcome or rigorously reflect the legal criteria underlying a given regulatory scheme,Footnote 35 and so long as the factual input upon which the instructions are then implemented is sound, such algorithms can facilitate the day-to-day bureaucratic machinery, which is faced with the challenge of addressing a large number of decisions pursuant to a regulatory scheme. Among other duties, regulatory agencies perform monitoring routines; publish state-certified information; grant or withdraw permits and licenses; levy fines; assess, collect, and refund fees, taxes, and subsidies; and execute decisions of judicial bodies. Recall that many of these ‘application algorithms’ discussed previously need not include a machine-learning component, at least to the extent that the language of the legal codes may be translated into computer code and applied in a manner that does not require machine ‘discretion’. Depending on the specificity of the legal criteria undergirding the regulatory regime governing such duties, many such routine decisions are candidates for being coded and translated into algorithms, thereby relieving some of the administrative burden associated with these decisions, as well as assisting in achieving greater consistency in the application of the law to concrete cases. Moving beyond ‘simple’ algorithms, an AI component allows for optimisation of the decision-making process when only some facts are known and others are not easily discernible. In such cases, one (or more) of the basic approaches to machine learning (described in Section 8.5) may be relevant for sifting through a large number of cases and detecting those in which the exercise of the regulatory function is most likely appropriate.
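For the avoidance of doubt, an ‘application algorithm’ of this kind need not involve any learning at all: where the legal criteria are sufficiently specific, they can be translated directly into code. A minimal sketch, using purely invented eligibility criteria for a hypothetical permit:

```python
# Illustrative "application algorithm": a direct translation of invented
# permit criteria into code, with no machine-learning component at all.
def is_eligible_for_permit(age, has_prior_revocation, fee_paid):
    """Grant the permit only if every codified condition is met."""
    return age >= 18 and not has_prior_revocation and fee_paid


print(is_eligible_for_permit(age=25, has_prior_revocation=False, fee_paid=True))  # True
```

The difficulty, of course, lies not in writing such code but in deciding which regulatory criteria are specific enough to be codified in this way without losing the discretion the scheme presupposes.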

For example, when state agencies are called upon to perform the basic function of ensuring compliance by the industry with rules, procedures, or outcomes, AI may assist in deciding how to allocate compliance resources. Consider the allocation of financial or other resources to citizens and organisations pursuant to some self-reporting: predicting which cases probably meet the criteria and therefore require fewer checks may promote the overall social good.Footnote 36 The technology may also be used to anticipate who may drop out of school. More generally, it may identify people who, in the near future, may require other forms of governmental assistance, or, for that matter, certain medical treatments. Similarly, AI may be used to assist the grading of public tenders, or other forms of public contracts. In the enforcement context, examples include detecting money laundering by relying on technological approaches such as those used by PayPal, banks, and credit card companies that seek to spot irregular activities based on established spending patterns.Footnote 37 Similarly, governments may use AI to detect welfare fraudFootnote 38 (and tax fraud more generally). Enforcement may also capture relevant online communications (e.g., concerning organised crime or terrorism, but also, in authoritarian states, proscribed opinions).

More fundamentally, algorithms can be harnessed to play a role not only in the implementation of regulatory schemes, technical or discretionary, but also in their evaluation and, eventually, in the formation of alternative schemes. The development of predictive algorithms may be useful in assessing not only a particular case, but also the more general relationship between regulatory means and ends. It may shed light on what measure is likely to work, and under what conditions. It may also inform policy makers with respect to the probable cost-benefit analysis of achieving certain policy goals. Such algorithms may be conceptualised as ‘policy algorithms’, since the problem they are designed to solve is the overall risk allocation in a given socio-economic field, or the likelihood that a certain regulatory scheme, as applied, will achieve its goals, compared to (tested) alternatives. Obviously, such algorithms can also be designed so that they ‘learn’ and adapt, as they analyse policy decisions at the aggregate level, to detect those with a greater probability of achieving a desired goal (and a lower probability of achieving unintended negative consequences).

More specifically, then, to the extent a state agency is able to distil the objectives it seeks to optimise, or to identify key factors underlying a social problem (or which may affect such a problem), the agency may resort to the technology for designing policy, by focusing on what the technology may tell the policymaker regarding the relationship between means and ends.Footnote 39 For example, it may harness machine learning in public health for predicting risks and susceptibility to diseases and illnesses and for predicting which regulatory responses may optimise desired outcomes.Footnote 40 Similarly, machine learning may be used in education, where AI systems can predict educational performance,Footnote 41 including the correlation between such performance and different regulatory approaches. In transportation and urban planning, machine learning may be used to predict traffic, capacity, or urbanisation patterns, and their correlation with different planning policies.Footnote 42 Environmental patterns should also be mentioned among the external events or situations relevant to the activities of state agencies that may be predicted.Footnote 43 Note that in these cases as well, AI is not concerned with the overall set of values the policy is set to promote, but rather is placed at the level of optimising the means for achieving these goals. Furthermore, we can appreciate that predicting recidivism, crimes, financial frauds, and tax evasion is not only of interest to the law enforcement agency – it is also relevant for the policy formation segments of the state. Similarly, anticipating environmental, sanitary, or financial difficulties; reviewing purchases or other contractual arrangements; and predicting the flow of traffic or the consumption of energy are relevant not only for real-time response, but are also valuable in the policy formation process, including for optimising logistics in civil and military domains.

To conclude this section, machine learning holds the potential of going beyond what we currently identify as legally relevant criteria. To the extent the design of the algorithmic ‘production line’ includes access to big data, not classified according to any legally relevant criteria, the algorithm may come up with alternative criteria, based on statistical probabilities of certain correlated facts in a given instance. In this sense, the learning algorithm is not merely an ‘application algorithm’, which contents itself with the technical application of a predetermined set of instructions. Rather, a learning algorithm can be understood as a ‘discretionary algorithm’, since it may devise the criteria upon which a state decision may be based. These criteria are those embedded in the predictive model constructed by the learning algorithm of the system, regardless of whether such criteria have a linguistic form (as in systems based on inferred rules or decision trees), or whether they are coded at the sub-symbolic level (as in the weighted connections within a neural network). This holds the potential to expand the ability of the state agency (or agencies, to the extent a regulatory regime involves multiple organs). It comes, however, with its own set of legal difficulties.

It is worthwhile to note that AI is not a single technology, but rather a vast bundle of diverse methods, approaches, and technologies. Within that bundle, there are learning algorithms that may be designed to generate cognitive responses (rational and emotional) that nudge people – whether they are conscious of the manipulation or not – to behave or react in a certain way. This feature may be combined with algorithms that seek, upon mining big data, to ascertain what achieves a preferred outcome without necessarily following pre-ordained legal criteria.Footnote 44 Nudging algorithms are relevant as a regulatory measure precisely because of their ability to nudge people to react, form opinions/emotions, and invest their attention one way (or not invest it in another), and therefore they offer regulators the ability to channel the behaviour of an unspecified public by micro-targeting segments thereof. Their deployment also clearly raises considerable ethical and rights-based questions. And we should also realise that automated nudging may be deployed by the regulated industry so as to prompt a certain reaction from the agency (and the decision makers therein).

8.9 The Algorithmic State – Some Concerns

With all their promise, algorithms – application algorithms, discretionary algorithms, and policy-analysis (or policy-formation) algorithms – challenge our understanding of regulation in two dimensions, both rather obvious. The first is that the integration of algorithms into the regulatory process comes with some serious drawbacks. The second is that algorithms are not only (or primarily) integrated into the regulatory process; they emerge as the backbone of the modern, data-driven industries, and as such call for regulation by the (algorithmic) state. As noted previously, they are the subject of regulation, and hence a tension may arise.

On a fundamental level, and in reference to the analysis of the different functions of government, we may observe that AI systems could be deployed to enhance the influence of government over information flows (nodality). AI systems have indeed been used to filter the information that is available to citizens (as happens most often in repressive regimes), to analyse the information generated by citizens (and not necessarily reveal such analysis to the general public), and to provide personalised answers to citizens’ queries, or otherwise target individuals, in a manner that may be manipulative. Furthermore, as has been identified by many, AI may be used by for-profit or not-for-profit entities to further enhance existing socio-political cleavages. By nudging activities within echo-chambers in a manner that alters priorities, perceptions, and attitudes, a degree of social control may be obtained in a manner that is inconsistent with underlying presumptions regarding deliberative discourse and the ongoing formation of values. To the extent the state fails to regulate such deployment of AI by for-profit or not-for-profit organisations, AI can be used to undermine democratic values.

8.10 Algorithms and Legal (Performative) Language

The drawbacks of algorithmic regulation have been noted by many. But before we outline some such key concerns, any serious discussion between jurists and computer scientists on algorithms (or machine learning or AI) reaches the issue of language and rules. Recall that algorithms are a form of prescriptive language, and as such share this feature with law. Yet as philosophers of law teach us, ‘the law’ – itself a rather complex term – is greater than the sum of its rules. The legal universe is also comprised of standards, principles, and values – which by definition are not ‘finite’ and as such evade codification into an algorithm. Moreover, the relationship between the rule (as a general norm) and the application of the rule (to one particular case) is not trivial. It would appear that by definition a rule must include a set of cases greater than one for it to be a rule of general application. Yet as soon as we shift our focus from the rule to the particular case, at least two things happen. The first is that we have to inquire whether other legal norms may be applicable, and since, as noted, the legal system includes standards and values, with relatively far-reaching application, the answer is almost always yes. This creates a built-in tension, as there are no easily available rules to solve the potential clash between a rule of general application and the more general standard or value. The second, more nuanced issue that arises relates to the very notion of ‘application’, which requires a certain form of judgement that cannot be reduced, in law, to a cut-and-dried, mechanical syllogism. This is because, conceptually, language does not apply itself, and, normatively, built into the rule is its purpose, which may call, in the particular case, for generating an exception to the rule or otherwise refreshing the understanding of the rule so as to address its particular ‘application’ in a manner consistent with the purpose of the rule.

In other words, in law the relationship between the rule and its application is dialectic: the very essence of the rule is that it will be ‘binding’ and apply to the particular cases captured by the language of the rules, yet at the same time part of the DNA of the language of the rules is that the application in the particular case, while fitting a certain analytic structure, is also consonant with the underlying purpose and function the rule is there to fulfil. Because in law rules do not self-apply, some form of judgment is inherent. Viewed slightly differently, there is always, again because of the nature of human language, an ingredient of interpretation regarding the meaning of the words that construct the rule. Such interpretation may be informed by the core (conventional) meaning of a certain phrase, but it may also be informed by the penumbra, where the meaning is more vague. The line between the core and the penumbra is itself open to interpretation. Some even question the clear distinction between the core and the penumbra, suggesting that drawing such a line reflects normative considerations of purpose and aesthetic considerations of fit.

Be that as it may, normatively, we do not want to erase the tension between the rule and the exception, because discretion, even when highly restricted, is nonetheless an integral part of what makes law worthy of our moral respect; it connects the operative words to their (otherwise morally appropriate) purpose. At least some leading jurists suggest that law provides a distinct reason for abiding by its prescriptions, and that reason, at least at some level, ties back to the notion of the moral legitimacy of the rule as part of a legitimate set of rules, and ultimately of a legal system and its processes.

Moreover, central to the notion of law in a liberal democracy is its human nature: it is a product of human agency, its values and goals should reflect care for human agency, and its application should ultimately be at the hands of humans exercising agency. The aforementioned legitimacy therefore is enhanced with the exercise of judgment as a matter of moral agency (and seeing right from wrong) by the person who applies the law. Some jurists suggest that the question of legal validity, scope, and operative meaning of a particular rule as considered for application in a given set of facts cannot be fully separated from the underlying values embedded in the rule (as part of a set of rules and principles, governing a given field of human interaction). If this is indeed the case, discretion is a feature, not a bug. It is not clear that we can fully separate the question of ‘what is the operative meaning of the rule with respect to a particular set of facts’ from the question ‘should we enforce that meaning in the given set of facts’.

In that respect, would we rather have bureaucrats fully automated, without seeing the unique circumstances before them – the human being (not only the case number) applying for the exercise of state power (or its withdrawal) in a particular case? Certainly, there is a risk that relaxing the technical commitment to the conventional meaning of rules will result in biases or favouritism, as may be the case when human judgment is exercised. But the alternative, namely removing all ambiguity from the system, may result in detaching law from its human nature, by removing agency and by supposing that codes can adequately cover all circumstances, and that human language is capable of capturing ‘the reality’ in a transparent, technical manner. The latter assumption is difficult to support.

On some abstract level, the law is ‘quantic’. Contrary to our everyday understanding, in the marginal cases it evades being reduced to yes-no answers, and we may never know what the rule is until past its application (and then we know what the application has been, not necessarily how the rule will be applied in the next marginal case). The presence of the marginal cases radiates back to the core cases, such that even in some core cases irregular application may ensue, and thus an internal tension always exists between the rule and its application.

Algorithms, it would seem, have a different logic: as a general matter, a clear binary answer is what makes an algorithm sound. In cases where such a binary answer is unavailable, it is replaced with an approximation, and this approximation is then reduced to a complex yes-no flow chart.

Even though AI systems may be able to learn from past examples and from feedback, to randomly select and test new solutions, and to model competing arguments and criteria for choosing between these solutions, it is still difficult to conceive – at least according to the present state of the art – of a dialectic algorithm which adequately captures the internal tension between rule and exception, or between the general rule and the particular case, built into law. As noted previously, even the most advanced predictive systems do not have an understanding of language; they can only harness ‘blind thought’ (i.e., unreflective data manipulation), lacking the capacity to link language to reality, and in particular to link legal provisions to the social and human issues that such provisions are meant to regulate. Consequently, delegating the application of the law to an automated system in a manner that eliminates human discretion (and fully removes the human from the loop, including from above the loop) entails, fundamentally, the displacement of a certain question from the legal realm to the technical/bureaucratic realm. This does not mean that certain matters cannot be so displaced, but it does mean that such displacement, to the extent it involves the exercise of state power, generates a call for a process of legal contestation, for reasons related to the rule of law. Hence, the law is reintroduced and the potential for human intervention is brought back.

An interesting, albeit highly speculative, development in this domain suggests that we should reconfigure our understanding of general rules by introducing the concept of personalised law.Footnote 45 The idea is to use AI to predict the relevant features of individual citizens, and to select accordingly the law that applies to them. For instance, if it is possible to distinguish automatically between skilful and incapable drivers, or between vulnerable and knowledgeable consumers, each individual would be subject to the law that fits his or her features, with regard to the required level of care (e.g., speed limits), advertising messages, or privacy notices. Similarly, with regard to default rules (e.g., in matters of inheritance), each one may be subject, by default, to the legal rule that fits his or her predicted preferences.Footnote 46 It remains to be seen not only whether this would indeed be technically feasible, but also whether it may challenge our understanding of the relationship between general norms and their application, including the relationship between rules and standards on the one hand, and rules and particular exceptions on the other.

8.11 Rule of Law

Moving to a less abstract level, resorting to algorithms as a regulatory tool may generate conflicts with the demands of the rule of law, to the extent the recourse to algorithms amounts to a delegation of legal authority either to the state-run algorithm or to private companies that own the data or the algorithm (or both). Clearly, to the extent that private entities play a key role in the algorithmic regulation (of others), the issue of delegation of state power is a serious concern.Footnote 47 Considerable attention has been devoted to the public-private interface, sometimes referred to as a ‘partnership’, although such a partnership already assumes a model of co-regulation, which then raises concerns of self-dealing or the potential capture of either the policy formation or the enforcement processes, or both. But even if the private entities only play a supportive role (or play no role at all), the rule-of-law problem remains.

As noted previously, under the analysis of legal language, the rule of law, as a concept, is not the rule of machines. This is not a mere matter of legal technicality: the idea of the rule of law is premised on the conscious and intentional articulation and deployment of legal categories and concepts, reflecting certain values, to address specific and more general distributive and corrective decisions. Such a premise holds at the level of norm-setting (and is thereby relevant to policy-analysis algorithms) but also at the level of implementation (and is thereby relevant to implementation and discretionary algorithms). Contrary to a simplified meaning, according to which the rule of law is posited as the opposite of the rule of men, the rule of law is not a rule detached from humans. Rather, it is a rule formed through human interaction, governing human interaction, for the benefit of humans. The rule of law therefore is a mechanism to counter the rule of whim, desire, arbitrariness, or corrupted self-interest, which may follow from constructing the state as if it can do no wrong, and the rulers as if they are entitled to pursue whatever they deem fit through whatever means they choose.Footnote 48 It is not a mechanism designed to replace moral agency with automated decision-making, even if such automated decision-making may reduce negative outcomes.

Since the rule of law is an expression of autonomy and agency, and since agency is a rather complex term which includes the exercise of empathy, it appears that the rule of law demands a rule laid down and then implemented by moral agents, at least so long as an algorithmic rule (and application) will result in some errors (defined as rules or applications which fail to optimise the fit between legitimate regulatory purposes and the means used, or fail to be consistent with underlying values and the analytic logic of technical legal concepts). Granted that algorithms may reduce such errors, compared to human-made rules and applications, the errors caused by machines are more difficult to justify for those who suffer from their consequences than errors caused as a product of processes through which deliberative moral agency is exercised. A human error can be accepted, or tolerated, because collective decision-making – and legal rules and their implementations are examples of decisions made by some and then applied to others – is premised on a certain degree of solidarity, which stems from a shared notion of what it feels like to suffer from the harm errors cause. Such solidarity, and the premise of empathy, are not present when decisions are made by machines, even if machines may reach fewer decisions that cause such errors. In other words, the concept of the rule of law requires a human in or over the loop, even if we reach a position in which AI is fully developed to pass a legal Turing Test (i.e., be indistinguishable from a competent human decision maker) in its ability to integrate purposes and means and to achieve consistency between underlying values, on the one hand, and technical legal concepts, on the other. To date, we should be reminded, we are still some way away from that demanding benchmark. In the law as in other domains, at least in the foreseeable future, it is most likely (and normatively appropriate) that complex tasks, including those requiring creativity and insight, are approached through a hybrid or symbiotic approach that combines the capacities of humans and machines.Footnote 49

Moreover, the issues of legal competence (who gets to call the shots?), of process (how is the decision reached?), and of discretion (what are the relevant considerations, and their respective weight?) are central because they reflect social experience regarding the use of power (and law is a form of power). A rather intricate system of checks and controls is usually in place to ensure the four heads of legal competence (over the matter, the person exercising power, the territory, and the time frame) are checked and often distributed to different entities. What would it mean for algorithms to reflect the need to distribute power when rules are promulgated and applied? Algorithms are designed to integrate and optimise. Should we rather design algorithms so as to check on other algorithms? Similarly, the process that produces legal norms and particular legal decisions is itself regulated, with principles of due process in mind. How would a due-process algorithm be designed?

And, lastly, modern public law has developed rather extensive structures for managing executive discretion (at the policy-formation, norm-giving, and implementing stages), based upon a paradigm which stipulates that (a) certain considerations are ‘irrelevant’ to the statutory purpose or even ‘illegitimate’ to any purpose, and (b) the relevant or legitimate considerations are to be given a certain weight. The latter is premised on the language of balancing and proportionality, central to which is the structured duty to justify the relationship between the means and the chosen goal, the lack of a less restrictive means, and the overall assessment that the benefit (to the protection of rights and public interests) expected to be gained by the application of the measure is not clearly outweighed by the harm the measure will cause (to protected rights and public interests). This language of proportionality, notwithstanding its rational structure, is rather difficult to code, given the absence of reliable data, unclear causal lines, and the lack of agreed-upon numbers with which to determine when something is clearly outweighed by something else.

This is not to say that algorithms cannot assist in determining where data is missing (or otherwise diluted or corrupt), whether less restrictive means may be available, and what may be the overall cost-benefit analysis. Banning access to proportionality-related algorithms is not warranted, it seems, by the principles of the rule of law, nor would it be a sound policy decision. But fully relying on such algorithms as if their scientific aura places them as a superior tool of governance is neither warranted nor reconcilable with the underlying premise of proportionality, namely that the judgement call will be made by a moral agent capable of empathy.

To sum up this point, a robust delegation of authority is, to date, compatible with the principles of the rule of law only in the most technical applications of clearly defined norms, where the matter under consideration is of relatively minor importance, the data in the particular case is relatively straightforward and verifiable, and an appeal process (to other machines and ultimately to humans) is available. In such cases, machine learning (and AI more generally) may be relevant in the periphery, as a tool for identifying regimes where decisions are processed by the administration as technical decisions, and therefore as candidates for ‘simple’ algorithmic processing. Learning, in the sense that the algorithm will devise the predictive model to be applied, will be relevant to the technical, binary decisions discussed in this paragraph, mostly with regard to the assessment of relevant facts (e.g., recognising images and people in the context of traffic fines, or identifying potential frauds or risks of violation in the tax or financial domain).

8.12 Responsive Law

As machines cannot be expected to understand the values and interests at stake in administrative and judicial decisions, we can conclude that, left alone, they would not be able to make improvements over the law, but would just reproduce the existing practice, leading to the ‘petrification’ about which Roscoe Pound complained, as we observed previously, and about which modern scholars have expressed concerns.Footnote 50 Some aspects of this critique have attracted possible rebuttals, suggesting that the force of the concern may depend on the manner in which the AI system is designed, and the manner in which it is used.Footnote 51 Researchers in AI and law have suggested that there may be computational models of legal reasoning going beyond deduction that involve the generation of multiple defeasible arguments,Footnote 52 possibly concerning alternative interpretations, on the basis of cases and analogies.Footnote 53 The advent of machine learning may advance these or similar approaches by overcoming the technical difficulty of formalising such models, but at the same time, the opacity of machine learning systems proves counterproductive for generating meaningful debate regarding alternative norms.

A possible example on point is what has been called predictive justice (the same idea can be applied both to the judiciary and to the administration). The key idea is that systems can be trained on previous judicial or administrative decisions (on the relation between the features of such cases and the corresponding decisions), in such a way that such systems predict what a new decision may be, on the basis of the features of the case to be decided. The results so far obtained have limited significance, as accuracy is low. Some systems base their predictions on extra-legal features (e.g., the identity of the parties, lawyers, and judges),Footnote 54 others on the text of case documents. Some of the experiments made no real prediction of the outcome of future cases; rather, the decision of an already decided case was ‘predicted’ based on sections of the opinion in that case.Footnote 55 Moreover, it can be argued that the task of judicial or administrative decision makers does not consist of predicting what they would do, nor what their colleagues would do (though this may be relevant for the sake of coherence), but in providing an appropriate decision based on facts and laws, supported by an appropriate justification.Footnote 56 However, looking into the future, we may consider the possibility that the outcomes of decisions may be reliably forecasted, and we may wonder how this would affect the behaviour of the parties, officers, and judges. We may wonder whether this would reduce litigation and induce more conformism in the behaviour of officers and judges, so contributing to legal certainty, but also favouring the ‘petrification’ of law.

8.13 Human Rights

Beyond rule of law and responsive law concerns, recourse to algorithmic regulation may infringe protected rights, primarily human dignity, due process, privacy, and equality. Human dignity can be infringed upon to the extent a person is reduced to being a data object rather than a fully embodied moral agent, deserving meaningful choice, reasoning, and a decision by another moral agent. We will expand on a variant of this argument below. Due process can be infringed upon to the extent the decision cannot be meaningfully contested because either the data or the explanation behind the decision is opaque.Footnote 57 Privacy can be infringed upon to the extent the algorithm relied on data mined without full and free consent (including consent for the application of the particular data for the purpose it was used) or to the extent the algorithm was used in a manner that inhibited decisional autonomy by nudging a person without full disclosure.Footnote 58 Finally, equality can be infringed upon to the extent the algorithm relies on what turn out to be discriminatory factors, reflects existing discriminatory practices in society, or generates discriminatory impact. As noted, the proportionality analysis, which is designed to check whether the infringement on individual rights may nonetheless be justified with regard to other rights or social values, is difficult to run, in part because the key aspects of a proportionality assessment – the availability of alternative means, the overall benefit generated by recourse to algorithmic machine learning, and the extent to which the benefit outweighs the harm – are neither easy to concretise nor to assess reasonably.

More specifically, the very increase in predictive capacity provided by machines can contribute to entrenching, solidifying, or even increasing inequalities and hardship embedded in social relations, rather than enabling solutions designed to overcome such inequalities. This is likely to happen when an unfavourable prediction concerning an individual – the prediction that the person is likely to have a health problem, to commit a crime, to have inferior performance in education, and so forth – leads to a further disadvantage for the concerned individual (increased insurance costs, heavier sentences, exclusion from education), rather than to a remedial action to mitigate the social causes of the predicted unfavourable outcome. For this to be avoided, prediction has to be complemented with the identification of socially influenceable causes and with the creative identification of ways to address them or spread risks.

It should be noted that supporters of the use of predictive systems argue that the baseline for assessing the performance of automated predictors should be human performance rather than perfection: biased computer systems still contribute to fairness when their biases are smaller than those of human decision makers. They argue that automated decision-making can be controlled and adjusted much more accurately than human decision-making: automated prediction opens the way not only for more accuracy but also for more fairness,Footnote 59 since such systems can be ‘calibrated’ so that their functioning optimises, or at least recognises, the idea of fairness that is desired by the community.Footnote 60

A more general issue pertains to the fact that the possibility of using AI to make accurate predictions on social dynamics pertaining to groups and/or individuals, based on vast data sets, provides a powerful incentive toward the massive collection of personal data. This contributes to leading toward what has been called the ‘surveillance state’, or the ‘information state’, namely a societal arrangement in which ‘the government uses surveillance, data collection, collation, and analysis to identify problems, to head off potential threats, to govern populations, and to deliver valuable social services’.Footnote 61 The availability of vast data sets presents risks in itself, as it opens the possibility that such data are abused for purposes pertaining to political control and discrimination.

In the context of the availability of massive amounts of data, AI enables new kinds of algorithmically mediated differentiations between individuals, which need to be strictly scrutinised. While in the pre-AI era differential treatment could only be based on the information extracted through individual interactions (interviews, interrogation, observation) and human assessments, or on a few data points whose meaning was predetermined, in the AI era differential treatment can be based on vast amounts of data enabling probabilistic predictions, which may trigger algorithmically predetermined responses. In many cases, such differential treatment can be beneficial for the concerned individuals (consider, for instance, how patients may benefit from personalised health care, or how individuals in situations of social hardship can profit from the early detection of their issues and the provision of adequate help). However, such differential treatment may, on the contrary, exacerbate the difficulties and inequalities that it detects. The impacts of such practices can go beyond the individuals concerned, and affect important social institutions, in the economic as well as in the political sphere. An example on point is the recourse to AI for generating student grades in the UK based on past performance, given the inability to examine students on the relevant materials during the COVID-19 crisis. Students reacted negatively to the decision, in part because the very idea of an exam is based on individual performance at the exam itself, and substituting this data point with past practices reproduces past group-based inequalities.Footnote 62

8.14 Opaqueness and Explainability (Due Process and Fairness)

A key issue concerning the use of machine learning in the public sector concerns the fact that some of the most effective technologies for learning (in particular, neural networks) tend to be opaque – that is, it is very difficult to explain, according to human-understandable reasons, their predictions in individual cases (e.g., why the machine says that an application should be rejected or that a person is likely to escape from parole). So not only can such machines fail to provide adequate justifications to the individuals involved, but their opacity may also be an obstacle to the identification of their failures and the implementation of improvements.Footnote 63

An example of this conflict is the discussion concerning ‘COMPAS’ (Correctional Offender Management Profiling for Alternative Sanctions) – software used by several US courts, in which an algorithm is used to assess how and whether a defendant is likely to become a recidivist. Critics have pointed both to the inaccuracy of the system (claiming that in a large proportion of cases, the predictions that released individuals would or would not engage in criminal activities were proved to be mistaken) and to its unfairness.Footnote 64 On the latter point, it was observed that the proportion of black people mistakenly predicted to reoffend (relative to all black people) was much higher than the corresponding proportion of white people. Thus it was shown that black people have a higher chance of being mistakenly predicted to reoffend and of being subject to the harsh consequences of this prediction. Consequently, detractors of the system accused it of being racially biased. Supporters of the system replied by pointing out that the accuracy of the system had to be matched against the accuracy of human judgments, which was apparently inferior. On the point of fairness, they responded that the system was fair, from their perspective: it treated blacks and whites equally in the sense that its indication that a particular individual would reoffend was equally related, for both blacks and whites, to the probability that the person would in reality reoffend: the proportion of black people who were correctly predicted to reoffend (relative to all black people who were predicted, correctly or incorrectly, to reoffend) was similar to the same proportion for white people. The same was the case with regard to those who were predicted not to reoffend.Footnote 65
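The two positions can be formalised as different fairness metrics computed from the same underlying figures. The following minimal sketch (with invented numbers, not the actual COMPAS data) shows how one metric capturing the critics’ concern (the false positive rate among those who do not in fact reoffend) can differ sharply between two groups even while the metric invoked by the system’s supporters (the positive predictive value) is identical across them:

```python
# Invented confusion-matrix counts per group; these are not COMPAS figures.
groups = {
    # (true_positive, false_positive, true_negative, false_negative)
    "group_1": (300, 200, 400, 100),
    "group_2": (150, 100, 700, 100),
}

for name, (tp, fp, tn, fn) in groups.items():
    false_positive_rate = fp / (fp + tn)        # one formalisation of the critics' concern
    positive_predictive_value = tp / (tp + fp)  # supporters' measure: predictive parity
    print(name, round(false_positive_rate, 2), round(positive_predictive_value, 2))
```

With these numbers, both groups share a positive predictive value of 0.6, yet members of group_1 who will not in fact reoffend are mistakenly flagged at a rate of roughly 0.33, against roughly 0.13 for group_2 – which mirrors, in essence, the structure of the disagreement over COMPAS.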

The use of COMPAS was the object of a judicial decision, in the Loomis v. Wisconsin case, where it was claimed that the opacity of the system involved a violation of due process, and that the system might have been racially biased. The Court, however, concluded that the use of the algorithm did not violate due process, since it was up to the judge, as part of his or her judicial discretion, to determine what use to make of the recidivism assessment, and what weight to accord to other data. The Court also stated that the judges should be informed of the doubts being raised about the racial fairness of the system.

As noted, COMPAS presented the problem of the opacity of the algorithm, since defendants faced considerable hurdles in understanding the basis upon which the assessment in their case had been reached. This issue is compounded by an additional problem – the IP rights of the private companies that developed the system. Invoking IP rights proved to be an obstacle in obtaining the code, which may be necessary for providing a meaningful opportunity for challenging the outcomes of the system.Footnote 66

Further issues concerning automated decisions in the justice domain pertain not so much to the accuracy and fairness of automated predictions, but rather to the use of such predictions. It has been argued that predictions of recidivism should be complemented by causal analyses of modifiable causes of recidivism. This would open the space for interventions meant to mitigate the risk of recidivism, rather than using the predictions only for aggravating the condition of the concerned individuals.Footnote 67

The debate on automated decision-making within the justice system is part of a broader discussion of the multiple criteria for measuring the fairness of a predictive system relative to the equal treatment of individuals and groups,Footnote 68 a debate which adds a level of analytical clarity to the discussion on fairness and affirmative action, not only in connection with algorithms.Footnote 69

Some initiatives to mitigate the issues related to automated decision models, by both public and private actors, were introduced in recent years. The European General Data Protection Regulation (GDPR)Footnote 70 – the goal of which is to ‘supervise’ the movement of data in the European Union, and mostly to protect the ‘fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data’ – addresses automated decision-making in Article 22. It establishes the right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’. However, automated decision-making is permissible when explicitly consented to by the data subject, when needed for entering into or performing a contract, or when ‘authorised by Union or Member State law to which the controller is subject’. The laws that authorise automated decision-making must lay down ‘suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests’. Thus a legality requirement for the use of automated decision-making by state authorities is established.

Another idea is Explainable Artificial Intelligence (xAI): this concept seeks to alleviate the ‘black box’ problem,Footnote 71 at least to an extent, by providing some human-understandable meaning to the process of decision-making and data analysis. Thus it may contribute to reducing the due-process problem. This approach may also address problems of machine-learning bias,Footnote 72 as in the COMPAS example noted previously, and answer questions of fairness in an algorithm’s decision-making process. This approach is not free from difficulties, in particular relating to the question of the scope of the desired explanations. Do we wish, for example, for the explanation to be an ‘everyday’ explanation, which would lack scientific, professional detail, but would be accessible to any individual? Or would we rather have a ‘scientific’ explanation, only meaningful to certain proficient and sufficiently educated individuals, though far more reflective of the process? Related is the question of process: do we need to know the exact process that led to the decision, or are we satisfied that an ex-post-facto explanation is available, namely that a backward-looking analysis can find a match between the decision reached and demonstrable salient factors that can be understood by the data subject as relevant? There is also the question of whether an explanation should be provided for the entire model or only for the specific decision or prediction. Furthermore, some explanations have the potential of being misleading, manipulative, or incoherent.Footnote 73
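To give a concrete sense of what one strand of xAI looks like in practice, the following minimal sketch (with invented data and feature names) approximates an opaque model with an interpretable ‘global surrogate’: a shallow decision tree is fitted to the predictions of a black-box classifier, and its human-readable rules serve as an approximate explanation of the model as a whole, with a ‘fidelity’ score indicating how closely the explanation tracks the black box:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((500, 3))                           # invented case features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)    # invented "ground truth"

# The opaque model whose decisions need explaining.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global surrogate: a shallow, readable tree trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(export_text(surrogate, feature_names=["income", "prior_defaults", "age"]))
print("surrogate fidelity to the black box:", round(fidelity, 2))
```

The sketch also illustrates the trade-offs discussed above: the surrogate’s rules are accessible, but they are an approximation, and whether such an approximation satisfies a legal duty to give reasons is precisely the open question.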

Others also argue that human decision-making itself suffers from a lack of explainability, in the sense that one never fully knows what a human truly thought about while making a decision. Yet explanation and reason-giving are required especially because humans tend to suffer from bias and errors; the prior knowledge that they are required to provide an explanation for their decision offers a path for accountability, as it generates awareness and focuses the attention of the decision maker, at the very least, on the need to reach a decision that fits criteria that can be explained, given the facts of the case.Footnote 74 Another argument stipulates that the main importance of the duty to provide reasoning is not having a ‘causal’ or a ‘scientific’ explanation, but having a legal explanation – claiming that an algorithm should be able to explain the rationale behind its decision and fulfil the legal requirements for an explanation, due process, and other obligations set by administrative law or any other law.Footnote 75 Therefore, a fully detailed explanation does not necessarily provide a legal explanation. Attention should be paid also to the specific justification requirements for administrative and judicial decision-making (i.e., in particular, that a decision be grounded in legally acceptable rationales, based on legal sources).

Lastly, some argue that too much transparency may be harmful. From an explainability perspective, more transparency does not mean better explainability. Informing an individual about every minor calculation does the exact opposite of what the idea of explainable AI seeks to achieve: it saturates and ends up obfuscating. Moreover, too much transparency could reveal private data collected by the machine-learning software, hence infringing the right to privacy of many individuals and in this sense doing more harm than good. Some also claim that increased transparency will reduce private incentives and delay progress by forcing the exposure of certain key elements of a developer’s intellectual property.Footnote 76 These problems are important to keep in mind when thinking about xAI.

8.15 The Specific Problems of ‘Zero Price’ and ‘The Score’ (or ‘The Profile’)

Of particular concern is the lack of valuation of data for citizens/users/consumers, given that the collection of data is not attached to any tangible price. In most services online, whether offered by the industry or the state, there is no option to obtain the service while paying for the data not to be collected and analysed, or to obtain the service without those aspects of it (e.g., personalised recommendations) that require our personal data.Footnote 77 The sense that we are getting an optimised service by giving up our private data and by subjecting ourselves to personalised information that is fed back to us seems like a good deal, in part because we have no way of fully understanding the value, in monetary terms, of the data we provide the system, and the value, in monetary terms, of the nudging that may be associated with the manner in which the personalised information is presented to us. We may be aware, of course, that the data lumps us together with ‘people like us’, thereby creating a filter bubble, but it is almost impossible to figure out how much we would be willing to pay in order to obtain better control over this batching process. In other words, our consent is given in a highly suboptimal context: we lack important anchors for making an informed decision. Moreover, some may even argue that since we are already immersed in a saturated environment premised on surveillance capitalism, it is not easy to ensure that we have not been nudged to accept the loss of privacy (in both senses, the collection of data and the feedback of analysed data) as inevitable.

Furthermore, as noted, the logic of the algorithmic eco-system is that providing the data enhances the service. Each data point provided by users/citizens assists both the data subject and other data subjects. In our daily lives, we seem to acquiesce by participating in the AI ecosystem, which is premised on constant surveillance, data collection, and data analysis, which is then fed back to us in correlation with the filter bubble to which we belong, thereby further shaping, nudging, and sculpting our outlook, attitude, mood, and preferences.

But with it comes a hidden price related to the accumulation of data and, more importantly, to the construction of a profile (often including a ‘score’). The matter of the ‘score’ is pernicious. Attaching a number to a person, indicating the extent to which that person is predicted to have a certain desired or undesired feature, attitude, or capacity, brings to the fore a clash between the underlying premise of a liberal democracy – the unrestricted authorship of every individual to write her or his own life story, and to express, experience, and expand their agency by interacting with others in a meaningful manner – and the bureaucratic logic of the regulatory state, even when this logic aims at achieving certain liberal goals (particularly the promotion of collective goods and the protection of human rights, including, ironically, the optimisation of human dignity). As soon as such a goal-driven attitude is translated into a ‘score’, broadly understood (a classification as a good or bad customer or contractor, a quantification of a probability or propensity, such as the likelihood of recidivism, or a grade, as in the assessment of the merit of citizens or officers), and as soon as such a score is attached to any move within the social matrix or to any individual interacting with another or with a state agency, a dignitary component is lost. Yet without that score (or classification, or grade), we may be worse off, in the sense that the value brought about by the AI revolution may be sub-optimal, or lost altogether. Given this tension, greater attention needs to be paid not only to the processes through which these ‘scores’ (or classifications, or grades, or profiles) are generated, including the power to understand and contest them, and not only to the spheres and contexts in which such scores may be used, but also to the social meaning of the score, so that it is clear that we are not governed by a profile, but are seeking ways to constantly write, change, and challenge it.

An extreme example of the usage of a ‘score’ is the Chinese Social Credit System (SCS).Footnote 78 This system, used by the Chinese government, creates an extremely extensive database of personal data for every citizen, covering most aspects of one’s life. This data is then used to create a social credit score, which rates an individual’s ‘trustworthiness’ and is used by both authorities and business entities for their benefit.Footnote 79 A lower social credit score may lead to legal, economic, and reputational sanctions, while a higher social credit score would allegedly provide an individual with more opportunities and a better life.Footnote 80 This almost dystopian reality may seem a distant problem, relevant only to non-democratic societies such as China, but many believe that these ideas, and especially these technologies, may well ‘leak’ into democratic Western societies as well.Footnote 81 The credit score systems used in most Western democracies for assessing the reliability of prospective borrowers, although less invasive and thorough, may not be as different from the SCS as one might think: in both systems, a credit score is an index of an individual’s reputation or trustworthiness, and both have many similar implications for an individual’s life, for good or for bad.Footnote 82

8.16 The Data (and AI Derivatives)

On a more basic level, to date, it is not clear that the data sets used for training and calibrating the algorithms are sufficiently sound, in the sense that the data are not corrupted either by being inaccurate or by reflecting past or pre-existing wrongs which, normatively, should be discounted rather than reinforced.Footnote 83 For example, as noted previously, concerns were raised about the extent to which predictive systems might reproduce and expand social bias. Various critics observed that systems trained on human decisions affected by prejudice (e.g., officers treating members of certain groups with harshness), or on data sets that reflected different attitudes towards different groups (e.g., data sets of past convictions, given different levels of control over subpopulations), or on variables that disregarded the achievements of certain groups (e.g., results obtained in less selective educational environments), could replicate inequities and prejudice.

The state of the data, therefore, casts serious doubts regarding the reasonableness of relying on the assumption that the algorithm is capable of achieving the preferred result, taking into consideration broader concerns of overall dignity and equal, meaningful membership in a society of moral agents. It then becomes a policy question of comparing the propensity of the algorithm to get it wrong because of corrupt data, to the propensity of humans to get it wrong on account of other errors, including bias, as well as our ability to generate social change so as to recognise past wrongs and work towards their remedy, rather than reification.

Data-related issues do not end here. The data may be problematic if it is not sufficiently reflective (or representative). This may be a product of a data market in which the data-collecting pipelines (and sensors) are owned or otherwise controlled by entities that erect barriers for economic reasons. The logic of surveillance capitalism tends to lead to the amalgamation of collection lines up to a point, precisely because the value of the data is related to it being reflective and therefore useful. But when a data giant emerges – an entity whose data is sufficiently ‘big’ to allow for refined mining and analysis – the incentive to further collaborate and share data decreases. To the extent that data giants (or ‘super-users’) already have sufficient control over a certain data market, they may have an incentive to freeze competition out. Such access barriers may hamper optimised regulation.Footnote 84

The dependence on data raises further concerns: data has to be continuously updated (for the algorithms to keep ‘learning’). While with respect to some algorithms the marginal utility of more data may be negligible (in the sense that the algorithm has already ‘learned’ enough), the dynamic changes in technology and, more importantly, the changes in society as it interacts with technological developments and with new applications – many of which are designed to ‘nudge’ or otherwise affect people and thus generate further change – suggest that the demand for data (and for updated AI based on that data) is unlikely to diminish, at least in some contexts. This leads to accelerating pressures towards surveillance capitalism and the surveillance state. It also raises data-security concerns: data stored (for the purposes of ‘learning’) attracts hackers, and the risk of data breaches is ever-present. The state then has to decide on a data collection and retention policy: would it be agency-specific? Or may agencies share data? The answer is far from easy. On the one hand, generating one database from which all state agencies draw data (and use these data for algorithmic purposes) is more efficient. It is easier to protect and to ensure that all access to the data is logged and monitored. It also avoids contradictions among different state agencies and, one may assume, reduces the rate of erroneous data, as it increases the chances of wrong data being corrected by citizens or by a state agency. To the extent that the data are indeed ‘cleaner’, the data analysis will be less prone to error. On the other hand, consolidating all data in one place (or allowing many or all agencies access to data collected and stored by co-agencies) increases the allure for hackers, as the prize for breach is greater. Moreover, the consolidation of data raises separation-of-powers concerns. Access to data is power, which may be abused. The algorithmic state can be captured by special interests and/or illiberal forces, which may use algorithms for retaining control. Algorithms may assist such forces in governance, as they may be used to manage public perception and nudge groups in an illiberal fashion. In other words, algorithms make social control easier. They may also be used to provide preferential treatment to some or to discriminate against others. And they may infringe fundamental rights. Consequently, concentrating data in one central pot increases the risk of capture. The reasons underlying the separation of powers – between the three branches, between state and federal powers, and within the various executive agencies – call for ‘data federalism’, whereby checking mechanisms are applied prior to data-sharing among state agencies. Such sharing requires justification and should allow for real-time monitoring and ex-post review. The price is clear: access to data will be more cumbersome, and monitoring costs will increase. More importantly, the technical protocols will have to support effective review, so as to increase the likelihood of detecting, at least ex post, illegitimate use. To the best of our knowledge, the regulatory incentives are such that to date this field remains under-developed.

8.17 Predicting Predictions

Rule of law concerns, fundamental rights infringements, and data-regulation questions are not the only issues facing the state. At this point in time, as the ‘predictive’ agencies are beginning to flex their algorithmic muscles, another use for predictive machine learning emerges: predicting how agencies themselves make decisions. This approach can be deployed by the industry – as will be discussed later – but also by the state agencies themselves. In order to better manage their regulatory resources and ease the regulatory burden, agencies are seeking ways to separate the wheat from the chaff by identifying which regulatory problems deserve regulatory attention and which tasks can be managed as a matter of routine. ‘97% of cases like that are decided in this or that way’ is a message now attached to certain agency procedures, the product of an algorithm that tracks agency practice and is designed to assist bureaucrats in deciding which issues or decisions require focus and which can be summarily decided one way or another. This approach is premised on the importance of having a human in the loop, or over the loop, so that decisions are not fully made by machines, but algorithms may nonetheless provide useful information to the decision-makers.
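To illustrate what such a decision-support arrangement might look like in practice, the following is a minimal, hypothetical Python sketch of a triage tool that learns from past agency decisions and flags only high-confidence ‘routine’ cases, referring everything else to a human officer. The features, the synthetic data and the 0.97 threshold are illustrative assumptions, not any agency’s actual system.

```python
# Hypothetical triage sketch: field names, synthetic data and the 0.97
# threshold are illustrative assumptions, not any agency's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past agency decisions: simple numeric features (e.g., completeness of the
# file, applicant history) and the recorded outcome (1 = granted, 0 = refused).
rng = np.random.default_rng(0)
X_past = rng.normal(size=(500, 3))
y_past = (X_past[:, 0] + 0.5 * X_past[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_past, y_past)

def triage(case_features, threshold=0.97):
    """Suggest a routing for a new case; never a final decision."""
    p_grant = model.predict_proba([case_features])[0, 1]
    if p_grant >= threshold:
        return "routine: likely grant", p_grant
    if p_grant <= 1 - threshold:
        return "routine: likely refusal", p_grant
    return "refer to a human officer", p_grant

print(triage([1.2, 0.4, -0.1]))
```

On this sketch, the tool never issues a decision; it only routes cases, which is one way of operationalising the requirement of a human in or over the loop.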

Such predictive algorithms, often designed and run by private entities, raise not only the familiar ‘private-public interface’ conundrum, as the agencies partner with the industry, but also an interesting problem of path dependency: to the extent that the algorithm captures the bureaucratic practice as it is currently administered, and to the extent that the predictive information it provides is indeed followed by state officials, the normative power of the actual is solidified, not necessarily as a product of thorough consideration, and in any event in a manner that affects the path of future developments. As noted previously with regard to predictive justice (the use of algorithms to predict judicial decisions based on past cases), it remains to be seen how such algorithms will influence the expectations of those interacting with officers, as well as the behaviour of officers themselves. Algorithms developed by private entities and used by governments can also create an accountability gap, where the government is unable to understand or explain the decisions that have been made.Footnote 85

As noted previously, the second main concern with the rise of the algorithmic society is that algorithms are (also, if not mainly) used by the regulated industry. Their use must therefore also be regulated, by checking the degree to which it conflicts with the regulatory regime (including statutory or constitutional rights). In that respect, the algorithmic state faces the challenge of regulating the algorithmic industry and of determining the appropriate way to go about this task, given regulatory challenges related to informational asymmetries, the intellectual property rights of the regulated industry, and the privacy rights of its customers.

8.18 Regulating the Industry, Regulating the Executive – Regulating Algorithms with Algorithms?

The challenge of regulating the algorithmic market by the algorithmic state is technological, administrative, and legal. Technological, in the sense that the development of auditing tools for algorithms, or of statistical equivalents thereof, becomes an integral part of the evidence-gathering process upon which any auditing regulatory scheme is premised. Recall that the state already develops algorithms to audit the non-algorithmic activities of industries – for example, auditing tools to monitor health-related records, pollution-related records, or, for that matter, any record relevant to its auditing capacity. It now faces the challenge of developing auditing tools (algorithmic or non-algorithmic) to audit algorithmically developed records. The challenge is to have the technological tools to uncover illegal algorithms, namely algorithms that deploy processes or criteria that violate the law, or algorithms that are used to pursue outcomes that violate the law. Technologically, this is a complex task, but it may be feasible. Put differently, if algorithms are the problem, they may also be the solution, provided the relevant ecosystem is developed and nurtured. It is also administratively challenging, because it requires qualified personnel, person-hours, and other resources, as well as the awareness and institutional incentives to follow through. Legally, it is challenging both institutionally and normatively. Institutionally, some procedures may need to be tweaked or modified in order to provide judicial or quasi-judicial bodies with the procedural infrastructure with which inquiries into the misuse of algorithms can be conducted. This is not to suggest that a wholesale revolution is necessary, but neither is it to say that the current procedural tools are necessarily optimal. Normatively, some new rules may need to be introduced in order to align the modalities of regulation with the challenges of regulating the algorithmic industry.Footnote 86
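As one illustration of what an algorithmic auditing tool might involve, the following is a minimal, hypothetical Python sketch of an outcome-based check a regulator could run over decision records produced by a regulated firm’s algorithm, measuring the disparity of approval rates across groups. The column names, the example records and the 0.8 threshold (borrowed from the US ‘four-fifths’ rule of thumb) are illustrative assumptions rather than a legal standard.

```python
# Hypothetical audit sketch: column names, example records and the 0.8 ratio
# (the US 'four-fifths' rule of thumb) are illustrative assumptions only.
import pandas as pd

def disparate_impact_ratio(records: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest group-level approval rate."""
    rates = records.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Decision records the regulator might obtain from the regulated firm.
records = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

ratio = disparate_impact_ratio(records, "group", "approved")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag the algorithm for closer regulatory review")
```

Outcome-based checks of this kind do not require access to the firm’s source code, which may matter where intellectual property or informational asymmetries are at issue.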

More specifically, the risks – some of which are identified in this chapter – need to be defined precisely enough to enable regulators to design appropriate regulatory means. This may be taxing, given the polycentric nature of the problem, as various goals – sometimes conflicting – may be at play. For example, as has been stated, data-driven AI markets tend to concentrate, and hence generate competition-related harms but also democracy-related risks. Such concentration, and the potential to ‘nudge’ people within certain echo-chambers by deploying AI-driven manipulation, are certainly a challenge, to the extent we care about the integrity of the democratic process. Addressing each concern – the anti-trust challenge and the social-control worry – may point to different regulatory approaches.

Turning our attention to the available modalities, regulating information sharing seems important in order to cut down disruptive information barriers, by defining the relevant informational communities that should have access to certain information regarding regulated algorithms (defined as including the data pipelines that feed them and the output they produce). Some information should be shared with the state agency, other information with customers/users. Similarly, licensing regimes may be relevant, to the extent that some algorithms must meet defined standards (such as privacy or accountability by design). This modality may apply also to the state regulating its own licensing agencies. Furthermore, the structure of civil and criminal liability may have to be refined, in order to match the responsibility of the relevant agents, as well as their incentives to comply. Criminal liability specifically might pose a serious problem with the further development of artificial intelligence and might require both lawmakers and courts to find new solutions that fit the technological changes.Footnote 87 Tax and subsidy modalities also come to mind, as the state may resort to taxing elements of the algorithmic eco-system (e.g., taxing opacity or providing subsidies for greater explainabilityFootnote 88). In that respect, an algorithm that tracks other algorithms in order to detect the saliency of certain criteria may be useful. And, finally, insurance may be a relevant regulatory factor, both as a tool to align the incentives of the industry and because insurance itself is algorithmic (in the sense that the insurance industry relies on machine learning to predict individualised risks and determine corresponding insurance premiums, which may affect risk-sharing).

In short, as both regulator and actor, the state may be pulled by the algorithmic logic in different directions. As an executive, it may seek to flex its algorithmic muscles so as to optimise its executive function. As a regulator, it may seek to harness algorithms in order to tame their far-reaching and perhaps unintended consequences. This requires policy-formation processes that are attuned to these different pulls, as well as to the different modalities that can be brought to bear in order to align the incentives.

8.19 Regulation and the Market – The Background of Civil Liability

Before concluding, it is important to revisit regulation and policy making by situating these concepts in a larger context. According to some voices, technology at large, and algorithms in particular, are better off being ‘deregulated’. We want to resist these calls not only for normative reasons (namely, that regulation is a good thing) but mainly because the term ‘de-regulation’ is misleading. There is always at least one regulatory modality covering any field of human activity. At the very least, any human interaction raises questions of civil liability. The contours of such liability are a form of regulation, from the perspective of the state. Even if state agencies are still debating how to proceed with a specialised regulatory regime (including modalities other than civil liability), the residual regimes of property, tort, contract, and unjust enrichment are always present.

Take two examples. To the extent the state does not regulate the production, distribution, and deployment of malicious software (which detects and exploits vulnerabilities algorithmically), at the end of the day a civil lawsuit may draw the boundaries of liability. This is exemplified by the civil suit brought by Facebook against NSO for using the Facebook platform to plant malicious software (worms) which allows attackers to access information on the attacked device. This, of course, is a subject matter upon which a certain regulatory agency should have a say. But to the extent it does not, regulation is still present – in the form of civil liability. Likewise, a civil lawsuit pitted the pharmaceutical company Teva against Abbot Israel (the importer and distributor of Similac, a baby-food formula) and Agam Leaders Tech, a marketing firm. The suit alleges that the defendants engaged in a ‘mendacious and covert slur campaign’ by using fake profiles to distribute false information about Teva’s product (Nutrilon), which caused Teva considerable damage. Such marketing campaigns rely on algorithms to detect relevant ‘conversations’, where either fake profiles or real people are rallied to put forward a certain position, almost always without the audience (or other participants in the conversation) being aware that an algorithmic rally is being deployed (let alone deployed for money). In such cases, a civil lawsuit will have to determine the boundaries of such algorithmic campaigns (and the potential duties to disclose their source) and the relevant regime of civil liability (including consumer protection).

8.20 Conclusion

While the legal literature usually adopts a very sceptical approach toward the mechanical application of rules, different indications come from other disciplines, which focus on the limits of intuitive judgment. For instance, the economist and psychologist Daniel Kahneman observes that in many cases simple algorithms provide better results than human intuition, even when human capacities and attitudes have to be assessed.Footnote 89 What should be the procedures and norms according to which the state ought to regulate the adoption of AI into its own decision and assessment infrastructure? Should there be a difference between mere application AI, relevant for implementation decisions in highly codified contexts, discretionary AI, which is designed to address regulatory decisions in a more open legal landscape, and policy-making algorithms, which are designed to assist at the policy-formation level?

In the previous section, we considered in detail many issues emerging from the intermingling of AI and government, concluding that the law and its principles such as human rights and the rule of law are not averse to AI-based innovation, but that nonetheless serious concerns emerge. AI, where appropriately deployed, can contribute to more informed, efficient, and fair state action, provided some safeguards are maintained. For this purpose, human judgment must not be substituted by the ‘blind thought’ of AI systems, which process whatever kind of information is provided to them without understanding its meaning and the human goals and values at stake. Humans must be in the loop or at least over the loop in every deployment of AI in the public domain, and should be trained so as to be mindful of the potential risks associated with being influenced by scores and profiles in a manner inconsistent with what must ultimately be human judgment. The level of human involvement should therefore be correlated to the extent to which essentially human capacities are required, such as empathy, value judgments, and capacity to deal with unpredictable circumstances and exceptions.

A broader speculation concerns whether, or to what extent, the impact of AI will change the manner in which we are governed. More specifically, it concerns whether the law, as we understand it now, particularly in connection with the value of the rule of law, may be supplemented or even substituted by different ways of guiding human action, driven by the extensive deployment of AI.Footnote 90

The law, in its current form, is based on authoritative verbal messages, enacted in written form by legislative and administrative bodies. Such messages usually convey general instructions which order, prohibit, or permit certain courses of action, and in so doing also convey a normative or moral position with respect to these actions. Interwoven within the legal apparatus are further norms that perform ancillary functions, by ascribing legal outcomes (or sanctions) and qualifications to people and objects, as well as by creating institutional facts, institutions, and procedures. Legislation and administration are complemented by other written verbal messages, namely judicial or quasi-judicial decisions – which apply the law to specific cases, developing and specifying it – as well as by doctrinal writings which interpret and develop the norms, close gaps between the code and social life, and again generate expressive content pertaining to morality and identity. To become active, such verbal messages have to be understood by humans (the addressees of legal provisions), and this may require an act of interpretation. An act of human understanding is also required to comprehend and apply non-written sources of the law, such as customs and other normative practices. Once the citizens or officers concerned have understood what the law requires of them in their circumstances, they will usually comply with the law, acting as it requires; but they may also choose to evade or violate the law (accepting the possibility of suffering the consequences of violation) or otherwise sidestep the legal rule by relying on a standard which may conflict with the rule or potentially mute its application. They may also approve or disapprove of the norms in question and voice their opposition. Thus the law assumes, at least in a democratic state, that citizens are both free agents and critical reasoners.

It is unclear whether the law will preserve this form in the future, as AI systems are increasingly deployed by the state. This is problematic. In a world in which the governance of human action is primarily delegated to AI, citizens could either no longer experience genuine legal guidance (or experience it to a lesser extent), being instead nudged or manipulated to act as desired (so that the law as such would be rendered irrelevant, or much less relevant), or they would only or mainly experience the law through the mediation of AI systems.

The first option – the substitution of normativity with technology – would take place if human action were influenced in ways that prescind from the communication of norms.Footnote 91 The state might rather rely on ‘technoregulation’.Footnote 92 Such an architecture may open or preclude possibilities to act (enabling or disabling actions, as when access to virtual or digital facilities requires automated identification); open or preclude possibilities to observe human action (enabling or disabling surveillance); make certain opportunities easier or more difficult, or more or less accessible (as is the case for default choices which nudge users into determined options); directly perform actions that impact the interests of the individuals concerned (e.g., applying a tax or a fine, or disabling the functioning of a device, such as a car); or direct individuals, through micro-targeted rewards and punishments, towards purposes that may not be shared by, or are not even communicated to, the individuals concerned. This is troubling, even dystopian, to the extent we care about human agency (and human dignity) as currently understood.

The second option, AI-mediated normativity, would take place if the state were to delegate to AI systems the formulation of concrete indications to citizens – on the predicted outcome of cases, or the actions to be done or avoided in a given context – without citizens having access to understandable rules and principles that support and justify such concrete indications. The citizens would just know that these concrete indications have been devised by the AI system itself, in order to optimise the achievement of the policy goals assigned to it. Citizens would be in a situation similar to that of a driver being guided step by step by a GPS system, without having access to the map showing the territory and the available routes toward the destination. Again, the implications regarding agency, meaningful participation in a community of moral agents, and human dignity are obvious (and troubling).

In summary, if these scenarios materialise, and especially if they materialise in a concentrated market (characterised by states and monopolies), we fear that humans may lose a significant component of control over the normative framework of their social action, as well as the ability to critically address such a normative framework. In this context, the state may no longer address its citizens (and lower officers as well) as fully autonomous agents, capable of grasping the law’s commands (and acting accordingly, based on such understanding, and on the reasons they have for complying).Footnote 93 This concern holds also for office-holders, who are often the direct subject of such instructions.Footnote 94 Moreover, it is unclear whether the state would still consider its citizen as agents capable of critical reflection, able to grasp the rationales of the commands (or instructions) and subject them to scrutiny, debate, and deliberation. Such a transformation entails a fundamental shift in the structure of communicationFootnote 95 underlying the legal system and thus raises significant moral legitimacy concerns.

We believe therefore that it is essential that the state continues to express its regulatory norms in human language, and that the human interpretation of such instructions, in the context of legal principles and political values, represents the reference for assessing the way in which the law is applied through AI systems, and more generally, the way in which the operation of such systems affects individual interests and social values.

In conclusion, AI presents significant opportunities but also poses a deep challenge to the state, as the latter debates its uses and misuses.

9 AI, Governance and Ethics: Global Perspectives

Angela Daly, Thilo Hagendorff, Li Hui, Monique Mann, Vidushi Marda, Ben Wagner, and Wayne Wei WangFootnote *
9.1 Introduction

Artificial intelligence (AI) is a technology increasingly utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed over AI itself deciding to take dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will in future adhere, to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, in which we give an insight into what is happening in Australia, China, the European Union, India and the United States.

We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We then provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location.

Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, although the United States has been catching up in the last eighteen months. India remains an outlier among these ‘large jurisdictions’ in not having articulated a set of AI ethics principles, and Australia hints at the challenges a smaller player may face in forging its own path. The focus of these initiatives is beginning to turn to producing legally enforceable outcomes, rather than purely high-level, usually voluntary, principles. However, legal enforceability also requires the practical operationalising of norms for AI research and development, and may not always produce desirable outcomes.

9.2 AI, Regulation and Ethics

AI has been deployed in a range of contexts and social domains, with mixed outcomes, including in finance, education, employment, marketing and policing.Footnote 1 At this relatively early stage in AI’s development and implementation, the issue has arisen of AI adhering to certain ethical principles.Footnote 2 The ability of existing laws to govern AI has emerged as another key question as to how future AI will be developed, deployed and implemented.Footnote 3 While originally confined to theoretical, technical and academic debates, the issue of governing AI has recently entered the mainstream with both governments and private companies from major geopolitical powers including the United States, China and the European Union formulating statements and policies regarding AI and ethics.Footnote 4

A host of questions are raised by these developments. For one, what are the ethical standards to which AI should adhere? The transnational nature of digitised technologies, the key role of private corporations in AI development and implementation and the globalised economy give rise to questions about which jurisdictions and actors will decide on these standards. Will we end up with a ‘might is right’ approach where it is these large geopolitical players which set the agenda for AI regulation and ethics for the whole world? Further questions arise regarding the enforceability of ethics statements regarding AI, both in terms of whether they reflect existing fundamental legal principles and are legally enforceable in specific jurisdictions, and the extent to which the principles can be operationalised and integrated into AI systems and applications in practice.

Ethics itself is seen as a reflection theory of morality, or as the theory of the good life. A distinction can be made between fundamental ethics, which is concerned with abstract moral principles, and applied ethics.Footnote 5 The latter includes the ethics of technology, which in turn contains AI ethics as a subcategory. Roughly speaking, AI ethics serves the self-reflection of the computer and engineering sciences engaged in the research and development of AI or machine learning. In this context, dynamics such as individual technology development projects, or the development of new technologies as a whole, can be analysed. Likewise, the causal mechanisms and functions of particular technologies can be investigated using a more static analysis.Footnote 6 Typical topics are self-driving cars, political manipulation by AI applications, autonomous weapon systems, facial recognition, algorithmic discrimination, conversational bots, social sorting by ranking algorithms and many more.Footnote 7 Key demands of AI ethics relate to aspects such as research goals and purposes, research funding, the linkage between science and politics, the security of AI systems, responsibility for the development and use of AI technologies, the inscription of values in technical artefacts, the orientation of the technology sector towards the common good, and much more.Footnote 8

In this chapter, we give an overview of major countries and regions’ approaches to AI, governance and ethics. We do not claim to present an exhaustive account of approaches to this issue internationally, but we do aim to give a snapshot of how some countries and regions, especially large ones like China, the European Union, India and the United States, are (or are not) addressing the topic. We also include some initiatives at the national level, of EU Member State Germany and Australia, all of which can be considered as smaller (geo)political and legal entities. In examining these initiatives, we look at one particular aspect, namely the extent to which these ethics/governance initiatives from governments are legally enforceable. This is an important question given concerns about ‘ethics washing’: that ethics and governance initiatives without the binding force of law are mere ‘window dressing’ while unethical uses of AI by governments and corporations continue.Footnote 9

These activities, especially of the ‘large jurisdictions’, are important given the lack of international law explicitly dealing with AI. There has been some activity from international organisations such as the OECD’s Principles on AI, which form the basis for the G20’s non-binding guiding principles for using AI.Footnote 10 There are various activities that the United Nations (UN) and its constituent bodies are undertaking which relate to AI.Footnote 11 The most significant activities are occurring at UNESCO, which has commenced a two-year process ‘to elaborate the first global standard-setting instrument on ethics of artificial intelligence’, which it aims to produce by late 2021.Footnote 12 However, prospects of success for such initiatives, especially if they are legally enforceable, may be dampened by the fact that an attempt in 2018 to open formal negotiations to reform the UN Convention on Certain Conventional Weapons to govern or prohibit fully autonomous lethal weapons was blocked by the United States and Russia, among others.Footnote 13 In June 2020, various states – including Australia, the European Union, India, the United Kingdom and the United States, but excluding China and Russia – formed the Global Partnership on Artificial Intelligence (GPAI), an ‘international and multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth’.Footnote 14 GPAI’s activities, and their convergence or divergence with those in multilateral fora such as UN agencies, remain to be seen.

In the following sections, we give overviews of the situation in each country/region and the extent to which legally binding measures have been adopted. We have specifically considered government initiatives which frame and situate themselves in the realm of ‘AI governance’ or ‘AI ethics’. We acknowledge that other initiatives, from corporations, NGOs and other organisations on AI ethics and governance, and other initiatives from different stakeholders on topics relevant to ‘big data’ and the ‘Internet of Things’, may also be relevant to AI governance and ethics. Further work should be conducted on these and on ‘connecting the dots’ between some predecessor digital technology governance initiatives and the current drive for AI ethics and governance.

9.3 Country/Region Profiles
Australia

While Australia occupies a unique position as the only western liberal democracy without comprehensive enforceable human rights protections,Footnote 15 there has been increasing attention on the human rights impacts of technology and the development of an AI ethics framework.

The Australian AI Ethical Framework was initially proposed by Data 61 and CSIRO in the Australian Commonwealth (i.e., federal) Department of Industry, Innovation and Science in 2019.Footnote 16 A discussion paper from this initiative commenced with an examination of existing ethical frameworks, principles and guidelines, and included a selection of largely international or US-based case studies, which overshadowed the unique Australian socio-political-historical context. It set out eight core principles to form an ethical framework for AI. The proposed framework was accompanied by a ‘toolkit’ of strategies intended to operationalise the high-level ethical principles in practice, including impact and risk assessments, best practice guidelines and industry standards. Following a public consultation process, which involved refinement of the eight proposed principles (for example, merging two and adding a new one), the Australian AI Ethics Principles were finalised as: human, social and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.Footnote 17 The Principles are entirely voluntary and have no legally binding effect. The Australian government has released some guidance on the Principles’ application, but this is scant compared to other efforts in, for example, Germany (as discussed later).Footnote 18

One further significant development is the Human Rights and Technology project that is being led by the Australian Human Rights Commissioner Edward Santow, explicitly aimed at advancing a human rights–based approach to regulating AI.Footnote 19 The Australian Human Rights Commission (AHRC) has made a series of proposals, including: the development of an Australian National Strategy on new and emerging technologies; that the Australian government introduce laws that require an individual to be informed where AI is used and to ensure the explainability of AI-informed decision-making; and that where an AI-informed decision-making system does not produce reasonable explanations, it should not be deployed where decisions can infringe human rights. The AHRC has also called for a legal moratorium on the use of facial recognition technology until an appropriate legal framework has been implemented. There is the potential for these proposals to become legally binding, subject to normal parliamentary processes and the passage of new or amended legislation.

China

China has been very active in generating state-supported or state-led AI governance and ethics initiatives alongside its world-leading AI industry. Until the 2019 Trump Executive Order stimulated AI governance and ethics strategy development in the United States, China stood apart from its main competitor in combining this very strong AI industry with governance strategising.

In 2017, China’s State Council issued the New-Generation AI Development Plan (AIDP), which advanced China’s objective of high investment in the AI sector in the coming years, with the aim of becoming the world leader in AI innovation.Footnote 20 An interim goal, by 2025, is to formulate new laws and regulations, and ethical norms and policies related to AI development in China. This includes participation in international standard setting, or even ‘taking the lead’ in such activities as well as ‘deepen[ing] international cooperation in AI laws and regulations’.Footnote 21 The plan introduced China’s attitude towards AI ethical, legal and social issues (ELSI), and prescribed that AI regulations should facilitate the ‘healthy development of AI’.Footnote 22 The plan also mentioned AI legal issues including civil and criminal liability, privacy and cybersecurity. Its various ethical proposals include a joint investigation into AI behavioural science and ethics, an ethical multi-level adjudicative structure and an ethical framework for human-computer collaboration.

To support the implementation of the ‘Three-Year Action Plan to Promote the Development of a New Generation of Artificial Intelligence Industry (2018–2020)’, the 2018 AI Standardization Forum released its first White Paper on AI Standardization.Footnote 23 It signalled that China would set up the National AI Standardization Group and the Expert Advisory Panel. Public agencies, enterprises and academics appear to be closely linked to the group, and tech giants like Tencent, JD, Meituan, iQiyi, Huawei and Siemens China are included in the Advisory Panel on AI ethics. The 2019 report on AI risks then took the implications of algorithms into serious consideration, building upon declarations and principles concerning algorithmic regulation proposed by international, national and technical communities and organisations.Footnote 24 The report also proposes two ethical guidelines for AI. The first is the principle of human interest, which means that AI should have the ultimate goal of securing human welfare; the second is the principle of liability, which implies that there should be an explicit regime of accountability for both the development and the deployment of AI-related technologies.Footnote 25 In a broader sense, liability ought to be considered an overarching principle that can guarantee transparency as well as the consistency of rights and responsibilities.Footnote 26

There have been further initiatives on AI ethics and governance. In May 2019, the Beijing AI Principles were released by the Beijing Academy of Artificial Intelligence, which depicted the core of its AI development as ‘the realization of beneficial AI for humankind and nature’.Footnote 27 The Principles have been supported by various elite Chinese universities and companies including Baidu, Alibaba and Tencent. Another group comprising top Chinese universities and companies and led by the Ministry of Industry and Information Technology’s (MIIT’s) China Academy of Information and Communications Technology, the Artificial Intelligence Industry Alliance (AIIA) released its Joint Pledge on Self Discipline in the Artificial Intelligence Industry, also in May 2019. While the wording is fairly generic when compared to other ethics and governance statements, Webster points to the language of ‘secure/safe and controllable’ and ‘self-discipline’ as ‘mesh[ing] with broader trends in Chinese digital governance’.Footnote 28

An expert group established by the Chinese Government Ministry of Science and Technology released its eight Governance Principles for the New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence in June 2019.Footnote 29 Again, international cooperation is emphasised in the principles, along with ‘full respect’ for AI development in other countries. A possibly novel inclusion is the idea of ‘agile governance’, that problems arising from AI can be addressed and resolved ‘in a timely manner’. This principle reflects the rapidity of AI development and the difficulty in governing it through conventional procedures, for example through legislation which can take a long time to pass in China, by which time the technology may have already changed. While ‘agile policy-making’ is a term also used by the European Union High-Level Expert Panel, it is used in relation to the regulatory sandbox approach, as opposed to resolving problems, and is also not included in the Panel’s Guidelines as a principle.

While, as mentioned previously, Chinese tech corporations have been involved in AI ethics and governance initiatives both domestically in China and internationally in the form of the Partnership on AI,Footnote 30 they also appear to be internally considering ethics in their AI activities. Examples include Toutiao’s Technology Strategy Committee, which partially acts as an internal ethics board.Footnote 31 Tencent also has its AI for Social Good programme and ARCC (Available, Reliance, Comprehensible, Controllable) Principles but does not appear to have an internal ethics board to review AI developments.Footnote 32

Although the principles set by these initiatives initially lacked legal enforceability and clear policy implications, China highlighted in the 2017 AIDP three applied AI-related focuses, namely international competition, economic growth and social governance,Footnote 33 which have gradually given rise to ethical and then legal debates.

First, China’s agile governance model is moving AI ethics, as interpreted in industrial standards, onto the agenda of national and provincial legislatures. After the birth of a gene-edited baby prompted the establishment of the National Science and Technology Ethics Committee in late 2019, the Ethics Working Group of the Chinese Association of Artificial Intelligence began planning to formulate ethical regulations for AI in different industries, such as self-driving, data ethics, smart medicine, intelligent manufacturing and specifications for robots that assist the elderly.Footnote 34 National and local legislation and regulation have been introduced, or are being experimented with, to ensure AI security in relation to drones, self-driving cars and fintech (e.g., robo-advisors).Footnote 35

Second, AI ethics has had a real presence in social issues and judicial cases involving human-machine interaction and liability. One instance concerned whether AI can be recognised as the creator of works for copyright purposes, on which two courts came to opposing decisions in 2019.Footnote 36 Another has involved regulatory activity on the part of the Cyberspace Administration of China to address deepfakes. It has issued draft Data Security Management Measures which propose requiring, as part of their platform liability, that service providers using AI to automatically synthesise ‘news, blog posts, forum posts, comments etc’ clearly signal such information as ‘synthesized’, and refrain from doing so for commercial purposes or in ways that harm others’ pre-existing interests.Footnote 37

European Union

Perceived to lack the same level of industrial AI strength as China and the United States, the European Union has been positioning itself as a frontrunner in the global debate on AI governance and ethics from legal and policy perspectives. The General Data Protection Regulation (GDPR), a major piece of relevant legislation, came into effect in 2018; its scope (Article 3) extends to some organisations outside the European Union in certain circumstances,Footnote 38 and it contains provisions on the Right to Object (Article 21) and on Automated Individual Decision-Making, Including Profiling (Article 22). There is significant discussion as to precisely what these provisions entail in practice regarding algorithmic decision-making, automation and profiling, and whether they are adequate to address the concerns that arise from such processes.Footnote 39

Among other prominent developments in the European Union is the European Parliament Resolution on Civil Law Rules on Robotics from February 2017.Footnote 40 While the Resolution is not binding, it expresses the Parliament’s opinion and requests the European Commission to carry out further work on the topic. In particular, the Resolution ‘consider[ed] that the existing Union legal framework should be updated and complemented, where appropriate, by guiding ethical principles in line with the complexity of robotics and its many social, medical and bioethical implications’.Footnote 41

In March 2018, the European Commission issued a Communication on Artificial Intelligence for Europe, in which the Commission set out ‘a European initiative on AI’ with three main aims: of boosting the European Union’s technological and industrial capacity, and AI uptake; of preparing for socio-economic changes brought about by AI (with a focus on labour, social security and education); and of ensuring ‘an appropriate ethical and legal framework, based on the Union’s values and in line with the Charter of Fundamental Rights of the European Union’.Footnote 42

The European Union High-Level Expert Group on Artificial Intelligence, a multistakeholder group of fifty-two experts from academia, civil society and industry, produced the Ethics Guidelines for Trustworthy AI in April 2019, including seven key, but non-exhaustive, requirements that AI systems ought to meet in order to be ‘trustworthy’.Footnote 43 The Expert Group then produced Policy and Investment Recommendations for Trustworthy AI in June 2019.Footnote 44 Among the recommendations (along with those pertaining to education, research, government use of AI and investment priorities) is strong criticism of both state and corporate surveillance using AI, including that governments should commit not to engage in mass surveillance, and that the commercial surveillance of individuals, including via ‘free’ services, should be countered.Footnote 45 This is furthered by a specific recommendation that AI-enabled ‘mass scoring’ of individuals be banned.Footnote 46 The Panel called for more work to assess existing legal and regulatory frameworks to discern whether they are adequate to address the Panel’s recommendations or whether reform is necessary.Footnote 47

The European Commission released its White Paper on AI in February 2020, setting out an approach based on ‘European values, to promote the development and deployment of AI’.Footnote 48 Among a host of proposals for education, research and innovation, industry collaboration, public sector AI adoption, the Commission asserts that ‘international cooperation on AI matters must be based on an approach which promotes the respect of fundamental rights’ and more bullishly asserts that it will ‘strive to export its values across the world’.Footnote 49

A section of the White Paper is devoted to regulatory frameworks, with the Commission setting out its proposals for a new risk-based regulatory framework for AI targeting ‘high risk’ applications. These applications would be subject to additional requirements regarding, inter alia: training data for AI; the keeping of records and data beyond what is currently required, so as to verify legal compliance and enforcement; the provision of more information than is currently required, including whether citizens are interacting with a machine rather than a human; ex ante requirements for the robustness and accuracy of AI applications; human oversight; and specific requirements for remote biometric identification systems.Footnote 50 The White Paper has been released for public consultation, and follow-up work from the Commission is scheduled for late 2020.

Alongside this activity, the European Parliament debated various reports prepared by MEPs on civil liability, intellectual property and ethics aspects of AI in early 2020.Footnote 51 Issues such as a lack of harmonised approach among EU Member States and lack of harmonised definitions of AI giving rise to legal uncertainty were featured in the reports and debates, as well as calls for more research on specific frameworks such as IP.Footnote 52 MEPs are due to debate and vote on amendments to the reports later in 2020. It is unclear whether COVID-19 disruptions will alter these timelines.

In addition to this activity at the supranational level, EU Member States continue with their own AI governance and ethics activities. This may contribute to the aforementioned divergence in the bloc, a factor which may justify EU-level regulation and standardisation. Prominent among them is Germany, which has its own national AI Strategy from 2018.Footnote 53 In light of competition with other countries such as the United States and China, Germany – in accordance with the principles of the European Union Strategy for Artificial Intelligence – intends to position itself in such a way that it sets itself apart from other, non-European nations through data protection-friendly, trustworthy, and ‘human centred’ AI systems, which are supposed to be used for the common good.Footnote 54 At the centre of those claims is the idea of establishing the ‘AI Made in Germany’ ‘brand’, which is supposed to become a globally acknowledged label of quality. Behind this ‘brand’ is the idea that AI applications made in Germany or, to be more precise, the data sets these AI applications use, come under the umbrella of data sovereignty, informational self-determination and data safety. Moreover, to ensure that AI research and innovation is in line with ethical and legal standards, a Data Ethics Commission was established which can make recommendations to the federal government and give advice on how to use AI in an ethically sound manner.

The Data Ethics Commission issued its first report, written by sixteen Commission experts and intended as a set of ethical guidelines to ensure safety, prosperity and social cohesion among those affected by algorithmic decision-making or AI.Footnote 55 Alongside other aims promoting human-centred and value-oriented AI design, the report introduces ideas for risk-oriented AI regulation, aimed at strengthening Germany’s and Europe’s ‘digital sovereignty’. Seventy-five rules are detailed in the report to implement the main ethical principles it draws upon, namely human dignity, self-determination, privacy, security, democracy, justice, solidarity and sustainability. Operationalising these rules is the subject of a recent report, ‘From Principles to Practice – An Interdisciplinary Framework to Operationalize AI ethics’, resulting from the work of the interdisciplinary expert Artificial Intelligence Ethics Impact Group (AIEIG), which describes in detail how organisations conducting research and development of AI applications can translate ethical precepts into executable practice.Footnote 56 Another example of this practical approach can be seen in the recent Lernende Systeme (German National Platform for AI) report launching certification proposals for AI applications, which are aimed at, inter alia, creating legal certainty and increasing public trust in AI through, for example, a labelling system for consumers.Footnote 57 These certification proposals may serve as predecessors for future legal requirements, such as those which may be proposed at the EU level.

India

India’s approach to AI is substantially informed by three initiatives at the national level. The first is Digital India, which aims to make India a digitally empowered knowledge economy;Footnote 58 the second is Make in India, under which the government of India is prioritising AI technology designed and developed in India;Footnote 59 and the third is the Smart Cities Mission.Footnote 60

An AI Task Force constituted by the Ministry of Commerce and Industry in 2017 looked at AI as a socio-economic problem solver at scale. In its report, it identified ten key sectors in which AI should be deployed, including national security, financial technology, manufacturing and agriculture.Footnote 61 Similarly, a National Strategy for Artificial Intelligence published in 2018 went further, looking at AI as a lever for economic growth and social development and considering India a potential ‘garage’ for AI applications.Footnote 62 While both documents mention ethics, they fail to engage meaningfully with issues of fundamental rights, fairness, inclusion and the limits of data-driven decision-making. They are also heavily influenced by the private sector, with civil society and academia rarely, if ever, invited into these discussions.

The absence of an explicit legal and ethical framework for AI systems, however, has not stalled deployment. In July 2019, the Union Home Ministry announced plans for a nationwide Automated Facial Recognition System (AFRS) that would use images from CCTV cameras, police raids and newspapers to identify criminals, and enhance information sharing between policing units in the country. This was announced and subsequently developed without any legal basis. The form and extent of the AFRS directly violate the four-part proportionality test laid down by the Supreme Court of India in August 2017, which requires that any interference with the fundamental right to privacy must pursue a legitimate aim, bear a rational connection to that aim and be shown to be necessary and proportionate.Footnote 63 In December 2019, facial recognition was reported to have been used by Delhi Police to identify ‘habitual protestors’ and ‘rowdy elements’ against the backdrop of nationwide protests against changes in India’s citizenship law.Footnote 64 In February 2020, the Home Minister stated that over a thousand ‘rioters’ had been identified using facial recognition.Footnote 65

These developments are made even more acute by the absence of data protection legislation in India. The Personal Data Protection Bill carves out significant exceptions for state use of data, with the drafters of the bill themselves publicly expressing concerns about the lack of safeguards in its latest version. The current Bill also fails to engage adequately with the question of inferred data, which is particularly important in the context of machine learning. These issues arise in addition to crucial questions about how sensitive personal data is currently processed and shared. India’s biometric identity project, Aadhaar, could also become a central point for AI applications in the future, with a few proposals over the last year for the use of facial recognition, although that is not currently the case.

India recently became one of the founding members of the aforementioned Global Partnership on AI.Footnote 66 Apart from this, no ethical framework or set of principles had been published by the government at the time of writing. It is likely that ethical principles will emerge shortly, following global developments in the context of AI and growing public attention to data protection law in the country.

United States of America

Widely believed to be rivalled only by China in its domestic research and development of AI,Footnote 67 the US government had been less institutionally active on questions of ethics, governance and regulation than China and the European Union until the Trump Administration’s Executive Order on Maintaining American Leadership in Artificial Intelligence of February 2019.Footnote 68 Prior to this, the United States had a stronger record of AI ethics and governance activity from the private and not-for-profit sectors. Various US-headquartered or US-originating multinational tech corporations have issued ethics statements on their AI activities, such as Microsoft and the Alphabet (Google) group company DeepMind. Some US-based not-for-profit organisations and foundations have also been active, such as the Future of Life Institute with its twenty-three Asilomar AI Principles.Footnote 69

The 2019 Executive Order has legal force and created an American AI Initiative guided by five high-level principles to be implemented by the National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence.Footnote 70 These principles include the United States driving the development of ‘appropriate technical standards’ and protecting ‘civil liberties, privacy and American values’ in AI applications ‘to fully realize the potential for AI technologies for the American people’.Footnote 71 Internationalisation is included with a view to opening foreign markets for US AI technology and protecting the United States’s critical AI technology ‘from acquisition by strategic competitors and adversarial nations’. Furthermore, executive departments and agencies that engage in AI-related activities, including ‘regulat[ing] and provid[ing] guidance for applications of AI technologies’, must adhere to six strategic objectives, including the protection of ‘American technology, economic and national security, civil liberties, privacy, and values’.

The US Department of Defense also launched its own AI Strategy in February 2019.Footnote 72 The Strategy explicitly mentions US military rivals China and Russia investing in military AI ‘including in applications that raise questions regarding international norms and human rights’, as well as the perceived ‘threat’ of these developments to the United States and ‘the free and open international order’. As part of the Strategy, the Department asserts that it ‘will articulate its vision and guiding principles for using AI in a lawful and ethical manner to promote our values’, and will ‘continue to share our aims, ethical guidelines, and safety procedures to encourage responsible AI development and use by other nations’. The Department also stated that it would develop principles for AI ethics and safety in defence matters after multistakeholder consultations and would promote its views to a more global audience, with the seemingly intended consequence that its vision will inform a global set of military AI ethics.

In February 2020, the White House Office of Science and Technology Policy published a report documenting activities in the twelve months since the Executive Order was issued.Footnote 73 The report frames activity relating to governance under the heading ‘Remove Barriers to AI Innovation’, which foregrounds deregulatory language but may be contradicted in part by the United States’s stated commitment to ‘providing guidance for the governance of AI consistent with our Nation’s values and by driving the development of appropriate AI technical standards’.Footnote 74 However, there may be no conflict if soft law non-binding ‘guidance’ displaces hard law binding regulatory requirements. In January 2020, the White House published the US AI Regulatory Principles for public comment, which would establish guidance for federal agencies ‘to inform the development of regulatory and non-regulatory approaches regarding technologies and industrial sectors that are empowered or enabled by artificial intelligence (AI) and consider ways to reduce barriers to the development and adoption of AI technologies’.Footnote 75 Specifically, federal agencies are told to ‘avoid regulatory or non-regulatory actions which needlessly hamper AI innovation and growth’; they must assess regulatory actions against their effect on AI innovation and growth and ‘must avoid a precautionary approach’.Footnote 76 Ten principles are set out to guide federal agencies’ activities (reflecting those in the Executive Order), along with suggested non-regulatory approaches such as ‘voluntary consensus standards’ and other activities outside of rulemaking which would fulfil the direction to reduce regulatory barriers (such as increasing public access to government-held data sets).Footnote 77

During 2019 and 2020, the US Food and Drug Administration (FDA) proposed regulatory frameworks for AI-based software as a medical device and draft guidance for clinical decision support software.Footnote 78 The US Patent and Trademark Office (USPTO) issued a public consultation on whether inventions developed by AI should be patentable. These activities could be framed as attempts to clarify how existing frameworks apply to AI applications but do not appear to involve the ‘removal’ of regulatory ‘barriers’.

9.4 Analysis

From the country and region profiles, we can see that AI governance and ethics activities have proliferated at the government level, even among previously reticent administrations such as the United States. India remains an outlier as the only country among our sample with no set of articulated AI governance or ethics principles. This may change, however, with India’s participation in the GPAI initiative.

Themes of competition loom large over AI policies, particularly competition with other ‘large’ countries or jurisdictions. The AI competition between China and the United States to be the global forerunner in research and development may be reflected in the United States Executive Order being framed around preserving the United States’s competitive position, and in China’s ambition to become the global AI leader by 2030. We now see the European Union entering the fray more explicitly with its wish to export its own values internationally. However, there are also calls for global collaboration on AI ethics and governance, including from all of these actors. In practice, these are not all taking place through traditional multilateral fora such as the UN, as can be seen with the launch of GPAI. Smaller countries, as the Australian example shows, may be ‘followers’ rather than ‘leaders’, receiving ethical principles and approaches formulated by similar but larger countries.

In many of the AI ethics/governance statements, we see similar if not the same concepts reappear, such as transparency, explainability, accountability and so forth. Hagendorff has pointed out that these frequently encountered principles are often ‘the most easily operationalized mathematically’, which may account partly for their presence in many initiatives.Footnote 79 Some form of ‘privacy’ or ‘data protection’ also features frequently, even in the absence of robust privacy/data protection laws as in the United States example. In India, AI ethical principles might follow the development of binding data protection legislation which is still pending. Nevertheless, behind some of these shared principles may lie different cultural, legal and philosophical understandings.

There are already different areas of existing law, policy and governance which will apply to AI and its implementations including technology and industrial policy, data protection, fundamental rights, private law, administrative law and so forth. Increasingly the existence of these pre-existing frameworks is being acknowledged in the AI ethics/governance initiatives, although more detailed research may be needed, as the European Parliament draft report on intellectual property and AI indicates. It is important for those to whom AI ethics and governance guidelines are addressed to be aware that they may need to consider, and comply with, further principles and norms in their AI research, development and application, beyond those articulated in AI-specific guidelines. Research on other novel digital technologies suggests that new entrants may not be aware of such pre-existing frameworks and may instead believe that their activities are ‘unregulated’.Footnote 80

On the question of ‘ethics washing’ – or the legal enforceability of AI ethics statements – it is true that almost all of the AI ethics and governance documents we have considered do not have the force of binding law. The US Executive Order is an exception in that regard, although it constitutes more of a series of directions to government agencies rather than a detailed set of legally binding ethical principles. In China and the European Union, there are activities and initiatives to implement aspects of the ethical principles in specific legal frameworks, whether pre-existing or novel. This can be contrasted with Australia, whose ethical principles are purely voluntary, and where discussions of legal amendment for AI are less developed.

However, the limits of legal enforceability can also be seen in the United States example: the Executive Order and the processes it has triggered amount to the paradox of a legally mandated deregulatory approach, under which other public agencies are to forbear from regulating AI in their specific domains unless necessary. In practice, though, the FDA may be circumventing this obstacle by ‘clarifications’ of its existing regulatory practices vis-à-vis AI and medical devices.

In any event, the United States example illustrates that the legal enforceability of AI governance and ethics strategies does not necessarily equate to substantively better outcomes as regards actual AI governance and regulation. Perhaps in addition to ethics washing, we must be attentive towards ‘law washing’, whereby the binding force of law does not necessarily stop unethical uses of AI by government and corporations; or to put it another way, the mere fact that an instrument has a legally binding character does not ensure that it will prevent unethical uses of AI. Both the form and substance of the norms must be evaluated to determine their ‘goodness’.Footnote 81

Furthermore, the legal enforceability of norms may be stymied by a lack of practical operationalisation by AI industry players – or by the fact that it is not practical to operationalise them at all. We can see that some governments have taken this aspect seriously and implemented activities, initiatives and guidance on these aspects, usually developed with researchers and industry representatives. It is hoped that this will ensure the practical implementation of legal and ethical principles in AI’s development and avoid situations where laws or norms are developed divorced from the technological reality.

9.5 Conclusion

In this chapter, we have given an overview of the development of AI governance and ethics initiatives in a number of countries and regions, including the world AI research and development leaders China and the United States, and what may be emerging as a regulatory leader in the form of the European Union. Since the 2019 Executive Order, the United States has started to catch up with China and the European Union regarding domestic legal and policy initiatives. India remains an outlier, with limited activity in this space and no articulated set of AI ethical principles. Australia, with its voluntary ethical principles, may show the challenges a smaller jurisdiction and market faces when larger entities have already taken the lead on a technology law and policy topic.

Legal enforceability of norms is increasingly the focus of activity, usually through an evaluation of pre-existing legal frameworks or the creation of new frameworks and obligations. While the ethics-washing critique still stands to some degree vis-à-vis AI ethics, the focus of activity is moving towards the law – and also practical operationalisation of norms. Nevertheless, this shift in focus may not always produce desirable outcomes. Both the form and substance of AI norms – whether soft law principles or hard law obligations – must be evaluated to determine their ‘goodness’.

A greater historical perspective is also warranted regarding the likelihood of success for AI ethics/governance initiatives, whether as principles or laws, by, for instance, examining the success or otherwise of previous attempts to govern new technologies, such as biotech and the Internet, or to insert ethics in other domains such as medicine.Footnote 82 While there are specificities for each new technology, different predecessor technologies from which it has sprung, as well as different social, economic and political conditions, looking to the historical trajectory of new technologies and their governance may teach us some lessons for AI governance and ethics.

A further issue for research may arise around regulatory or policy arbitrage, whereby organisations or researchers from a country or region which does have AI ethics/governance principles engage in ‘jurisdiction shopping’, moving to a location which has no or laxer standards in order to research and develop AI with fewer ‘constraints’. This offshoring of AI development to ‘less ethical’ countries may already be happening and is something that is largely or completely unaddressed in current AI governance and ethics initiatives.

10 EU By-Design Regulation in the Algorithmic Society: A Promising Way Forward or Constitutional Nightmare in the Making?

Pieter Van Cleynenbreugel
10.1 Introduction

Algorithmic decision-making fundamentally challenges legislators and regulators to find new ways to ensure algorithmic operators and controllers comply with the law. The European Union (EU) legal order is no stranger to those challenges, as self-learning algorithms continue to develop at an unprecedented pace.Footnote 1 One of the ways to cope with the rise of automated and self-learning algorithmic decision-making has been the introduction of by-design obligations.

By-design regulation refers to the array of regulatory strategies aimed at incorporating legal requirements into algorithmic design specifications, which would then have to be programmed or coded into existing or newly developed algorithms.Footnote 2 Such an approach may be a necessity: in its February 2020 White Paper on Artificial Intelligence, the European Commission recognised the insufficiency of existing EU legislation on product safety and the protection of fundamental rights in that context.Footnote 3 Against that background, different open questions remain as to the modalities of this kind of regulation, ranging from who is competent to impose it to how to ensure compliance with those specifications. By-design obligations require economic operators to program their algorithms in such a way as to comply with legal norms. Related to existing co-regulation initiatives, they present a new and potentially powerful way to push economic operators more directly into ensuring respect for legal norms and principles.

This chapter will explore the potential for a more developed by-design regulatory framework as a matter of EU constitutional law. To that extent, it first conceptualises by-design regulation as a species of co-regulation, which is a well-known EU regulatory technique. The first part of this chapter revisits the three most common EU co-regulation varieties and argues that each of them could offer a basis for more enhanced by-design obligations. The second part of the chapter identifies the opportunities and challenges EU constitutional law would present in that context. In revisiting some basic features and doctrines of the EU constitutional order, this chapter aims to demonstrate that by-design regulation could be implemented if and to the extent that certain constitutional particularities of the EU legal order are taken into account.

10.2 By-Design Obligations as a Species of Co-regulation

Although by-design regulation sounds novel, it actually constitutes a species of a well-known regulatory approach of co-regulation (Section 10.2.1). That approach appears in at least three varieties in the EU legal order (Section 10.2.2), each lending itself to algorithmic by-design regulatory approaches (Section 10.2.3).

10.2.1 By-Design Regulation as Co-regulation

The notion of by-design regulation may appear vague and perhaps confusing at first glance.Footnote 4 In its very essence, however, by-design regulation refers to nothing more than an obligation imposed on businesses, as a matter of law, to program or code their technologies in such a way that they comply automatically or almost automatically with certain legal obligations.Footnote 5 As a pro-active form of compliance through regulation, the law essentially requires businesses to design or redesign their technologies so that certain values or objectives are respected by the technology itself. In algorithmic design, this regulatory approach would require translating legal obligations into algorithmic specifications. By-design regulation would thus require, as a matter of hard law, developers/designers to translate legal obligations into workable engineering or design specifications and principles.Footnote 6
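What such a translation might look like in practice can be illustrated with a deliberately simplified sketch (here in Python, purely for illustration): a data-minimisation and storage-limitation requirement is expressed as a machine-checkable design specification that filters records before they ever reach the decision logic. The field names, the retention period and the DesignSpec structure are hypothetical assumptions introduced for illustration only; they are not drawn from any legal instrument or existing standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical design specification: only the listed fields may be processed,
# and records older than the retention period must be discarded. Both values
# are illustrative assumptions, not thresholds taken from any legal text.
@dataclass(frozen=True)
class DesignSpec:
    permitted_fields: frozenset
    retention: timedelta

SPEC = DesignSpec(
    permitted_fields=frozenset({"age", "residence", "income"}),
    retention=timedelta(days=365),
)

def ingest(record: dict, collected_at: datetime, now: datetime) -> dict:
    """Admit a record only insofar as the design specification allows it."""
    if now - collected_at > SPEC.retention:
        raise ValueError("record exceeds the retention period and must be deleted")
    # Minimisation 'by design': fields outside the specification are dropped
    # before the record ever reaches the decision logic.
    return {k: v for k, v in record.items() if k in SPEC.permitted_fields}

if __name__ == "__main__":
    raw = {"age": 42, "residence": "Liege", "income": 31000, "religion": "n/a"}
    print(ingest(raw, collected_at=datetime(2024, 3, 1), now=datetime(2024, 6, 1)))
    # prints {'age': 42, 'residence': 'Liege', 'income': 31000}
```

The point of the sketch is not the code itself but the regulatory idea behind it: the legal requirement is not applied after the fact to an individual decision, but is embedded in the system’s architecture so that non-compliant processing cannot easily occur in the first place.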

By-design obligations as a regulatory technique originate in the privacy by design approach. According to that approach, respect for privacy must ideally become any (business) organisation’s default mode of operation.Footnote 7 When setting up technical and physical infrastructure and networks, privacy has to be designed into the operations of those networks.Footnote 8 More particularly, businesses were encouraged to put in place privacy-enhancing technologies (PETs).Footnote 9 Within the context of its General Data Protection Regulation (GDPR), the EU additionally imposed a data protection by design obligation on data controllers.Footnote 10

The successful implementation of privacy by design faces two difficulties. First, given the varying conceptions of privacy maintained in different legal orders, questions quickly arose as to the exact requirements that needed to be implemented.Footnote 11 Second, beyond the difficulty of envisaging how privacy by design is to be implemented, questions equally arose as to the liability of designers and operators who have not put in place or implemented a privacy-enhancing technological framework. The idea of privacy by design is appealing, yet without a legal obligation on particular businesses or public authorities to implement it and to oversee its application, the whole idea rests on shaky ground.

Despite the practical by-design problems highlighted here, the classification of by-design obligations is less complicated from a regulatory theory perspective. It is submitted indeed that by-design obligations in their very essence always imply some form of co-regulation. Co-regulation essentially refers to a regulatory framework that involves both private parties and governmental actors in the setting, implementation, or enforcement of regulatory standards.Footnote 12 The EU is familiar with this type of regulation and has been promoting it consistently over the course of past decades. It cannot therefore be excluded that the EU could be willing further to develop and refine that approach in the context of algorithmic design obligations as well.

10.2.2 Co-regulation within the European Union

The EU’s former 2003 Interinstitutional Agreement on Better Lawmaking refers to co-regulation as ‘the mechanism whereby a [Union] legislative act entrusts the attainment of the objectives defined by the legislative authority to parties which are recognised in the field (such as economic operators, the social partners, non-governmental organisations, or associations)’.Footnote 13 In contrast with self-regulation, where private actors have been entrusted overall responsibility to determine the content, applicability, and enforcement of different rules, co-regulation still accords a certain role to governmental actors.

Within the EU legal order, one can distinguish three formats of co-regulation currently, if implicitly, present. Those formats differ on the basis of three distinguishing criteria: the actual norm-setter, the implementation of co-regulatory obligations, and the enforcement of respect for the regulatory requirements.Footnote 14

The first format concerns the framework applicable in the context of technical standardisation. It is well known that, at the EU level, standards are to a large extent developed by so-called standardisation bodies. Those bodies, essentially of a private nature, have been mandated by the EU institutions to adopt norms that have some force of law. The EU’s new approach to technical harmonisationFootnote 15 best illustrates that tendency. In this standardised co-regulation scheme, standardisation organisations play a pivotal role as norm-setters. They assemble different experts and ask those experts to set up and design a standard. Because their regulatory mandate is justified by this assembling of experts to design technical and technocratic standards, the EU legislator can confine itself to delegating to those organisations the task of drawing up such highly technical standards. Following and implementing a standard thus creates a presumption that the product is safe. This system has remained in place ever since, even though a 2012 update sought to increase the transparency of the standard-setting process.Footnote 16 Within that framework, the Court of Justice has stated that harmonised European standards, though adopted by private standardisation bodies, are to be assimilated to acts of the EU institutions.Footnote 17

The second format of EU co-regulation introduces a certification-centred approach. That approach is closely related to how the EU legislator has envisaged data protection by design in its GDPR. In this format of co-regulation, there is no pre-defined norm-setter. The legislator sets out particular values or principles to be designed into certain technologies, but leaves it up to designers or importers of technologies to ensure compliance with those values. As co-regulation allows for a more intensified administrative or judicial review of co-set standards or rules, this format presumes ex post control by public authorities over the rules adopted. Although businesses may create or rely on standardisation organisations to translate the predetermined values into workable principles, the intervention of such organisations does not automatically trigger a presumption of conformity. On the contrary, a lack of respect for the principles and values laid out by the legislator may result in command-and-control sanctioning. In that case, a public authority can impose sanctions by means of a decision, which can be contested before the courts. As such, the actual content of the rules remains to be determined by the businesses responsible, yet the enforcement fully enters the traditional command-and-control realm.

A third possible format goes beyond the voluntary standardisation or certification approaches by allowing the legislator to impose certain designs on technology developers. More particularly, under this format the EU institutions would themselves outline, in more detail than in the previous varieties, the values that need to be protected by and coded into the technology at hand. It would then fall upon the designers/developers concerned to implement those values and, in doing so, to respect the legal norms posited by the EU legislator. Those by-design obligations would most likely be inserted in instruments of delegated or implementing legislation. A similar approach is taken in the context of financial services regulation.Footnote 18 It would be perfectly imaginable for expert groups or expert bodies to assist the European Commission in developing and fine-tuning by-design obligations in the realm of algorithmic decision-making as well. This could be coupled with the mix of traditional command-and-control enforcement techniques (administrative and judicial enforcement) currently also in place in that context.Footnote 19 It would indeed not seem impossible for those governance structures also to accompany the setup of by-design obligations.

The three varieties distinguished here should be understood as ideal types that resemble, to some extent, existing regulatory initiatives in the EU. They in fact reflect a sliding scale of regulatory possibilities, as the following table shows.

Co-regulation variety | Norm-setting | Implementation | Enforcement
Standardisation | Standardisation bodies | Non-binding harmonised general interest standards | Presumption of conformity + supplementary judicial enforcement
Certification | Businesses themselves (aided by certification bodies) | Non-binding individualised or certified general interest standards | Subsidiary administrative and judicial enforcement?
Control-centred co-regulation | EU institutions (delegated or implementing acts, involving stakeholders) | Binding technical rules + ex ante approval of technologies? | Administrative and judicial enforcement
10.2.3 Room for Enhanced By-Design Co-regulation Strategies at the EU Level?

All three co-regulation varieties start from the premise that designers/developers have to construct or structure their algorithms so as to ensure compliance with applicable legal norms. If that starting point is accepted, the three varieties depict different intensities with which the incorporation of those obligations into the design of algorithms can be guaranteed. Overall, they represent different degrees of public intervention in determining the scope of by-design obligations and in enforcing the way in which algorithms have been designed. Given the prevalence of those different regulatory strategies in different fields of EU policy, it would seem that those varieties of by-design co-regulation could also be introduced or developed within the context of algorithmic decision-making.

That framework of standard-setting by standardisation bodies clearly lends itself to the context of algorithmic regulation and the imposition of by-design obligations on their developers/designers. It can indeed be imagined that EU legislation would require any coder, programmer, or developer to respect all privacy, individual liberty, or other protective values the EU as an organisation holds dear. Those ‘general interest’ requirements, as they would be referred to under the New approach,Footnote 20 would have to be respected by every producer seeking to make available or use a certain algorithm to customers falling within the scope of EU law. The actual implementation and coding-in of those values into the algorithms concerned would have to take place in accordance with general interest standards adopted by standardisation organisations. It is not entirely impossible to envisage that similar bodies to CEN, CENELEC, or ETSI could be designated to develop general interest standards in the realm of algorithmic governance.

In the same way, a certification mechanism could be established. By way of example, the GDPR refers to the possibility of having in place a certification mechanism that would include data protection concerns in the standardisation process of technologies. For that system to work, data protection certification bodies have to be set up. Those private bodies would be responsible for reviewing and attesting to the conformity of certain data protection technologies with the values and principles of the GDPR.Footnote 21 So far, those mechanisms are still in the process of being set up, and much work needs to be done in order to extract from the GDPR a set of workable principles that would have to be integrated in the technologies ensuring data processing and in the algorithms underlying or accompanying those technologies.Footnote 22

The more enhanced control-centred co-regulation framework could also be made to fit algorithmic by-design regulation. In that case, the EU legislator or the European Commission, or any other type of EU executive body that would be responsible for the drafting and development of by-design obligations, would need to be involved in the regulation of algorithms. It could be expected that some type of involvement of businesses concerned would be useful in the drafting of the by-design obligations. Ex ante approval mechanisms or ex post enforcement structures could be envisaged to guarantee that businesses comply with those requirements.

10.3 By-Design-Oriented Co-regulation: A Promising Way Forward or EU Constitutional Law Nightmare in the Making?

It follows from the previous section that, in light of its co-regulation experiences, the EU legal order would not as such be hostile to the introduction of by-design obligations. In order for such a regulatory approach to be made operational, however, regulatory strategists have to ensure a sufficient degree of constitutional fit,Footnote 23 if only to legitimise the regulatory approach offered in this context.

It is submitted that at least three challenges in an increasing order of relevance can be highlighted in that regard. First, the principle of competence conferral may impose constraints on the introduction and development of by-design obligations, which deserve to be qualified (Section 10.3.1). Second, in the same way, the by-design system setup would amount to a delegation of certain powers to private or public actors. From that point of view, concerns regarding compliance with the so-called Meroni doctrine arise (Section 10.3.2). Third, and most fundamentally, however, the major challenge of by-design regulation lies in its enforcement. In a constitutional order characterised itself by the lack of a common administrative enforcement framework, questions can be raised regarding the effectiveness of control over the respect of by-design regulations (Section 10.3.3). Although the EU constitutional framework raises challenges in this regard, it is submitted that those challenges are not in themselves insurmountable. As a result, by-design regulation could become a complementary and useful regulatory strategy aimed at responding to challenges raised by the algorithmic society (Section 10.3.4).

10.3.1 Competence Conferral Challenges

A first constitutional challenge that the setting-up of a more developed by-design regulation framework would encounter concerns the EU’s system of competence conferral.Footnote 24 The Treaty contains different legal bases which could grant the Union the competence to set up a co-regulatory framework focused on by-design obligations.

The principal challenge with those different legal bases is that one has to determine what kind of values one wants to programme into algorithms as a matter of EU law. Absent any discussion so far beyond data protection, that remains a very important preliminary issue. It could be submitted that values of non-discrimination, consumer protection, free movement principles, or others would have to be coded in. In this respect, it appears that the EU can go further in some domains than in others.

The most appropriate Treaty bases are the transversal provisions containing a list of values that need to be protected across the board by EU policies and offering the EU the power to take action to protect those values. It would seem that those values could also be developed into technical specifications to be coded into algorithmic practice.

First, Article 10 of the Treaty on the Functioning of the European Union (TFEU) holds that in defining and implementing its policies and activities, the Union shall aim to combat discrimination based on sex, racial or ethnic origin, religion or belief, disability, age, or sexual orientation. Article 18 TFEU complements that provision by stating that within the scope of application of the Treaties, and without prejudice to any special provisions contained therein, any discrimination on grounds of nationality shall be prohibited. To that extent, the European Parliament and the Council, acting in accordance with the ordinary legislative procedure, may adopt rules designed to prohibit such discrimination. Article 19 adds that the Council, acting unanimously in accordance with a special legislative procedure and after obtaining the consent of the European Parliament, may take appropriate action to combat discrimination based on sex, racial or ethnic origin, religion or belief, disability, age, or sexual orientation. In that context, the European Parliament and the Council, acting in accordance with the ordinary legislative procedure, may adopt the basic principles of Union incentive measures, excluding any harmonisation of the laws and regulations of the Member States, to support actions taken by the Member States in order to contribute to the achievement of the objectives of non-discrimination. To the extent that non-discrimination is one of the key values of the European Union, it can take action either to harmonise non-discrimination on the basis of nationality, or to incentivise Member States to eradicate all forms of discrimination. The notion of incentivising is important here; it would indeed appear that, under the banner of non-discrimination, the EU could take measures to stimulate non-discriminatory by-design approaches. At the same time, however, the EU may not harmonise laws regarding non-discrimination on grounds other than nationality. It follows from this that EU rules could only incite Member States to take a more pro-active and by-design oriented compliance approach. A full-fledged ex ante or ex post algorithmic design control approach in the realm of non-discrimination would potentially go against Article 19 TFEU. It would thus appear that the EU is competent to put in place particular incentive mechanisms, yet not necessarily to set up a complete law enforcement framework in this field. Regarding discrimination on the basis of nationality, setting up such a by-design framework would still be constitutionally possible, as Article 18 TFEU grants broader legislative powers to the EU institutions.

Second, Article 11 TFEU holds that environmental protection requirements must be integrated into the definition and implementation of the Union policies and activities, in particular, with a view to promoting sustainable development. Article 12 refers to consumer protection. Both provisions are accompanied by specific legal bases that would allow for co-regulatory by-design mechanisms to be set up.Footnote 25

Third, Article 16 refers to the right to personal data protection. According to that provision, the European Parliament and the Council, acting in accordance with the ordinary legislative procedure, shall lay down the rules relating to the protection of individuals with regard to the processing of personal data by Union institutions, bodies, offices and agencies, and by the Member States when carrying out activities which fall within the scope of Union law, and the rules relating to the free movement of such data. Compliance with these rules shall be subject to the control of independent authorities. This provision constituted the legal basis for the GDPR and the data protection by design framework outlined in that Regulation.Footnote 26 Neither during negotiations, nor after its entry into force, has the choice of a legal basis for this type of by-design obligations been contested. It could be concluded, therefore, that this provision could serve as a legal basis for data protection by design measures. Beyond data protection, however, this provision would be of no practical use.

Fourth, Articles 114 and 352 TFEU seem to be of limited relevance. Article 114 TFEU allows the EU to adopt measures for the approximation of the provisions laid down by law, regulation, or administrative action in Member States which have as their object the establishment and functioning of the internal market. That provision essentially aims at harmonising Member States’ regulatory provisions rather than imposing specific design obligations on algorithmic designers. However, it cannot be excluded that the imposition of specific obligations can be a means to prevent obstacles to trade from materialising. On that understanding, this provision may serve as an additional basis for measures setting up a co-regulatory by-design framework.Footnote 27 Article 352 TFEU states that if action by the Union should prove necessary, within the framework of the policies defined in the Treaties, to attain one of the objectives set out in the Treaties, and the Treaties have not provided the necessary powers, the Council, acting unanimously, may adopt the appropriate measures. According to the Court,

recourse to Article [352 TFEU] as a legal basis is … excluded where the Community act in question does not provide for the introduction of a new protective right at Community level, but merely harmonises the rules laid down in the laws of the Member States for granting and protecting that right.Footnote 28

In other words, Article 352 TFEU can be relied on to create a new Union right, or body, that leaves the national laws of the Member States unaffected and confers additional rights.Footnote 29 That provision seems less relevant for the introduction of by-design obligations, which essentially aim to implement certain policies and to ensure better compliance with existing rights, rather than to create new ones.

It follows from the foregoing analysis that the Treaty does contain several values and legal bases allowing those values to be protected in a by-design way. From the previous cursory overview, it now seems more than ever necessary to catalogue the values the EU holds dear and to question what actions the EU could take in terms of by-design regulation for them. In addition, the Charter of Fundamental Rights, a binding catalogue of EU fundamental rights, could play a complementary role in that regard.Footnote 30

10.3.2 Implementation and Delegation Challenges

The setup of by-design regulatory mechanisms requires the involvement of either government actors or private bodies (standardisation or certification bodies). Even when the European Union has the competence to set up a particular regulatory framework which includes the imposition of by-design obligations, EU constitutional law also limits or circumscribes the delegation of powers conferred on the EU to public (Section 10.3.2.1) or private bodies (Section 10.3.2.2). In both instances, delegation is not entirely impossible, yet additional conditions need to be met.

10.3.2.1 Delegation of Technical Rules to the Commission and Expert Committees

According to Article 290 TFEU, a legislative act may delegate to the Commission the power to adopt non-legislative acts of general application to supplement or amend certain non-essential elements of the legislative act.Footnote 31 A delegation of power under that provision confers power on the Commission to exercise the functions of the EU legislature, in that it enables it to supplement or amend non-essential elements of the legislative act. Such a supplementary or amending power needs to emanate from an express decision of the legislature and its use by the Commission needs to respect the bounds the legislature has itself fixed in the basic act. For that purpose, the basic act must, in accordance with that provision, lay down the limits of its conferral of power on the Commission, namely the objectives, content, scope, and duration of the conferral.Footnote 32 In addition, Article 291 TFEU states that where uniform conditions for implementing legally binding Union acts are needed, those acts shall confer implementing powers on the Commission. A 2011 Regulation outlines the basic framework for doing so.Footnote 33 Any delegation to the Commission or to an expert committee has to respect that framework.Footnote 34

10.3.2.2 Delegation to Private Standardisation Bodies?

The questions noted previously all remain with regard to the delegation of by-design standardisation or certification powers to private organisations, such as standardisation bodies. Those questions go back to case law dating from 1958. In its Meroni judgment, the Court invalidated a delegation of discretionary regulatory competences by the High Authority to a private body.Footnote 35 Meroni limited the delegation of regulatory powers to private bodies in two ways. First, it constrained the act of delegation itself: delegation of rule-making powers was to be expressly provided for in a legal instrument; only powers retained by the delegating body could be delegated; the exercise of those powers was subject to the same limits and procedures as would have applied within the delegating body; and such delegation needed to be necessary for the effective functioning of the delegating institution.Footnote 36 Second, the judgment limited the scope of the powers delegated: it maintained that the powers delegated could only include clearly defined executive powers capable of being objectively reviewed by the delegating body.Footnote 37 A delegation of powers by the High Authority to a private body outside the realm of supranational law would not fit that image. The 1981 Romano judgment was said to have confirmed that position in relation to the Council, although that judgment focused on delegations to public authorities whose exercise of the delegated powers would escape judicial review as to compliance with EU law.Footnote 38

The Meroni doctrine may be problematic from the point of view of setting up a by-design regulation framework.Footnote 39 The delegation of standardisation or certification powers to private bodies without any possibility of judicial oversight by the EU Courts has been considered particularly problematic in this regard. Although the EU framework of delegating standardisation powers to private organisations in the realm of product safety has been in operation for more than thirty years, its compatibility with EU law has recently come under scrutiny.Footnote 40 It is to be remembered that the Court of Justice in that context held that standards adopted by private organisations following an EU mandate are to be considered norms which can be reviewed by the Court of Justice, despite their formally not being EU legal acts.Footnote 41 Although the practical consequences of those rulings remain far from clear, the Court has succeeded in opening a debate on the constitutionality of delegation to private organisations. In the wake of this case law, it now seems that standards set up by private organisations should by some means be subject to judicial control.

That background is of direct relevance to discussions on the possibility to introduce by-design obligations. To the extent that delegation of standard-setting powers to private standardisation bodies is problematic under EU law, the setup of a standardised co-regulatory by-design regime would be a less likely choice to make. Prior to setting up this kind of legal regime, additional guarantees will have to be put in place in order to ascertain some kind of judicial oversight over those standards. Given that it is unclear at present how far such oversight should go, setting up a standardisation-based regime seems more difficult to attain. The alternative of certification-based co-regulation, which asks every designer/developer individually to integrate the EU law-compatible values into their algorithms, avoids such delegation and would seem a more viable alternative in the current state of EU law, should the control-centred model and the accompanying delegation to public authorities be considered a less preferred option.

10.3.3 Enforcement Challenges

A third EU constitutional law challenge concerns the enforcement of the by-design regimes set up. Even when the EU is competent and when certain by-design regulatory tasks can be delegated to public or private authorities, the actual application and enforcement of those by-design obligations are likely to raise additional constitutional law problems. It is to be remembered in this regard that the EU has not set up an administrative enforcement system to guarantee the application and implementation of its norms. Quite on the contrary, Article 291 TFEU explicitly obliges the Member States to guarantee this.Footnote 42 As a result, it falls in principle upon Member States to set up and organise surveillance and sanctioning mechanisms. This has resulted in a wide diversity of institutional and organisational practices, giving rise to EU law enforcement being differently structured and understood in different Member States.Footnote 43

In order to overcome, to some extent, the Member States’ diversity in this realm, the European Union has in some domains tried to streamline the enforcement of EU rules. To that end, EU agencies or networks of Member States’ supervisory authorities have been set up.Footnote 44 Within those agencies or networks, representatives of Member States’ authorities assemble and determine policy priorities or decide upon non-binding best practices.Footnote 45 In the realm of financial services regulation, EU agencies representing all Member States’ authorities even have the power to impose sanctions in cases where Member States’ authorities are unable or unwilling to do so.Footnote 46 As such, a complex regime of coordinated or integrated administration has been set up.Footnote 47 Alternatively, the European Commission itself has taken on responsibility for the direct enforcement of EU law whenever it has been conferred such a role by the Treaties. In the field of EU competition law, the Commission thus plays a primary role in that regard.Footnote 48 Decisions taken by the Commission and/or EU agencies are subject to judicial oversight by the EU Courts, oftentimes following an internal administrative review procedure.Footnote 49 To a much more marginal extent, the EU envisages the private enforcement of its norms. Under that scheme, private individuals invoke EU norms in their private interest, thus resulting in those norms being enforced against those who infringe them. It generally falls upon national judges to apply the law in those contexts. The fields of competition law and consumer protection law are particularly open to this kind of enforcement,Footnote 50 which nevertheless remains of a subsidiary nature compared to public enforcement. The presence of those different frameworks allows one to conclude that a patchwork of EU enforcement frameworks has been set up, depending on the policy domain and the felt need for coordinated application of EU legal norms.

The existence of this patchwork of enforcement frameworks has an impact on debates on whether and how to set up a by-design enforcement structure. Three observations can be made in that respect.

First, a standardisation-focused co-regulation framework would rely on essentially private standards and a presumption of conformity. That presumption could be invoked before Member States’ courts and authorities to the extent that it has been established by an EU legislative instrument. This form of essentially private enforcement has worked for technical standards, yet has recently come under scrutiny from the Court, which has called for some kind of judicial oversight over the process through which norms are set. Questions can therefore be raised as to what extent this system would also fit by-design obligations as envisaged here. It would be imaginable for the EU legislator to set up a two-step enforcement procedure in this regard. On the one hand, it would delegate the setting of by-design specifications translating EU legal obligations to a standardisation body. The procedures of that body would have to be transparent, and norms adopted by it could be subject to judicial – or even administrative – review. Once the deadline for such review has passed, the norms would be deemed valid, and compliance with them in the design of algorithms would trigger a presumption of legality, which could be rebutted on the basis of concrete data analysis. As this system would mix public and private enforcement to some extent, it would seem likely that it can be made to fit the EU’s enforcement system. It is essential, however, that the legal instrument establishing the features of by-design regulation clearly establishes how the different enforcement features relate to each other.

Second, a more control-centred EU enforcement framework could also be envisaged. In order to set up that kind of framework, it is important to take stock of the limits of the EU enforcement structure. In essence, the imposition of fines will generally have to be entrusted to Member States’ authorities, as the GDPR showcases.Footnote 51 Those authorities’ powers and procedures can be harmonised to some extent,Footnote 52 and their operations could be complemented by a formal network of national authorities or an EU agency overseeing those activities.Footnote 53 As other sectors have demonstrated, it does take time, however, before such a regime is operational and functions smoothly.Footnote 54 From that point of view, it could also be questioned whether it would not be a good idea to entrust the European Commission with sanctioning powers in this field. Article 291 TFEU could be interpreted as allowing for this to happen by means of secondary legislation, if a sufficient majority is found among the Member States.Footnote 55 Entrusting the European Commission with those powers would require a significant increase in terms of human and financial resources. It remains to be questioned whether the Member States would indeed be willing to allocate those resources to the Commission, given that this has not happened in other policy fields. More generally, however, whatever institution would apply and enforce those rules, in-depth knowledge of both law and of coding/programming would be required, in order meaningfully to assess how the by-design obligations would have been integrated into an algorithm’s functioning. That again would require a significant investment in training both programmers and lawyers to work at the EU level in the general interest.

Third, what is often lacking in discussions on EU law enforcement is the attention that needs to be paid to compliance with legal rules. Compliance refers to the act of obeying an order, rule, or request,Footnote 56 and is a preliminary step in ensuring effective enforcement. If one can ensure that an addressee of a legal norm respects that norm, no ex post enforcement by means of fines or other sanctions would be necessary. It is remarkable, therefore, that EU administrative governance pays little transversal attention to compliance. In some domains, such as the free movement of goods produced lawfully in one Member StateFootnote 57 or in the realm of competition law,Footnote 58 the EU has taken some modest steps to ensure compliance. It is submitted, however, that compliance needs to be the keystone of any enforcement framework, should the EU indeed wish to pursue a by-design regulatory approach on a more general scale. By-design obligations are by their very nature meant to ensure compliance with EU legal norms. By coding into existing or new algorithms specifications that lead to lawfully functioning algorithms, by-design regulation essentially seeks to prevent people from being harmed by algorithms and having to claim compensation or seek other types of sanctions ex post. From that point of view, by-design regulatory obligations are in themselves a form of compliance. It thus would appear strange to place too much emphasis on the possibility of sanctions or other public enforcement tools without giving a central place to the need for businesses to implement the specifications in their algorithms. In that context, it could be imagined that the EU would like to put in place some kind of ex ante authorisation mechanism. Technical specifications or designs authorised by the European Commission would then be presumed to be legal, triggering the presumption of conformity as well. Such authorisation mechanisms exist in other fields of European Union law. It would seem that, at least in theory, the introduction of a similar mechanism would also be possible in this context.

It follows from those observations that the introduction of a by-design regulatory framework would necessitate a debate on how those obligations will be enforced, what the relationship will be between compliance programmes and ex post sanctions, and how the different enforcement approaches would relate to each other. No matter what by-design framework would be opted for, discussions on compliance and the tools to ensure and enforce such compliance would have to be laid out in a more developed way. An ex ante authorisation mechanism appears to offer the possibility to ensure compliance of certain technical specifications with EU values from the very outset. Integrating those authorised tools in newly designed algorithms could thus be conceived of as a valuable strategy for enhancing the enforcement of by-design obligations.

10.4 Conclusion

This chapter analysed to what extent the EU has the competence to set up a by-design regulatory approach and, if so, whether the EU constitutional framework would pose certain limits on it. Although the EU has not been conferred explicit competences in the realm of algorithmic by-design regulation, different legal bases may be relied on in order to establish a more general by-design co-regulatory framework. The principles governing the conferral and delegation of powers, as well as the patchwork nature of EU enforcement, do raise constitutional challenges in this regard. That does not mean, however, that the EU constitutional framework would not tolerate any new by-design regulatory frameworks. If certain key principles are taken into account, the EU may very well proceed with the development of those frameworks. It would thus only require a certain political will to proceed in this regard. Should that will exist, one can conclude that there is a strong chance of integrating by-design obligations better into the EU regulatory framework.

11 What’s in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration

Henrik Palmer Olsen
Footnote *, Jacob Livingston Slosser Footnote ** and Thomas Troels Hildebrandt Footnote ***
11.1 Introduction

As the quality of AIFootnote 1 improves, it is increasingly applied to support decision-making processes, including in public administration.Footnote 2 This has many potential advantages: faster response time, better cost-effectiveness, more consistency across decisions, and so forth. At the same time, implementing AI in public administration also raises a number of concerns: bias in the decision-making process, lack of transparency, and elimination of human discretion, among others.Footnote 3 Sometimes, these concerns are raised to a level of abstraction that obscures the legal remedies that exist to curb those fears.Footnote 4 Such abstract concerns, when not coupled with concrete remedies, may lead to paralysis and thereby unduly delay the development of efficient systems because of an overly conservative approach to the implementation of ADM. This conservative approach may hinder the development of even safer systems that would come with wider and more diverse adoption. The fears surrounding the adoption of ADM systems, while varied, can be broadly grouped into three categories: the argument of control, the argument of dignity, and the argument of contamination.Footnote 5

The first fear is the loss of control over systems and processes and thus of a clear link to responsibility when decisions are taken.Footnote 6 In a discretionary system, someone must be held responsible for those decisions and be able to give reasons for them. There is a legitimate fear that a black box system used to produce a decision, even when used in coordination with a human counterpart or oversight, creates a system that lacks responsibility. This is the fear of the rubber stamp: that, even if a human is in the loop, the deference given to the machine is so great that it creates a vacuum of accountability for the decision.Footnote 7

The second fear of ADM systems is that they may lead to a loss of human dignity.Footnote 8 If legal processes are replaced with algorithms, there is a fear that humans will be reduced to mere ‘cogs in the machine’.Footnote 9 Rather than being in a relationship with other humans to whom you can explain your situation, you will be reduced to a digital representation of a sum of data. Since machines cannot reproduce the whole context of the human and social world, but only represent specific limited data about a human (say age, marital status, residence, income, etc.), the machine cannot understand you. Removing this ability to understand and to communicate freely with another human, and the autonomy which this represents, can lead to alienation and a loss of human dignity.Footnote 10

Third, there is the well-documented fear of ‘bad’ data being used to make decisions that are false and discriminatory.Footnote 11 This fear is related to the ideal that decision-making in public administration (among others) should be neutral, fair, and based on accurate and correct factual information.Footnote 12 If ADM is implemented in a flawed data environment, it could lead to systematic deficiencies such as false profiling or self-reinforcing feedback loops that accentuate irrelevant features, which can result in significant breaches of law (particularly equality law), if not of societal norms.Footnote 13

While we accept that these fears are not unsubstantiated, they need not prevent existing legal remedies from being acknowledged and used. Legal remedies should be used rather than reaching, more cursorily, for general guidelines or grand and ambiguous ethical press releases, which are not binding, are unlikely to be followed, and do not provide much concrete guidance to help solve the real problems they hope to address. In order to gain the advantages of AI-supported decision-making,Footnote 14 these concerns must be met by indicating how AI can be implemented in public administration without undermining the qualities associated with contemporary administrative procedures. We contend that this can be done by focusing on how ADM can be introduced in such a way that it meets the requirement of explanation as set out in administrative law, at the standard calibrated by what we legally expect of human explanation.Footnote 15 In contradistinction to much recent literature, which focuses on the right to an explanation solely under the GDPR,Footnote 16 we also consider the better-established traditions of administrative law. With a starting point in Danish law, we draw comparisons to other jurisdictions in Europe to show the common understanding in administrative law across these jurisdictions with regard to ensuring that administrative decisions are explained in terms of the legal reasoning on which they are based.

The chapter examines the explanation requirement by first outlining how the explanation should be understood as a legal explanation rather than a causal explanation (Section 11.2). We dismiss the idea that the legal requirement to explain an ADM-supported decision can be met by, or necessarily implies, mathematical transparency.Footnote 17 To illustrate our point about legal versus causal explanations, we use a scenario based on real-world casework.Footnote 18 Our critique concerns mainly a particular subset of legal decision-making: decisions that are based on written preparation and past case retrieval. These are areas where a large number of similar cases are dealt with and where previous decision-making practice plays an important role in the decision-making process (e.g., land use cases, consumer complaint cases, competition law cases, procurement complaint cases, applications for certain benefits, etc.). This scenario concerns an administrative decision under the Danish law requiring municipalities to provide compensation for loss of earnings to a parent (we will refer to them as Parent A) who provides care to a child with a permanently reduced physical or mental functioning (in particular, whether an illness would be considered ‘serious, chronic or long-term’). The relevant legislative text reads:

Persons maintaining a child under 18 in the home whose physical or mental function is substantially and permanently impaired, or who is suffering from serious, chronic or long-term illness [shall receive compensation]. Compensation shall be subject to the condition that the child is cared for at home as a necessary consequence of the impaired function, and that it is most expedient for the mother or father to care for the child.Footnote 19

We will refer to the example of Parent A to explore explanation in its causal and legal senses throughout.

In Section 11.3, we look at what the explanation requirement means legally. We compare various national (Denmark, Germany, France, and the UK) and regional legal systems (EU law and the European Convention on Human Rights) to show the well-established, human standard of explanation. Given the wide range of legal approaches and the firm foundation of the duty to give reasons, we argue that the requirements attached to the existing standards of explanation are well-tested, adequate, and sufficient to protect the underlying values behind them. Moreover, the requirement enjoys democratic support in those jurisdictions where it is derived from enacted legislation. In our view, ADM can and should be held accountable under those existing legal standards, and we consider it unnecessary for public administration to change this standard or to supplement it with other standards or requirements that would apply to ADM only, rather than to all decision makers, whether human or machine. ADM, in our view, should meet the same minimum explanation threshold that applies to human decision-making. Rather than introducing new requirements designed for ADM, a more dynamic communicative process aimed at citizen engagement with the algorithmic processes employed by the administrative agency in question will be, in our view, more suitable for protecting against the ills of using ADM technology in public administration.

ADM in public administration comes in a wide range of formats: from the use of automatic information processing as one part of basic administrative casework, through semi-automated decision-making, to fully automated decision-making that uses AI to link information about facts to legal rules via machine learning.Footnote 20 While in theory a full spectrum of approaches is possible, and fully automated models have attracted a lot of attention,Footnote 21 in practice most forms of ADM are a type of hybrid system. As a prototype of what a hybrid process that would protect against many of the fears associated with ADM might look like, we introduce a novel solution that we, for lack of a better term, call the ‘administrative Turing test’ (Section 11.4). This test could be used to continually validate and strengthen AI-supported decision-making. As the name indicates, it relies on comparing solely human and algorithmic decisions, and only allows the latter when a human cannot immediately tell the difference between the two. The administrative Turing test is an instrument to ensure that the existing (human) explanation requirement is met in practice. Using this test in ADM systems aims at ensuring the continuous quality of explanations in ADM and at advancing what some research suggests is the best way to use AI for legal purposes – namely, in collaboration with human intelligence.Footnote 22

11.2 Explanation: Causal versus Legal

As mentioned previously, we focus on legal explanation – that is, a duty to give reasons/justifications for a legal decision. This differs from causal explainability, which speaks to an ability to explain the inner workings of a system beyond legal justification. Much of the literature on black-box AI has focused on the perceived need to open up the black box.Footnote 23 We can understand that this may be because it is taken for granted that a human is by default explainable, whereas algorithms in their many forms are not, at least not in the same way. We propose, perhaps counter-intuitively, that even if we take the blackest of boxes, it is the legal requirement of explanation in the form of sufficient reasons that matters for the protection of citizens. It is, in our view, the ability to challenge, appeal, and assess decisions against their legal basis that ensures citizens’ protection. It is not a feature of being able to look into the minutiae of the inner workings of a human mind (its neuronal mechanisms) or a machine (its mathematical formulas). The general call for explainability in AI – often conflated with complete transparency – is not required for the contestation of the decision by a citizen. This does not mean that we think that the quest for transparent ADM should be abandoned. On the contrary, we consider transparency to be desirable, but we see this as a broader and more general issue that links more to overall trust in AI technology as a wholeFootnote 24 rather than something that is necessary to meet the explanation requirement in administrative law.

The requirement of explanation for administrative decisions can be found, in one guise or another, in most legal systems. In Europe, it is often referred to as the ‘duty to give reasons’ – that is, a positive obligation on administrative agencies to provide an explanation (‘begrundelse’ in Danish, ‘Begründung’ in German, and ‘motivation’ in French) for their decisions. The explanation is closely linked to the right to legal remedies. Some research indicates that its emergence throughout history has been driven by the need to enable the citizen affected by an administrative decision to effectively challenge it before a court of law.Footnote 25 This, in turn, required the provision of sufficient reasons for the decision in question: both towards the citizen, who as the immediate recipient should be given a chance to understand the main reasoning behind the decision, and towards the judges, who will be charged with examining the legality of the decision in the event of a legal challenge. The duty to give reasons has today become a self-standing legal requirement, serving a multitude of other functions beyond ensuring effective legal remedies, such as ensuring better clarification, consistency, and documentation of decisions, self-control by the decision-makers, internal and external control of the administration as a whole, as well as general democratic acceptance and transparency.Footnote 26

The requirement to provide an explanation should be understood in terms of the law that regulates the administrative body’s decision in the case before it. It is not a requirement that any kind of explanation must or should be given, but rather a specific kind of explanation. This observation has a bearing on the kind of explanation that may be required for administrative decision-making relying on algorithmic information analysis as part of the process towards reaching a decision. Take, for instance, our example of Parent A. An administrative body issues a decision to Parent A in the form of a rejection explaining that the illness the child suffers from does not qualify as serious within the meaning of the statute. The constituents of this explanation would generally cover a reference to the child’s disease and the qualifying components of the category of serious illness being applied. This could be, for example, a checklist system of symptoms or a reference to an authoritative list of formal diagnoses that qualify, combined with an explanation of the differences between the applicant’s disease and those categorised as applicable under the statute. In general, the decision to reject the application for compensation of lost income would explain the legislative grounds on which the decision rests, the salient facts of the case, and the most important connection points between them (i.e., the discretionary or interpretive elements that are attributed weight in the decision-making process).Footnote 27 It is against this background that the threshold for what an explanation requires should be understood.

In a human system, at no point would the administrative body be required to describe the neurological activity of the caseworkers who have been involved in making the decision in the case. Nor would they be required to provide a psychological profile and biography of the administrator involved in making the decision, giving a history of the vetting and training of the individuals involved, their educational backgrounds, or other such information, to account for all the inputs that may have been explicitly or implicitly used to consider the application. When the same process involves an ADM system, must the explanation open up the opaqueness of its mathematical weighting? Must it provide a technical profile of all the inputs into the system? We think not. In the case of a hybrid system with a human in the loop, must the administrators set out – in detail – the electronic circuits that connect the computer keyboard to the computer hard drive and the computer code behind the text-processing program used? Must the explanation describe the interaction between the neurological activity of the caseworker’s brain and the manipulation of keyboard tabs leading to the text being printed out, first on a screen, then on paper, and finally sent to the citizen as an explanation of how the decision was made? Again, we think not.

The examples provided illustrate the point that causal explanation can be both insufficient and superfluous. Even though it may be empirically fully accurate, it does not necessarily meet the requirement of legal explanation. It gives an explanation – but it likely does not give the citizen the explanation he or she is looking for. The problem, more precisely, is that the explanation provided by causality does not, in itself, normatively connect the decision to its legal basis. It is, in other words, not possible to see the legal reasoning leading from the facts of the case and the law to the legal decision, unless, of course, such legal reasoning is explicitly coded in the algorithm. The reasons that make information about the neurological processes inside the brains of decision-makers irrelevant to the legal explanation requirement are the same ones that can make information about the algorithmic processes in an administrative support system similarly irrelevant. This is not as controversial a position as it might seem at first glance.

Retaining the existing human standard for explanation, rather than introducing a new standard devised specifically for AI-supported decision-making, has the extra advantage that the issuing administrative agency remains fully responsible for the decision no matter how it has been produced. From this it also follows that the administrative agency issuing the decision can be queried about the decision in ordinary language. This assures that the rationale behind the explanation requirement is respected, even if the decision has been arrived at through some algorithmic calculation that is not transparent. If the analogy comparing algorithmic processes to human neurology or psychological history is apt, then requiring algorithmic transparency in legal decisions that rely on AI-supported decision-making would fail to address the explanation requirement at the right level. Much in line with Rahwan et al., who argue for a new field of research – the study of machine behaviour akin to human behavioural researchFootnote 28 – we argue that the inner workings of an algorithm are not what is in need of explanation but, rather, the human interaction with the output of the algorithm and the biases that lie in the inputs. What is needed is not that algorithms should be made more transparent, but that the standard for intelligibility should remain undiminished.

11.3 Explanation: The Legal Standard

A legal standard for the explanation of administrative decision-making exists across all main jurisdictions in Europe. We found, when looking at different national jurisdictions (Germany, France, Denmark, and the UK) and regional frameworks (EU law and European Human Rights law), that explanation requirements differ slightly among them but still hold as a general principle that never requires the kind of full transparency sometimes advocated for. While limited in scope, the jurisdictions we investigated include a variety of different legal cultures across Europe at different stages of developing digitalised administrations (i.e., both front-runners and late-comers in that process). They also diverge on how they address explanation: in the form of a general duty in administrative law (Denmark and Germany) or a patchwork of specific legislation and procedural safeguards, partly developed in legal practice (France and the UK). Common to all jurisdictions is that the legal requirement placed on administrative agencies to provide reasons for their decisions has a threshold level (minimum requirement) that is robust enough to ensure that, if black box technology is used as part of the decision-making process, recipients will not be any worse off than if decisions were made by humans only. In the following discussion, we will give a brief overview of how the explanation requirement is set out in various jurisdictions.Footnote 29

In Denmark, the Act on Public Administration contains a section on explanation (§§22-24).Footnote 30 In general, the explanation can be said to entail that the citizen to whom the decision is directed must be given sufficient information about the grounds of the decision. This means that the explanation must fully cover the decision and not just explain parts of it. The explanation must also be truthful and in that sense correctly set forth the grounds that support the decision. Explanations may be limited to stating that some factual requirement in the case is not fulfilled. For example, in our Parent A example, perhaps a certain age has not been reached, a doctor’s certificate has not been provided, or a spouse’s acceptance has not been delivered in the correct form. Explanations may also take the form of standard formulations that are used frequently in the same kind of cases, but the law always requires a certain level of concreteness in the explanation, linked to the specific circumstances of the case and the decision being made. It does not seem possible to formulate any specific standards with regard to how deep or broad an explanation should be in order to fulfil the minimum requirement under the law. The requirement is generally interpreted as meaning that explanations should reflect the most important elements of the case relevant to the decision. Similarly, in Germany, the general requirement to explain administrative decisions can be found in the Administrative Procedural Code of 1976.Footnote 31 Generally speaking, every written (or electronic) decision requires an explanation or a ‘statement of grounds’; it should outline the essential factual and legal reasons that gave rise to the decision.

Where there is no general requirement of explanation,Footnote 32 we found – in the absence of an overarching administrative duty – a duty to give reasons operating as a procedural safeguard. For example, French constitutional law does not by itself impose a general duty on administrative bodies to explain their decisions. Beyond sanctions of a punitive character, however, administrative decisions need to be reasoned, as provided by a 1979 statuteFootnote 33 and the 2016 Code des Relations entre le Public et l’Administration (CRPA). The CRPA requires a written explanation that includes an account of the legal and factual considerations underlying the decision.Footnote 34 The rationale behind the explainability requirement is to strengthen transparency and trust in the administration, and to allow for its review and challenge before a court of law.Footnote 35 Similarly, in the UK, a recent study found that, contrary to many statements and even in the absence of a general duty, in most cases ‘the administrative decision-maker being challenged [regarding a decision] was under a specific statutory duty to compile and disclose a specific statement of reasons for its decision’.Footnote 36 This research is echoed by Jennifer Cobbe, who found that ‘the more serious the decision and its effects, the greater the need to give reasons for it’.Footnote 37

In the UK, as in the countries discussed above, there are ample legislative safeguards that specifically call for reason-giving. What is normally at stake is the adequacy of the reasons that are given. As Marion Oswald has pointed out, the case law in the UK has a significant history of spelling out what is required when giving reasons for a decision.Footnote 38 As she recounts from Dover District Council, ‘the content of [the duty to give reasons] should not in principle turn on differences in the procedures by which it is arrived at’.Footnote 39 What is paramount in the UK conception is not a differentiation between man and machine but the ability, resting on enshrined and tested principles, to mount a meaningful appeal: ‘administrative law principles governing the way that state actors take decisions via human decision-makers, combined with judicial review actions, evidential processes and the adversarial legal system, are designed to counter’ any ambiguity in the true reasons behind a decision.Footnote 40

The explanation requirement in national law is echoed and further hardened in the regional approaches. For instance, Art. 41 of the Charter of Fundamental Rights of the European Union (CFR) from 2000 provides for a right to good administration, under which all unilateral acts that generate legal consequences – and qualify for judicial review under Art. 263 TFEU – require an explanation.Footnote 41 It must ‘contain the considerations of fact and law which determined the decision’.Footnote 42 Perhaps the most glaring difference that would arise between automated and non-automated scenarios is the direct application of Art. 22 of the General Data Protection Regulation (GDPR), which applies specifically to ‘Automated individual decision making, including profiling’. Art. 22 stipulates that a data subject ‘shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’,Footnote 43 unless it is prescribed by law with ‘sufficient safeguards’ in place,Footnote 44 or based on ‘direct consent’.Footnote 45 These sufficient safeguards range from transparency in the input phase (informing and obtaining consent) to the output-explanation phase (review of the decision itself).Footnote 46 The GDPR envisages this output phase in the form of external auditing through Data Protection Authorities (DPAs), which have significant downsides in terms of effectiveness and efficiency.Footnote 47 Compared to this, we find the explanation standard in administrative law to be much more robust, for it holds administrative agencies to a standard of intelligibility irrespective of whether they use ADM or not. Furthermore, under administrative law, the principle applies that the greater a decision’s interference with the recipient’s life, the greater the need to give reasons in justification of that decision. Likewise, the greater the discretionary power of the decision maker, the more thorough the explanation has to be.Footnote 48 Focusing on the process by which a decision is made rather than the gravity of its consequences seems misplaced. By holding on to these principles, the incentive should be to develop ADM technology that can be used under this standard, rather than to invent new standards that fit existing technologies.Footnote 49

ADM in public administration does not and should not alter existing explanation requirements. The explanation required is no different because the decision is algorithmically supported. The duty of explanation, although constructed differently in different jurisdictions, provides a robust foundation across Europe for ensuring that decision-making in public administration remains comprehensible and challengeable, even when ADM is applied. What remains is to ask how ADM could be integrated into the decision-making procedures of a public authority so as to ensure this standard.

11.4 Ensuring Explanation through Hybrid Systems

Introducing a machine-learning algorithm in public administration and using it to produce drafts of decisions rather than final decisions to be issued immediately to citizens, we suggest, would be a useful first step. In this final section of the chapter, we propose an idea that could be developed into a proof of concept for how ADM could be implemented in public authorities to support decision-making.

In contemporary public administration, much drafting takes place using templates. ADM could be coupled to such templates in various ways. Different templates require different kinds of information. Such information could be collected and inserted into the template automatically, as choices are made by a human about what kind of information should be filled into the template. Another way is to rely on automatic legal information retrieval. Human administrators often look to previous decisions of the same kind as inspiration for deciding new cases. Such processes can be labour intensive, and caseworkers within the same public authority may not all have the same skills in finding a relevant former decision. Natural Language Processing technology may be applied to automatically retrieve relevant former decisions, if the authority’s decisions are available in electronic form in a database, as sketched below. This requires, of course, that the data the algorithm is learning from is sufficiently large and that the decisions in the database are generally considered to still be relevant ‘precedent’Footnote 50 for new decisions. Algorithmically learning from historical cases and reproducing their language in new cases by connecting legal outcomes to given fact descriptions is not far from what human civil servants would do anyway: whenever a caseworker is attending to a new case, he or she will seek out former cases of the same kind to use as a compass indicating how the new case should be decided.
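As a purely illustrative sketch of such retrieval – assuming a hypothetical corpus of past decisions stored as plain text, and using off-the-shelf TF-IDF similarity from scikit-learn rather than any system actually deployed in a public authority – relevant former decisions could be ranked against a new case description roughly as follows:

```python
# Illustrative sketch only: ranking past decisions by textual similarity
# to a new case description. The corpus and case text are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_decisions = [
    "Compensation granted: child's chronic illness requires care at home ...",
    "Application rejected: illness not considered serious or long-term ...",
    "Compensation granted: substantially and permanently impaired function ...",
]
new_case = "Parent applies for loss of earnings; child diagnosed with a long-term illness ..."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(past_decisions + [new_case])

# Similarity between the new case (last row) and each past decision.
similarities = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
for idx in similarities.argsort()[::-1]:
    print(f"{similarities[idx]:.2f}  {past_decisions[idx][:60]}")
```

A production system would of course need far larger corpora and more sophisticated language models, but the basic retrieval step is of this kind.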

One important difference between a human and an algorithm is that humans have the ability to respond more organically to past cases because they have a broader horizon of understanding: they are capable of contextualising their understanding of the task to a much richer extent than algorithms, and humans can therefore adjust their decisions to a broader spectrum of factors – including ones that are hidden from the explicit legislation and case law that applies to the case at hand.Footnote 51 Resource allocation, policy signals, and social and economic change are examples of this. This human contextualisation of legal text is precisely what explains why new practices sometimes develop under the same law.Footnote 52 Algorithms, on the other hand, operate without such context and can only relate to explicit texts. Hence they cannot evolve in the same way. Paradoxically, then, having humans in the legal loop serves the purpose of relativising strict rule-following by allowing sensitivity to context.

This limited contextualisation of algorithmic ‘reasoning’ will create a problem if all new decisions are drafted on the basis of a machine learning algorithm that reproduces the past, and if those drafts are subjected to only minor or no changes by their human collaborator.Footnote 53 Once the initial learning stage is finalised and the algorithm is used in output mode to produce decision drafts, new decisions will be produced in part by the algorithm. One of two situations may then occur. One, the new decisions are fed back into the machine-learning stage; in this case, a feedback loop is created in which the algorithm is fed its own decisions.Footnote 54 Or, two, the machine-learning stage is blocked after the initial training phase; in this case, every new decision is based on what the algorithm picked up from the original training set, and the output from the algorithm will remain statically linked to this increasingly old data set. Neither of these options is, in our opinion, optimal for maintaining an up-to-date algorithmic support system.

There are good reasons to think that a machine learning algorithm will only keep performing well in changing contexts (performance here being measured by the algorithm’s ability to issue usable drafts of good legal quality) if it is constantly maintained with fresh input that reflects those changing contexts. This can be done in a number of different ways, depending on how the algorithmic support system is implemented in the overall organisation of the administrative body and its procedures for issuing decisions. As mentioned previously, our focus is on models that engage AI and human collaboration. We propose two such models for organising algorithmic support in an administrative system aimed at issuing decisions; we think these models are particularly helpful because they address the need for intelligible explanations under the legal standard outlined above.

In our first proposed model, the caseload in an administrative field that is supported by ADM assistance is randomly split into two loads, such that one load is fed to an algorithm for drafting and the other load is fed to a human team, also for drafting. Drafts from both algorithm and humans are subsequently sent to a senior civil servant (say, a head of office), who finalises and signs off on the decisions. All final decisions are pooled and used to regularly update the algorithm.

By having an experienced civil servant interact with algorithmic drafting in this way, and feeding decisions, all checked by human intelligence, back into the machine-learning process, the algorithm will be kept fresh with new original decisions, a percentage of which will have been written by humans from scratch. The effect of splitting the caseload and leaving one part to go through a ‘human only’ track is that the previously mentioned sensitivity to broader contextualisation is fed back into the algorithm, allowing a development in the case law that could otherwise not happen. To use our Parent A example as an illustration: over time, it might be that new diseases and new forms of handicap are identified or recognised as falling under the legislative provision because they are being diagnosed differently. If every new decision is produced by an ADM system that is not updated with new learning on cases that reflect this kind of change, then the system cannot evolve to take the renewed diagnostic practices into account. To avoid this ‘freezing of time’, a hybrid system in which the ADM is constantly being surveyed and challenged is necessary. Furthermore, if drafting is kept anonymous and all final decisions are signed off by a human, recipients of decisions (like our Parent A) may not know how their decision was produced. Still, the explanation requirement assures that recipients can at any time challenge the decision by inquiring further into the legal justification.Footnote 55 We think this way of introducing algorithmic support for administrative decisions, sketched schematically below, could advance many of the efficiency and consistency (equality) gains sought by introducing algorithmic support systems, while preserving the legal standard for explanation.
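A minimal sketch of this first model’s workflow – with placeholder drafting functions standing in for the ADM component and the human team, since the chapter does not specify any concrete implementation – might look as follows:

```python
# Illustrative sketch of the first hybrid model: random caseload split,
# human sign-off on every decision, pooled decisions for later retraining.
import random

def adm_draft(case):          # placeholder for the ADM drafting component
    return f"ADM draft for {case}"

def human_draft(case):        # placeholder for drafting by the human team
    return f"Human draft for {case}"

def senior_finalise(case, draft):   # senior civil servant edits and signs off
    return f"Final decision for {case}, based on: {draft}"

def process_caseload(cases, human_share=0.5):
    finalised = []
    for case in cases:
        draft = human_draft(case) if random.random() < human_share else adm_draft(case)
        finalised.append(senior_finalise(case, draft))
    return finalised          # pooled decisions, later used to update the algorithm

new_training_data = process_caseload(["case-101", "case-102", "case-103"])
```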

An alternative method – our second proposed model – is to build into the administrative system itself a kind of continuous administrative Turing test. Alan Turing, in a paper written in 1950,Footnote 56 sought to identify a test for artificial intelligence. The test he devised consisted of a setup in which (roughly explained) two computers were installed in separate rooms. One computer was operated by a person; the other was operated by an algorithmic system (a machine). In a third room, a human ‘judge’ was sitting with a third computer. The judge would type questions on his computer, and the questions would then be sent to both the human and the machine in the two other rooms for them to read. They would then in turn write replies and send those back to the judge. If the judge could not identify which answers came from the person and which came from the machine, then the machine would be said to have shown the ability to think. A model of Turing’s proposed experimental setup is seen in Figure 11.1:

Figure 11.1 Turing’s experimental setup

Akin to this, an administrative body could implement algorithmic decision support in a way that would imitate the setup described by Turing. This could be done by giving it to both a human administrator and an ADM. Both the human and the ADM would produce a decision draft for the same case. Both drafts would be sent to a human judge (i.e., a senior civil servant who finalizes and signs off on the decision). In this setup, the human judge would not know which draft came from the ADM and which came from the human,Footnote 57 but would proceed to finalize the decision based on which draft was most convincing for deciding the case and providing a satisfactory explanation to the citizen. This final decision would then be fed back to the data set from which the ADM system learns.
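The blind-review step of this second model can be sketched in the same illustrative spirit (again with placeholder functions rather than an actual implementation):

```python
# Illustrative sketch of the administrative Turing test: the senior reviewer
# sees two drafts of the same case without knowing which one the ADM produced.
import random

def administrative_turing_step(case, adm_draft, human_draft, senior_review):
    drafts = [adm_draft(case), human_draft(case)]
    random.shuffle(drafts)                 # blind the reviewer to each draft's origin
    chosen = senior_review(case, drafts)   # reviewer picks the more convincing draft
    final_decision = f"Final decision for {case}, based on: {chosen}"
    return final_decision                  # fed back into the ADM's training data
```

If the reviewer systematically prefers the human drafts, or can reliably tell the drafts apart, that is itself a signal that the ADM component is not yet good enough to be relied on.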

The two methods described previously are both hybrid models and can be used either alone or in combination to assure that ADM models are implemented in a way that is both productive (because drafting is usually a very time-consuming process) and safe (even if not mathematically transparent), because there is a human overseeing the final product and continuous human feedback into the data set from which the ADM system learns. Moreover, using this hybrid approach helps overcome the legal challenges that a fully automated system would face from both EU law (GDPR) and some domestic legislation.

11.5 Conclusion

Relying on the above models keeps the much-sought-after ‘human in the loop’ and does so in a way that is systematic and meaningful because our proposed models take a specific form: they are built around the idea of continuous human–AI collaboration in producing explainable decisions. Relying on this model makes it possible to develop ADM systems that can be introduced to enhance effectiveness and consistency (equality) without diminishing the quality of explanation. The advantage of our model is that it allows ADM to be continuously developed and fitted to the legal environment in which it is supposed to serve. Such an approach may have further advantages. Using ADM for legal information retrieval allows for analysis across large numbers of decisions that have been handed down across time. This could grow into a means of better detecting hidden biases and other structural deficiencies that would otherwise not be discoverable. This approach may help allay the fears of the black box.

In terms of control and responsibility, our proposed administrative Turing test allows for a greater scope of review of rubber-stamp occurrences, since a human arbiter is able to compare differences between purely human and purely machine decisions. The model may therefore also help in addressing the concern raised about ‘retrospective justifications’.Footnote 58 Because decisions in the setup we propose are produced in collaboration between ADM and humans, the decisions issued are likely to be more authentic than either pure ADM or pure human decision-making, since the use of ADM allows for a more efficient and comprehensive inclusion of existing decision-making practice as input to new decision-making through automated information retrieval and recommendation. With reference to human dignity, our proposed model retains human intelligibility as the standard for decision-making. The proposed administrative Turing model also continually adds new information into the system and undergoes a level of supervision that can protect against failures frequently associated with ADM systems. Applying the test developed in this chapter to develop a proof of concept for the implementation of ADM in public administration today is the most efficient way of overcoming the weaknesses of purely human decision-making tomorrow.

ADM does not solve the inequalities built into our societal and political institutions, nor is it their original cause. There are real questions to be asked of our systems, and we would rather not bury those questions with false enemies. To rectify those inequalities, we must be critical of our human failings and not hold hostage the principles we have developed to counter injustice. If those laws are deficient, it is not the fault of a new technology. We are, however, aware that this technology can not only reproduce but even heighten injustice if it is used thoughtlessly. But we would also like to flag that the technology offers an opportunity to bring legal commitments like the duty of explanation up to a standard that is demanded by every occurrence of injustice: a human-based standard.

12 International Race for Regulating Crypto-Finance Risks A Comprehensive Regulatory Framework Proposal

Yaiza Cabedo Footnote *
12.1 Regulatory Responses to Financial Innovation from a Regulatory Competition Perspective

States are in continuous competition to attract business, wealth and innovation through the quality of their administration and courts and their capacity to provide specialised, innovative and efficient regulatory solutions that ensure a level playing field and an adequate level of protection for their citizens.Footnote 1 In this international regulatory race, the US legal system was a pioneer in regulating new rights, such as civil rights, women’s rights, environmental protections or traffic safety rights – all successful regulatory innovations that other countries imported. The US administrative model was inspired by German and English administrative law principles and, at a later time, the US arrangement between the fifty states and the federal government in turn inspired the functioning of the European Union and globalisation, through what we call the globalisation of law phenomenon.Footnote 2

The European Union (EU), with its regulatory initiatives and the development of its own process of regional and global integration, also became progressively an essential element of global checks and balances, able to correct and prevent distortions of US legal and federal principles, such as antitrust law and the control of monopolies, deeply entrenched in the political and legal tradition of economic federalism.Footnote 3 The European Commissioner for Competition, Vestager, and the antitrust case against Google illustrate the EU acting as a countervailing power to limit malpractice by US companies.Footnote 4

One of the most potent administrative innovations in the United States since its Constitution is the independent regulatory agency (or authority as it is referred to in the EU). While the ‘Constitution was designed to make lawmaking cumbersome, representative, and consensual[,] the regulatory agency was a workaround, designed to make lawmaking efficient, specialized, and purposeful’ with fewer internal hierarchy conflicts and with pre-ordained missions.Footnote 5

Wilson’s presidency in the United States laid the foundations for an innovative decentralised system of independent regulatory agencies; the Massachusetts Board of Railroad Commissioners (1869) was the first of its kind. The Commission was formed to request information and issue recommendations without holding any enforcement power yet with capacity for publicity and admonition, which proved to be a more powerful antidote for corruption than force and compulsion.Footnote 6 This system was reproduced at state and federal levels and across sectors, creating a new regulatory model (e.g., the Federal Trade Commission, created in 1914, or the Federal Reserve, created in 1913).Footnote 7

President Roosevelt, when reforming financial markets after the 1929 crash, created the Federal Deposit Insurance Corporation in 1933 and the Securities and Exchange Commission (SEC) in 1934. Similarly, President Obama, after the 2008 crisis caused by the deregulation of over-the-counter (OTC) markets, expanded the powers of the SEC and the Commodity Futures Trading Commission (CFTC) and set up the Consumer Financial Protection Bureau (CFPB) for the protection of financial consumers as part of his Dodd-Frank Act reform package.Footnote 8

In the EU, the 2008 financial crisis fostered the creation of supranational and very specialised administrations for the early detection and prevention of financial risks, bodies less bureaucratised than the three EU co-legislatorsFootnote 9 and able to adapt quickly to new market challenges. The Single Resolution Board or the three European Supervisory Authorities – the European Securities and Markets Authority (ESMA), in charge of regulation and supervision of securities and financial markets, the European Banking Authority (EBA), for the supervision of banking entities, and the European Insurance and Occupational Pensions Authority (EIOPA) – are good examples. At the same time, the post-crisis reform also reinforced the EU decentralised regulatory model for financial markets, expanding the scope of action of each EU Member State’s independent regulatory agencies for the surveillance and regulation of financial products and markets.

From an international regulatory competition perspective, the system of independent regulatory agencies is a solid structure to enable countries to anticipate responses to risks and opportunities stemming from financial innovation and technological developments such as crypto-finance. Countries with the most advanced regulatory framework and most efficient and specialised regulatory bodies and courts will attract crypto-finance businesses and investors. ESMA’s advice to the European Commission on ICOs and cryptocurrencies points out this competition between two financial blocs – the European Union and the United States – which may not be on the same page, with the European Union seeing mostly risks for regulators, investors and markets, and the United States being more open to the blockchain technology and crypto-assets.Footnote 10

Indeed, states, far from playing a passive supervisory role, can and do act as precursors and innovation pioneers. Moreover, states can go well beyond the mere race to attract business and rather contribute to generating new markets.Footnote 11 Crypto-finance is yet another example of state-driven innovation: one of the key technological components of blockchain, the unique ‘fingerprint’ or hashFootnote 12 of each block of information in the chain, is generated using standard cryptographic hashing functions invented by the US National Security Agency,Footnote 13 an administration whose research is financed with public funds.

Ultimately, economic development and financial stability depend on states’ capacity to anticipate needs and prevent emerging risks by reaching innovative solutions. DLT systems such as blockchain, thanks to their immutability of records, traceability and transparency, offer potential enhancements of legal, financial and administrative processes for private companies and also for governments.Footnote 14 However, this transition to DLT-based systems requires new regulatory actors and legal changes. In this regulatory race, states can choose to join a race to the top and use these technologies to compete in excellence or, on the contrary, go for a race to the bottom and compete in lenient and more permissive regulatory frameworks.

Innovative financial markets have always been a challenge and an opportunity for regulators from a competitive and regulatory perspective. The last paradigmatic example of a transformation of financial markets driven by the combination of financial innovation and a lack of specific regulation or specialised surveillance bodies occurred with the rise of OTC derivatives markets, which consequently put global financial stability at risk,Footnote 15 at a cost of trillions of dollars for taxpayers around the world.Footnote 16

12.2 The Unregulated OTC Derivative Markets and the TBTFFootnote 17: Lessons from a Regulatory Race to the Bottom

In 1933, after the 1929 crash, Roosevelt introduced a package of regulatory measures to reform financial markets and increase their transparency and resilience. In addition, the SEC was created as a specialised independent regulatory agency for the surveillance and regulation of securities markets, and the Securities Exchange Act was enacted to regulate securities transactions, laying the foundations for the prosecution of insider trading. The SEC’s A-1 form, the first disclosure document introduced, required issuers of stocks to provide

a narrative description of their businesses, details of corporate incorporation, management, properties, capital structure, terms of outstanding debt, the purpose of the new issue and associated expenses. It also demanded disclosure of topics not contained in listing applications, including management’s compensation, transactions between the company and its directors, officers, underwriters and promoters, a list of principal shareholders and their holdings and a description of any contracts not made in the ordinary course of business.Footnote 18

The SEC’s success inspired the creation in 1974 of the CFTC, another specialised independent regulatory agency for the surveillance and regulation of futures markets.

Roosevelt’s reform introduced principles for a regulated, more transparent and accountable capitalism, which provided financial stability and are still applicable today. However, starting from the late eighties in the UKFootnote 19 and the mid- to late nineties in the United States, new private markets in the form of OTC derivative markets emerged without administrative or judicial surveillance, introducing innovative and highly risky financial instruments that allowed betting on the future value of any underlying asset (stocks, interest rates, currencies, etc.). These OTC markets grew exponentially after 2000, reaching $680 trillion of notional value in 2008Footnote 20 and becoming an epicentre of systemic risk,Footnote 21 with New York and London concentrating 90 per cent of the market. This market transformation and its dramatic growth were possible due to a deregulatory race-to-the-bottom strategy.

In 1999, in the United States, the Gramm-Leach-Bliley Act removed restrictions that prevented deposit-taking entities from acting as investment banks.Footnote 22 In 2000, the Commodity Futures Modernization Act permitted corporations other than banks to trade as investment banks. In addition, it was established that the regulatory and surveillance powers of the SEC and the CFTC would not apply to OTC derivatives markets. Indeed, the disclosure and identification requirements for regulated markets (stocks and futures) did not apply in OTC derivative markets, and instruments and behaviours that would have been considered a crime on Wall Street and in any other regulated market, such as insider trading, were not prosecuted in OTC markets. Another restriction on banks’ power, limiting the territorial scope of their banking services,Footnote 23 was also lifted and generated a massive wave of mergers among financial institutions. While in 1970 12,500 small banks held 46 per cent of total US banking assets, by 2010 more than 7,000 small banks had disappeared and the few small banks still running represented only 16 per cent of all US banking assets.Footnote 24 This is how banks became TBTF,Footnote 25 so big and powerful that they could easily capture the system – either through revolving doors or through information asymmetry (releasing only technical information favourable to their interests)Footnote 26 – and they succeeded in keeping regulators away.

In the absence of administrative regulation and surveillance of OTC markets, the major OTC derivatives market players created the International Swaps and Derivatives Association (ISDA),Footnote 27 which became the standard setter in OTC derivative markets, providing standardised documentation for OTC transactions and proving able to persuade governments to keep OTC markets self-regulated. As ISDA’s Chair said at the time, ‘Markets can correct excess far better than any government. Market discipline is the best form of discipline there is.’Footnote 28

After the 2008 financial crisis, the Special Report of the United States Congressional oversight panel concluded:

After fifty years without a financial crisis – the longest such stretch in the nation’s history – financial firms and policy makers began to see regulation as a barrier to efficient functioning of the capital markets rather than a necessary precondition for success. This change in attitude had unfortunate consequences. As financial markets grew and globalised, often with breath-taking speed, the US regulatory system could have benefited from smart changes. But deregulation and the growth of unregulated, parallel shadow markets were accompanied by the nearly unrestricted marketing of increasingly complex consumer financial products that multiplied risk at every stratum of the economy, from the family level to the global level. The result proved disastrous.Footnote 29

The regulatory response to prevent this from happening again was to regulate for disclosure, with independent agencies and specialised regulation for OTC derivatives. International leaders agreed at the 2009 Pittsburgh Summit on a decentralised international regulatory framework; in the United States the Dodd-Frank Act (2010), and in the EU the European Market Infrastructure Regulation (2012), mandated the use of a Legal Entity Identifier, or LEI (similar to an ID), for the identification of the parties to an OTC derivative contract and imposed the obligation to report and make visible to competent authorities all OTC derivative transactions taking place in the market. In addition, systemic risk controls were adopted internationally, such as the clearing obligation for standardised OTC products and the requirement to provide guarantees when transacting OTC derivatives bilaterally.Footnote 30

Initiatives for standardised transactional documentation for crypto-finance, such as the Simple Agreement for Future Tokens (SAFT), are being developed by market participants. Regulators should not miss the opportunity to engage from the start, to introduce checks and balances and further develop specialised knowledge, while providing legal and contractual certainty to investors.

An argument used to advocate for self-regulation in OTC derivative markets was complexity. New technological developments such as blockchain and crypto-finance are also highly complex systems. As Supreme Court Justice Louis Brandeis warned a century ago:

Business men have been active in devising other means of escape from the domain of the courts, as is evidenced by the widespread tendency to arbitrate controversies through committees of business organisations. An inadequate Remedy. The remedy so sought is not adequate, and may prove a mischievous one. What we need is not to displace the courts, but to make them efficient instruments of justice; not to displace the lawyer, but to fit him for his official or judicial task. And indeed, the task of fitting the lawyer and the judge to perform adequately the functions of harmonising law with life is a task far easier of accomplishment than that of endowing men, who lack legal training, with the necessary qualifications.Footnote 31

The emergence of new and innovative financial markets is an opportunity to apply lessons learned and prevent abuses arising from new and sophisticated crypto-assets. In addition, there is an increasing presence of tech giants in payment systems and crypto markets that will require new regulatory solutions. Big tech companies (e.g., Alibaba, Amazon, Facebook, Google and Tencent) have the potential to become systemically relevant financial institutions very quickly; their business model builds on their large user bases’ data to offer a range of financial services that exploit natural network effects, generating further user activity.Footnote 32 The Economist warns they can be too BAADD (big, anti-competitive, addictive and destructive to democracy),Footnote 33 as they are a data-opolyFootnote 34 with the potential to bring about new forms of tyranny.

12.3 The Emergence of Crypto-finance: A Race to the Top or a Race to the Bottom?

Crypto-finance uses DLT systems such as blockchain to trade assets or ‘crypto-assets’. At its core, blockchain is a decentralised database maintained by a distributed network of computers that use a variety of technologies, including peer-to-peer networks, cryptography and consensus mechanisms. The consensus mechanism is the set of strict rules for validating blocks that makes it difficult and costly for any one party to unilaterally modify the data stored, ensuring the orderly recordation of information and enhancing security.Footnote 35,Footnote 36 Participants in the network are incentivised to proceed according to the protocol by fees, paid by transaction originators, for each block validated. Miners select the unprocessed transactions and engage in computations until the first miner emerges with a valid proof-of-work, which allows that miner to add a block of transactions to the blockchain, collecting the reward fees.Footnote 37 The new blockchain is shared among the network of miners and other users, who verify the proof-of-work, the signatures and the absence of double-spending. If this new blockchain emerges as the consensus version, the majority of miners keep on adding to it.Footnote 38
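To make the hashing and proof-of-work ideas concrete, the following toy sketch chains blocks through their SHA-256 fingerprints and ‘mines’ a block by searching for a nonce whose hash starts with a given number of zeros; it is a deliberately simplified illustration, not a description of any real protocol:

```python
# Toy illustration of block hashing and proof-of-work; vastly simplified.
import hashlib
import json

def block_hash(block):
    # The block's unique 'fingerprint': SHA-256 over its serialised content.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(transactions, previous_hash, difficulty=4):
    nonce = 0
    while True:
        block = {"transactions": transactions,
                 "previous_hash": previous_hash,
                 "nonce": nonce}
        digest = block_hash(block)
        if digest.startswith("0" * difficulty):   # a valid proof-of-work is found
            return block, digest
        nonce += 1

genesis, genesis_digest = mine_block(["coinbase -> miner"], previous_hash="0" * 64)
block1, digest1 = mine_block(["X pays Y 2 tokens"], previous_hash=genesis_digest)
# Tampering with an earlier block changes its hash and breaks every later link,
# which is what makes unilateral modification of the stored data costly.
```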

DLT systems are built upon a cryptographic system that uses a public key, publicly known and essential for identification, and a private key (similar to a password that enables the transfer of assets), which is kept secret and used for authentication and encryption.Footnote 39 Losing this password is equivalent to losing the right to access or move these assets. Blockchains are pseudonymous, and the keys do not reveal a ‘real life’ identity.Footnote 40

How does owner X transfer a crypto-asset to Y? X generates a transaction including X’s and Y’s addresses and signs it with X’s private key (without disclosing the private key). The transaction is broadcast to the entire network, which can verify, thanks to X’s signature, that X has the right to dispose of the crypto-assets at a given address. What makes the system safe is the impossibility of inferring the public key from the address or inferring the private key from the public key. Meanwhile, the entire network can use the public key to check that the signature was generated with the corresponding private key, and hence authenticate a given transaction.Footnote 41
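A minimal sketch of this signature mechanism, using an Ed25519 key pair from the widely used Python `cryptography` library (the ‘address’ derivation shown is a simplification for illustration and does not follow any particular blockchain’s scheme):

```python
# Illustration of signing a transaction with a private key and verifying it
# with the corresponding public key; simplified, not a real wallet.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()      # kept secret by X
public_key = private_key.public_key()           # known to the network

public_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
address = hashlib.sha256(public_bytes).hexdigest()[:40]   # simplified 'address'

transaction = f"from:{address} to:<Y address> amount:2".encode()
signature = private_key.sign(transaction)       # created with X's private key

try:
    public_key.verify(signature, transaction)   # anyone can check with the public key
    print("Transaction authenticated: signed by the holder of the private key.")
except InvalidSignature:
    print("Invalid signature: transaction rejected.")
```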

By combining blockchains with ‘smart contracts’, computer programs that can execute autonomously, people can construct their own systems of rules enforced by the underlying protocol of a blockchain-based network. These systems create order without law and implement what can be thought of as private regulatory frameworks or lex cryptographica.Footnote 42 As CFTC Commissioner Quintenz notes,

Smart contracts are easily customized and are almost limitless in their applicability. For example, individuals could create their own smart contracts to bet on the outcome of future events, like sporting events or elections, using digital currency. If your prediction is right, the contract automatically pays you the winnings.… This could look like what the CFTC calls a ‘prediction market’, where individuals use so-called ‘event contracts’, binary options, or other derivative contracts to bet on the occurrence or outcome of future events [which the CFTC generally prohibits in non-crypto markets].Footnote 43
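The escrow-and-payout logic of such an ‘event contract’ can be sketched in a few lines. The class below is a hypothetical Python illustration of the pattern a smart contract would encode on-chain; in a real deployment the stakes would be locked in crypto-assets, the outcome would be reported by an oracle, and no intermediary could block the payout.

```python
class EventContract:
    """Toy binary event contract: two parties stake funds on opposite outcomes;
    once the outcome is reported, the contract pays the winner automatically."""

    def __init__(self, yes_party: str, no_party: str, stake: int):
        self.parties = {"yes": yes_party, "no": no_party}
        self.pot = 2 * stake      # both stakes are locked in escrow
        self.settled = False

    def settle(self, reported_outcome: str) -> dict:
        """Pay the full pot to the party who predicted the reported outcome."""
        if self.settled:
            raise RuntimeError("contract already settled")
        if reported_outcome not in self.parties:
            raise ValueError("outcome must be 'yes' or 'no'")
        self.settled = True
        return {self.parties[reported_outcome]: self.pot}


# Example: two parties bet one unit each on the outcome of an election
contract = EventContract("alice", "bob", stake=1)
print(contract.settle("yes"))  # {'alice': 2} - paid out automatically, with no intermediary
```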

There are a wide variety of crypto-assets: the ‘investment type’, which has profit rights attached, like equities; the ‘utility type’, which provides some utility or consumption rights; and the ‘payment type’, which has no tangible value beyond the expectation that it will serve as a means of exchange outside its ecosystem – and there are also hybrid types.Footnote 44 Examples range from so-called crypto-currencies like Bitcoin to digital tokens that are issued through Initial Coin Offerings (ICOs). Crypto-finance has been evolving rapidly since Bitcoin was launched in 2009,Footnote 45 and Central Banks are under pressure to improve the efficiency of traditional payment systems.Footnote 46 According to ESMA, as of the end of December 2018, there were more than 2,050 crypto-assets outstanding, representing a total market capitalisation of around EUR 110bn – down from a peak of over EUR 700bn in January 2018. Bitcoin represents half of the total, with the top 5 representing around 75 per cent of the reported market capitalisation.Footnote 47

Blockchain-based finance is taking a bite out of public markets, as it enables parties to sell billions of dollars of cryptographically secured ‘tokens’ – some of which resemble securities – and trade OTC derivatives and other financial products by using autonomous and unregulated code-based exchanges. Moreover, ‘these blockchain-based systems often ignore legal barriers supporting existing financial markets and undercut carefully constructed regulations aimed at limiting fraud and protecting investors’.Footnote 48 Blockchain allows for anonymity in transactional relationships governed solely by the network protocols, where code is law.Footnote 49 Moreover, crypto markets (like OTC derivative markets) are global and can avoid jurisdictional rules by operating transnationally. If not adequately regulated, crypto-finance can be used to circumvent the existing financial regulation and investors’ protection safeguards to commit fraud and engage in money laundering, terrorist financing or other illicit activities.

Besides the obvious differences in the underlying technology, the emergence of crypto-finance represents, from a regulatory perspective, the emergence of a ‘new over-the-counter market’ with, as yet, no specific regulation and no administrative surveillance. Instruments and behaviours that are no longer accepted in stock markets or in OTC derivative markets since their post-crisis reform are found in the new anomic crypto space.

The lessons learnt from the unregulated OTC derivative markets, and from how they became an epicentre of systemic risk, should be applied to crypto-finance: regulating for disclosure and identification, setting up independent regulatory bodies with highly specialised officials, and establishing international coordination plans and mechanisms of checks and balances that strike a careful balance between encouraging digital innovation and addressing the underlying risks.Footnote 50

12.4 Attempts to Regulate Crypto-Assets

The assignment of an object to one category (or none) initiates a whole cascade of further legal consequences, and as not all crypto-assets have the same features, not all of them need the same legal consideration. Crypto-currencies resemble currency in that they are exchanged ‘peer-to-peer’ in a decentralised manner, rather than through the accounting system of a central institution, but are distinguished from currency (i.e., cash) in that they are created, transferred and stored digitally rather than physically; they are issued by a private entity rather than a central bank or other public authority; and they are not ‘legal tender’.Footnote 51

Most regulators’ first steps towards crypto consisted in applying existing regulations by analogy. While the SEC attempts to treat some crypto-assets as securities, Bitcoin and Ether are considered commodities: both the head of the SEC and the chairman of the CFTC have said that Bitcoin and Ether are exempt from the application of securities lawFootnote 52 and that they should be considered commodities under the Commodity Exchange Act.Footnote 53 A recent decision of the commercial court of Nanterre in France (Tribunal de Commerce de Nanterre)Footnote 54 characterises for the first time the legal nature of Bitcoin, treating it as an intangible and fungible asset that is interchangeable – like a grain of rice or a dollar note – implying it has the features of money.Footnote 55 In 2018, in Wisconsin Central Ltd. v. United States, the United States Supreme Court made a passing reference to Bitcoin, implying that Bitcoin is a kind of money; Justice Breyer wrote ‘what we view as money has changed over time.… Our currency originally included gold coins and bullion.… Perhaps one day employees will be paid in Bitcoin or other types of cryptocurrency’.

In relation to the tokens of an ICO, the SEC has been proactive in bringing ICOs within the scope of the Securities Act of 1933, requiring issuers to comply with the extensive regulatory requirements in place when offering securities to the public.Footnote 56 An ICO is a pre-sale of tokens that allows a person to finance the creation of the infrastructure needed to develop an entrepreneurial project. Let’s imagine we want to build a decentralised infrastructure for the storage of data. To finance it, we issue a token. Users seeking storage would be incentivised to buy tokens to exchange them for storage space; other users would be incentivised to provide storage by the prospect of earning tokens. The designer of the infrastructure would not own or control it; rather, it would be collectively run by the users. Nevertheless, providers would have incentives to do a good job – providing storage and maintaining the network – because if they want their tokens to be valuable, they need the network to be useful and well maintained.Footnote 57
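A toy ledger can make this incentive structure concrete. The names, balances and price below are hypothetical, and a real network would record such transfers on-chain rather than in a Python dictionary.

```python
from collections import defaultdict

# Toy utility-token ledger: users spend tokens for storage, providers earn them.
balances = defaultdict(int)
balances.update({"user_a": 10, "provider_p": 0})  # hypothetical initial allocation from the ICO

PRICE_PER_GB = 2  # tokens per gigabyte stored; illustrative value only


def buy_storage(user: str, provider: str, gigabytes: int) -> None:
    """Transfer tokens from a storage user to a storage provider."""
    cost = gigabytes * PRICE_PER_GB
    if balances[user] < cost:
        raise ValueError("insufficient tokens")
    balances[user] -= cost
    balances[provider] += cost


buy_storage("user_a", "provider_p", gigabytes=3)
print(dict(balances))  # {'user_a': 4, 'provider_p': 6}
# Providers are paid in the network's own token, so the token is worth holding
# only if the network remains useful - which is what aligns their incentives.
```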

An ICO is to crypto-finance what an IPO (Initial Public Offering) is to the traditional or mainstream investment world, and both share the purpose of raising capital. However, they are not fully equivalent: in an IPO, a company offers securities to raise capital through middlemen (investment banks, broker dealers, underwriters), while in an ICO, a company offers digital tokens directly to the public. During the ICO boom of 2017 and 2018, nearly 1,000 enterprises raised more than $22 billionFootnote 58 while being largely unregulated. Yet they have also been associated with fraud, failing firms and alarming lapses in information sharing with investors.Footnote 59

The SEC’s investigation and subsequent DAO ReportFootnote 60 in 2017 was the first attempt to address the treatment of ICOs. The DAO (a digital decentralised autonomous organisation with open-source code, and a form of investor-directed venture capital fund) was instantiated on the Ethereum blockchain, had no conventional management structure and was not tied to any particular state, yet its token sale in 2016 set the record for the largest crowdfunding campaign in history. The SEC’s Report argued that the tokens offered by the DAO were securities and that federal securities laws apply to those who offer and sell securities in the United States, regardless of whether (i) the issuing entity is a traditional company or a decentralised autonomous organisation, (ii) those securities are purchased using US dollars or virtual currencies, or (iii) they are distributed in certificated form or through DLT.Footnote 61

Under US law, securities are identified using the ‘Howey Test’, according to the Supreme Court ruling in SEC v. Howey,Footnote 62 which established that a security is a contract involving ‘an investment of money in an enterprise with a reasonable expectation of profits to be derived from the entrepreneurial or managerial efforts of others’. Presumably, an investor buys tokens expecting an increase in value; however, the reasonable expectation of profits derived from the efforts of others is more complex to analyse, as it varies case by case.Footnote 63

In the EU, the definition of securities is less straightforward: the term is defined differently across EU languages, against the background of national legal systems. Even harmonised definitions of securities, such as those found in MiFID, the Market Abuse Directive 2003/6/EC and the Prospectus Directive 2003/71/EC, appear susceptible to different interpretations among Member States.Footnote 64

The Paragon and Airfox ICOsFootnote 65 were the first cases in which the SEC imposed civil penalties for violation of the rules governing the registration of securities. Both issuers settled the charges and agreed to return funds to harmed investors, register the tokens as securities, file periodic reports with the SEC and pay penalties of $250,000 each. The SEC also initiated an inquiry into the ICO launched by Kik Interactive (which owns the messaging app Kik, with over 300 million users worldwideFootnote 66), which raised $100 millionFootnote 67 in 2017 selling a crypto-asset called Kin.Footnote 68 Instead of settling, Kik responded to the SEC by defending Kin as a currency or ‘utility token’, designed as a medium of exchange within Kin’s ecosystem, and citing that currencies are exempted from securities regulation. However, SEC regulators are seeking an early summary judgment against the firm, arguing the company was aware it was issuing securities and had also assured investors the tokens could be easily resold. This case is relevant because, if Kik maintains its position, the final decision would further clarify the boundary between securities and currencies.

Despite the need for specific regulation of crypto-assets, fraud remains fraud regardless of the underlying technology. In the action against ‘Shopin’,Footnote 69 the SEC alleged that the issuer, Shopin, and its CEO conducted a fraudulent and unregistered offering of digital securities, in which tokens would raise capital to build personal online shopping profiles that would track customers’ purchase histories across numerous online retailers and link those profiles to the blockchain. However, Shopin allegedly never had a functioning product, and the company’s pivot to the blockchain resulted only from its struggles to stay in business as a non-blockchain business.Footnote 70

Qualifying crypto-tokens as securities, instead of working on customised regulatory solutions for crypto-assets, risks failing to provide an adequate level of protection. In a decentralised model, where the entrepreneur does not aim to keep control over the network but rather builds it in order to release it, requiring the issuer to furnish financial statements and risk factors about the enterprise to potential investors (as for securities) achieves little: those financial statements will show only some expenses and no revenues for the first quarters and, once the infrastructure is built, nothing at all, which does not serve the purpose of protecting investors.Footnote 71

12.5 Proposal for a Comprehensive Administrative Framework for Crypto-Finance
12.5.1 Specialised Regulation for Crypto-Finance

As illustrated by the cases presented and by financial regulators’ attempts to bring crypto-assets under existing regulations, new financial products and new forms of fraud and abuse involving crypto-assets justify a renewed demand (as arose after the stock market crash of 1929 and the 2008 financial crisis) for specialised crypto regulation and preventive action that enables investors to make better-informed capital allocation decisions and reduces their vulnerability to wrongdoers.

Regulatory action requires a full understanding of the specific characteristics of financial products based on DLT systems. Moreover, the determinants of utility token prices are not the same as those of traditional securities like stocks and bonds, and therefore disclosure requirements designed for traditional securities fail to provide the kind of information an investor needs when investing in crypto-assets. It is in the general interest to set standards for the quality of that information, making investors less vulnerable to scams and allowing them to decide on the basis of economic fundamentals rather than being driven by factors such as popularity and social media marketing, as academic studies show is the case for ICO investment.Footnote 72

Designing a specialised disclosure framework that considers the specific characteristics of crypto-finance requires more than just extending an existing regulatory regime to a new asset class, but it does not require starting from scratch. One of the key questions when designing regulation for crypto is how to identify who is responsible for ensuring that activity on the blockchain complies with the law. As CFTC Commissioner Brian Quintenz notes,Footnote 73 in the past the CFTC has supervised derivatives markets through the registration of market intermediaries. Indeed, much of the CFTC’s regulatory structure for promoting market integrity and protecting customers revolves around the regulation of exchanges, swap dealers, futures commission merchants, clearinghouses and fund managers, and new ways must be found to preserve accountability in the disintermediated world of blockchain.

In addition, new financial service providers using DLT have entered the crypto-financial market and may well require different regulatory treatment than traditional banks or non-bank financial institutions. The rapid growth of Big Tech services in finance can enhance financial inclusion and contribute to the overall efficiency of financial services. Conversely, given large network effects and economies of scale and scope, Big Tech represents a concentration risk and could give rise to new systemic risks. These particularities need to be specifically addressed in the regulation.Footnote 74

The SEC has named Valerie Szczepanik Senior Advisor for Digital Assets and Innovation, the first ‘Crypto-Tsar’.Footnote 75 Szczepanik is optimistic about boosting the cryptocurrency market through better regulation. She highlights the importance of taking an initial principles-based approach towards new technologies, while following and studying them closely, to avoid a precipitous new regime. Acknowledging the international regulatory competition at stake, even if some companies might go outside the United States in search of more lenient regulatory regimes – in her words – the real opportunity lies with companies that abide by the stronger rules: ‘There are benefits to doing it the right way. And when they do that they will be the gold standard.’Footnote 76 Arguably, the SEC’s strategy for crypto-finance is aimed at a race to the top.

Just as G20 leaders agreed at the 2009 Pittsburgh SummitFootnote 77 on regulatory reform to increase transparency in OTC derivative markets and prevent future crises, a joint international effort to regulate crypto-finance, defining key principles, would serve as the basis for establishing a decentralised regulatory framework for disclosure and for coordinated surveillance of crypto-finance markets. Along the same lines, the International Organization of Securities Commissions (IOSCO) is working on key considerations for regulating crypto-assets,Footnote 78 ensuring minimal coordination without being prescriptive and allowing competent authorities to implement their own strategies to reach common goals, and the FATF, the global money laundering and terrorist financing watchdog, has issued guidance for monitoring crypto-assets and service providers.Footnote 79 More needs to be done in this area.

12.5.2 An Independent Regulatory Agency Specialised in Crypto-Finance to Foster Innovation within a Safe Environment

Regulatory agencies represent an independent regulatory power, more effective in addressing new situations and preventing emerging risks thanks to their less bureaucratised structure combined with a high degree of expertise and specialisation among their officers. These agencies can adopt regulations and recommendations, yet in some cases they lack the most stringent enforcement and punitive tools.Footnote 80 Nevertheless, guidance and recommendations can have a strong effect in shaping market participants’ behaviour and can trigger peer-pressure mechanisms that intensify the agency’s impact.

Notably, in the case of financial institutions, which are in constant interaction with the regulator, compliance with guidelines and recommendations has a greater impact because, on the one hand, regulatory agencies have licensing capacity, which is a powerful inducement to comply with guidance pronouncements. On the other hand, this continuous interaction between financial entities and agencies ‘facilitates regulators’ ability to retaliate on numerous dimensions through supervision and examination, in addition to their ability to bring enforcement actions for noncompliance with a specific policy’.Footnote 81 An agency overseeing crypto-finance should seek a relationship of constant interaction with its supervised entities.

In addition, agencies are also a guarantee of transparency and market participation in the policy-making process. The US Administrative Procedure Act establishes that agencies’ rule-making requires three procedural elements: information, participation and accountability.Footnote 82 EU agencies and authorities apply an equivalent public consultation procedure. There is also an extra step envisaged for EU agencies, which mandates a cost-benefit analysis for each proposed regulatory measure. As Professor Roberta Romano highlights, this participative administrative procedure is linked to the political legitimacy of rule-making, given its management by unelected officials. Public participation ‘can illuminate gaps in an agency’s knowledge and provide an understanding of real-world conditions, as well as assist an agency in gauging a rule’s acceptance by those affected’.Footnote 83

James M. Landis, advisor to President Roosevelt and one of the designers of the post-crash regulations, understood that market stability should come from the creation of agencies in charge of monitoring the day-to-day life of business. Leaving all control to courts through judicial review of cases did not allow for precautionary and preventive measures. Moreover, courts and judges cannot carry out the constant task of following and analysing market trends as a dedicated agency can. Landis asserted in The Administrative Process, published in 1938, that

the administrative process is, in essence, our generation’s answer to the inadequacy of the judicial and legislative processes. It represents our effort to find an answer to those inadequacies by some other method than merely increasing executive power. If the doctrine of the separation of power implies division, it also implies balance, and balance calls for equality. The creation of administrative power may be the means for the preservation of that balance.

In addition,

‘efficiency in the processes of governmental regulation is best served by the creation of more rather than less agencies’. Administrative agencies should by all means be independent and not be simply an extension of executive power or of legislative power. This view is based upon the desire of obtaining supervision and exploration with an ‘uninterrupted interest in a relatively narrow and carefully defined area of economic and social activity’.Footnote 84

When speaking about an independent and specialised agency for crypto-finance, we do not necessarily imply the creation of new agencies from scratch. On the contrary, it proves more beneficial to build on the reputation of an existing specialised authority that is already known to the market, which broadens its scope to create a special arm or body within its remit and recruits crypto experts to focus exclusively on finding regulatory solutions to be applied in the crypto field. The LabCFTC, for instance, was set up to bring the Washington regulator (historically focused on commodity markets rather than digital assets) closer to Silicon Valley. The new director of LabCFTC, Melissa Netram, comes from the software company Intuit and illustrates CFTC Chairman Tarbert’s philosophy that you ‘can’t really be a good regulator unless you are hiring people who actually know and understand these markets’.Footnote 85

Crypto-finance also introduces new mechanics that can translate into new risks of collusion, which need to be understood and specifically addressed. Collusion needs trust between market players, and blockchain can play a key role in this respect by allowing more cooperation between the players. The question then becomes whether blockchain can be used to set up a system of binding agreements and, accordingly, to change the game into a cooperative, collusive one. Combined with smart contracts, blockchain makes colluders trust each other because the terms of the agreement are immutable. Competition and antitrust agencies’ task is to create a prisoner’s dilemma in which each player shares the same dominant strategy: to denounce the agreement. Blockchain can help the players build a reserve of trust, which in turn requires a greater effort from competition agencies.Footnote 86
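The game-theoretic point can be made explicit with a toy payoff matrix; the numbers below are illustrative only. Without a binding agreement, defection (e.g., applying for leniency) is each firm's dominant strategy, which is exactly the prisoner's dilemma competition agencies try to create; a blockchain-enforced commitment that removes the option to defect is what worries them.

```python
# Toy collusion game between two firms: each either keeps the cartel agreement or defects
# (e.g., denounces it to the competition authority in exchange for leniency).
# Payoffs are (firm_1, firm_2); the values are purely illustrative.
PAYOFFS = {
    ("keep", "keep"):     (3, 3),   # cartel holds: both earn supra-competitive profits
    ("keep", "defect"):   (0, 4),   # the defector gets leniency, the loyal firm bears the fine
    ("defect", "keep"):   (4, 0),
    ("defect", "defect"): (1, 1),   # both denounce: the cartel collapses
}


def best_response(options, rival_action, player_index):
    """Return the action that maximises this player's payoff against a fixed rival action."""
    def payoff(action):
        profile = (action, rival_action) if player_index == 0 else (rival_action, action)
        return PAYOFFS[profile][player_index]
    return max(options, key=payoff)


# Whatever the rival does, 'defect' is firm 1's best response (and, by symmetry, firm 2's):
for rival_action in ("keep", "defect"):
    print(rival_action, "->", best_response(("keep", "defect"), rival_action, player_index=0))
# keep -> defect
# defect -> defect
# A smart contract that makes 'keep' binding removes the dominant strategy of defection,
# turning the dilemma into a stable, enforceable cartel.
```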

12.5.2.1 Regulatory Sandboxes

A regulatory sandbox is a scheme set up by a competent authority that provides regulated and unregulated entities with the opportunity to test, pursuant to a testing plan agreed and monitored by the authority, innovative products or services related to the carrying out of financial services.Footnote 87 Sandboxes are an important cooperation mechanism that allows entrepreneurs to develop their projects while avoiding uncertainty regarding the applicable regulatory framework, and they provide regulators the knowledge and insights they need to prepare well-balanced regulation. As noted by the Basel Committee on Banking Supervision (BCBS), sandboxes may also imply the use of legally provided discretions by the relevant supervisor.Footnote 88

As Louis Brandeis said in the context of the creation of the Federal Trade Commission, knowledge and understanding must come before publicity and regulation:

You hear much said of correcting most abuses by publicity. We need publicity; but as a pre-requisite to publicity we need knowledge. We must know and know contemporaneously what business – what big businesses – is doing. When we know that through an authoritative source, we shall have gone very far toward the prevention of the evils which attend the conduct of business.Footnote 89

The sandbox concept, as a decentralised system of experimentation, plays a key role in administrative and regulatory innovation. Justice Brandeis theorised this concept in New State Ice Co. v. Liebmann: ‘It is one of the happy incidents of the federal system that a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.’Footnote 90 This analysis advocates administrative decentralisation as a driver of innovation. Decentralisation allows for experimenting with creative solutions in controlled spaces (or sandboxes) without endangering global stability, and when other jurisdictions see merit in an innovation, they can then implement it without risk. This, in essence, is the same spirit inspiring crypto sandboxes.

Among other examples, the UK FCA set up a regulatory sandbox consisting of a controlled environment in which to test and issue securities using blockchain, so that the FCA and the firms learn about the impact of current regulations on new financial products. However, at this stage, one could argue that a ‘sandbox is no longer an instrument for mutual learning only, but that it is becoming an original device for regulatory design where the FCA “swaps” with firms the accreditation of digital products in the UK financial market for influence in shaping the algorithms in a way which is more investor-friendly. Arguably, this strategy is producing a form of win-win regulation’.Footnote 91

From an international regulatory competition perspective, the FCA’s strategy is also driven by concerns about firms fleeing to offer digital securities in more permissive markets, while for a firm, being admitted to the sandbox represents an opportunity to be formally accredited by the FCA, which opens the door to one of the largest markets in the world. According to the FCA, bespoke safeguards were put in place where relevant, such as requiring all firms in the sandbox to develop an exit plan to ensure the test can be closed down at any point while minimising the potential detriment to participating consumers.Footnote 92 This collaborative strategy is already paying off, and the UK is currently ahead in authorising electronic platforms to offer crypto derivatives, such as CFDs,Footnote 93 bringing certain activities onto the regulator’s radar. Nevertheless, the FCA had warned in 2017 that ‘cryptocurrency CFDs are an extremely high-risk, speculative investment. You should be aware of the risks involved and fully consider whether investing in cryptocurrency CFDs is appropriate for you’,Footnote 94 and, consistent with this warning, it is to be expected that the FCA, before granting authorisation to platforms trading crypto-CFDs, has implemented adequate investor protection safeguards and enforcement procedures.

12.5.3 The Principle of Judicial Deference in Favour of Independent Agencies’ Interpretation

The United States has long discussed the ‘deference principle’, which holds that courts should defer to specialised agencies (by dint of their expertise) when interpreting an ambiguous statute or law. As Cass SunsteinFootnote 95 notes, the deference principle is a two-step approach,Footnote 96 as established in Chevron v. NRDC.Footnote 97 Courts must defer to agency interpretations of legal texts when the provisions are ambiguous or unclear, so long as such interpretation is reasonable (in the sense that it falls within the agency’s remit to interpret that matter).

This case is fundamental to the recognition and delimitation of the power of independent administrative agencies. It confirms that the specialisation of these agencies’ officers should prevail over courts’ judgments when it comes to interpreting statutory provisions. For a subject as complex as crypto-finance, this deference principle in favour of the specialised agency would ensure better-informed judgments and represents a precious asset in the international race between jurisdictions to become a crypto-finance hub.

12.5.4 An Activist Agency: The Case of the Consumer Financial Protection Bureau (CFPB)

Harvard Law Professor and Senator Elizabeth Warren has fiercely advocated the creation of a specialised agency for the protection of financial consumers and the introduction of disclosure requirements regarding credit and loans. Robert Shiller, who received the Nobel Prize in Economics, noted that

[a first step] in correcting the inadequacies of our information infrastructure, as outlined by Elizabeth Warren, would be for the government to set up what she calls a financial product safety commission, modeled after the Consumer Product Safety Commission … to serve as an ombudsman and advocate. It would provide a resource for information on the safety of financial products and impose regulations to ensure such safety.… The National Highway Traffic Safety Administration maintains data on highway and motor vehicle safety and statistics on accidents. In the same way, we must fund a government organization empowered to accumulate information on the actual experience that individuals have with financial products – and the ‘accidents’, rare as well as commonplace, that happen with them – with an eye toward preventing such accidents in the future.Footnote 98

The Dodd-Frank Act mandated the creation of the Consumer Financial Protection Bureau (CFPB) to protect consumers from unfair, deceptive or abusive practices, arming people with the information they need to make smart financial decisions and following a very dynamic (activist) strategy of empowerment and education. The CFPB consolidated in one agency functions that had previously been allocated across seven federal agencies. To ensure independence, the CFPB was given a comparatively anomalous autonomous structure for a US administrative agency. It is organised analogously to a cabinet department in that it has a single director, but in contrast, the CFPB director has statutory removal protection. The agency is further independent of the executive by location, as it was placed within the Fed System. However, Fed Board governors may not intervene in the CFPB’s affairs; review or delay implementation of its rules; or consolidate the bureau, its functions or its responsibilities with any other division. A further feature that is unique to the CFPB is its funding arrangement: it is independent of both Congress and the president, as it is not subject to the annual appropriations process. The director sets his or her own budget, which is funded by the Fed (capped at 12 per cent of the Fed’s total operating expenses). Although the CFPB director must file semi-annual reports with Congress, Congress holds minimal leverage to influence the agency, given its lack of budgetary control – which is a key disciplining technique.Footnote 99

The reaction of major market participants to an agency with such a degree of independence was categorical, as Warren denounced in 2009:

The big banks are storming Washington, determined to kill the CFPB. They understand that a regulator who actually cares about consumers would cause a seismic change in their business model: no more burying the terms of the agreement in the fine print, no more tricks and traps. If the big banks lose the protection of their friendly regulators, the business model that produces hundreds of billions of dollars in revenue – and monopolizes profits that exist only in non-competitive markets – will be at risk. That’s a big change.Footnote 100

The pressure was such that although President Obama had first considered Warren as director of the agency, he had to step back and look for another candidate with a lower profile on this matter.

There have been continuous efforts by opponents of the CFPB to restructure the agency, and the Republican House under the Trump administration passed a bill to make the CFPB more accountable. (What are they scared of?) The CFPB is an example of a quasi-activist agency, dynamic and highly specialised in protecting financial consumers’ rights – a model that should be emulated in other jurisdictions and whose strategies should inspire the creation of an activist agency for crypto-finance that not only monitors but, most importantly, makes information accessible to financial consumers in intelligible ways. This model is dynamic and participative enough to sustain forums that warn, almost in real time, about new risks associated with crypto-assets, scams and other relevant developments.

12.5.5 Administrative Judges Specialising in Crypto-Finance

The US Supreme Court’s decision in Lucia v. SECFootnote 101 is key, as it consolidates the role of administrative judges instituting proceedings within specialised independent agencies such as the SEC. The decision reflects on the power of administrative law judges and clarifies their status. The Court resolved that administrative law judges at the SEC are ‘officers of the United States’ rather than ‘mere employees’ and are therefore subject to the Appointments Clause (i.e., they must be appointed by the president or a person with delegated power). The Supreme Court recognised that administrative judges have an important role and need to be appointed according to a higher-standard procedure in the US administration, rather than through simpler contractual means that could embed fewer guarantees in the process.

According to the Court, the SEC’s administrative judges carry great responsibility and exercise significant authority pursuant to the laws of the United States (e.g., they take testimony, conduct trials, rule on the admissibility of evidence and have the power to enforce compliance with discovery orders – important functions that judges exercise with significant discretion). Unlike in other specialised agencies, the SEC can decide not to review a judge’s decision, and when it does so, the judge’s decision becomes final and is deemed the action of the SEC. The SEC judge undoubtedly has discretion in this role and has enforcement power.Footnote 102

This precedent should inspire the inclusion of specialised administrative judges in European authorities for the matters in which they hold direct powers of supervision and enforcement. Administrative judges of the highest qualification, as per the precedent in Lucia v. SEC, improve the quality and reputation of those agencies. At the same time, such a specialised administrative and judicial body represents a competitive advantage in any given regulatory field, and notably in emerging markets such as crypto-finance, where generalist judges around the globe may lack specialised knowledge and may not yet be familiar with DLT systems.

12.5.6 Regulatory Decentralisation as a Guarantee for Independence

The design of a global regulatory framework for crypto-assets and for the protection of consumers and investors from crypto-finance risks should follow a decentralised model that promotes competition in cooperation, so-called co-opetition. Supranational regulatory bodies representing global leaders should define international regulatory standards on crypto-finance risks and opportunities, as IOSCO, for instance, is starting to do, leaving implementation in the hands of each jurisdiction’s regulator. In this way, the different regulatory bodies would cooperate to achieve the internationally agreed standards while competing in terms of implementation strategies, thus promoting regulatory innovation. This co-opetition has proven a very powerful tool for countervailing capture and/or deliberate inaction by regulators, as centralised structures are more vulnerable to these deviations.

The US Supreme Court’s Watters v. Wachovia BankFootnote 103 decision (2007) is the culmination of a pre-emption trendFootnote 104 initiated under the George W. Bush administration to prevent states from any regulatory or supervisory intervention in the banking sphere. The case is a good illustration of the risks of a centralised supervisory and regulatory approach and of how it can incentivise corruption and laissez-faire behaviour.

Wachovia Mortgages, a subsidiary of Wachovia Bank in North Carolina, offered mortgages in Michigan and in the rest of the United States. Subsidiary entities were under the control and supervision of the federal administration. However, Michigan statutory regulation imposed an obligation on mortgage brokers and subsidiary entities to register with the State Office of Insurance and Financial Services (OIFS) of Michigan. Linda Watters, the commissioner of the OIFS, was in charge of supervision and of handling complaints from financial consumers concerning subsidiary entities registered in Michigan, with power limited to complaints that were not properly addressed by the federal authority. Watters requested information from Wachovia Mortgages on some of those cases, and the entity replied that the commissioner had no supervisory powers to initiate any investigation because such powers had been pre-empted by the federal administration. After this incident, commissioner Watters withdrew Wachovia Mortgages’ authorisation to operate as a mortgage lender in Michigan.

The federal administrations with competences over lending activities were, on one side, the Fed, with competences over the direct supervision of federal banks, financial consumer protection and regulation of transparency in credit.Footnote 105 In addition, in 1994, the Home Ownership and Equity Protection Act (HOEPA) granted absolute power to the Fed to regulate for the prevention of fraud in lending contracts. On the other side stood the Office of the Comptroller of the Currency (OCC), in charge of the supervision of nationally chartered banks.

Traditionally, consumer protection was a state domain, as state administrations are closer to consumers and states’ respective laws allowed for the supervision of financial institutions within each state. However, this changed under Greenspan’s chairmanship of the Fed. He believed that capitalist markets without restrictions create wealth levels that stimulate a more civilised existence.Footnote 106 In parallel, the OCC, under Comptroller Dugan, also started a race to the bottom aimed at attracting banks regulated by state agencies into the federal scope of the OCC. To achieve this, the OCC, an administration financed directly by the fees of the banks it supervises and notoriously conflicted, took a lenient approach, deciding not to initiate investigations against banks. Moreover, the OCC appeared in proceedings initiated by state regulatory agencies, filing amicus briefs in support of financial entities against the allegations of those agencies.

Not surprisingly, during this period, the number of financial entities directly regulated by the federal administration grew rapidly, and major banks such as JPMorgan Chase, HSBC and Bank of Montreal switched from state to federal charters. These transfers alone translated into a 15 per cent increase in the OCC’s total budget income. As the Congressional Report on Regulatory Reforms highlighted in 2009,

Fairness should have been addressed through better regulation of consumer financial products. If the excesses in mortgage lending had been curbed by even the most minimal consumer protection laws, the loans that were fed into the mortgage-backed securities would have been choked off at the source, and there would have been no ‘toxic assets’ to threaten the global economy.Footnote 107

Instead, the OCC joined Wachovia Bank against the OIFS.

The Supreme Court’s decision, published in 2007 (just before the start of the financial crisis), declared that the supervision of abusive conduct against consumers was a monopoly of the federal administration. This suggests that the Supreme Court might not have known the magnitude of the frauds and abuses taking place in the US mortgage market, nor the extent to which federal supervisory bodies had been captured. It was only after the crisis exploded that the Supreme Court changed the precedent and, in Cuomo v. ClearinghouseFootnote 108 (2009), overruled the pre-emption of states’ powers in favour of states’ competences for the protection of financial consumers. Ultimately, centralisation by pre-emption of regulatory powers made the capture of regulators easier and left citizens unprotected.

Along the same lines, another recent example of the countervailing power of a decentralised regulatory model is the manipulation of LIBOR,Footnote 109 the benchmark that should reflect the price at which London-based financial entities borrow money and which indirectly sets the interest rates that apply to credits and loans. After the revelation of collusive practices in its fixing by an article in the Wall Street Journal,Footnote 110 European and UK authorities remained indifferent and took no action. It was only after competing authorities in Canada, Switzerland, Tokyo and the United States initiated formal investigations that the European Commission reacted. Again, international competition among regulators and peer-to-peer pressure proved the best way to foster regulatory action.

Footnotes

8 Algorithms and Regulation

1 See David Harel and Yishai Feldman, Algorithmics: The Spirit of Computing (Addison-Wesley, 2004).

2 Max Weber, Economy and Society: An Outline of Interpretive Sociology (University of California Press, 1978), 1194.

3 Roscoe Pound, ‘Mechanical Jurisprudence’ (1908) 8 Columbia Law Review 605–623.

4 Lochner v. New York, 198 U.S. 45, 76 (1905) (Holmes, J., dissenting).

5 Oliver Wendell Holmes, The Common Law (1881), 1.

6 Oliver Wendell Holmes, ‘The Path of the Law’ (1896–1897) 10 Harvard Law Review 474.

7 See Louis Kaplow and Steven Shavell, Fairness versus Welfare (Harvard University Press, 2002).

8 Different, and even opposed, approaches to legal reasoning share this fundamental idea; see Ronald M. Dworkin, Law’s Empire (Kermode, Fontana Press, 1986); Duncan Kennedy, A Critique of Adjudication (Harvard University Press, 1997).

9 Ernest Weinrib, The Idea of Private Law (Harvard University Press, 1995). For expansion of this theme, see Amnon Reichman, Formal Legal Pluralism (manuscript with authors).

10 Hans Kelsen, The Pure Theory of Law (University of California Press, 1967), 349.

11 Herbert L. A. Hart, The Concept of Law, 2nd ed. (Oxford University Press, [1961] 1994).

12 See, for instance, Niklas Luhmann, ‘Der Politische Code’ (1974) 21(3) Zeitschrift Für Politik 353; Frederick Schauer, Playing by the Rules: A Philosophical Examination of Rule-Based Decision-Making in Law and Life (Clarendon Press, 1991). For the comparative assessment of rules and standards in law, see Louis Kaplow, ‘Rules versus Standards: An Economic Analysis’ (1992) 42 Duke Law Journal 557.

13 On legal argumentation in interpretation, see recently Douglas Walton, Fabrizio Macagno, and Giovanni Sartor, Statutory Interpretation. Pragmatics and Argumentation (Cambridge University Press, 2021).

14 Karl Larenz and Claus-Wilhelm Canaris, Methodenlehre der Rechtswissenschaft (Springer-Lehrbuch, 1995), 1.3.c

15 On proportionality, see Aharon Barak, Proportionality (Cambridge University Press, 2012).

16 Daniel Kahneman, Thinking: Fast and Slow (Allen Lane, 2011).

17 This idea was developed by Marvin Minsky, who sees mind as a ‘society’ resulting from the interaction of simpler non-intelligent modules doing different kinds of computations; see Marvin Minsky, The Society of Mind (Simon and Schuster, 1988).

18 Amnon Reichman, Yair Sagy, and Shlomi Balaban, ‘From a Panacea to a Panopticon: The Use and Misuse of Technology in the Regulation of Judges’ (2020) 71 Hastings Law Review 589.

19 For an account of the early evaluation of the use of ICT in public administration, see United Nations, ‘Government Information Systems: A Guide to Effective Use of Information Technology in the Public Sector of Developing Countries’, Tech. Report ST/TCD/SER.E/28, 1995. For subsequent developments, see Christopher C. Hood and Helen Z. Margetts, The Tools of Government in the Digital Age (Palgrave, 2007).

20 Raymond Kurzweil, The Age of Spiritual Machines (Orion, 1990), 14. On the notion of artificial intelligence, see Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (Pearson, 2016), section 1.1.

21 Alan M. Turing, ‘Computing Machinery and Intelligence’ (1950) 59 Mind 433–460.

22 For the history of AI, see Nils J. Nilsson, The Quest for Artificial Intelligence (Cambridge University Press, 2010).

23 Frank Van Harmelen et al., Handbook of Knowledge Representation (Elsevier, 2008).

24 Henry Prakken and Giovanni Sartor, ‘Law and Logic: A Review from an Argumentation Perspective’ (2015) 227 Artificial Intelligence 214.

25 Kevin D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press, 2017).

26 As in the COMPAS system, which will be discussed in Section 8.14.

27 Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press, 2018).

28 Ibid., at page 32.

29 On online-filtering, see Giovanni Sartor and Andrea Loreggia, ‘A Study: The Impact of Algorithms for Online Content Filtering or Moderation – Upload Filters’ (European Parliament, 2020), www.europarl.europa.eu/RegData/etudes/STUD/2020/657101/IPOL_STU(2020)657101_EN.pdf.

30 See recently Douglas Hofstadter, ‘The Shallowness of Google Translate’ (The Atlantic, 30 January 2018). On the automated generation of language, see also Luciano Floridi and Massimo Chiriatti, ‘GPT-3: Its Nature, Scope, Limits, and Consequences’ (2020) 30 Minds and Machines 681.

31 The idea of ‘blind thought’ goes back to Leibniz, who speaks of blind (or symbolic) thinking to characterise the kind of thinking through which we ‘reason in words, with the object itself virtually absent from our mind’. See Leibniz, Meditations on Knowledge, Truth, and Ideas (Acta Eruditorum, 1684).

32 See Paul Scharre, Army of None: Autonomous Weapons and the Future of War (Norton, 2018).

33 Andrew G. Ferguson, ‘Policing Predictive Policing’ (2017) 94 Washington University Law Review 1109.

34 Susan Fourtané, ‘AI Facial Recognition and IP Surveillance for Smart Retail, Banking and the Enterprise’, Interesting Engineering, 27 January 2020, https://interestingengineering.com/ai-facial-recognition-and-ip-surveillance-for-smart-retail-banking-and-the-enterprise.

35 For information about using algorithms as bureaucratic agencies, see Chapter 5 in this book.

36 Kate Crawford and Jason Schultz, ‘AI Systems as State Actors’ (2019) 119 Columbia Law Review 1941, 1948–1957, show a few case studies of tasks performed by algorithms, including ‘Medicaid’ and disability benefit assessment, public teacher employment evaluation, criminal risk assessment, and unemployment benefit fraud detection; Maria Dymitruk, ‘The Right to a Fair Trial in Automated Civil Proceedings’ (2019) 13(1) Masaryk University Journal of Law & Technology 27, on the possibility of an algorithm carrying out judicial procedures.

37 Penny Crosman, ‘How PayPal Is Taking a Chance on AI to Fight Fraud’, American Banker, 1 September 2016, www.americanbanker.com/news/how-paypal-is-taking-a-chance-on-ai-to-fight-fraud.

38 Bernard Marr, ‘How the UK Government Uses Artificial Intelligence to Identify Welfare and State Benefits Fraud’ https://bernardmarr.com/default.asp?contentID=1585.

39 See Crawford and Schultz (n 38).

40 Sanjay Das, ‘How Artificial Intelligence Could Transform Public Health’, Sd Global, 26 March 2020, www.sdglobaltech.com/blog/how-artificial-intelligence-could-transform-public-health; Brian Wahl et al., ‘Artificial Intelligence (AI) and Global Health: How Can AI Contribute to Health in Resource-Poor Settings?’ (2018) 3(4) BMJ Global Health.

41 See the discussion in Carlo Perrotta and Neil Selwyn, ‘Deep Learning Goes to School: Toward a Relational Understanding of AI in Education’ (2020) 45(3) Learning, Media and Technology 251.

42 See the discussion in Elisabete Silva and Ning Wu, ‘Artificial Intelligence Solutions for Urban Land Dynamics: A Review’ (2010) 24(3) Journal of Planning Literature 246.

43 Jackie Snow. ‘How Artificial Intelligence Can Tackle Climate Change’, National Geographic, 18 July 2018, www.nationalgeographic.com/environment/2019/07/artificial-intelligence-climate-change/.

44 See Karen Yeung, ‘Algorithmic Regulation: A Critical Interrogation’ (2017) Regulation & Governance 6–11, for a discussion regarding the capabilities and possible classifications for algorithmic regulations.

45 Christoph Busch and Alberto De Franceschi, Algorithmic Regulation and Personalized Law: A Handbook (Hart Publishing, 2020).

46 Anthony J. Casey and Anthony Niblett, ‘A Framework for the New Personalization of Law’ (2019) 86 University of Chicago Law Review 333.

47 For an example of a discussion regarding the delegation of state power in risk assessment algorithms, see Andrea Nishi, ‘Privatizing Sentencing: A Delegation Framework for Recidivism Risk Assessment’ (2017) 119 Columbia Law Review 1617.

48 John Locke, Two Treatises of Government (1689), 163–166; Lon Fuller, The Morality of Law (1964), 33–39.

49 The idea of a man-machine symbiosis in creative tasks was anticipated by J. Licklider, ‘Man-Computer Symbiosis’ (March 1960) 4 IRE Transactions on Human Factors in Electronics, HFE-1. For a view that in the legal domain too software systems can succeed best as human–machine hybrid, see Tim Wu, ‘Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems’ (2019) 119 Columbia Law Review.

50 John Morison and Adam Harkens, ‘Re-engineering Justice? Robot Judges, Computerized Courts and (Semi) Automated Legal Decision-Making’ (2019) 39(4) Legal Studies 618. The authors develop the idea that such automated systems would make the application of the law more rigid: legal norms would be interpreted once and for all, and this task would be delegated to the knowledge engineers creating the knowledge base of the system, who would produce once and for all the logical formalisation to be automatically applied (by the inferential engine of the system) to any new case. No space would be left for arguments supporting alternative interpretations, nor for the consideration of features of individual cases that were not captured by the given formalisation. The law would be ‘petrified’ and applied regardless of the social context and dynamics.

51 A possible reply to Morison and Harkens’s critique would observe that by giving to the adopted interpretation a logical form, contestation would rather be facilitated, being given a clear target (i.e., the interpretation of norms that has been formalised in the system). Moreover, the use of intelligent systems in the legal domain could promote a legal and organisational context which would ensure the accurate consideration of individual cases and the revisability of rules. Finally, improvement in the rules, once embedded in the system’s knowledge base, would be spread to all users of the system, ensuring learning and equality of application. See Surend Dayal and Peter Johnson, ‘A Web-Based Revolution in Australian Public Administration?’ (2000) 1 The Journal of Information, Law and Technology.

52 Henry Prakken and Giovanni Sartor, ‘Law and Logic: A Review from an Argumentation Perspective’ (2015) 227 Artificial Intelligence 214.

53 Kevin D. Ashley (n 27).

54 Daniel Martin Katz, Michael J. Bommarito, and Josh Blackman, ‘A General Approach for Predicting the Behavior of the Supreme Court of the United States’ (2017) 12(4) PLoS ONE.

55 Nikolaos Aletras et al., ‘Predicting Judicial Decisions of the European Court of Human Rights’ (2016) PeerJ Computer Science; Masha Medvedeva, Michel Vols, and Martijn Wieling, ‘Using Machine Learning to Predict Decisions of the European Court of Human Rights’ (2019) Artificial Intelligence and Law; For a critical discussion, see Frank Pasquale and Glyn Cashwell, ‘Prediction, Persuasion, and the Jurisprudence of Behaviourism’ (2018) 68(1) University of Toronto Law Journal 63.

56 Floris Bex and Henry Prakken, ‘The Legal Prediction Industry: Meaningless Hype or Useful Development?’ (2020), https://webspace.science.uu.nl/~prakk101/pubs/BexPrakkenAA2020English.pdf.

57 For a detailed discussion about using AI in the law enforcement field and its impact, see Chapters 3 and 6 in this book.

58 For a discussion of autonomy and human dignity with regard to emotion-recognition algorithms, see Chapter 4 in this book. Amazon, for example, used a matching tool based on resumes submitted to the company over a ten-year period. This matching tool eventually favoured male candidates over females, giving every woman a lower rank. Jeffrey Dastin, ‘INSIGHT – Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women’ (Reuters, 10 October 2018), www.reuters.com/article/amazoncom-jobs-automation/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSL2N1VB1FQ?feedType=RSS&feedName=companyNews.

59 Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass Sunstein, ‘Discrimination in the Age of Algorithms’ (2019) 10 Journal of Legal Analysis 113–174; Cass Sunstein, ‘Algorithms, Correcting Biases’ (2019) 86 Social Research: An International Quarterly 499–511.

60 Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, ‘Inherent Trade-Offs in the Fair Determination of Risk Scores’ in Christos C. Papadimitriou (ed.), 8th Innovations in Theoretical Computer Science Conference (ITCS, 2017).

61 Jack M. Balkin, ‘The Constitution in the National Surveillance State’ (2008) 93 Minnesota Law Review 125.

62 Alex Hern, ‘Do the Maths: Why England’s A-Level Grading System Is Unfair’, The Guardian, 14 August 2020.

63 See Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and Fosca Giannotti, ‘A Survey of Methods for Explaining Black Box Models’ (2018) 51(5) ACM Computing Surveys 93, 142.

64 Julia Angwin et al., ‘Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased against Blacks’, ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

65 William Dieterich, Christina Mendoza, and Tim Brennan, ‘Compas Risk Scales: Demonstrating Accuracy Equity and Predictive Parity: Performance of the Compas Risk Scales in Broward County’, Technical report, Northpointe Inc. Research Department, 8 July 2016, https://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf.

66 Cynthia Rudin et al., ‘The Age of Secrecy and Unfairness in Recidivism Prediction’ (2020) 2(1) Harvard Data Science Review, https://doi.org/10.1162/99608f92.6ed64b30.

67 Chelsea Barabas et al., ‘Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment’ (2018) arXiv:1712.08238.

68 Richard Berk et al., ‘Fairness in Criminal Justice Risk Assessments: The State of the Art’ (2017) 50(1) Mathematics, Psychology, Sociological Methods & Research 3.

69 Solon Barocas and Andrew D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671732.

70 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L119/1, art. 1. For the question of the compatibility of the GDPR with AI, see Giovanni Sartor and Francesca Lagioia, ‘Study: The Impact of the General Data Protection Regulation on Artificial Intelligence’ (European Parliament: Panel for the Future of Science and Technology, 2020), 86–89; and for a rather different opinion on the matter, see Tal Zarsky, ‘Incompatible: The GDPR in the Age of Big Data’ (2017) 47(4) Seton Hall Law Review 995.

71 ‘Black Box’ refers to the part of the algorithm that is hidden. This generally occurs in machine-learning algorithms, when the major part of the algorithm – the processing of the data – becomes so complex and so independent that it is almost impossible to understand what logical process brought the algorithm to a specific output and to what rationale it may correspond.

72 For instance, the Australian government has been advised to introduce laws that ensure the explainability of AI. For a critical perspective, emphasising that where an AI algorithm cannot give a reasonable explanation it cannot be used where decisions can infringe human rights, see Angela Daly et al., Artificial Intelligence Governance and Ethics: Global Perspectives (2019), 45, https://arxiv.org/abs/1907.03848. The GDPR emphasises a ‘right to explanation’ in order to justify a decision made by an ML model: Commission Regulation 2016/679, art. 13(2)(f), 2016 O.J. (L 119) 1.

73 See Brent Mittelstadt et al., ‘Explaining Explanations in AI’, Conference on Fairness, Accountability, and Transparency (2019). The article provides an extensive discussion on the question of the explanation in xAI. It also gives a rather important perspective regarding the nature of ‘everyday’ explanations and some of their downsides – being comparative, for example, and thus vulnerable to manipulation. See also Arun Rai, ‘Explainable AI: From Black Box to Glass Box’ (2020) 48 Journal of the Academy of Marketing Science 137–141, for a discussion concerning a two-dimensional approach to explanation techniques.

74 For a broad discussion of the reasons for requiring explanations, see Katherine J. Strandburg, ‘Rulemaking and Inscrutable Automated Decision Tools’ (2019) 119 Columbia Law Review 1851, 1864.

75 See Chapter 11 in this book.

76 Adrian Weller, ‘Transparency: Motivations and Challenges’, in Wojciech Samek et al. (eds.), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (Springer, 2019), 23, 30.

77 Frederik J. Zuiderveen Borgesius et al., ‘Tracking Walls, Take-It-or-Leave-It Choices, the GDPR, and the ePrivacy Regulation’ (2017) 3(3) European Data Protection Law Review 353–368.

78 For a thorough description of the Chinese credit system, its development, and its implications for privacy and human rights, see Yongxi Chen and Anne Sy Cheung, ‘The Transparent Self under Big Data Profiling: Privacy and Chinese Legislation on the Social Credit System’ (2017) 12 The Journal of Comparative Law 356.

79 Ibid., at 356–360.

80 Ibid., at 362.

81 See Daithí Mac Síthigh and Mathias Siems, ‘The Chinese Social Credit System: A Model for Other Countries?’ (2019) 82 Modern Law Review 1034, for a discussion of the SCS, its relevance to Western societies, and the likelihood that it will influence them. The article also discusses different ‘score’ systems applied by Western democracies, with an emphasis on ‘creditworthiness’ ratings.

82 Ibid., at 5–11. A major difference is that, unlike the SCS, Western credit scores encompass only the financial aspects of an individual’s life, or performance at work (e.g., when work activities are managed through platforms, as for Uber drivers). Nevertheless, some consider that twenty-first-century technology, along with ever-changing and growing economies, is driving Western credit scores to encompass more and more aspects of our lives. See also John Harris, ‘The Tyranny of Algorithms Is Part of Our Lives: Soon They Could Rate Everything We Do’, The Guardian, 5 March 2018, www.theguardian.com/commentisfree/2018/mar/05/algorithms-rate-credit-scores-finances-data. See also Karen Yeung, ‘Algorithmic Regulation: A Critical Interrogation’ (2017) Regulation & Governance, 20–22, for another perspective on the so-called ‘western, democratic type of surveillance society’, along with some concerns and consequences.

83 See Angwin and others (n 66); Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) Proceedings of Machine Learning Research 81. For research showing that, by relying on unrepresentative databases, a facial recognition algorithm achieved far greater accuracy for lighter-skinned males, with an overwhelming 99.2 per cent success rate, compared to as low as 63.3 per cent for darker-skinned females, see Strandburg (n 76).

84 The lack of cooperation is not the only barrier raised in the big-data market. While most barriers are economic in nature, some are more complicated to bypass, even with a sufficient economic cushion to work with. See, for example, Michal Gal and Daniel Rubinfeld, ‘Access Barriers to Big Data’ (2017) 59 Arizona Law Review 339. Moreover, some barriers have been raised intentionally by governments in pursuit of a common good. See, for example, Michael Birnhack and Niva Elkin-Koren, ‘The Invisible Handshake: The Reemergence of the State in the Digital Environment’ (2003) 8(6) Virginia Journal of Law & Technology, on public-private cooperation in fighting terrorism, resulting in a more concentrated information market.

85 Crawford and Shultz (n 38) suggest filling this gap by applying the state action doctrine to vendors who supply AI systems for government decision-making.

86 See Andrew Tutt, ‘An FDA for Algorithms’ (2017) 69 Administrative Law Review 83, for a possibly controversial solution of establishing a state agency in charge of assessing and approving algorithms for market use.

87 Nora Osmani, ‘The Complexity of Criminal Liability in AI Systems’ (2020) 14 Masaryk U. J.L. & Tech. 53.

88 See the discussion in Section 8.14 of this chapter.

89 Kahneman (n 16).

90 See Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Elgar, 2016).

91 As noted by Lawrence Lessig, Code Version 2.0 (Basic Books, 2006).

92 Roger Brownsword, ‘What the World Needs Now: Techno-regulation, Human Rights and Human Dignity’ in Roger Brownsword (ed.), Global Governance and the Quest for Justice. Volume 4: Human Rights (Hart Publishing, 2004), 203–234.

93 Gerald Postema, ‘Law as Command: The Model of Command in Modern Jurisprudence’ (2001) 11 Philosophical Issues 18.

94 Meir Dan-Cohen, ‘Decision Rules and Conduct Rules: Acoustic Separation in Criminal Law’ 97 Harv. L. Rev. 625 (1983–1984); Edward L. Rubin, ‘Law and Legislation in the Administrative State’ 89 Colum. L. Rev. 369 (1989).

95 Mark Van Hoecke, Law as Communication (Hart Publishing, 2001), engaging with the systems theory of Niklas Luhmann, as further expounded by Gunther Teubner (law as an autopoietic system).

9 AI, Governance and Ethics: Global Perspectives

* This chapter is a revised and updated version of a report the authors wrote in 2019: Angela Daly, Thilo Hagendorff, Li Hui, Monique Mann, Vidushi Marda, Ben Wagner, Wayne Wei Wang and Saskia Witteborn, ‘Artificial Intelligence, Governance and Ethics: Global Perspectives’ (The Chinese University of Hong Kong Faculty of Law Research Paper No. 2019-15, 2019).

We acknowledge the support for this report from Angela Daly’s Chinese University of Hong Kong 2018–2019 Direct Grant for Research ‘Governing the Future: How Are Major Jurisdictions Tackling the Issue of Artificial Intelligence, Law and Ethics?’.

We also acknowledge the research assistance for the report from Jing Bei and Sunny Ka Long Chan, and the comments and observations from participants in the CUHK Law Global Governance of AI and Ethics workshop, 20–21 June 2019.

1 See, e.g., Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin Random House 2016); Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (NYU Press 2017).

2 See, e.g., Ronald Arkin, ‘Ethical Robots in Warfare’ (2009) 28(1) IEEE Technology & Society Magazine 30; Richard Mason, ‘Four Ethical Issues of the Information Age’ in John Weckert (ed), Computer Ethics (Routledge 2017).

3 See, e.g., Ronald Leenes and Federica Lucivero, ‘Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design’ (2014) 6(2) Law, Innovation & Technology 193; Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103(3) California Law Review 513; Sandra Wachter, Brett Mittelstadt and Luciano Floridi, ‘Transparent, Explainable, and Accountable AI for Robotics’ (2017) 2(6) Science Robotics eaan6080.

4 See, e.g., European Commission, ‘European Group on Ethics in Science and New Technologies Statement on Artificial Intelligence, Robotics and “Autonomous” Systems’ (2018) https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf accessed 21 June 2020; Sundar Pichai, ‘AI at Google: Our Principles’ (7 June 2018) www.blog.google/technology/ai/ai-principles/ accessed 21 June 2020.

5 Otfried Höffe, Ethik: Eine Einführung (C. H. Beck 2013).

6 Iyad Rahwan et al., ‘Machine Behaviour’ (2019) 568(7753) Nature 477.

7 Thilo Hagendorff, ‘The Ethics of AI Ethics. An Evaluation of Guidelines’ (2020) 30 Minds & Machines 99.

8 Future of Life Institute, ‘Asilomar AI Principles’ (2017) https://futureoflife.org/ai-principles accessed 21 June 2020.

9 Ben Wagner, ‘Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping?’ in Mireille Hildebrandt (ed), Being Profiled. Cogitas ergo sum (Amsterdam University Press 2018).

10 OECD, ‘OECD Principles on AI’ (2019) www.oecd.org/going-digital/ai/principles/ accessed 21 June 2020; G20, ‘Ministerial Statement on Trade and Digital Economy’ (2019) https://trade.ec.europa.eu/doclib/docs/2019/june/tradoc_157920.pdf accessed 21 June 2020.

11 ITU, ‘United Nations Activities on Artificial Intelligence’ (2018) www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-UNACT-2018-1-PDF-E.pdf accessed 21 June 2020.

12 UNESCO, ‘Elaboration of a Recommendation on Ethics of Artificial Intelligence’ https://en.unesco.org/artificial-intelligence/ethics accessed 21 June 2020.

13 Janosch Delcker, ‘US, Russia Block Formal Talks on Whether to Ban “Killer Robots”’ (Politico, 1 September 2018) www.politico.eu/article/killer-robots-us-russia-block-formal-talks-on-whether-to-ban/ accessed 21 June 2020.

14 Government of Canada, ‘Joint Statement from Founding Members of the Global Partnership on Artificial Intelligence’ (15 June 2020) www.canada.ca/en/innovation-science-economic-development/news/2020/06/joint-statement-from-founding-members-of-the-global-partnership-on-artificial-intelligence.html?fbclid=IwAR0QF7jyy0ZwHBm8zkjkRQqjbIgiLd8wt939PbZ7EbLICPdupQwR685dlvw accessed 21 June 2020.

15 See Monique Mann, Angela Daly, Michael Wilson and Nicolas Suzor, ‘The Limits of (Digital) Constitutionalism: Exploring the Privacy-Security (Im)balance in Australia’ (2018) 80(4) International Communication Gazette 369; Monique Mann and Angela Daly, ‘(Big) Data and the North-in-South: Australia’s Informational Imperialism and Digital Colonialism’ (2019) 20(4) Television & New Media 379.

16 Australian Government Department of Industry, Innovation and Science (2019), Artificial Intelligence: Australia’s Ethics Framework (7 November 2019) https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/ accessed 22 June 2020.

17 Australian Government Department of Industry, Science, Energy and Resources, ‘AI Ethics Principles’ www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles accessed 22 June 2020.

18 Australian Government Department of Industry, Science, Energy and Resources, ‘Applying the AI Ethics Principles’ www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/applying-the-ai-ethics-principles accessed 22 June 2020.

19 Australian Human Rights Commission, ‘Human Rights and Technology’ (17 December 2019) www.humanrights.gov.au/our-work/rights-and-freedoms/projects/human-rights-and-technology accessed 22 June 2020.

20 FLIA, ‘China’s New Generation of Artificial Intelligence Development Plan’ (30 July 2017) https://flia.org/notice-state-council-issuing-new-generation-artificial-intelligence-development-plan/ accessed 22 June 2020.

23 中国电子技术标准化研究院 (China Electronics Standardization Institute), ‘人工智能标准化白皮书 (White Paper on AI Standardization)’ (January 2018) www.cesi.cn/images/editor/20180124/20180124135528742.pdf accessed 22 June 2020.

24 国家人工智能标准化总体组 (National AI Standardization Group), ‘人工智能伦理风险分析报告 (Report on the Analysis of AI-Related Ethical Risks)’ (April 2019) www.cesi.cn/images/editor/20190425/20190425142632634001.pdf accessed 22 June 2020. The references include (1) the Asilomar AI Principles; (2) the Japanese Society for Artificial Intelligence Ethical Guidelines; (3) the Montréal Declaration for Responsible AI (draft) Principles; (4) the Partnership on AI to Benefit People and Society; (5) the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

25 Huw Roberts et al., ‘The Chinese Approach to Artificial Intelligence: An Analysis of Policy and Regulation’ (2020) AI & Society (forthcoming).

26 国家人工智能标准化总体组 (National AI Standardization Group) (n 24) 31–32.

27 Beijing Academy of Artificial Intelligence, ‘Beijing AI principles’ (28 February 2019) www.baai.ac.cn/blog/beijing-ai-principles accessed 22 June 2020.

28 Graham Webster, ‘Translation: Chinese AI alliance drafts self-discipline “Joint Pledge” (New America Foundation, 17 June 2019) www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-ai-alliance-drafts-self-discipline-joint-pledge/ accessed 22 June 2020.

29 China Daily, ‘Governance Principles for the New Generation Artificial Intelligence – Developing Responsible Artificial Intelligence’ (17 June 2019) www.chinadaily.com.cn/a/201906/17/WS5d07486ba3103dbf14328ab7.html accessed 22 June 2020.

30 However, the Chinese representative, Baidu, which is the largest search giant in China, has recently left the Partnership on AI amid the current US-China tension. See Will Knight, ‘Baidu Breaks Off an AI Alliance Amid Strained US-China Ties’ (Wired, 18 June 2020) www.wired.com/story/baidu-breaks-ai-alliance-strained-us-china-ties/ accessed 13 August 2020.

31 新京报网 (BJNews), ‘人工智能企业要组建道德委员会,该怎么做? (Shall AI Enterprises Establish an Internal Ethics Board? And How?)’ (2019) www.bjnews.com.cn/feature/2019/07/26/608130.html accessed 15 May 2020.

32 J. Si, ‘Towards an Ethical Framework for Artificial Intelligence’ (2018) https://mp.weixin.qq.com/s/_CbBsrjrTbRkKjUNdmhuqQ.

33 Roberts et al. (n 25).

34 中新网 (ChinaNews), ‘新兴科技带来风险 加快建立科技伦理审查制度 (As Emerging Technologies Bring Risks, the State Should Accelerate the Establishment of a Scientific and Technological Ethics Review System)’ (9 August 2019) https://m.chinanews.com/wap/detail/zw/gn/2019/08-09/8921353.shtml accessed 22 June 2020.

35 全国信息安全标准化技术委员会 (National Information Security Standardization Technical Committee), ‘人工智能安全标准化白皮书 (2019版) (2019 Artificial Intelligence Security Standardization White Paper)’ (October 2019) www.cesi.cn/images/editor/20191101/20191101115151443.pdf accessed 22 June 2020.

36 Kan He, ‘Feilin v. Baidu: Beijing Internet Court Tackles Protection of AI/Software-Generated Work and Holds that Copyright Only Vests in Works by Human Authors’ (The IPKat, 9 November 2019) http://www.ipkitten.blogspot.com/2019/11/feilin-v-baidu-beijing-internet-court.html accessed 22 June 2020; ‘AI Robot Has IP Rights, Says Shenzhen Court’ (Greater Bay Insight, 6 January 2020) https://greaterbayinsight.com/ai-robot-has-ip-rights-says-shenzhen-court/ accessed 22 June 2020.

38 Benjamin Greze, ‘The Extra-territorial Enforcement of the GDPR: A Genuine Issue and the Quest for Alternatives’ (2019) 9(2) International Data Privacy Law 109.

39 See, e.g., Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16(1) Duke Law & Technology Review 18; Sandra Wachter, Brett Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76.

40 European Parliament, ‘Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics’ (2015/2103(INL)) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52017IP0051 accessed 22 June 2020.

42 European Commission, ‘Communication on Artificial Intelligence for Europe’ (COM/2018/237 final, 2018) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN accessed 22 June 2020.

43 European Commission Independent High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Final Report, 2019) https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai accessed 22 June 2020.

44 European Commission Independent High-Level Expert Group on Artificial Intelligence ‘Policy and Investment Recommendations for Trustworthy AI’ (26 June 2019) https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence accessed 22 June 2020.

48 European Commission, ‘White Paper on Artificial Intelligence – A European Approach to Excellence and Trust’ (COM(2020) 65 final, 2020) https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf accessed 22 June 2020.

51 European Parliament Committee on Legal Affairs, ‘Draft Report with Recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence’ (2020/2014(INL), 2020); European Parliament Committee on Legal Affairs, ‘Draft Report with Recommendations to the Commission on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies’ (2020/2012(INL), 2020); European Parliament Committee on Legal Affairs, ‘Draft Report on Intellectual Property Rights for the Development of Artificial Intelligence Technologies’ (2020/2015(INI), 2020).

52 Samuel Stolton, ‘MEPs Chart Path for a European Approach to Artificial Intelligence’ (EurActiv, 12 May 2020) www.euractiv.com/section/digital/news/meps-chart-path-for-a-european-approach-to-artificial-intelligence/ accessed 22 June 2020.

53 Bundesministerium für Bildung und Forschung; Bundesministerium für Wirtschaft und Energie; Bundesministerium für Arbeit und Soziales, ‘Strategie Künstliche Intelligenz der Bundesregierung’ (15 November 2018) www.bmwi.de/Redaktion/DE/Publikationen/Technologie/strategie-kuenstliche-intelligenz-der-bundesregierung.html accessed 22 June 2020.

54 European Commission (n 45).

55 Datenethikkommission der Bundesregierung, ‘Gutachten der Datenethikkommission der Bundesregierung’ (2019) www.bmjv.de/SharedDocs/Downloads/DE/Themen/Fokusthemen/Gutachten_DEK_DE.pdf?__blob=publicationFile&v=3 accessed 22 June 2020.

56 Sebastian Hallensleben et al., From Principles to Practice. An Interdisciplinary Framework to Operationalise AI Ethics (Bertelsmann Stiftung 2020).

57 Jessica Heesen, Jörn Müller-Quade and Stefan Wrobel, Zertifizierung von KI-Systemen (München 2020).

58 Government of India Ministry of Electronics & Information Technology, ‘Digital India Programme’ https://digitalindia.gov.in/ accessed 22 June 2020.

59 Government of India Ministry of Finance, ‘Make in India’ www.makeinindia.com/home/ accessed 22 June 2020.

60 Government of India Ministry of Housing and Urban Affairs, ‘Smart Cities Mission’ www.smartcities.gov.in/content/ accessed 22 June 2020; Vidushi Marda, ‘Artificial Intelligence Policy in India: A Framework for Engaging the Limits of Data-Driven Decision-Making’ (2018) 376(2133) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.

61 Government of India Ministry of Commerce and Industry, ‘Report of the Artificial Intelligence Task Force’ (20 March 2018) https://dipp.gov.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf accessed 22 June 2020.

62 NITI Aayog, ‘National Strategy for Artificial Intelligence’ (discussion paper, June 2018) https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf accessed 22 June 2020.

63 Vidushi Marda, ‘Every Move You Make’ (India Today, 29 November 2019) www.indiatoday.in/magazine/up-front/story/20191209-every-move-you-make-1623400-2019-11-29 accessed 22 June 2020.

64 Jay Mazoomdaar, ‘Delhi Police Film Protests, Run Its Images through Face Recognition Software to Screen Crowd’ (The Indian Express, 28 December 2019) https://indianexpress.com/article/india/police-film-protests-run-its-images-through-face-recognition-software-to-screen-crowd-6188246/ accessed 22 June 2020.

65 Vijaita Singh, ‘1,100 Rioters Identified Using Facial Recognition Technology: Amit Shah’ (The Hindu, 12 March 2020) https://economictimes.indiatimes.com/news/economy/policy/personal-data-protection-bill-can-turn-india-into-orwellian-state-justice-bn-srikrishna/articleshow/72483355.cms accessed 22 June 2020.

66 The New India Express, ‘India Joins GPAI as Founding Member to Support Responsible, Human-Centric Development, Use of AI’ (15 June 2020) www.newindianexpress.com/business/2020/jun/15/india-joins-gpai-as-founding-member-to-support-responsible-human-centric-development-use-of-ai-2156937.html accessed 22 June 2020.

67 Stephen Cave and Sean ÓhÉigeartaigh, ‘An AI Race for Strategic Advantage: Rhetoric and Risks’ (AI Ethics And Society Conference, New Orleans, 2018).

68 US White House, ‘Executive Order on Maintaining American Leadership in Artificial Intelligence’ (11 February 2019) www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ accessed 22 June 2020.

69 Future of Life Institute (n 8).

70 US White House (n 69).

72 US Department of Defense, ‘Summary of the 2018 Department of Defense Artificial Intelligence strategy: Harnessing AI to Advance Our Security and Prosperity’ (2019) https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF accessed 22 June 2020.

73 US White House Office for Science and Technology Policy, ‘American Artificial Intelligence Initiative: Year One Annual Report’ (February 2020) www.whitehouse.gov/wp-content/uploads/2020/02/American-AI-Initiative-One-Year-Annual-Report.pdf accessed 22 June 2020.

75 ‘Guidance for Regulation of Artificial Intelligence Applications’ www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf accessed 22 June 2020.

78 US Food and Drug Administration, ‘Artificial Intelligence and Machine Learning in Software as a Medical Device’ (28 January 2020) www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device accessed 22 June 2020; US Food and Drug Administration, ‘Clinical Decision Support Software’ (September 2019) www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software accessed 22 June 2020.

79 Hagendorff (n 7).

80 Antonia Horst and Fiona McDonald, ‘Personalisation and Decentralisation: Potential Disrupters in Regulating 3D Printed Medical Products’ (2020) working paper.

81 See Angela Daly, S. Kate Devitt and Monique Mann (eds), Good Data (Institute of Network Cultures 2019).

82 Brett Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (2019) 1(11) Nature Machine Intelligence 501.

10 EU By-Design Regulation in the Algorithmic Society: A Promising Way Forward or Constitutional Nightmare in the Making?

1 See, on the rise of automated decision-making and on the challenges this raises, Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015). See also Karen Yeung, ‘Hypernudge: Big Data as a Mode of Regulation by Design’, (2017) 20 Information, Communication & Society 118–136. On artificial intelligence in particular, Nicolas Petit, ‘Artificial Intelligence and Automated Law Enforcement: A Review Paper’, SSRN Working Paper 2018 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3145133 accessed 29 February 2020.

2 According to the European Commission, Independent High Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 8 April 2019, p. 8 https://ec.europa.eu/futurium/en/ai-alliance-consultation accessed 29 February 2020, compliance with EU law is a prerequisite for ethical behaviour.

3 See European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, COM (2020) 65 final, https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf accessed 29 February 2020, pp. 11 and 14.

4 As also mentioned in Pagona Tsormpatzoudi, Bettina Berendt, and Fanny Coudert, ‘Privacy by Design: From Research and Policy to Practice – The Challenge of Multi-disciplinarity’ in Bettina Berendt, Thomas Engel, Demosthenes Ikonomou, Daniel Le Métayer, and Stefan Schiffner (eds.), Privacy Technologies and Policy (Springer, 2017) 199.

5 To some extent, this idea is closely related to the theory that the infrastructure of cyberspace limits possibilities in itself. In that regard, code is law as well; see Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999) 6. The idea of by-design regulation requires designers/developers to code certain values into their technology, so as to prevent that technology from defying certain legal values or obligations. See also Karen Yeung, n. 1, 121.

6 Compare with Ira Rubinstein, ‘Privacy and Regulatory Innovation: Moving beyond Voluntary Codes’, (2011) I/S: a Journal of Law and Policy for the Information Society 371.

7 European Network and Information Security Agency (ENISA), Privacy and Data Protection by Design – From Policy to Engineering, available at www.enisa.europa.eu/publications/privacy-and-data-protection-by-design accessed 29 February 2020, 2014 Report, 2.

8 Ann Cavoukian and Marc Dixon, ‘Privacy and Security by Design: An Enterprise Architecture Approach’, available at www.ipc.on.ca/wp-content/uploads/Resources/pbd-privacy-and-security-by-design-oracle.pdf accessed 29 February 2020.

9 For a review of such technologies, see Yun Shen and Siani Pearson, Privacy Enhancing Technologies: A Review, available at www.hpl.hp.com/techreports/2011/HPL-2011-113.pdf accessed 29 February 2020.

10 See Article 25 Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), [2016] O.J. L119/1 (hereafter GDPR).

11 Seda Gürses, Carmela Troncoso, and Claudia Diaz, ‘Engineering Privacy-by-Design’, available at www.esat.kuleuven.be/cosic/publications/article-1542.pdf accessed 29 February 2020, p. 2.

12 See, for a most basic definition, http://ec.europa.eu/smart-regulation/better_regulation/documents/brochure/brochure_en.pdf. See also Christopher Marsden, Internet Co-Regulation (Cambridge University Press, 2011) 46; Michèle Finck, ‘Digital Co-regulation: Designing a Supranational Legal Framework for the Platform Economy’, (2018) 43 European Law Review 47, 65.

13 European Parliament, Council, Commission, Interinstitutional Agreement on better law-making, OJ 2003, C 321/01, point 18. This agreement has been replaced by a new 2016 interinstitutional agreement ([2016] O.J. L123/1), in which the notion of co-regulation no longer explicitly features. That does not mean, however, that the EU no longer relies on co-regulation. Quite the contrary: best practices and guiding principles for better co-regulation were still being developed in 2015; see https://ec.europa.eu/digital-single-market/sites/digital-agenda/files/CoP%20-%20Principles%20for%20better%20self-%20and%20co-regulation.pdf.

14 I have found those implicit three criteria to underlie the conceptualisations made by Linda Senden, ‘Soft Law, Self-Regulation and Co-Regulation in European Law: Where Do They Meet?’, 9 Electronic Journal of Comparative Law (2005), and Ira Rubinstein, ‘The Future of Self-Regulation Is Co-regulation’ in Evan Salinger, Jules Polonetsky and Omer Tene (eds.), The Cambridge Handbook of Consumer Privacy (Cambridge University Press, 2018) 503–523. I do, however, take responsibility for limiting my typology to a distinction on the basis of those three criteria. I would like to state, as a caveat, that this typology could be refined; yet it is taken as a starting point for further reflections on the possibilities for by-design co-regulation in the EU legal order.

15 See, on the EU’s new approach from a constitutional perspective, Harm Schepel, The Constitution of Private Governance – Product Standards in the Regulation of Integrating Markets (Hart, 2005). See also Noreen Burrows, ‘Harmonisation of Technical Standards: Reculer Pour Mieux Sauter?’, (1990) 53 Modern Law Review 598.

16 Regulation 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC, and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No. 1673/2006/EC of the European Parliament and of the Council, [2012] O.J. L316/12. See also Harm Schepel, ‘The New Approach to the New Approach: The Juridification of Harmonized Standards in EU Law’, (2013) Maastricht Journal of European and Comparative Law 523.

17 CJEU, Case C-613/14, James Elliott Construction, EU:C:2016:821, para. 34.

18 See, for that framework, Niamh Moloney, ‘The Lamfalussy Legislative Model: A New Era for the EC Securities and Investment Services Regime’, (2003) 52 International and Comparative Law Quarterly 510.

19 On this framework in EU financial services regulation, see Pieter Van Cleynenbreugel, Market Supervision in the European Union. Integrated Administration in Constitutional Context (Brill, 2014) 52–55.

20 Annex II of the 1985 New Approach Resolution refers to essential safety requirements or other requirements in the general interest which can be translated into harmonised technical standards.

21 Article 43 GDPR.

22 European Network and Security Information Agency, ‘Privacy by Design in Big Data. An Overview of Privacy Enhancing Technologies in the Era of Big Data Analytics’, December 2015 Report, www.enisa.europa.eu/publications/big-data-protection accessed 29 February 2020, and European Data Protection Supervisor, ‘Preliminary Opinion on Privacy by Design’, 31 May 2018, https://edps.europa.eu/sites/edp/files/publication/18-05-31_preliminary_opinion_on_privacy_by_design_en_0.pdf accessed 29 February 2020 (hereafter EDPS Opinion 2018), p. 16.

23 For that argument in the context of technical standards, Linda Senden, ‘The Constitutional Fit of European Standardization Put to the Test’, (2017) 44 Legal Issues of Economic Integration 337.

24 Art. 4(1) and 5 of the Treaty on European Union (TEU).

25 See indeed also Art. 169 and 191–193 TFEU.

26 See EDPS 2018 Opinion, pp. 18–19.

27 As confirmed by CJEU, Case C-270/12, United Kingdom v. Parliament and Council, EU:C:2014:18.

28 CJEU, Case C-436/03, European Parliament v. Council, EU:C:2006:277, para. 37.

29 Ibid., paras. 44–45.

30 Charter of Fundamental Rights of the European Union, [2012] O.J. C326/391. The Charter does not give the EU additional competences, yet at the same time affirms the key values the EU wants to promote throughout its policies. It could therefore be imagined that those values constitute the background against which value-inspired specifications will be developed that would be part of the by-design co-regulatory enterprise.

31 Paul Craig, ‘Delegated Acts, Implementing Acts and the New Comitology Regulation’, (2011) 36 European Law Review 675.

32 CJEU, Case C-696/15 P, Czech Republic v. Commission, EU:C:2017:595, para. 55.

33 Regulation 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by Member States of the Commission’s exercise of implementing powers, [2011] O.J. L55/13.

34 Joana Mendes, ‘The EU Administration’ in Pieter-Jan Kuijper et al. (eds.), The Law of the European Union, 5th edition (Kluwer, 2018) 267–311.

35 CJEU, Case 9/56, Meroni v. High Authority, EU:C:1958:7 at p. 152.

36 CJEU, Case 9/56, Meroni, at 150–151. See, for a schematic overview, Takis Tridimas, ‘Financial Supervision and Agency Power’ in Niamh Nic Shuibhne and Lawrence Gormley (eds.), From Single Market to Economic Union. Essays in Memory of John A. Usher (Oxford University Press, 2012) 61–62.

37 CJEU, Case 9/56, Meroni, at 152.

38 CJEU, Case 98/80, Giuseppe Romano v. Rijksinstituut voor Ziekte- en Invaliditeitsverzekering, EU:C:1981:104, 1241, para. 20 on the prohibition to take binding decisions by an administrative commission.

39 See Opinion of Advocate General Jääskinen of 12 September 2013 in Case C-270/12, United Kingdom v. Council and European Parliament, EU:C:2013:562, para. 68.

40 Linda Senden, n. 23, 350.

41 CJEU, Case C-613/14, James Elliott Construction, EU:C:2016:821.

42 According to Robert Schütze, ‘From Rome to Lisbon: “Executive federalism” in the (New) European Union’, (2010) 47 Common Market Law Review 1418.

43 See also Pieter Van Cleynenbreugel, n. 19, 209, for an example of how the EU tried to overcome such diversity.

44 See also Joana Mendes, n. 34, 283 and 295.

45 For an example, see Article 16 of Regulation 1093/2010 of the European Parliament and of the Council of 24 November 2010 establishing a European Supervisory Authority (European Banking Authority) amending Decision 716/2009/EC and repealing Commission Decision 2009/78/EC, O.J. L 331/12; Regulation 1094/2010 of the European Parliament and of the Council of 24 November 2010 establishing a European Supervisory Authority (European Insurance and Occupational Pensions Authority) amending Decision 716/2009/EC and repealing Commission Decision 2009/79/EC, O.J. L 331/48; Regulation 1095/2010 of the European Parliament and of the Council of 24 November 2010 establishing a European Supervisory Authority (European Securities and Markets Authority) amending Decision 716/2009/EC and repealing Commission Decision 2009/77/EC, O.J. L 331/84. All three regulations established the so-called European Supervisory Authorities in EU financial services supervision, establishing bodies that assemble representatives of different Member States’ authorities. Collectively, they are referred to as the ESA Regulations.

46 By way of example, Regulation (EU) 236/2012 of the European Parliament and of the Council of 14 March 2012 on short selling and certain aspects of credit default swaps, [2012] OJ L86/1.

47 Pieter Van Cleynenbreugel, ‘EU Post-Crisis Economic and Financial Market Regulation: Embedding Member States’ Interests within “More Europe”’ in Marton Varju (ed.), Between Compliance and Particularism. Member State Interests and European Union Law (Springer, 2019) 79–102.

48 See Article 103 TFEU and Article 11 of Council Regulation 1/2003 of 16 December 2002 on the implementation of the rules on competition laid down in Articles 81 and 82 of the Treaty, [2003] OJ L 1/1.

49 For an example, see Article 58 ESA Regulations.

50 In the realm of EU competition law, see most notably Directive 2014/104/EU of the European Parliament and of the Council of 26 November 2014 on certain rules governing actions for damages under national law for infringements of the competition law provisions of the Member States and of the European Union, [2014] O.J. L349/1. In the realm of consumer protection law, see the Proposal for a Directive on representative actions for the protection of the collective interests of consumers, and repealing Directive 2009/22/EC, COM 2018/184 final, available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2018:184:FIN.

51 Article 83 GDPR.

52 Article 58 GDPR.

53 Article 65 GDPR – the European Data Protection Board has a role in the resolution of disputes between supervisory authorities.

54 See, in that context, Eilís Ferran, ‘The Existential Search of the European Banking Authority’, (2016) European Business Organisation Law Review 285–317.

55 Provided that Article 114 TFEU would be relied upon, a qualified majority would be required in this regard.

57 See https://eur-lex.europa.eu/legal-content/EN/HIS/?uri=COM:2017:795:FIN for a proposal in this regard currently in development at the level of the Parliament and Council.

11 What’s in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration

* Associate Dean for Research, Professor of Jurisprudence, iCourts (Danish National Research Foundation’s Centre of Excellence for International Courts) at the University of Copenhagen, Faculty of Law. This work was produced in part with the support of Independent Research Fund Denmark project PACTA: Public Administration and Computational Transparency in Algorithms, grant number: 8091–00025.

** Carlsberg Postdoctoral Fellow, iCourts (Danish National Research Foundation’s Centre of Excellence for International Courts) at the University of Copenhagen, Faculty of Law. This work was produced in part with the support of the Carlsberg Foundation Postdoctoral Fellowship in Denmark project COLLAGE: Code, Law and Language, grant number: CF18-0481.

*** Professor of Computer Science, Software, Data, People & Society Research Section, Department of Computer Science (DIKU), University of Copenhagen. This work was produced in part with the support of Independent Research Fund Denmark project PACTA: Public Administration and Computational Transparency in Algorithms, grant number: 8091–00025, and the Innovation Fund Denmark project EcoKnow.org.

1 AI is here used in the broad sense, which includes both expert systems and machine learning as well as hybrid models. Various webpages contain information about how AI and Machine Learning may be understood. For an example, see www.geeksforgeeks.org/difference-between-machine-learning-and-artificial-intelligence/.

2 See also Jennifer Cobbe, ‘Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making’ (2019) 39 Legal Studies 636; Monika Zalnieriute, Lyria Bennett Moses, and George Williams, ‘The Rule of Law and Automation of Government Decision-Making’ (2019) 82 The Modern Law Review 425. Zalnieriute et al. conduct four case studies from four different countries (Australia, China, Sweden, and United States), to illustrate different approaches and how such approaches differ in terms of impact on the rule of law.

3 See, among various others, Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St Martin’s Press 2018); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Broadway Books 2017).

4 We find that some of the ethical guidelines for AI use, such as the European Commission’s Ethics Guidelines for Trustworthy AI (https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai) raise general concerns, but do not provide much guidance on how to address the concerns raised.

5 These categories are generally sketched from Bygrave’s analysis of the travaux préparatoires of Art 22 of the General Data Protection Regulation, which concerns explanation in automated processing and the Commission’s reticence towards implementing fully automated systems exemplified in Art 15 of the Data Protection Directive. See the draft version at p 6–7 of the chapter on Art. 22: Lee A Bygrave, ‘Article 22’, 2019 Draft Commentaries on 6 Articles of the GDPR (From Commentary on the EU General Data Protection Regulation) (Oxford University Press 2020) https://works.bepress.com/christopher-kuner/2/download.

6 A related but more legal technical problem in regards to the introduction of AI public administration is the question of when exactly a decision is made. Associated to this is also the problem of delegation. If a private IT developer designs a decision-system for a specific group of public decisions, does this mean that those decisions have been delegated from the public administration to the IT developer? Are future decisions made in the process of writing the code for the system? We shall not pursue these questions in this chapter, but instead proceed on the assumption that decisions are made when they are issued to the recipient.

7 Elin Wihlborg, Hannu Larsson, and Karin Hedström, ‘“The Computer Says No!” – A Case Study on Automated Decision-Making in Public Authorities’, 2016 49th Hawaii International Conference on System Sciences (HICSS) (IEEE 2016) http://ieeexplore.ieee.org/document/7427547/.

8 See e.g., Corinne Cath et al., ‘Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach’ [2017] Science and Engineering Ethics http://link.springer.com/10.1007/s11948-017-9901-7.

9 Meg Leta Jones, ‘The Right to a Human in the Loop: Political Constructions of Computer Automation and Personhood’ (2017) 47 Social Studies of Science 216.

10 Karl M. Manheim and Lyric Kaplan, ‘Artificial Intelligence: Risks to Privacy and Democracy’ (Social Science Research Network 2018) SSRN Scholarly Paper ID 3273016 https://papers.ssrn.com/abstract=3273016.

11 For discussion of this issue in regards to AI supported law enforcement, see Rashida Richardson, Jason Schultz, and Kate Crawford, ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice’ [2019] New York University Law Review Online 192.

12 Finale Doshi-Velez et al., ‘Accountability of AI Under the Law: The Role of Explanation’ [2017] arXiv:1711.01134 [cs, stat] http://arxiv.org/abs/1711.01134.

13 See, among others, Pauline T. Kim, ‘Data-Driven Discrimination at Work’ (2016) 58 William & Mary Law Review 857.

14 See Zalnieriute, Moses, and Williams (n 2) 454.

15 By explanation, we mean here that the administrative agency gives reasons that support its decision. In this chapter, we use the term explanation in this sense. This is different from explainability, as used in relation to the so-called ‘black box problem’; see Cynthia Rudin, ‘Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead’ (2019) 1 Nature Machine Intelligence 206. As we explain later, we think the quest for black-box explainability (which we call mathematical transparency) should give way to an explanation in the public law sense (giving grounds for decisions). We take this to be in line with Rudin’s call for interpretability in high-stakes decisions.

16 See e.g., Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76; Margot E. Kaminski, ‘The Right to Explanation, Explained’ (2019) 34 Berkeley Tech. LJ 189.

17 See the debate regarding transparency outlined in Brent Daniel Mittelstadt et al., ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3(2) Big Data & Society 67.

18 See the Ecoknow project: https://ecoknow.org/about/.

19 § 42 (1) of the Danish Consolidation Act on Social Services, available at http://english.sm.dk/media/14900/consolidation-act-on-social-services.pdf. For a review of the legal practice based on this provision (in municipalities), see Ankestyrelsen, ‘Ankestyrelsens Praksisundersøgelse Om Tabt Arbejdsfortjeneste Efter Servicelovens § 42 (National Board of Appeal’s Study on Lost Earnings According to Section 42 of the Service Act)’ (2017) https://ast.dk/publikationer/ankestyrelsens-praksisundersogelse-om-tabt-arbejdsfortjeneste-efter-servicelovens-ss-42.

20 There is indeed also a wide range of ways that an automated decision can take place. For an explanation of this, see the working version of this paper at section 3, http://ssrn.com/abstract=3402974.

21 Perhaps most famous is O’Neil (n 3), but the debate on Technological Singularity has attracted a lot of attention; see, for an overview, Murray Shanahan, The Technological Singularity (MIT Press 2015).

22 See Saul Levmore and Frank Fagan, ‘The Impact of Artificial Intelligence on Rules, Standards, and Judicial Discretion’ (2019) 93 Southern California Law Review.

23 See, for example, Riccardo Guidotti et al., ‘A Survey of Methods for Explaining Black Box Models’ (2018) 51 ACM Computing Surveys (CSUR) 1. Similarly, Cobbe (n 2), who makes a distinction between ‘how’ and ‘why’ a decision was made, says ‘just as it is often not straightforward to explain how an ADM system reached a particular conclusion, so it is also not straightforward to determine why that system reached that conclusion’. Our point is that these are the wrong questions to ask, because even in a human non-ADM system, we will never know ‘why that system reached that conclusion’. We cannot know. What we can do, however, is to judge whether or not the explanation given was sufficiently accurate and sufficient under the given legal duty to give reasons.

24 Amina Adadi and Mohammed Berrada, ‘Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)’ (2018) 6 IEEE Access 52138.

25 Uwe Kischel, Die Begründung: Zur Erläuterung Staatlicher Entscheidungen Gegenüber Dem Bürger, vol 94 (Mohr Siebeck 2003) 32–34.

26 Franz-Joseph Peine and Thorsten Siegel, Allgemeines Verwaltungsrecht (12th ed., C.F. Müller 2018) 160, mn. 513; Schweickhardt, Vondung, and Zimmermann-Kreher (eds), Allgemeines Verwaltungsrecht (10th ed., Kohlhammer 2018) 586–588; Kischel (n 25) 40–65; H. C. H. Hofmann, G. C. Rowe, and A. H. Türk, Administrative Law and Policy of the European Union (Oxford University Press 2011), 200–202; CJEU, Council of the European Union v. Nadiany Bamba, 15 November 2012, Case C-417/11, para. 49; N. Songolo, ‘La motivation des actes administratifs’, 2011, www.village-justice.com/articles/motivation-actes-administratifs,10849.html; J.-L. Autin, ‘La motivation des actes administratifs unilatéraux, entre tradition nationale et évolution des droits européens’, RFDA 2011, no. 137–138, 85–99. We do not engage in a deeper analysis of the underlying rationale for the existence of the requirement to provide an explanation, as this is not the aim of our chapter. For this discussion in administrative law, see Joana Mendes, ‘The Foundations of the Duty to Give Reasons and a Normative Reconstruction’ in Elizabeth Fisher, Jeff King, and Alison Young (eds), The Foundations and Future of Public Law (Oxford University Press 2020).

27 Making sure that the connection relies on ‘clean’ data is obviously very important, but it is a separate issue that we do not touch on in this chapter. For a discussion of this issue in regards to AI-supported law enforcement, see Richardson, Schultz, and Crawford (n 11).

28 See Iyad Rahwan et al., ‘Machine Behaviour’ (2019) 568 Nature 477.

29 For a longer detailed analysis, see the working paper version of this chapter: http://ssrn.com/abstract=3402974.

31 §39 VwVfG. Specialised regimes, e.g., for taxes and social welfare, contain similar provisions.

32 We found that in neither France nor the UK is there a general duty for administrative authorities to give reasons for their decisions. For French law, see the decision by Conseil Constitutionnel 1 juillet 2004, no. 2004–497 DC (‘les règles et principes de valeur constitutionnelle n’imposent pas par eux-mêmes aux autorités administratives de motiver leurs décisions dès lors qu’elles ne prononcent pas une sanction ayant le caractère d’une punition’). For UK law, see the decision by House of Lords in R v. Secretary of State for the Home Department, ex parte Doody, 1993 WLR 154 (‘the law does not at present recognise a general duty to give reasons for an administrative decision’).

33 Loi du 11 juillet 1979 relative à la motivation des actes administratifs et à l’amélioration des relations entre l’administration et le public.

34 Art. L211-5 (‘La motivation exigée par le présent chapitre doit être écrite et comporter l’énoncé des considérations de droit et de fait qui constituent le fondement de la decision’).

35 N. Songolo, ‘La motivation des actes administratifs, 2011’, www.village-justice.com/articles/motivation-actes-administratifs,10849.html.

36 Joanna Bell, ‘Reason-Giving in Administrative Law: Where Are We and Why Have the Courts Not Embraced the “General Common Law Duty to Give Reasons”?’ The Modern Law Review 9 http://onlinelibrary.wiley.com/doi/abs/10.1111/1468-2230.12457 accessed 19 September 2019 (original emphasis).

37 Cobbe (n 2) 648.

38 Marion Oswald, ‘Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power’ (2018) 376 Phil. Trans. R. Soc. A https://ssrn.com/abstract=3216435.

39 Dover District Council (Appellant) v. CPRE Kent (Respondent); CPRE Kent (Respondent) v. China Gateway International Limited (Appellant) [2017] UKSC 79, para. 41. See, in particular, Stefan v. General Medical Council [1999] 1 WLR 1293 at page 1300G.

40 Oswald (n 38) 6.

41 Case C-370/07 Commission of the European Communities v. Council of the European Union, 2009, ECR I-08917, recital 42 (‘which is justified in particular by the need for the Court to be able to exercise judicial review, must apply to all acts which may be the subject of an action for annulment’).

42 Jürgen Schwarze, European Administrative Law (Sweet & Maxwell 2006) 1406.

43 Reg (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Dir 95/46/EC (General Data Protection Regulation) 2016, Art. 22(1).

44 Ibid., Art. 22(2)(b).

45 Ibid., Art. 22(2)(c).

46 For a longer detailed analysis, see the working paper version of this chapter: http://ssrn.com/abstract=3402974.

47 See Antoni Roig, ‘Safeguards for the Right Not to Be Subject to a Decision Based Solely on Automated Processing (Article 22 GDPR)’ (2017) 8(3) European Journal of Law and Technology.

48 Schwarze (n 42) 1410.

49 See also Zalnieriute, Moses, and Williams (n 2), who conclude (at p. 454) after conducting four case studies that only one system (the Swedish student welfare management system) succeeds in reaping benefits from automation while remaining sensitive to rule of law values. They characterize this as ‘a carefully designed system integrating automation with human responsibility’.

50 We are well aware that such decisions do not formally have the character of precedent; what we refer to here is the de facto tendency in the administrative process to make new decisions that closely emulate earlier decisions of the same kind.

51 Even deciding what former decisions are relevant to a new case can sometimes be a complex problem that requires a broader contextual understanding of law and society that is not attainable by algorithms.

52 See also Carol Harlow and Richard Rawlings, ‘Proceduralism and Automation: Challenges to the Values of Administrative Law’ in E. Fisher, J. King, and A. Young (eds), The Foundations and Future of Public Law (in Honour of Paul Craig) (Oxford University Press 2019) (at 6 in the SSRN version) https://papers.ssrn.com/abstract=3334783, who note that ‘Administrative Law cannot be static, and the list of values is not immutable; it varies in different legal orders and over time’.

53 Research has identified a phenomenon known as automation bias. This is the propensity for humans to favour suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct. See Mary Cummings, ‘Automation Bias in Intelligent Time Critical Decision Support Systems’, AIAA 1st Intelligent Systems Technical Conference (2004); Asia J Biega, Krishna P Gummadi, and Gerhard Weikum, ‘Equity of Attention: Amortizing Individual Fairness in Rankings’, The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (2018). In implementing ADM in public administration, we follow this research by recommending processes that seek to reduce such bias.

54 See O’Neil (n 3) for a discussion of the problem with feedback loops.

55 Whether recipients can or should be able to demand insight into the underlying neurological or algorithmic computations of caseworkers (human or robotic) is a separate question that we do not seek to answer here. Suffice it to say there may be many reasons why a human might ask for an explanation, including not caring what the justification is but simply wanting a change of outcome.

56 A. M. Turing, ‘Computing Machinery and Intelligence’ (1950) 49 Mind 433–460.

57 Formats for issuing drafts could also be formalized so as to reduce the possibility of guessing merely by recognizing the style of the drafter’s language.

58 Cobbe remarks of black box technology that ‘their inexplicability is therefore a serious issue’, and that decisions issued by such systems will therefore likely not pass judicial review. She then adds that ‘some public bodies may attempt to circumvent this barrier by providing retrospective justifications’. She flags that Courts and reviewers should be ‘aware of this risk and should be prepared to exercise the appropriate level of scrutiny … against such justifications.’ Cobbe (n 2) 648.

12 International Race for Regulating Crypto-Finance Risks: A Comprehensive Regulatory Framework Proposal

* I am grateful to Manuel Ballbé Mallol for his great support and valuable contribution to an earlier draft. The views expressed in this article are privately held by the author and cannot be attributed to the European Securities and Markets Authority (ESMA).

1 See Ballbé, M.; Padrós, C. Estado competitivo y armonización europea. Ariel. Barcelona, 1997. See also Ballbé, M.; Cabedo, Y. La necesidad de administraciones reguladoras en Latinoamérica para prevenir los ataques especulativos, los riesgos financieros y la defensa de los consumidores. Revista del CLAD Reforma y Democracia. No 57. Caracas, October 2013.

2 Ballbé, M.; Martinez, R. ‘Law and globalization: between the United States and Europe’ in Robalino-Orellana, J.; Rodriguez-Arana, J. (eds.), Global administrative law. Towards a lex administrativa. Cameron May. 2010.

3 Ballbé, M.; Martinez, R. (2010).

4 See European Commission, ‘Commission fines Google €1.49 billion for abusive practices in online advertising’, https://ec.europa.eu/commission/presscorner/detail/en/IP_19_1770.

5 DeMuth, C. The regulatory state. National Affairs. Summer 2012.

6 Eastman, J. B. The public service commission of Massachusetts. The Quarterly Journal of Economics. Vol. 27. No. 4 (August, 1913). Oxford University Press.

7 Ballbé, M.; Martinez, R. (2010).

8 Ballbé, M.; Cabedo, Y. (2013).

9 The European Commission, Parliament and Council.

10 Brummer, C. EU reports on cryptoasset regulation could have global reverberations. Watchdogs urge EU-wide rules. 9 January 2019 www.rollcall.com/2019/01/09/eu-reports-on-cryptoasset-regulation-could-have-global-reverberations/

11 Mazzucato, M. The entrepreneurial state: debunking public vs. private sector myths. Anthem Press. London, 2013.

12 A hash provides a way to represent the bundle of transactions in a block as a string of characters and numbers that is uniquely associated with that block’s transactions. De Filippi, P., Wright, A. Blockchain and the law: the rule of code. Harvard University Press. Massachusetts, 2018.
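By way of illustration only, a minimal sketch using Python’s standard library (the transaction values are invented) shows how hashing a bundle of transactions yields a short, fixed-length string uniquely associated with exactly that content, and how altering a single field yields a completely different hash:

```python
# Illustrative sketch only: invented transactions, standard-library Python.
import hashlib
import json

block_transactions = [
    {"from": "alice", "to": "bob", "amount": 5},
    {"from": "bob", "to": "carol", "amount": 2},
]

# Serialise the bundle deterministically, then hash it with SHA-256.
serialised = json.dumps(block_transactions, sort_keys=True).encode("utf-8")
block_hash = hashlib.sha256(serialised).hexdigest()
print(block_hash)  # a fixed-length hexadecimal string

# Altering a single field produces a completely different hash.
block_transactions[0]["amount"] = 6
tampered = json.dumps(block_transactions, sort_keys=True).encode("utf-8")
print(hashlib.sha256(tampered).hexdigest() != block_hash)  # True
```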

13 De Filippi, P., Wright, A. Blockchain and the law: the rule of code. Harvard University Press. Massachusetts, 2018.

14 For example, see Delaware law amendments to allow corporations to issue shares through blockchain in Reyes, C.L. Cryptolaw for distributed ledger technologies: a jurisprudential framework. Journal of Law Science and Technology. Spring 2018. Vol. 58 Issue 3. See also the Australian Stock Exchange transition to DLT for equity transactions https://cointelegraph.com/news/covid-19-forces-aussie-stock-exchange-to-delay-dlt-overhaul-to-2023

15 For further analysis on the causes of the crisis, see Lastra, R.M.; Wood, G. The crisis of 2007–09: nature, causes, and reactions. Journal of International Economic Law 13(3). See also Ballbé, M.; Cabedo, Y. (2013).

16 For figures on the bail-out costs of some EU financial institutions, see Ballbé, M.; Cabedo, Y. El ataque alemán deshaucia a España. 29 November 2012. In the United States, the Troubled Asset Relief Program initial budget amounted to $350 billion.

17 Too Big to Fail banks.

18 Mahoney, P.G.; Mei, J. Mandatory versus contractual disclosure in securities markets: evidence from the 1930s. Working Paper, 23 February 2006. Cited in Brummer, C.; et al. What should be disclosed in an initial coin offering? 29 November 2018. Cryptoassets: Legal and Monetary Perspectives, Oxford University Press, Forthcoming. Draft 24 February 2019.

19 In 1986, an amendment to the Gaming Act was approved to carve out OTC derivatives. However, the boom of OTC derivatives markets took place later, in 2000, once the United States had unwound all regulatory and supervisory checks for OTC derivative markets.

20 Bank for International Settlements. BIS quarterly review: international banking and financial markets development. December 2018.

21 Cabedo, Y. OTC regulatory reform: risks of the clearing obligation from a competition perspective. Risk & Regulation. London School of Economics, Centre for Analysis of Risk and Regulation. Summer 2016.

22 This fragmentation system had been implemented in 1933 with the adoption of the Glass-Steagall Act as a risk containment measure: if an investment bank failed, entities holding deposits would not be affected.

23 US banks could not provide banking services beyond the borders of their home state. This restriction was part of the dual banking system and was grounded in the US constitutional commitment to checks and balances and the control of monopolies. In 1994, the Riegle-Neal Act removed this territorial restriction, allowing banks to merge with banks in other states.

24 Federal Reserve Bank of Dallas. Annual Report. 2011.

25 Or, as some authors like to say, ‘too big to jail’.

26 Stigler, G. J. The theory of economic regulation. The Bell Journal of Economics and Management Science. Vol. 2 No. 1. Spring 1971.

27 ISDA (the International Swaps and Derivatives Association) brings together all major participants in OTC derivatives markets.

28 Tett, G. Fool’s Gold. Little, Brown, 2009, p. 36. Cited in Thomas, T. The 2008 global financial crisis: origins and response. 15th Malaysian Law Conference, 29–31 July 2010. Kuala Lumpur.

29 Congressional Oversight Panel. Special report on regulatory reform. January 2009.

30 Cabedo, Y. (2016).

31 Brandeis, L.D. The living law. 1917, p. 468.

32 BIS. Big tech in finance. Opportunities and risks. Annual Report 2019.

33 How to tame the tech titans. The Economist. 18 January 2018.

34 Stucke, M.E. Should we be concerned about data-opolies? 2 Geo. L. Tech. Rev. 275. 2018.

35 De Filippi, P.; Wright, A. (2018).

36 See Werbach, K. Trust, but verify: why the blockchain needs the law. Berkeley Technology Law Journal. Vol. 33, 2018.

37 The most commonly used are Proof of Work, Proof of Stake, Proof of Burn, Proof of Authority, Proof of Capacity and Proof of Storage, and new ones continue to be introduced. Depending on which consensus mechanism is chosen, users make different use of computational logic on the blockchain.
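To make the computational-logic point concrete, here is a minimal Python sketch of the Proof of Work idea: a miner searches for a nonce whose hash meets a difficulty target, which any other node can verify with a single hash. The header string, the leading-zeros difficulty measure and the function names are illustrative assumptions; real networks use a numeric target and far higher difficulty.

```python
import hashlib
from itertools import count

def proof_of_work(block_header, difficulty=4):
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zeros.

    Leading zeros are a toy stand-in for the numeric difficulty target
    used by real Proof of Work networks; the search loop is the same idea.
    """
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

if __name__ == "__main__":
    nonce, digest = proof_of_work("block 1 | prev=0000abcd | txs=...")
    # Finding the nonce takes many hashes; verifying it takes just one.
    print(nonce, digest)
```

Other mechanisms replace this brute-force search with different selection logic, for instance stake-weighted selection of validators under Proof of Stake, which is why the choice of consensus mechanism changes the computational burden placed on users.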

38 Auer, R. Beyond the doomsday economics of ‘proof-of-work’ in cryptocurrencies. BIS Working Papers No 765. January 2019.

39 ESMA. Advice on Initial Coin Offerings and Crypto-Assets. 9 January 2019.

40 Schrepel, T. Collusion by blockchain and smart contracts. Harvard Journal of Law & Technology. Vol. 33. Fall 2019.

41 ESMA (2019).

42 De Filippi, P.; Wright, A. (2018).

43 CFTC. Commissioner Brian Quintenz at the 38th Annual GITEX Technology Week Conference. Public Statements & Remarks, 16 October 2018.

44 ESMA. Advice on Initial Coin Offerings and Crypto-Assets. 9 January 2019.

45 ESMA. (2019).

46 A switch from public fiat toward private electronic money still leaves central banks unconvinced due to security, scalability and interoperability concerns. See Ward, O., Rochemont, S.; Understanding central bank digital currencies. Institute and Faculty of Actuaries. March 2019.

47 ESMA. (2019).

48 De Filippi, P.; Wright, A. (2018).

49 Lessig, L. Code and other laws of cyberspace. Perseus Books, 1999.

50 Malady, L., Buckley, R. P., Didenko, A., Tsang, C. A regulatory diagnostic toolkit for digital financial services in emerging markets. Banking & Finance Law Review, 34(1). 2018.

51 Lastra, R. M., Allen, J. G. Virtual currencies in the Eurosystem: challenges ahead. Study Requested by the ECON Committee, European Parliament. July 2018.

52 Rooney, K. SEC chief says agency won’t change securities laws to cater to cryptocurrencies, CNBC.com. 11 June 2018.

53 CFTC Release Number 8051–19: Chairman Tarbert Comments on Cryptocurrency Regulation at Yahoo! Finance All Markets Summit. 10 October 2019.

54 Decision of the Commercial Court of Nanterre (France) of 26 February 2020.

55 Andrew Singer, French court moves the BTC chess piece. How will regulators respond? 15 March 2020 https://cointelegraph.com/news/french-court-moves-the-btc-chess-piece-how-will-regulators-respond

56 Brummer, C.; Kiviat, T.; Massari, J. (2018).

57 Levine, M. The SEC gets a token fight. Bloomberg. 28 January 2019.

58 Whirty, T. Protecting innovation: the Kin case, litigating decentralization, and crypto disclosures. 4 February 2019. https://www.alt-m.org/2019/02/01/protecting-innovation-the-kin-case-litigating-decentralization-and-crypto-disclosures/

59 Brummer, C.; Kiviat, T.; Massari, J. (2018).

60 See SEC. Investigative report concluding DAO tokens, a digital asset, were securities. Release, 2017.

61 SEC. (2017).

62 SEC v. W. J. Howey Co. et al., 328 US 293 (1946). Decided 27 May 1946.

63 See US Securities and Exchange Commission. Framework for ‘investment contract’ analysis of digital assets. 3 April 2019.

64 Lastra, R. M.; Allen, J. G. (2018).

65 SEC. Two ICO issuers settle SEC registration charges, agree to register tokens as securities. Press release. 16 November 2018.

66 Whirty, T. (2019).

67 Morris, D. Z. How Kik’s looming SEC fight could define Blockchain’s future. Breakermag. 30 January 2019.

68 SEC v. Kik Interactive Inc. US District Court, Southern District of New York, Case No. 19-cv-5244. Filed 4 June 2019.

69 SEC v. Eran Eyal and United Data, Inc. doing business as ‘Shopin’, Case 1:19-cv-11325, filed 11 December 2019.

70 Nathan, D. Fraud is fraud – sales of unregistered digital securities resemble classic microcap fraud. JDSupra, 18 December 2019.

71 Brummer, C.; Kiviat, T.; Massari, J. (2018).

72 Brummer, C.; Kiviat, T.; Massari, J. (2018).

73 CFTC. Commissioner Brian Quintenz. (16 October 2018).

74 Frost, J.; Gambacorta, L.; Huang, Y.; Shin, H.; Zbinden, P. BigTech and the changing structure of financial intermediation. BIS. April 2019.

75 SEC. SEC Names Valerie A. Szczepanik Senior Advisor for Digital Assets and Innovation. Press release. 2018–102.

76 Dale, B. SEC’s Valerie Szczepanik at SXSW: Crypto ‘Spring’ Is Going to Come. Coindesk.com. 15 March 2019.

77 G20 Leaders Statement: The Pittsburgh Summit. 24–25 September 2009.

78 See OICV-IOSCO. Issues, risks and regulatory considerations relating to crypto-asset trading platforms. February 2020. See also a compilation of Regulators’ Statements on Initial Coin Offerings.

79 FATF. Guidance on a risk-based approach to virtual assets and virtual asset service providers. June 2019.

80 In the EU, enforcement powers remain with national authorities and the ESAs are mainly tasked with ensuring supervisory convergence. In specific cases, the ESAs are direct supervisors (e.g., ESMA in relation to trade repositories or credit-rating agencies).

81 Romano, R. Does agency structure affect agency decisionmaking? Yale Journal on Regulation. Vol. 36, 2019.

82 Kerwin, C. M.; Furlong, S. R. Rulemaking: how government agencies write law and make policy. 2011, p. 53. Cited in Romano (2019).

83 Romano (2019).

84 Landis, J. M. The administrative process. Yale University Press, 1938.

85 CFTC Release 8051–19 (2019).

86 Schrepel, T. (2019).

87 See European Supervisory Authorities. Report on FinTech: Regulatory sandboxes and innovation hubs. JC 2018–74.

88 European Supervisory Authorities. (2018).

89 Brandeis, L. D. The regulation of competition versus the regulation of monopoly. Address to the Economic Club of New York, 1 November 1912. Cited in Ballbé, M.; Martinez, R. (2010).

90 New State Ice Co. v. Liebmann, 285 US 262, 311 (1932) (Brandeis, J., dissenting).

91 Mangano, R. Recent developments: The sandbox of the UK FCA as win-win regulatory device? Banking and Finance Law Review, Vol. 34, No. 1. December 2018.

92 UK, Financial Conduct Authority, Regulatory sandbox lessons learned report FCA, 2017. Cited in Mangano, R. (2018).

93 It is the case, for example, of B2C2, an electronic OTC trading firm and crypto liquidity provider, authorized by the FCA to offer OTC derivatives on cryptos. See Khatri, Y. UK firm gets regulatory green light to offer crypto derivatives. Coindesk.com. 1 February 2019.

94 FCA Public statement. November 2017.

95 Legal scholar and former Administrator of the Office of Information and Regulatory Affairs for the Obama administration.

96 Solum, L. B.; Sunstein, C. R. Chevron as construction. Preliminary draft, 12 December 2018.

97 Chevron, U.S.A., Inc. v. Nat. Res. Def. Council, Inc., 467 US 837 (1984).

98 Shiller, R. The subprime solution. Princeton, 2008, p. 129.

99 Romano, R. (2019).

100 Warren, E. Real change: turning up the heat on non-bank lenders. The Huffington Post, 3 September 2009.

101 Lucia et al. v. Securities and Exchange Commission. US Supreme Court decision, June 2018.

102 See Lucia v. SEC. Harvard Law Review, 287. 1 May 2019.

103 See Ballbé, M.; Martinez, R.; Cabedo, Y. La crisis financiera causada por la deregulation de derecho administrativo americano. In Administración y justicia: un análisis jurisprudencial. Coords. García de Enterría, E.; Alonso, R. Madrid, Civitas, Vol. 2, 2012.

104 Refers to the federal government enacting legislation on a subject matter, thereby precluding the states from enacting laws on the same subject.

105 See the Truth in Lending Act of 1968.

106 Greenspan, A. International Financial Risk Management. Federal Reserve Board. 19 November 2002. Cited in Ballbé, M.; Martinez, R.; Cabedo, Y. (2012).

107 Congressional Oversight Panel. Special report on regulatory reform. January 2009.

108 US Supreme Court. Cuomo, Attorney General of New York v. Clearing House Association, L.L.C., et al., No. 08–453. April 2009.

109 On this case, see Ballbé, M.; Cabedo, Y. (2013) and (2012).

110 Mollenkamp, C., Whitehouse, M. Study casts doubt on key rate. WSJ analysis suggests banks may have reported flawed interest data for libor. The Wall Street Journal. 29 May 2008.

Figure 8.1 Basic structure of expert systems
Figure 8.2 Kinds of learning
Figure 8.3 Supervised learning
Figure 11.1 Turing’s experimental setup (Source: https://en.wikipedia.org/wiki/Turing_test)
