
Part I - Algorithms, Freedom, and Fundamental Rights

Published online by Cambridge University Press:  01 November 2021

Hans-W. Micklitz
European University Institute, Florence
Oreste Pollicino
Bocconi University
Amnon Reichman
University of California, Berkeley
Andrea Simoncini
University of Florence
Giovanni Sartor
European University Institute, Florence
Giovanni De Gregorio
University of Oxford


Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0

2 Fundamental Rights and the Rule of Law in the Algorithmic Society

Andrea Simoncini and Erik Longo
2.1 New Technologies and the Rise of the Algorithmic Society

New technologies offer human agents entirely new ways of doing things.Footnote 1 However, as history shows, ‘practical’ innovations always bring with them more significant changes. Each new option introduced by technological evolution, though it may seem to affect only form, affects the substance, eventually changing the way humans think and relate to each other.Footnote 2 This transformation is especially evident in information and communication technologies (ICT); as Marshall McLuhan observed, ‘the medium is the message’.Footnote 3 The scenario has been further accelerated by the appearance of artificial intelligence systems (AIS) based on the application of machine learning (ML).

These new technologies not only allow people to find information at an incredible speed; they also recast decision-making processes once in the exclusive remit of human beings.Footnote 4 By learning from vast amounts of data – the so-called Big Data – AIS offer predictions, evaluations, and hypotheses that go beyond the mere application of pre-existing rules or programs. They instead ‘induce’ their own rules of action from data analysis; in a word, they make autonomous decisions.Footnote 5

We have entered a new era, where big multinational firms (called ‘platforms’) use algorithms and artificial intelligence to govern vast communities of people.Footnote 6 Conversely, data generated by those platforms fuel the engine of the ‘Algorithmic Society’.Footnote 7

From this point of view, the Algorithmic Society is a distinctive evolution of the ‘Information Society’,Footnote 8 where a new kind of ‘mass-surveillance’ becomes possible.Footnote 9

This progress generates a mixture of excitement and anxiety.Footnote 10 The development of algorithms and artificial intelligence technologies is becoming ubiquitous, omnipresent, and seemingly omnipotent. They promise to eliminate our errors and make our decisions better suited for any purpose.Footnote 11

From this perspective, an old prophecy becomes reality: the one Herbert Marcuse formulated in One-Dimensional Man, one of the ‘red books’ of the massive socio-political movement usually known as ‘1968’. Marcuse opens that seminal book as follows:

A comfortable, smooth, reasonable, democratic unfreedom prevails in advanced industrial civilization, a token of technical progress.

Indeed, what could be more rational than the suppression of individuality in the mechanization of socially necessary but painful performances; … That this technological order also involves a political and intellectual coordination may be a regrettable and yet promising development. The rights and liberties which were such vital factors in the origins and earlier stages of industrial society yield to a higher stage of this society: they are losing their traditional rationale and content. …

To the degree to which freedom from want, the concrete substance of all freedom, is becoming a real possibility, the liberties which pertain to a state of lower productivity are losing their former content. … In this respect, it seems to make little difference whether the increasing satisfaction of needs is accomplished by an authoritarian or a non-authoritarian system.Footnote 12

If technology replaces all ‘socially necessary but painful performances’ – work included – personal freedom reaches its final fulfilment (that is, its very end). In Marcuse’s eyes, this is how technological power will take over our freedom and political system: not through a bloody ‘coup’ but by inducing people – practically and happily – to give up all their responsibilities.

However, this dystopic perspective – a future of ‘digital slavery’, in which men and women lose their liberty and quietly reject all democratic principlesFootnote 13 – produces a reaction. It is not by chance that the European Commission, while strategically endorsing the transformation of the EU into an AI-led economy, at the same time demands great attention to people’s trust and a high level of fundamental rights protection.Footnote 14

One of the most common areas where we experience the rise of these concerns is public and private security.Footnote 15 From the 2010s onward, a large part of technological innovation has focused on safety and control; the consequence has been an alarming increase in public and private surveillance, coupled with growing threats to political and civil liberties.Footnote 16 In addition, the global COVID-19 pandemic has doubtlessly boosted the already fast-growing ‘surveillance capitalism’.Footnote 17

At the beginning of the twenty-first century, there was increasing awareness of the risks of the new pervasive surveillance technologies. Today, however, hit by the pandemic and searching for practical tools to enforce social distancing or control policies, the general institutional and academic debate seems less worried by liberty-killing effects and more allured by health-preserving results.Footnote 18

Regardless, the most worrying challenges stem from the increasing power of algorithms, created through Big Data analytics such as machine learning and used to automate decision-making processes.Footnote 19 Their explicability,Footnote 20 liability, and culpability are still far from being clearly defined.Footnote 21 As a consequence, several scholars and policymakers are arguing, on the one hand, for aggressive regulation of tech firmsFootnote 22 (since classic antitrust law is unfit for this purpose) or, on the other, for procedural safeguards allowing people to challenge the decisions of algorithms that can have significant consequences on their lives (such as credit scoring systems).Footnote 23

2.2 The Impact of the Algorithmic Society on Constitutional Law

As we know, at its very origin, constitutional theory wrestles with the problem of power control.Footnote 24 Scholars commonly consider constitutional law to be that part of the legal system whose function is to legallyFootnote 25 delimit power.Footnote 26 In the ‘modern sense’,Footnote 27 this discipline establishes rules or builds institutions capable of shielding personal freedoms from external constraints.Footnote 28 According to this idea, constitutionalism has historically ‘adapted’ itself to power’s features; that is to say, the protection of freedoms in constitutions has been shaped by the evolving character of the threats to those same freedoms.Footnote 29

At the beginning of the modern era, the power to be feared was the king’s private force.Footnote 30 The idea of ‘sovereignty’, which appeared at the end of the Middle Ages, had its roots in the physical and military strength of the very person of the ‘Sovereign’.Footnote 31 Sovereignty evoked an ‘external power’Footnote 32 grounded on the monopoly (actual or potential) of the physical ‘force’Footnote 33 used against individuals or communities (e.g., ‘military force’ or the ‘force of law’).Footnote 34 Consequently, liberties were those dimensions of human life not subjected to that power (e.g., habeas corpus). As the offspring of the French and American Revolutions, the ‘rule of law’ doctrine was the main legal tool ‘invented’ by constitutional theory to delimit the king’s power and protect personal freedom and rights. To be ‘legitimate’, any power has to be subjected to the rule of law.

The other decisive turning point in the history of constitutionalism was World War II and the end of twentieth-century European totalitarian regimes. It may sound like a paradox, but those regimes showed that the ‘legislative state’, built on the supremacy of law and therefore exercising a ‘legitimate power’, can become another terrible threat to human freedom and dignity.

If the law itself has no limits, whenever it ‘gives’ a right, it can ‘withdraw’ it. This practice is the inhuman history of some European twentieth-century states that cancelled human dignity ‘through the law’.

With the end of World War II, a demolition process of those regimes began, and learning from the American constitutional experience, Europe transformed ‘flexible’ constitutions – until then, mere ordinary laws – into ‘rigid’ constitutions,Footnote 35 which are effectively the ‘supreme law’ of the land.Footnote 36

In this new scenario, the power that instils fear is no longer the king’s private prerogative; the new limitless force is the public power of state laws, and the constitutional tool intended to effectively regulate that power is vested in the new ‘rigid’ constitution: a superior law, ‘stronger’ than ordinary statutes and thus truly able to protect freedoms, at least apparently, even against legal acts.

With the turn of the twenty-first century, we witness the rise of a new kind of power. The advent of new digital technologies, as discussed previously, provides an unprecedented means of limiting and directing human freedom that has appeared on the global stage; a way based on not an ‘external’ force (as in the two previous constitutional scenarios, the private force of the king or the public ‘force of the law’) but rather an ‘internal’ force, able to affect and eventually substitute our self-determination ‘from inside’.Footnote 37

This technological power is at the origin of ‘platform capitalism’,Footnote 38 which is a vast economic transformation induced by the exponentially fast-growing markets of Internet-related goods and services – for example, smart devices (Apple, Samsung, Huawei, Xiaomi), web-search engines (Google), social media corporations (Facebook, Instagram, Twitter), cloud service providers (Amazon, Microsoft, Google), e-commerce companies (Amazon, Netflix), and social platforms (Zoom, Cisco Webex).

Consider that today,Footnote 39 the combined value of the S&P 500’s five most prominent companiesFootnote 40 stands at more than $7 trillion, accounting for almost 25 per cent of the market capitalization of the index – a picture of what a recent doctrine accurately defined as a ‘moligopoly’.Footnote 41

These ‘moligopolists’Footnote 42 are not only creating communities and benefitting from network effects generated by users’ transactions; they are also developing a de facto political authority and influence once reserved for legal and political institutions. More importantly, they are taking on configurations increasingly similar to the state and other public authorities.Footnote 43 Their structure reflects a fundamental shift in the political and legal systems of Western democracies – what has been called a new type of ‘functional sovereignty’.Footnote 44 Elsewhere we used the term ‘cybernetic power’,Footnote 45 which perhaps sounds like an old-fashioned expression. Still, its etymology (‘cyber’, in its original ancient Greek meaning,Footnote 46 shares the same linguistic root as ‘govern’ and ‘governance’) makes it a more accurate label for how automation and ICT have radically transformed our lives.

As algorithms begin to play a dominant role in the contemporary exercise of power,Footnote 47 it becomes increasingly important to examine the ‘phenomenology’ of this new sovereign power and its unique challenges to constitutional freedoms.

2.3 The ‘Algorithmic State’ versus Fundamental Rights: Some Critical Issues

As already stated, the main strength of algorithms is their practical convenience, so their interference with our freedom is not perceived as an ‘external’ constraint or a disturbing power. Instead, it is felt as evidence-based support for our decisions, capturing our autonomy by lifting the burden of deliberation.

Who would like to switch back to searching for information in volumes of an encyclopaedia? Who would want to filter their email for spam manually anymore? Who would like to use manual calculators instead of a spreadsheet when doing complex calculations? We are not just living in an increasingly automated world; we are increasingly enjoying the many advantages that come with it. Public administrations are using more and more algorithms to help public-sector functions, such as welfare, the labour market, tax administration, justice, crime prevention, and more. The use of algorithms in decision-making and adjudications promises more objectivity and fewer costs.

However, as we said, algorithms have a darker side, and the following chapters of this section of the book illustrate some of the facets of the Algorithmic State phenomenology.

The fast-growing use of algorithms in the fields of justice, policing, public welfare, and the like could end in biased and erroneous decisions, boosting inequality, discrimination, unfair consequences, and undermining constitutional rights, such as privacy, freedom of expression, and equality.Footnote 48

And these uses raise considerable concerns not only for the specific policy area in which they are deployed but also for our society as a whole.Footnote 49 There is an increasing perception that humans do not have complete control over Algorithmic State decision-making processes.Footnote 50 Although they often outperform analogue tools in prediction, algorithmic decisions are difficult to understand and explain (the so-called black box effect).Footnote 51 While producing highly effective practical outcomes, algorithmic decisions could undermine procedural and substantive guarantees related to democracy and the rule of law.

Issues related to the use of algorithms in decision-making processes are numerous and complex, and the debate is still at an early stage. A deeper understanding of how algorithms work when applied to legally sensitive decisions will therefore need to be developed soon.

In this section, we will examine four profiles of the use of algorithmic decisions: the relation between automation and due process, the so-called ‘emotional’ AI, the algorithmic bureaucracy, and predictive policing.

Due Process in the Age of AI

In Chapter 3, entitled ‘Inalienable Due Process in an Age of AI: Limiting the Contractual Creep toward Automated Adjudication’, Frank Pasquale argues that robust legal values must inform current efforts by judges and agencies to ‘fast track’ cases via statistical methods, machine learning, or artificial intelligence. First, he identifies four core features to be included in due process rights when algorithmic decisions are under consideration: the ‘ability to explain one’s case’, the ‘necessity of a judgment by a human decision-maker’, an ‘explanation for that judgment’, and an ‘ability to appeal’. Second, he argues that since legal automation threatens due process rights, we need proper countermeasures, such as explainability and algorithmic accountability. Courts should not accept legal automation, because, despite all good intentions, it could be a hazard for vulnerable and marginalized persons. In the last part of his chapter, Pasquale traces a way to stem the tide of automation in the fields of justice and administration, recalling Daniel Farber’s doctrine of ‘unconstitutional conditions’, which sets principles and procedures to block governments from requiring the waiver of a constitutional right as a condition of receiving some governmental benefit.Footnote 52

Far from advocating a return to an ‘analogue’ world, we agree with Frank Pasquale. In his chapter, he calls for a more robust and durable theory of constitutionalism to pre-empt the problems that may arise from automation. However, this is not sufficient: we also need a parallel theory and practice of computer science that considers the ethical values and constitutional rights involved in algorithmic reasoning and empowers officials to understand when and how to develop and deploy the technology.Footnote 53 Besides, it is necessary to maintain a ‘human-centric’ process in judging, for the sake of courts and citizens, who could be destroyed, as Pasquale warns, by the temptation of accelerating, abbreviating, and automating decisional processes.

Constitutional Challenges from ‘Empathic’ Media

Chapter 4, by Peggy Valcke, Damian Clifford, and Viltė Kristina Steponėnaitė, focuses on ‘Constitutional Challenges in the Emotional AI Era’. The emergence of ‘emotional AI’ – technologies capable of using computing and artificial intelligence techniques to sense, learn about, and interact with human emotional life (so-called ‘empathic media’)Footnote 54 – raises concerns and challenges for constitutional rights and values, particularly in the business-to-consumer context.Footnote 55

These technologies rely on various methods, including facial recognition, physiological measuring, voice analysis, body movement monitoring, and eye-tracking. The social media business leverages several of these techniques to quantify, track, and manipulate emotions in order to increase profits.

In addition to technical issues of ‘accuracy’, these technologies pose several concerns related to protecting the fundamental rights of consumers and of many other individuals, such as voters and ordinary people. As Peggy Valcke, Damian Clifford, and Viltė Kristina Steponėnaitė claim, emotional AI puts growing pressure on the whole range of fundamental rights involved in protection against the misuse of AI, such as privacy, data protection, respect for private and family life, non-discrimination, and freedom of thought, conscience, and religion.

Although the authors argue for the necessity of constitutional protection against the possible impacts of emotional AI on existing constitutional freedoms, they ask whether we need new rights in Europe in light of the growing practices of manipulation by algorithms and emotional AI. By highlighting the legal and ethical challenges of manipulation through emotional AI tools, the three authors suggest a new research agenda that harnesses the academic scholarship on dignity, individual autonomy, and self-determination to inquire into the need for further constitutional rights capable of preventing or deterring emotional manipulation.

Algorithmic Surveillance as a New Bureaucracy

In Chapter 5, entitled ‘Algorithmic Surveillance as a New Bureaucracy: Law Production by Data or Data Production by Law?’, Mariavittoria Catanzariti explores the vast topic of algorithmic administration. Her argument deals with the legitimation of administrative power, questioning the rise of a ‘new bureaucracy’ in Weberian terms. Like bureaucracy, algorithms have a rational power requiring obedience and excluding non-predictable choices. Whereas many aspects of public administration could undoubtedly benefit from applying machine learning algorithms, their substitution for human decisions would ‘create a serious threat to democratic governance, conjuring images of unaccountable, computerized overlords’.Footnote 56

Catanzariti points out that, with the private sector increasingly relying on the power of machine learning, public administrations and authorities keep pace and adopt the same rationale, giving birth to an automated form of technological rationality. The massive use of classification and measurement techniques affects human activity, generating new forms of power that standardize behaviours to induce specific conduct. The social power of algorithms is already visible in the operations of many governmental agencies in the United States.

While producing a faster administration, decision-making with algorithms is likely to generate multiple disputes. The effects of algorithmic administration are far from compliant with the rationality of law and administrative procedures. Indeed, the use of algorithms produces results that are not totally ‘explainable’ and that are often accused of being ‘obscure, crazy, wrong, in short, incomprehensible’.Footnote 57

As Catanzariti explains, algorithms are not neutral, and technology is not merely a ‘proxy’ for human decisions. Whenever an automated decision-making technology is included in a deliberative or administrative procedure, it tends to ‘capture’ the process of deciding, or at least to make ignoring it extremely difficult. Consequently, the author argues that law production by data ‘is not compatible with Weberian legal rationality’; or, as we have claimed, automation, far from being a mere ‘slave’, unveils its true nature as the ‘master’ of decision-making whenever it is employed, due to its ‘practical appeal’.Footnote 58 Indeed, algorithms put a subtle but potent spell on administrations: by using them, you save work and time and, above all, you are relieved of the burden of giving reasons. Yet is this type of algorithmic administration really accountable? Coming back to Frank Pasquale’s question, are ‘due process’ principles effectively applicable to this kind of decision?

Predictive Policing

Finally, Chapters 6 and 7, ‘Human Rights and Algorithmic Impact Assessment for Predictive Policing’ by Céline Castets-Renard and ‘Law Enforcement and Data-Driven Predictions at the National and EU Level: A Challenge to the Presumption of Innocence and Reasonable Suspicion?’ by Francesca Galli, touch upon the issue of law enforcement and technology.Footnote 59 The first addresses the dilemma of human rights challenged by ‘predictive policing’ and the use of new tools such as ‘Algorithmic Impact Assessment’ to mitigate the risks of such systems. The second explores the potential transformation of core principles of criminal law and asks whether the techniques of a data-driven society may hamper the substance of legal protection. Both authors argue for the necessity of protecting fundamental rights against the possible increase in coercive control of individuals, and for the development of a regulatory framework that adds new layers of fundamental rights protection based on ethical principles and other practical tools.

In some countries, police authorities have been granted sophisticated surveillance technologies and much more intrusive investigative powers to reduce crime by mapping the likely locations of future unlawful conduct so that the deployment of police resources can be more effective.Footnote 60

Here again, the problem concerns the reliability and sustainability of decisions by intelligent machines and their consequences for the rights of individuals and groups.Footnote 61 Machine learning and other algorithmic tools can now correlate multiple variables in a data set and then predict behaviours. Such technologies open new scenarios for information gathering, monitoring, surveilling, and profiling criminal behaviour. The risk is that predictive policing represents more than a simple shift in tools and could result in less effective and perhaps even discriminatory police interventions.Footnote 62

2.4 The Effects of the ‘Algorithmic State’ on the Practice of Liberty

In synthesizing some of the most critical issues that the advent of what we call the Algorithmic State raises for the practice of constitutional liberties, two main sensitive areas emerge: surveillance and freedom.

Surveillance
As we have already seen, the rise of the Algorithmic State has produced the change foreseen more than forty years ago by Herbert Marcuse. In general, technology is improving people’s lives. However, this improvement comes at a ‘price’. We are increasingly dependent on big-tech platform services, even though it is clear that they make huge profits from our data. They promise to unchain humans from needs and necessities, but they themselves are becoming indispensable.

Therefore, we take for granted that the cost of gaining such benefits – security, efficiency, protection, rewards, and convenience – is consenting to our personal data being recorded, stored, retrieved, cross-referenced, traded, and exchanged through surveillance systems. Arguing that people usually have no reason to question surveillance (the ‘nothing to hide’ misconception)Footnote 63 strengthens the order built by the system, and people become ‘normalized’ (as Foucault would have said).Footnote 64

Because of this massive use of technology, we are now subject to a new form of surveillance, which profoundly impacts individual freedom, as it is both intrusive and invasive in private life.Footnote 65 Both explicit and non-explicit forms of surveillance extend to virtually all forms of human interaction.Footnote 66

As the EU Court of Justice pointed out, mass surveillance can be produced by both governments and private companies. This is likely to create ‘in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance’.Footnote 67 In both cases, we have a kind of intrusive surveillance on people’s lives, and this is evidence of individuals’ loss of control over their personal data.

Freedom
This process also affects the very idea of a causal link between individual or collective actions and their consequences, and therefore the core notion of our freedom. Replacing causation with correlation profoundly affects the fundamental distinction embedded in our moral and legal theory between instruments and ends.Footnote 68 Today’s cybernetic power is no longer just an instrument to achieve ends decided by human agents. Machines make decisions autonomously on behalf of the person, thus interfering with human freedom.

As is clearly described in the following chapters, human agents (individual or collective) explicitly delegate to automated systems the power to make decisions or express assessments on their behalf (judicial support systems, algorithmic administration, emotional assessments, policing decisions). But we must be aware of another crucial dimension of that substitution.

There are two ways to capture human freedom. The first, as in the cases noted previously, occurs whenever we ask a technological system to decide directly on our behalf (we reduce our self-determination to the choice of our proxy); the second, when we ask automated machinery to provide the information upon which we take a course of action. Knowledge always shapes our freedom. One key factor (although not the only one) influencing our decisions is the information background we have. Deciding to drive one route rather than another to reach our destination is usually affected by the information we have on traffic or roadworks; the choice to vote for one political candidate instead of another depends on the information we get about his or her campaign or ideas. If we ask ourselves which channel we will use today to get information about the world beyond our direct experience, the answer, more than 80 per cent of the time, will be the Internet.Footnote 69

Automated technological systems increasingly provide knowledge.Footnote 70 Simultaneously, ‘individual and collective identities become conceivable as fluid, hybrid and constantly evolving’ as the result of ‘continuous processes bringing together humans, objects, energy flows, and technologies’.Footnote 71 This substitution profoundly impacts the very idea of autonomy as it emerged in the last two centuries and basically alters the way people come to make decisions, have beliefs, or take action.

In this way, two distinctive elements of our idea of violations of freedom seem to change or disappear in the Algorithmic Society. In the first case – when we explicitly ask technology to decide on our behalf – we cannot say that the restriction of our freedom is unwanted or involuntary, because we ourselves consented to it. We expressly ask those technologies to decide, assuming they are ‘evidence-based’, more effective, more neutral, science-oriented, and so forth. Therefore, we cannot say that our freedom has been violated against our will or self-determination, given that we expressly asked those systems to make our decisions.

On the other hand, when our decisions are taken on the informative basis provided by technology, we can no longer say that such threats to our liberty are ‘external’; as a matter of fact, when we trust information taken from the Internet (from web search engines, like Google, or from social media, like Facebook or Twitter), there is no apparent coercion, no violence. That information is simply welcomed as a sound and valid basis for our deliberations. Yet there is a critical point here. We trust web-sourced information provided by platforms, assuming it is scientifically accurate or at least trustworthy. However, this trust has nothing to do with science or education. Platforms simply use powerful algorithms that learn behavioural patterns from previous preferences and help individuals or groups filter the overwhelming alternatives of daily life. These algorithms appear accurate and helpful in their predictions largely because they confirm – feeding a ‘confirmation bias’Footnote 72 – our beliefs or, worse, our ideological positions (the ‘bubble effect’).Footnote 73

There is something deeply problematic, philosophically and legally, about restricting people’s freedom based on predictions about their conduct. Liberal and communitarian doctrines alike require, as an essential condition of a just society, not only the absence of coercion but also independence and capacity in acting; from this point of view, the new algorithmic decision-making affects the very basis of both theories. As Lawrence Lessig wrote, we have experienced, through cyberspace, a ‘displacement of a certain architecture of control and the substitution with an apparent freedom.’Footnote 74

2.5 Towards the Algorithmic State Constitution: A ‘Hybrid’ Constitutionalism

Surveillance capitalism and the new algorithmic threats to liberty share a common feature: once a new technology has appeared, it is often too late for the legal system to intervene. The gradual anticipation of privacy protection, from subsequent to preventive (from protection by regulation, to protection ‘by design’, and finally ‘by default’), traces exactly this ‘backwards’ trajectory. This is the main feature of Algorithmic State constitutionalism.

It is necessary to incorporate the values of constitutional rights at the ‘design stage’ of the machines; for this, we need what we would define as a ‘hybrid’ constitutional law – that is, a constitutional law that still aims to protect fundamental human rights and at the same time knows how to express this goal in the language of technology.Footnote 75 Here the space for effective dialogue is still largely unexplored, and consequently the rate of ‘hybridization’ is still extraordinarily low.

We argue that, after the season of protection by design and by default, a new season ought to be opened – that of protection ‘by education’: it is necessary to act while scientists and technologists are still studying and training, to communicate the fundamental reasons for general principles such as personal data protection, human dignity, and the protection of freedom, but also for more specific values such as the explainability of decision-making algorithms or the ‘human in the loop’ principle.

Technology is increasingly integrated with the life of the person, and this integration cannot realistically be stopped; nor would it be desirable to stop it, given the enormous importance some new technologies have had for human progress.

The only possible way, therefore, is to ensure that the value (i.e., the meaning) of protecting the dignity of the person and his or her freedom becomes an integral part of the training of those who will then become technicians. Hence the decisive role of school, university, and other training agencies, professional or academic associations, as well as the role of soft law.

3 Inalienable Due Process in an Age of AI: Limiting the Contractual Creep toward Automated Adjudication

Frank Pasquale
3.1 Introduction

Automation is influencing ever more fields of law. The dream of disruption has permeated the US and British legal academies and is making inroads in Australia and Canada, as well as in civil law jurisdictions. The ideal here is law as a product, simultaneously mass producible and customizable, accessible to all and personalized, openly deprofessionalized.Footnote 1 This is the language of idealism, so common in discussions of legal technology – the Dr. Jekyll of legal automation.

But the shadow side of legal tech also lurks behind many initiatives. Legal disruption’s Mr. Hyde advances the cold economic imperative to shrink the state and its aid to the vulnerable. In Australia, the Robodebt system of automated benefit overpayment adjudication clawed back funds from beneficiaries on the basis of flawed data, false factual assumptions, and misguided assumptions about the law. In Michigan, in the United States, a similar program (aptly named “MIDAS,” for Michigan Integrated Data Automated System) “charged more than 40,000 people, billing them about five times the original benefits” – and it was later discovered that 93 percent of the charges were erroneous.Footnote 2 Meanwhile, global corporations are finding the automation of dispute settlement a convenient way to cut labor costs. This strategy is particularly tempting on platforms, which may facilitate millions of transactions each day.

When long-standing appeals to austerity and business necessity are behind “access to justice” initiatives to promote online dispute resolution, some skepticism is in order. At the limit, jurisdictions may be able to sell off their downtown real estate, setting up trusts to support a rump judicial system.Footnote 3 To be sure, even online courts require some staffing. But perhaps an avant-garde of legal cost cutters will find some inspiration from US corporations, which routinely decide buyer versus seller disputes in entirely opaque fashion.Footnote 4 In China, a large platform has charged “citizen juries” (who do not even earn money for their labor but, rather, reputation points) with deciding such disputes. Build up a large enough catalog of such encounters, and a machine learning system may even be entrusted with deciding disputes based on past markers of success.Footnote 5 A complainant may lose credibility points for nervous behavior, for example, or gain points on the basis of long-standing status as someone who buys a great deal of merchandise or pays taxes in a timely manner.

As these informal mechanisms become more common, they will test the limits of due process law. As anyone familiar with the diversity of administrative processes will realize, there is enormous variation at present in how much opportunity a person has to state their case, to demand a written explanation for a final (or intermediate) result, and to appeal. A black lung benefits case differs from a traffic violation, which in turn differs from an immigration case. Courts permit agencies a fair amount of flexibility to structure their own affairs. Agencies will, in all likelihood, continue to pursue an agenda of what Julie Cohen has called “neoliberal managerialism” as they reorder their processes of investigation, case development, and decision-making.Footnote 6 That will, in turn, bring in more automated and “streamlined” processes, which courts will be called upon to accommodate.

While judicial accommodations of new agency forms are common, they are not automatic. At some point, agencies will adopt automated processes that courts can only recognize as simulacra of justice. Think, for instance, of an anti-trespassing robot equipped with facial recognition, which could instantly identify and “adjudicate” a person overstepping a boundary and text that person a notice of a fine. Or a rail ticket monitoring system that would instantly convert notice of a judgment against a person into a yearlong ban on the person buying train tickets. Other examples might be less dramatic but also worrisome. For example, consider the possibility of “mass claims rejection” for private health care providers seeking government payment for services rendered to persons with government-sponsored health insurance. Such claims processing programs may simply compare a set of claims to a corpus of past denied claims, sort new claimants’ documents into categories, and then reject them without human review.
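The “mass claims rejection” scenario sketched above can be made concrete with a deliberately simplified illustration. The following Python sketch is hypothetical: the data, the token-overlap similarity measure, and the rejection threshold are all invented for exposition, and no actual agency system is being described. It shows how a program might compare a new claim to a corpus of past denied claims and reject it without any human ever seeing it.

```python
# Hypothetical sketch of "mass claims rejection": a new claim is rejected
# automatically when it resembles past denied claims closely enough.
# All data, the similarity measure, and the threshold are invented.

def jaccard(a: set, b: set) -> float:
    """Token-overlap similarity between two claims (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def auto_adjudicate(claim: str, denied_corpus: list, threshold: float = 0.5) -> str:
    """Reject without human review if the claim looks like a past denial."""
    tokens = set(claim.lower().split())
    for past in denied_corpus:
        if jaccard(tokens, set(past.lower().split())) >= threshold:
            return "REJECTED"          # no explanation, no appeal path
    return "ROUTED TO HUMAN REVIEW"

denied = [
    "home visit billed without physician referral",
    "duplicate claim for same date of service",
]
print(auto_adjudicate("duplicate claim for same service date", denied))
```

The point of the sketch is what is missing: the decision is a bare label, with no statement of reasons and no opportunity to be heard, which is precisely the simulacrum of justice the text warns against.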

In past work, I have explained why legislators and courts should reject most of these systems, and should always be wary of claims that justice can be automated.Footnote 7 And some initial jurisprudential stirrings are confirming that normative recommendation. For example, there has been a backlash against red-light cameras, which automatically cite drivers for failing to obey traffic laws. And even some of those who have developed natural language processing for legal settings have cautioned that they are not to be used in anything like a trial setting. These concessions are encouraging.

And yet there is another danger lurking on the horizon. Imagine a disability payment scheme that offered something like the following “contractual addendum” to beneficiaries immediately before they began receiving benefits:

The state has a duty to husband resources and to avoid inappropriate payments. By signing below, you agree to the following exchange. You will receive $20 per month extra in benefits, in addition to what you are statutorily eligible for. In exchange, you agree to permit the state (and any contractor it may choose to employ) to review all your social media accounts, in order to detect behavior indicating you are fit for work. If you are determined to be fit for work, your benefits will cease. This determination will be made by a machine learning program, and there will be no appeal.Footnote 8

There are two diametrically opposed ways of parsing such a contract. For many libertarians, the right to give up one’s rights (here, to a certain level of privacy and appeals) is effectively the most important right, since it enables contracting parties to eliminate certain forms of interference from their relationship. By contrast, for those who value legal regularity and due process, this “addendum” is anathema. Even if it is possible for the claimant to re-apply after a machine learning system has stripped her of benefits, the process offends the dignity of the claimant. A person must pass on whether such a grave step is to be taken.

These divergent approaches are mirrored in two lines of US Supreme Court jurisprudence. On the libertarian side, the Court has handed down a number of rulings affirming the “right” of workers to sign away certain rights at work, or at least the ability to contest their denial in court.Footnote 9 Partisans of “disruptive innovation” may argue that startups need to be able to impose one-sided terms of service on customers, so that investors will not be deterred from financing them. Exculpatory clauses have spread like kudzu, beckoning employers with the jurisprudential equivalent of a neutron bomb: the ability to leave laws and regulations standing, without any person capable of enforcing them.

On the other side, the Supreme Court has also made clear that the state must be limited in the degree to which it can structure entitlements when it is seeking to avoid due process obligations. A state cannot simply define an entitlement to, say, disability benefits, by folding into the entitlement itself an understanding that it can be revoked for any reason, or no reason at all. On this dignity-centered approach, the “contractual addendum” posited above is not merely one innocuous add-on, a bit of a risk the claimant must endure in order to engage in an arms’ length exchange for $20. Rather, it undoes the basic structure of the entitlement, which included the ability to make one’s case to another person and to appeal an adverse decision.

If states begin to impose such contractual bargains for automated administrative determinations, the “immoveable object” of inalienable due process rights will clash with the “irresistible force” of legal automation and libertarian conceptions of contractual “freedom.” This chapter explains why legal values must cabin (and often trump) efforts to “fast track” cases via statistical methods, machine learning (ML), or artificial intelligence. Section 3.2 explains how due process rights, while flexible, should include four core features in all but the most trivial or routine cases: the ability to explain one’s case, a judgment by a human decision maker, an explanation for that judgment, and the ability to appeal. Section 3.3 demonstrates why legal automation often threatens those rights. Section 3.4 critiques potential bargains for legal automation and concludes that the courts should not accept them. Vulnerable and marginalized persons should not be induced to give up basic human rights, even if some capacious and abstract versions of utilitarianism project they would be “better off” by doing so.

3.2 Four Core Features of Due Process

Like the rule of law, “due process” is a multifaceted, complex, and perhaps even essentially contested concept.Footnote 10 As J. Roland Pennock has observed, the “roots of due process grow out of a blend of history and philosophy.”Footnote 11 While the term itself is a cornerstone of the US and UK legal systems, it has analogs in both common law and civil law systems around the world.

While many rights and immunities have been evoked as part of due process, it is important to identify a “core” conception of it that should be inalienable in all significant disputes between persons and governments. We can see this grasping for a “core” of due process in some US cases, where the interest at stake was relatively insignificant but the court still decided that the person affected by government action had to have some opportunity to explain him or herself and to contest the imposition of a punishment. For example, in Goss v. Lopez, students who were accused of misbehavior were suspended from school for ten days. The students claimed they were due some kind of hearing before suspension, and the Supreme Court agreed:

We do not believe that school authorities must be totally free from notice and hearing requirements if their schools are to operate with acceptable efficiency. Students facing temporary suspension have interests qualifying for protection of the Due Process Clause, and due process requires, in connection with a suspension of 10 days or less, that the student be given oral or written notice of the charges against him and, if he denies them, an explanation of the evidence the authorities have and an opportunity to present his side of the story.Footnote 12

This is a fair encapsulation of some core practices of due process, which may (as the stakes rise) become supplemented by all manner of additional procedures.Footnote 13

One of the great questions raised by the current age of artificial intelligence (AI) is whether the notice and explanation of the charges (as well as the opportunity to be heard) must be discharged by a human being. So far as I can discern, no ultimate judicial authority has addressed this particular issue in the due process context. However, given that the entire line of case law arises in the context of humans confronting other humans, it takes no great stretch of the imagination to find such a requirement immanent in the enterprise of due process.

Moreover, legal scholars Kiel Brennan-Marquez and Stephen Henderson argue that “in a liberal democracy, there must be an aspect of ‘role-reversibility’ to judgment. Those who exercise judgment should be vulnerable, reciprocally, to its processes and effects.”Footnote 14 The problem with robot or AI judges is that they cannot experience punishment the way that a human being would. Role-reversibility is necessary for “decision-makers to take the process seriously, respecting the gravity of decision-making from the perspective of affected parties.” Brennan-Marquez and Henderson derive this principle from basic principles of self-governance:

In a democracy, citizens do not stand outside the process of judgment, as if responding, in awe or trepidation, to the proclamations of an oracle. Rather, we are collectively responsible for judgment. Thus, the party charged with exercising judgment – who could, after all, have been any of us – ought to be able to say: This decision reflects constraints that we have decided to impose on ourselves, and in this case, it just so happens that another person, rather than I, must answer to them. And the judged party – who could likewise have been any of us – ought to be able to say: This decision-making process is one that we exercise ourselves, and in this case, it just so happens that another person, rather than I, is executing it.

Thus, for Brennan-Marquez and Henderson, “even assuming role-reversibility will not improve the accuracy of decision-making; it still has intrinsic value.”

Brennan-Marquez and Henderson are building on a long tradition of scholarship that focuses on the intrinsic value of legal and deliberative processes, rather than their instrumental value. For example, applications of the US Supreme Court’s famous Mathews v. Eldridge calculus have frequently failed to take into account the effects of abbreviated procedures on claimants’ dignity.Footnote 15 Bureaucracies, including the judiciary, have enormous power. They owe litigants a chance to plead their case to someone who can understand and experience, on a visceral level, the boredom and violence portended by a prison stay, the “brutal need” resulting from the loss of benefits (as put in Goldberg v. Kelly), the sense of shame that liability for drunk driving or pollution can give rise to. And as the classic Morgan v. United States held, even in complex administrative processes, the one who hears must be the one who decides. It is not adequate for persons to play mere functionary roles in an automated judiciary, gathering data for more authoritative machines. Rather, humans must take responsibility for critical decisions made by the legal system.

This argument is consistent with other important research on the dangers of giving robots legal powers and responsibilities. For example, Joanna Bryson, Mihailis Diamantis, and Thomas D. Grant have warned that granting robots legal personality raises the disturbing possibility of corporations deploying “robots as liability shields.”Footnote 16 A “responsible robot” may deflect blame or liability from the business that set it into the world. This is dangerous because the robot cannot truly be punished: it lacks human sensations of regret or dismay at loss of liberty or assets. It may be programmed to look as if it is remorseful upon being hauled into jail, or to frown when any assets under its control are seized. But these are simulations of human emotion, not the thing itself. Emotional response is one of many fundamental aspects of human experience that is embodied. And what is true of the robot as an object of legal judgment is also true of robots or AI as potential producers of such judgments.

3.3 How Legal Automation and Contractual Surrender of Rights Threaten Core Due Process Values

There is increasing evidence that many functions of the legal system, as it exists now, are very difficult to automate.Footnote 17 However, as Cashwell and I warned in 2015, the legal system is far from a stable and defined set of tasks to complete. As various interest groups jostle to “reform” legal systems the range of procedures needed to finalize legal determinations may shrink or expand.Footnote 18 There are many ways to limit existing legal processes, or simplify them, in order to make it easier for computation to replace or simulate them. The clauses mentioned previously – forswearing appeals of judgments generated or informed by machine learning or AI – would make non-explainable AI far easier to implement in legal systems.

This type of “moving the goalposts” may be accelerated by extant trends toward neoliberal managerialism in public administration.Footnote 19 This approach to public administration is focused on throughput, speed, case management, and efficiency. Neoliberal managerialists urge the public sector to learn from the successes of the private sector in limiting spending on disputes. One option here is simply to outsource determinations to private actors – a move widely criticized elsewhere.Footnote 20 I am more concerned here with a contractual option: to offer to beneficiaries of government programs an opportunity for more or quicker benefits, in exchange for an agreement not to pursue appeals of termination decisions, or to accept their automated resolution.

I focus on the inducement of quicker or more benefits, because it appears to be settled law (at least in the US) that such restrictions of due process cannot be embedded into benefits themselves. A failed line of US Supreme Court decisions once attempted to restrict claimants’ due process rights by insisting that the government can create property entitlements with no due process rights attached. On this reasoning, a county might grant someone benefits with the explicit understanding that they could be terminated at any time without explanation: the “sweet” of the benefits could include the “bitter” of sudden, unreasoned denial of them. In Cleveland Board of Education v. Loudermill (1985), the Court finally discarded this line of reasoning, forcing some modicum of reasoned explanation and process for termination of property rights.

What is less clear now is whether side deals might undermine the delicate balance of rights struck by Loudermill. In the private sector, companies have successfully routed disputes with employees out of process-rich Article III courts, and into stripped-down arbitral forums, where one might even be skeptical of the impartiality of decision-makers.Footnote 21 Will the public sector follow suit? Given some current trends in the foreshortening of procedure and judgment occasioned by public sector automation, the temptation will be great.

These concerns are a logical outgrowth of a venerable literature critiquing rushed, shoddy, and otherwise improper automation of legal decision-making. In 2008, Danielle Keats Citron warned that states were cutting corners by deciding certain benefits (and other) claims automatically, on the basis of computer code that did not adequately reflect the complexity of the legal code it claimed to have reduced to computation.Footnote 22 Virginia Eubanks’s Automating Inequality has identified profound problems in governmental use of algorithmic sorting systems. Eubanks tells the stories of individuals who lose benefits, opportunities, and even custody of their children, thanks to algorithmic assessments that are inaccurate or biased. Eubanks argues that complex benefits determinations are not something well-meaning tech experts can “fix.” Instead, the system itself is deeply problematic, constantly shifting the goal line (in all too many states) to throw up barriers to access to care.

A growing movement for algorithmic accountability is both exposing and responding to these problems. For example, Citron and I coauthored work setting forth some basic procedural protections for those affected by governmental scoring systems.Footnote 23 The AI Now Institute has analyzed cases of improper algorithmic determinations of rights and opportunities.Footnote 24 And there is a growing body of scholarship internationally exploring the ramifications of computational dispute resolution.Footnote 25 As this work influences more agencies around the world, it is increasingly likely that responsible leadership will ensure that a certain baseline of due process values applies to automated decision-making.

Though they are generally optimistic about the role of automation and algorithms in agency decision-making, Coglianese and Lehr concede that one “due process question presented by automated adjudication stems from how such a system would affect an aggrieved party’s right to cross-examination. … Probably the only meaningful way to identify errors would be to conduct a proceeding in which an algorithm and its data are fully explored.”Footnote 26 This type of examination is at the core of Keats Citron’s concept of technological due process. It would require something like a right to an explanation of the automated profiling at the core of the decision.Footnote 27

3.4 Due Process, Deals, and Unraveling

However, all such protections could be undone. The ability to explain oneself, and to hear reasoned explanations in turn, is often framed as being needlessly expensive. This expense of legal process (or administrative determinations) has helped fuel a turn to quantification, scoring, and algorithmic decision procedures.Footnote 28 A written evaluation of a person (or comprehensive analysis of future scenarios) often requires subtle judgment, exactitude in wording, and ongoing revision in response to challenges and evolving situations. A pre-set formula based on limited, easily observable variables is far easier to calculate.Footnote 29 Moreover, even if individuals are due certain explanations and hearings as part of law, they may forego them in some contexts.

This type of rights waiver has already been deployed in some contexts. Several states in the United States allow unions to waive the due process rights of public employees.Footnote 30 We can also interpret some Employee Retirement Income Security Act (ERISA) jurisprudence as an endorsement and approval of a relatively common situation in the United States: employees effectively signing away a right to a more substantive and searching review of adverse benefit scope and insurance coverage determinations via an agreement to participate in an employer-sponsored benefit plan. The US Supreme Court has gradually interpreted ERISA to require federal courts to defer to plan administrators, echoing the deference due to agency administrators, and sometimes going beyond it.Footnote 31

True, Loudermill casts doubt on arrangements for government benefits premised on the beneficiary’s sacrificing due process protections. However, a particularly innovative and disruptive state may decide that the opinion is silent as to the baseline of what constitutes the benefit in question, and leverage that ambiguity. Consider a state that guaranteed health care to a certain category of individuals, as a “health care benefit.” Enlightened legislators further propose that the disabled, or those without robust transport options, should also receive assistance with respect to transportation to care. Austerity-minded legislators counter with a proviso: to receive transport assistance in addition to health assistance, beneficiaries need to agree to automatic adjudication of a broad class of disputes that might arise out of their beneficiary status.

The automation “deal” may also arise out of long-standing delays in receiving benefits. For example, in the United States, there have been many complaints by disability rights groups about the delays encountered by applicants for Social Security Disability Benefits, even when they are clearly entitled to them. On the other side of the political spectrum, some complain that persons who are adjudicated as disabled, and then regain the capacity to work, are able to keep benefits for too long. This concern (and perhaps some mix of cruelty and indifference) motivated British policy makers who promoted “fit for work” reviews by private contractors.Footnote 32

It is not hard to see how the “baseline” of benefits might be defined narrowly, and all future benefits would be conditioned in this way. Nor are procedures the only constitution-level interest that may be “traded away” for faster access to more benefits. Privacy rights may be on the chopping block as well. In the United States, the Trump administration proposed reviews of the social media of persons receiving benefits.Footnote 33 The presumption of such review is that a picture of, say, a self-proclaimed depressed person smiling, or a self-proclaimed wheelchair-bound person walking, could alert authorities to potential benefits fraud. And such invasive surveillance could again feed into automated review, triggered by such “suspicious activity” in much the way that “suspicious activity reports” activate investigations at US fusion centers.

What is even more troubling about these dynamics is the way in which “preferences” to avoid surveillance or preserve procedural rights might themselves become new data points for suspicion or investigation. A policymaker may wonder about the persons who refuse to accept the new due-process-lite “deal” offered by the state: What have they got to hide? Why are they so eager to preserve access to a judge and the lengthy process that may entail? Do they know some discrediting fact about their own status that we do not, and are they acting accordingly? Reflected in the economics of information as an “adverse selection” problem, this kind of speculative suspicion may become widespread. It may also arise as a byproduct of machine learning: those who refuse to relinquish privacy or procedural rights may, empirically, turn out to be more likely to pose problems for the system, or to warrant non-renewal of benefits, than those who trade away those rights. Black-boxed flagging systems may silently incorporate such data points into their own calculations.

The “what have you got to hide” rationale leads to a phenomenon deemed “unraveling” by economists of information. This dynamic has been extensively analyzed by the legal scholar Scott Peppet. The bottom line of Peppet’s analysis is that every individual decision to reveal something about himself or herself may also create social circumstances that pressure others to also disclose. For example, if only a few persons tout their grade point average (GPA) on their resumes, that disclosure may merely be an advantage for them in the job-seeking process. However, once 30 percent, 40 percent, 50 percent, or more of job-seekers include their GPAs, human resources personnel reviewing the applications may wonder about the motives of those who do not. If they assume the worst about non-revealers, it becomes a rationale for all but the very lowest GPA holders to reveal their GPA. Those at, say, the thirtieth percentile, reveal their GPA to avoid being confused with those in the twentieth or tenth percentile, and so on.
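Peppet’s unraveling dynamic can be illustrated with a toy simulation. All numbers here are invented for exposition: candidates hold scores from 1 to 100, and evaluators are assumed to attribute to any non-discloser the average score of the remaining silent pool, so each round makes disclosure rational for everyone above that shrinking average.

```python
# Toy simulation of informational "unraveling" (illustrative numbers only).
# Candidates hold scores 1..100. Evaluators assume any non-discloser has
# the average score of the remaining silent pool, so everyone above that
# average prefers to disclose. Iterate until no one else wants to reveal.

scores = range(1, 101)
silent = set(scores)            # everyone starts out not disclosing

while True:
    pool_avg = sum(silent) / len(silent)
    revealers = {s for s in silent if s > pool_avg}
    if not revealers:           # fixed point: no one gains by disclosing
        break
    silent -= revealers

print(sorted(silent))           # only the very bottom of the pool stays silent
```

Under these stylized assumptions the silent pool collapses round after round until only the lowest score remains undisclosed, which is the formal shape of the “what have you got to hide” pressure described in the text.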

This model of unraveling parallels similar arguments in feminist theory. For example, Catharine MacKinnon insisted that the “personal is political,” in part because any particular family’s division of labor helped either reinforce or challenge dominant patterns.Footnote 34 A mother may choose to quit work and stay home to raise her children, while her husband works fifty hours a week, and that may be an entirely ethical choice for her family. However, it also helps reinforce patterns of caregiving and expectations in that society which track women into unpaid work and men into paid work. It not merely accommodates but also promotes gendered patterns of labor.Footnote 35 Like a path through a forest trod ever clearer of debris, it becomes the natural default.

This inevitably social dimension of personal choice also highlights the limits of liberalism in addressing due process trade-offs. Civil libertarians may fight the direct imposition of limitations of procedural or privacy rights by the state. However, “freedom of contract” may itself be framed as a civil liberties issue. If a person in great need wants immediate access to benefits, in exchange for letting the state monitor his social network feed (and automatically terminate benefits if suspect pictures are posted), the bare rhetoric of “freedom” also pulls in favor of permitting this deal. We need a more robust and durable theory of constitutionalism to preempt the problems that may arise here.

3.5 Backstopping the Slippery Slope toward Automated Justice

As the spread of plea bargaining in the United States shows, there is a clear and present danger of the state using its power to make an end-run around protections established in the constitution and guarded by courts. When a prosecutor threatens a defendant with a potential hundred-year sentence in a trial, or a plea for five to eight years, the coercion is obvious. By comparison, given the sclerotic slowness of much of the US administrative state, giving up rights in order to accelerate receipt of benefits is likely to seem to many liberals a humane (if tough) compromise.

Nevertheless, scholars should resist this “deal” by further developing and expanding the “unconstitutional conditions” doctrine. Daniel Farber deftly explicates the basis and purpose of the doctrine:

[One] recondite area of legal doctrine [concerns] the constitutionality of requiring waiver of a constitutional right as a condition of receiving some governmental benefit. Under the unconstitutional conditions doctrine, the government is sometimes, but by no means always, blocked from imposing such conditions on grants. This doctrine has long been considered an intellectual and doctrinal swamp. As one recent author has said, “[t]he Supreme Court’s failure to provide coherent guidance on the subject is, alas, legendary.”Footnote 36

Farber gives several concrete examples of the types of waivers that have been allowed over time. “[I]n return for government funding, family planning clinics may lose their right to engage in abortion referrals”; a criminal defendant can trade away the right to a jury trial for a lighter sentence. Farber is generally open to the exercise of this right to trade one’s rights away.Footnote 37 However, even he acknowledges that courts need to block particularly oppressive or manipulative exchanges of rights for other benefits. He offers several rationales for such blockages, including one internal to contract theory and another based on public law grounds.Footnote 38 Each is applicable to many instances of “automated justice.”

Farber’s first normative ground for unconstitutional conditions challenges to waivers of constitutional rights is the classic behavioral economics concern about situations “where asymmetrical information, imperfect rationality, or other flaws make it likely that the bargain will not be in the interests of both parties.”Footnote 39 This rationale applies particularly well to scenarios where black-box algorithms (or secret data) are used.Footnote 40 No one should be permitted to accede to an abbreviated process when the foundations of its decision-making are not available for inspection. The problem of hyperbolic discounting also looms large. A benefits applicant in brutal need of help may not be capable of fully thinking through the implications of trading away due process rights. Bare concern for survival occludes such calculations.

The second normative foundation concerns the larger social impact of the rights-waiver bargain. For example, Farber observes, “when the agreement would adversely affect the interests of third parties in some tangible way,” courts should be wary of it. The unraveling dynamic described above offers one example of this type of adverse impact on third parties from rights sacrifices. Though it may not be immediately “tangible,” it has happened in so many other scenarios that it is critical for courts to consider whether particular bargains may pave the way to a future where the “choice” to trade away a right is effectively no choice at all, because the cost of retaining it is a high level of suspicion generated by exercising (or merely retaining the right to exercise) the right.

Under this second ground, Farber also mentions that we may “block exchanges that adversely affect the social meaning of constitutional rights, degrading society’s sense of its connection with personhood.” Here again, a drift toward automated determination of legal rights and duties seems a particularly apt target. The right of due process at its core means something more than a bare redetermination by automated systems. Rather, it requires some ability to identify a true human face of the state, as Henderson and Brennan-Marquez’s work (discussed previously) suggests. Soldiers at war may hide their faces, but police do not. We are not at war with the state; rather, it is supposed to be serving us in a humanly recognizable way. The same is true a fortiori of agencies dispensing benefits and other forms of support.

3.6 Conclusion: Writing, Thinking, and Automation in Administrative Processes

Claimants worried about the pressure to sign away rights to due process may have an ally within the administrative state: persons who now hear and decide cases. AI and ML may ease their workload, but could also be a prelude to full automation. Two contrasting cases help illuminate this possibility. In Albathani v. INS (2003), the First Circuit affirmed the Board of Immigration Appeals’ policy of “affirmance without opinion” (AWO) of certain rulings by immigration judges.Footnote 41 Though “the record of the hearing itself could not be reviewed” in the ten minutes which the Board member, on average, took to review each of more than fifty cases on the day in question, the court found it imperative to recognize “workload management devices that acknowledge the reality of high caseloads.” However, in a similar Australian administrative context, a judge ruled against a Minister in part due to the rapid disposition of two cases involving more than seven hundred pages of material. According to the judge, “43 minutes represents an insufficient time for the Minister to have engaged in the active intellectual process which the law required of him.”Footnote 42

In the short run, decision-makers at an agency may prefer the Albathani approach. As Chad Oldfather has observed in his article “Writing, Cognition, and the Nature of the Judicial Function,” unwritten, and even visceral, snap decisions have a place in our legal system.Footnote 43 They are far less tiring to generate than a written record and reasoned elaboration of how the decision-maker applied the law to the facts. However, in the long run, as thought and responsibility in review dwindle toward a vanishing point, it becomes difficult for decision-makers to justify their own interposition in the legal process. A “cyberdelegation” to cheaper software may then be proper.Footnote 44

We must connect current debates on the proper role of automation in agencies to requirements for reasoned decision-making. It is probably in administrators’ best interests for courts to actively ensure thoughtful decisions by responsible persons. Otherwise, administrators may ultimately be replaced by the types of software and AI now poised to take over so many other roles performed by humans. The temptation to accelerate, abbreviate, and automate human processes is, all too often, a prelude to destroying them.Footnote 45

4 Constitutional Challenges in the Emotional AI Era

Peggy Valcke, Damian Clifford, and Viltė Kristina DessersFootnote *
4.1 Introduction

Is a future in which our emotions are being detected in real time and tracked, both in private and public spaces, dawning? Looking at recent technological developments, studies, patents, and ongoing experiments, this may well be the case.Footnote 1 In its Declaration on the manipulative capabilities of algorithmic processes of February 2019, the Council of Europe’s Committee of Ministers alerts us to the growing capacity of contemporary machine learning tools not only to predict choices but also to influence emotions, thoughts, and even actions, sometimes subliminally.Footnote 2 This certainly adds a new dimension to existing computational means, which increasingly make it possible to infer intimate and detailed information about individuals from readily available data, facilitating the micro-targeting of individuals based on profiles in a way that may profoundly affect their lives.Footnote 3 Emotional artificial intelligence (further ‘emotional AI’) and empathic media are new buzzwords used to refer to the affective computing sub-discipline and, specifically, to the technologies that are claimed to be capable of detecting, classifying, and responding appropriately to users’ emotional lives, thereby appearing to understand their audience.Footnote 4 These technologies rely on a variety of methods, including the analysis of facial expressions, physiological measurement, voice analysis, the monitoring of body movements, and eye tracking.Footnote 5

Although there have been important debates as to their accuracy, the adoption of emotional AI technologies is increasingly widespread, in many areas and for various purposes, in both the public and private sectors.Footnote 6 It is well-known that advertising and marketing go hand in hand with an attempt to exploit emotions for commercial gain.Footnote 7 Emotional AI facilitates the systematic gathering of insightsFootnote 8 and allows for the further personalization of commercial communications and the optimization of marketing campaigns in real time.Footnote 9 Quantifying, tracking, and manipulating emotions is a growing part of the social media business model.Footnote 10 For example, Facebook is now infamous in this regard due to its emotional contagionFootnote 11 experiment, where users’ newsfeeds were manipulated to assess changes in emotion (to determine whether Facebook posts with emotional content were more engaging).Footnote 12 A similar trend has been witnessed in the political sphere – think of the Cambridge Analytica scandalFootnote 13 (where data analytics was used to gauge the personalities of potential Trump voters).Footnote 14 The aforementioned Declaration of the Council of Europe, among others, points to the dangers for democratic societies that emanate from the possibility of employing algorithmic tools capable of manipulating and controlling not only economic choices but also social and political behaviours.Footnote 15

Do we need new (constitutional) rights, as suggested by some, in light of growing practices of manipulation by algorithms, in general, and the emergence of emotional AI, in particular? Or, is the current law capable of accommodating such developments adequately? This is undoubtedly one of the most fascinating debates for legal scholars in the coming years. It is also on the radar of CAHAI, the Council of Europe’s Ad Hoc Committee on Artificial Intelligence, set up on 11 September 2019, with the mission to examine the feasibility and potential elements of a legal framework for the development, design, and application of AI, based on the Council of Europe’s standards on human rights, democracy, and the rule of law.Footnote 16

In the light of these ongoing policy discussions, the ambition of this chapter is twofold. First, it will discuss certain legal-ethical challenges posed by the emergence of emotional AI and its manipulative capabilities. Second, it will present a number of responses, specifically those suggesting the introduction of new (constitutional) rights to mitigate the potential negative effects of such developments. Given the limited scope of the chapter, it does not seek to evaluate the appropriateness of the identified suggestions, but rather to provide the foundation for a future research agenda in that direction. The focus of the chapter lies on the European legal framework and on the use of emotions for commercial business-to-consumer purposes, although some observations are also valid in the context of other highly relevant uses of emotional AI,Footnote 17 such as implementations by the public sector, or for the purpose of political micro-targeting, or fake news. The chapter is based on a literature review, including recent academic scholarship and grey literature. Its methodology relies on a legal analysis of how the emergence of emotional AI raises concerns and challenges for ‘constitutional’ rights and values through the lens of its use in the business-to-consumer context. By ‘constitutional rights’, we do not refer to national constitutions but, given the chapter’s focus on the European level, to the fundamental rights and values enshrined in the European Convention for the Protection of Human Rights and Fundamental Freedoms (‘ECHR’), on the one hand, and the EU Charter of Fundamental Rights (‘CFREU’) and Article 2 of the Treaty on European Union (‘TEU’), on the other.

4.2 Challenges to Constitutional Rights and Underlying Values
Protecting the Citizen-Consumer

Emotion has always been at the core of advertising and marketing, and emotion detection has been used in market research for several decades.Footnote 18 Consequently, in various areas of EU and national law, rules have been adopted to protect consumers and constrain forms of manipulative practices in business-to-consumer relations. Media and advertising laws have introduced prohibitions on false, misleading, deceptive, and surreptitious advertising, including an explicit ban on subliminal advertising.Footnote 19 Consumer protection law instruments shield consumers from aggressive, unfair, and deceptive trade practices.Footnote 20 Competition law prohibits exploitative abuses of market power.Footnote 21 Data protection law has set strict conditions under which consumers’ personal data can be collected and processed.Footnote 22 Under contract law, typical grounds for a contract being voidable include coercion, undue influence, misrepresentation, or fraud. The latter, fraud (i.e., the intentional deception to secure an unfair or unlawful gain, or to deprive a victim of her legal right), is considered a criminal offence. In the remainder of the text, these rules are referred to as ‘consumer protection law in the broad sense’, as they protect citizens as economic actors.

Nevertheless, the employment of emotional AI may justify additional layers of protection. The growing effectiveness of the technology drew public attention following Facebook’s aforementioned emotional contagionFootnote 23 experiment, where users’ newsfeeds were manipulated to assess changes in emotion,Footnote 24 as well as the Cambridge Analytica scandal,Footnote 25 where data analytics was used to gauge the personalities of potential Trump voters.Footnote 26 There are also data to suggest that Facebook had offered advertisers the ability to target advertisements to teenagers based on real-time extrapolation of their mood.Footnote 27 Yet Facebook is obviously not alone in exploiting emotional AI (and emotions) in similar ways.Footnote 28 As noted by Stark and Crawford, commenting on the fallout from the emotional contagion experiment, it is clear that quantifying, tracking, and ‘manipulating emotions’ is a growing part of the social media business model.Footnote 29 Researchers are documenting the emergence of what Zuboff calls ‘surveillance capitalism’Footnote 30 and, in particular, its reliance on behavioural tracking and manipulation.Footnote 31 Forms of ‘dark patterns’ are increasingly detected, exposed, and – to some extent – legally constrained.
Dark patterns can be described as exploitative design choices, ‘features of interface design crafted to trick users into doing things that they might not want to do, but which benefit the business in question’.Footnote 32 In its 2018 report, the Norwegian Consumer Authority called the use by large digital service providers (in particular Facebook, Google, and Microsoft) of such dark patterns an ‘unethical’ attempt to push consumers towards the least privacy-friendly options of their services.Footnote 33 Moreover, it questioned whether such practices are in accordance with the principles of data protection by default and data protection by design, and whether consent given under these circumstances can be said to be explicit, informed, and freely given. It stated that ‘[w]hen digital services employ dark patterns to nudge users towards sharing more personal data, the financial incentive has taken precedence over respecting users’ right to choose. The practice of misleading consumers into making certain choices, which may put their privacy at risk, is unethical and exploitative.’ In 2019, the French data protection authority, CNIL, fined Google for violating its transparency and information obligations and for lacking (valid) consent for advertisement personalization. In essence, the users were not aware of the extent of personalization.Footnote 34 Notably, the Deceptive Experiences to Online Users Reduction Act, as introduced by Senators Deb Fischer and Mark Warner in the United States (the so-called DETOUR Act), explicitly provided protection against ‘manipulation of user interfaces’ and proposed prohibiting the use of dark patterns when seeking consent to use personal information.Footnote 35

It is unlikely, though, that existing consumer protection law (in the broad sense) will be capable of providing a conclusive and exhaustive answer to the question of where to draw the line between forms of permissible persuasion and unacceptable manipulation in the case of emotional AI. On the one hand, there may be situations in which dubious practices escape the scope of application of existing laws. Think of the cameras installed at Piccadilly Lights in London which are able to detect faces in the crowd around the Eros statue in Piccadilly Circus, and ‘when they identify a face the technology works out an approximate age, sex, mood (based on whether [it thinks] you are frowning or laughing) and notes some characteristics such as whether you wear glasses or whether you have a beard’.Footnote 36 The cameras were used for a certain period to optimize the advertising displayed on Piccadilly Lights.Footnote 37 Even if such practices of emotional AI in public spaces are not considered in violation of the EU General Data Protection Regulation (given the claimed immediate anonymization of the faces detected), they raise serious questions from an ethical perspective.Footnote 38 On the other hand, the massive scale at which certain practices are deployed may overwhelm the enforcement of individual rights. The Council of Europe’s Parliamentary Assembly expressed concerns that persuasive technologies enable ‘massive psychological experimentation and persuasion on the internet’.Footnote 39 Such practices seem to require a collective answer (e.g., by including them in the blacklist of commercial practices),Footnote 40 since enforcement in individual cases risks being ineffective in remedying harmful effects on society as a whole.

Moreover, emotional AI is arguably challenging the very underlying rationality-based paradigm imbued in (especially, but not limited to) consumer protection law. Modern legality is characterized by a separation of rational thinking (or reason) from emotion, and consumer protection essentially relies on rationality.Footnote 41 As noted by Maroney, the law works from the perspective that rational thinking and emotion ‘belong to separate spheres of human existence; the sphere of law admits only of reason; and vigilant policing is required to keep emotion from creeping in where it does not belong’.Footnote 42 The law is traditionally weighted towards the protection of the verifiable propositional content of commercial communications; however, interdisciplinary research is increasingly recognizing the persuasive effect of unverifiable content (e.g., images, music)Footnote 43 and has long recognized that people interact with computers as social agents and not just tools.Footnote 44 It may be reasonably argued that the separation of rationality from affect in the law fails to take interdisciplinary insights into account.Footnote 45 In relation to this, the capacity of the current legal framework to cope with these advances is in doubt. In particular, since the development of emotion detection technology facilitates the creation of emotion-evolved consumer-facing interactions, it poses challenges to a framework which relies on rationality.Footnote 46 These developments arguably call into question the continuing reliance on the rationality paradigm within consumer protection, and hence on consumer self-determination and individual autonomy as core underlying principles of the legal protections.

Motivating a Constitutional Debate

The need for guidance about how to apply and, where relevant, complement existing consumer protection laws (in the broad sense) in light of the rise of emotional AI motivates a debate at a more fundamental level, looking at constitutional and ethical frameworks. The following paragraphs – revolving around three main observations – focus on the former of these frameworks and will highlight how emotion detection and manipulation may pose threats to the effective enjoyment of constitutional rights and freedoms.

What’s in a Name?

By way of preliminary observation, it should be stressed that, as noted by Sunstein, manipulation has ‘many shades’ and is extremely difficult to define.Footnote 47 Is an advertising campaign by an automobile company showing a sleek, attractive couple exiting from a fancy car before going to a glamorous party ‘manipulation’? Do governments – in an effort to discourage smoking – engage in ‘manipulation’ when they require cigarette packages to contain graphic, frightening health warnings, depicting people with life-threatening illnesses? Is showing unflattering photographs of your opponent during a political campaign ‘manipulation’? Is setting an opt-out consent system for deceased organ donation as the legislative default ‘manipulation’? Ever since Nobel Prize winner Richard Thaler and Cass Sunstein published their influential book Nudge, a rich debate has ensued on the permissibility of deploying choice architectures for behavioural change.Footnote 48 The debate, albeit extremely relevant in the emotional AI context, exceeds the scope of this chapter, and is inherently linked to political-philosophical discussions. A key takeaway from Sunstein’s writing is that, in a social order that values free markets and is committed to freedom of expression, it is ‘exceptionally difficult to regulate manipulation as such’.Footnote 49 He suggests considering a statement or action manipulative to the extent that it does not sufficiently engage or appeal to people’s capacity for reflective and deliberative choice. This reminds us of the notions of consumer self-determination and individual autonomy, which we mentioned previously and which will also be discussed further in this section.

From Manipulation over Surveillance to Profiling Errors

Second, it is important to understand that, in addition to the concerns over its manipulative capabilities, on which the chapter has focused so far, emotional AI and its employment equally require consideration of potential harmful affective impacts, on the one hand, and potential profiling errors, on the other. In relation to the former (the latter are discussed later), it is well-known that surveillance may cause a chilling effect on behaviourFootnote 50 and, in this way, encroach on our rights to freedom of expression (Article 10 ECHR; Article 11 CFREU), freedom of assembly and association (Article 11 ECHR; Article 12 CFREU), and – to the extent that our moral integrity is at stake – our right to private life and personal identity (Article 8 ECHR; Article 7 CFREU).Footnote 51 Significantly, as noted by Calo, ‘[e]ven where we know intellectually that we are interacting with an image or a machine, our brains are hardwired to respond as though a person were actually there’.Footnote 52 The mere observation or perception of surveillance can have a chilling effect on behaviour.Footnote 53 As argued by Stanley (in the context of video analytics), one of the most worrisome concerns is ‘the possibility of widespread chilling effects as we all become highly aware that our actions are being not just recorded and stored, but scrutinized and evaluated on a second-by-second’ basis.Footnote 54 Moreover, such monitoring can also have an impact on an individual’s ability to ‘self-present’.Footnote 55 This refers to the ability of individuals to present multifaceted versions of themselves,Footnote 56 and thus behave differently depending on the circumstances.Footnote 57 Emotion detection arguably adds a layer of intimacy-invasion via the capacity to not only detect emotions as expressed but also detect underlying emotions that are being deliberately disguised.
This is of particular significance, as it not only limits the capacity to self-present but potentially erodes this capacity entirely. This could become problematic if such technologies and the outlined technological capacity become commonplace.Footnote 58 In that regard, it is important to understand that emotional AI can have an impact on an individual’s capacity to self-present irrespective of its accuracy (i.e., what matters is the individual’s belief that he or she is being observed; the mere perception of surveillance can have a chilling effect on behaviour).Footnote 59

The lack of accuracy of emotional AI, resulting in profiling errors and incorrect inferences, presents additional risks of harm,Footnote 60 including inconvenience, embarrassment, or even material or physical harm.Footnote 61 In this context, it is particularly important to note that a frequently adopted approachFootnote 62 for emotion detection relies on the six basic emotions identified by Ekman (i.e., happiness, sadness, surprise, fear, anger, and disgust). However, this classification is heavily criticized as not accurately reflecting the complex nature of an affective state.Footnote 63 The other major approaches for detecting emotions, namely the dimensional and appraisal-based approach, also present challenges of their own.Footnote 64 As Stanley puts it, emotion detection is an area where there is a special reason to be sceptical, since many such efforts spiral into ‘a rabbit hole of naïve technocratic simplification based on dubious beliefs about emotions’.Footnote 65 The AI Now Institute at New York University warns (in the light of facial recognition) that new technologies reactivate ‘a long tradition of physiognomy – a pseudoscience that claims facial features can reveal innate aspects of our character and personality’ – and emphasizes that contextual, social, and cultural factors play a larger role in emotional expression than was believed by Ekman and his peers.Footnote 66 Leaving the point that emotion detection through facial expressions is a pseudoscience to one side, improving the accuracy of emotion detection more generally may arguably require more invasive surveillance to gather more contextual insights and signals, paradoxically creating additional difficulties from a privacy perspective.
Against this background, the risks associated with profiling are strongly related to the fact that the databases being mined for inferences are often ‘out-of-context, incomplete or partially polluted’, resulting in the risk of false positives and false negatives.Footnote 67 This risk remains unaddressed by the individual participation rights approach in the EU data protection framework. Indeed, while the rights of access, correction, and erasure as evident in the EU General Data Protection Regulation may have theoretical significance, the practical operation of these rights requires significant effort and is becoming increasingly difficult.Footnote 68 This in turn may have a significant impact on the enjoyment of key fundamental rights and freedoms, such as inter alia the right to respect for private and family life and protection of personal data (Article 8 ECHR; Articles 7–8 CFREU); equality and non-discrimination (Article 14 ECHR; Articles 20–21 CFREU); and freedom of thought, conscience, and religion (Article 9 ECHR; Article 10 CFREU); but also – and this brings us to our third observation – the underlying key notions of autonomy and human dignity.

Getting to the Core Values: Autonomy and Human Dignity

Both at the EU and Council of Europe level, institutions have stressed that new technologies should be designed in such a way that they preserve human dignity and autonomy – both physical and psychological: ‘the design and use of persuasion software and of ICT or AI algorithms … must fully respect the dignity and human rights of all users’.Footnote 69 Manipulation of choice can inherently interfere with autonomy.Footnote 70 Although the notion of autonomy takes various meanings and conceptions, based on different philosophical, ethical, legal, and other theories,Footnote 71 for the purposes of this chapter, the Razian interpretation of autonomy is adopted, as it recognizes the need to facilitate an environment in which individuals can act autonomously.Footnote 72 According to Razian legal philosophy, rights are derivatives of autonomyFootnote 73 and, in contrast with the traditional liberal approach, autonomy requires more than simple non-interference. Raz’s conception of autonomy does not preclude the potential for positive regulatory intervention to protect individuals and enhance their freedom. In fact, such positive action is at the core of this conception of autonomy, as a correct interpretation must allow effective choice in reality, thus at times requiring regulatory intervention.Footnote 74 Raz argues that certain regulatory interventions which support certain activities and discourage those which are undesirable ‘are required to provide the conditions of autonomy’.Footnote 75 According to Raz, ‘[a]utonomy is opposed to a life of coerced choices. It contrasts with a life of no choices, or of drifting through life without ever exercising one’s capacity to choose. Evidently the autonomous life calls for a certain degree of self-awareness. To choose one must be aware of one’s options.’Footnote 76 Raz further asserts: ‘Manipulating people, for example, interferes with their autonomy, and does so in much the same way and to the same degree, as coercing them. 
Resort to manipulation should be subject to the same conditions as resort to coercion.’Footnote 77 Hence, through this lens, excessive persuasion also runs afoul of autonomy.Footnote 78

Autonomy is inherent in the operation of the democratic values, which are protected at the foundational level by fundamental rights and freedoms. However, there is no express reference to a right to autonomy or self-determination in either the ECHR or the CFREU. Despite not being expressly recognized in a distinct ECHR provision, the European Court of Human Rights (further ‘ECtHR’) has ruled on several occasions that the protection of autonomy comes within the scope of Article 8 ECHR,Footnote 79 which specifies the right to respect for private and family life. This connection has been repeatedly illustrated in the ECtHR jurisprudence dealing with individuals’ fundamental life choices, including inter alia in relation to sexual preferences/orientation, and personal and social life (i.e., including a person’s interpersonal relationships). Such cases illustrate the role played by the right to privacy in the development of one’s personality through self-realization and autonomy (construed broadly).Footnote 80 The link between the right to privacy and autonomy is thus strong, and therefore, although privacy and autonomy are not synonyms,Footnote 81 it may be reasonably argued that the right to privacy currently offers an avenue for protection of autonomy (as evidenced by the ECtHR case law).Footnote 82 The emergence of emotional AI and the detection of emotions in real time through emotion surveillance challenge the two strands of the right simultaneously, namely (1) privacy as seclusion or intimacy, through the detection of emotions, and (2) privacy as freedom of action, self-determination, and autonomy, via their monetization.Footnote 83

Dignity, similar to autonomy, cannot be defined easily. The meaning of the word is by no means straightforward, and its relationship with fundamental rights is unclear.Footnote 84 The Rathenau Institute has touched upon this issue, noting that technologies are likely to interfere with other rights if the use of technologies interferes with human dignity.Footnote 85 However, there is little or no consensus as to what the concept of human dignity demands of lawmakers and adjudicators, and as noted by O’Mahony, as a result, many commentators argue that it is at best meaningless or unhelpful, and at worst potentially damaging to the protection of human rights.Footnote 86 Whereas a full examination of the substantive content of the concept is outside the scope of this chapter, it can be noted that human dignity, despite being interpreted differently due to cultural differences,Footnote 87 is considered to be a central value underpinning the entirety of international human rights law,Footnote 88 one of the core principles of fundamental rights,Footnote 89 and the basis of most of the values emphasized in the ECHR.Footnote 90 Although the ECHR itself does not explicitly mention human dignity,Footnote 91 its importance has been highlighted in several legal sources related to the ECHR, including the case law of ECtHR and various documents of the CoE.Footnote 92 Human dignity is also explicitly recognized as the foundation of all fundamental rights guaranteed by the CFREU,Footnote 93 and its role was affirmed by the Court of Justice of the EU (further ‘CJEU’).Footnote 94

With regard to its substantive content, it can be noted that, as O’Mahony argues, perhaps the most universally recognized aspects of human dignity are equal treatment and respect.Footnote 95 In the context of emotional AI, it is particularly relevant that although human dignity is not considered a right in itself,Footnote 96 it is the source of the right to personal autonomy and self-determination (i.e., the latter are derived from the underlying principle of human dignity).Footnote 97 As noted by Feldman, there is arguably no human right which is unconnected to human dignity; however, ‘some rights seem to have a particularly prominent role in upholding human dignity’, and these include the right to be free of inhuman or degrading treatment, the right to respect for private and family life, the right to freedom of conscience and belief, the right to freedom of association, the right to marry and found a family, and the right to be free of discriminatory treatment.Footnote 98 Feldman argues that, apart from freedom from inhuman and degrading treatment, these rights are ‘not principally directed to protecting dignity and they are more directly geared to protecting the interests in autonomy, equality and respect’.Footnote 99 However, it is argued that these interests – autonomy, equality, and respect – are important in providing circumstances in which ‘dignity can flourish’, whereas rights which protect them usefully serve as a cornerstone of dignity.Footnote 100 In relation to this, since the employment of emotional AI may pose threats to these rights (e.g., to the right to respect for private and family life, as illustrated above, or to the right to be free of discriminatory treatment),Footnote 101 it may in essence pose threats to human dignity as well.
To illustrate, one may refer to the analysis of live facial recognition technologies by the EU Agency for Fundamental Rights (further ‘FRA’),Footnote 102 emphasizing that the processing of facial images may affect human dignity in different ways.Footnote 103 According to FRA, human dignity may be affected, for example, when people feel uncomfortable going to certain places or events, change their behaviours, or withdraw from social life. The ‘impact of what people may perceive as surveillance technologies on their lives may be so significant as to affect their capacity to live a dignified life’.Footnote 104 FRA argues that the use of facial recognition can have a negative impact on people’s dignity and, relatedly, may pose threats to (rights to) privacy and data protection.Footnote 105

To summarize, the deployment of emotional AI in a business-to-consumer context necessitates a debate at a fundamental, constitutional level. Although it may benefit both businesses and consumers (e.g., by providing revenue and consumer satisfaction, respectively), it has functional weaknessesFootnote 106 and also calls for the legal considerations set out above. Aside from the obvious privacy and data protection concerns, from the consumer’s perspective, individual autonomy and human dignity as overarching values may be at risk. Influencing activities evidently interfere not only with an individual’s autonomy and self-determination, but also with the individual’s freedom of thought, conscience, and religion.Footnote 107 As the CoE’s Committee of Ministers has noted, in other contexts as well (e.g., political campaigning), fine-grained, subconscious, and personalized levels of algorithmic persuasion may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions.Footnote 108 As a result, not only may the exercise and enjoyment of individual human rights be weakened, but democracy and the rule of law may also be threatened, as they are equally grounded in the fundamental belief in the equality and dignity of all humans as independent moral agents.Footnote 109

4.3 Suggestions to Introduce New (Constitutional) Rights

In the light of the previously noted factors, it comes as no surprise that some authors have discussed or suggested the introduction of novel rights in order to reinforce the existing legal arsenal.Footnote 110 Although both autonomy and dignity as relevant underlying values, and relevant rights such as the right to privacy, freedom of thought, and freedom of expression, are protected by the ECHR, some scholars argue that the ECHR does not offer sufficient protection in the light of the manipulative capabilities of emotional AI.Footnote 111 The subsequent paragraphs present, in a non-exhaustive manner, responses that concern the introduction of new (constitutional) rights.

A first notable (American) scholar is Shoshana Zuboff, who has argued (in the broader context of surveillance capitalism)Footnote 112 for a ‘right to the future tense’. As Zuboff notes, ‘we now face the moment in history when the elemental right to the future tense is endangered by a digital architecture of behavioural modification owned and operated by surveillance capital’.Footnote 113 According to Zuboff, current legal frameworks, mostly centred on privacy and antitrust, have not been sufficient to prevent undesirable practices,Footnote 114 including the exploitation of technologies for manipulative purposes. The author argues for laws that reject the fundamental legitimacy of certain practices,

including the illegitimate rendition of human experience as behavioral data; the use of behavioural surplus as free raw material; extreme concentrations of the new means of production; the manufacture of prediction products; trading in behavioral futures; the use of prediction products for third-party operations of modification, influence and control; the operations of the means of behavioural modification; the accumulation of private exclusive concentrations of knowledge (the shadow text); and the power that such concentrations confer.Footnote 115

While arguing about the rationale of the so-called right to the future tense, the author relies on the importance of free will (i.e., Zuboff argues that manipulation in essence eliminates the freedom to will). Consequently, there is no future without the freedom to will, and there are no subjects but only ‘objects’.Footnote 116 As the author puts it, ‘the assertion of freedom of will also asserts the right to the future tense as a condition of a fully human life’.Footnote 117 In arguing for the recognition of such a right as a human right, Zuboff relies on Searle, who argues that elemental rights are crystallized as formal human rights only at the moment in history when they come under systematic threat. Hence, given the development of surveillance capitalism, it is necessary to recognize the right to the future tense as a human right. To illustrate, Zuboff notes that no one recognizes, for example, a right to breathe, because that right is not under attack; the same cannot be said of the right to the future tense.Footnote 118

German scholar Jan Christoph Bublitz argues for a ‘right to cognitive liberty’ (alternatively phrased as a ‘right to mental self-determination’), relying in essence on the fact that the right to freedom of thought has been insignificant in practice, despite its theoretical importance.Footnote 119 Bublitz calls for the law to redefine the right to freedom of thought in line with its theoretical significance, in light of technological developments capable of altering thoughts.Footnote 120 The author argues that such technological developments require the setting of normative boundaries ‘to secure the freedom of the forum internum’.Footnote 121

In their report for the Council of Europe analyzing human rights in the robot age, Dutch scholars Rinie van Est and Joost Gerritsen from the Rathenau Institute suggest reflecting on two novel human rights, namely, the right to not be measured, analyzed or coached and the right to meaningful human contact.Footnote 122 They argue that such rights are indirectly related to and aim to elaborate on existing human rights, in particular, the classic privacy right to be let alone and the right to respect for family life (i.e., the right to establish and develop relationships with other human beings).Footnote 123 While discussing the rationale of a potential right not to be measured, analyzed, or coached, they rely on scholarly work revealing detrimental effects of ubiquitous monitoring, profiling or scoring, and persuasion.Footnote 124 They argue that what is at stake given the technological development is not only the risk of abuse but the right to remain anonymous and/or the right to be let alone, ‘which in the robot age could be phrased as the right to not be electronically measured, analyzed or coached’.Footnote 125 However, their report ultimately leaves it unclear whether they assume it is necessary to introduce the proposed rights as new formal human rights. Rather, it calls for the CoE to clarify how these rights – the right to not be measured, analyzed, or coached, and the right to meaningful human contact – could be included within the right to privacy and the right to family life respectively.Footnote 126 In addition to considering potential novel rights, the Rathenau report calls for developing fair persuasion principles, ‘such as enabling people to monitor the way in which information reaches them, and demanding that firms must be transparent about the persuasive methods they apply’.Footnote 127

According to UK scholar Karen Yeung, manipulation may threaten individual autonomy and the ‘right to cognitive sovereignty’.Footnote 128 In arguing about the rationale of such a right, Yeung relies on the importance of individual autonomy and on the Razian approach comparing manipulation to coercion,Footnote 129 as discussed previously. In addition, Yeung relies on Nissenbaum, who observes that the risks of manipulation are even more acute in a digital world involving ‘pervasive monitoring, data aggregation, unconstrained publication, profiling, and segregation’, because the manipulation that deprives us of autonomy is more subtle than in a world in which lifestyle choices are explicitly punished and blocked.Footnote 130 When it comes to arguing about the need to introduce a new formal human right, Yeung notes that human dignity and individual autonomy are not sufficiently protected by Articles 8, 9, and 10 of the ECHR; however, the study in question does not provide detailed arguments in that regard. The author also refrains from elaborating on the content of such a right.Footnote 131

Some novel rights are discussed at the institutional level as well. For example, the CoE’s Parliamentary Assembly has proposed working on guidelines which would cover, among other things, the recognition of some new rights, including the right not to be manipulated.Footnote 132

Further research is undoubtedly necessary to assess whether the current legal framework is already capable of accommodating these developments properly. While the introduction of novel constitutional rights may indeed contribute to defining normative beacons, we should at the same time be cautious not to dilute the significance of constitutional rights by introducing new ones that could, in fact, be considered as manifestations of existing constitutional rights.Footnote 133 Hence, it is particularly important to delineate, as noted by Clifford, between primary and secondary law, and to assess the capabilities of the latter in particular.Footnote 134 In other words, it is necessary to exercise restraint, to consider what already exists, and to delineate between rights and the specific manifestations of these rights in their operation and/or in secondary law protections (i.e., derived sub-rights). For example, key data subject rights like the rights to erasure, objection, access, and portability are all manifestations of the aim of respecting the right to data protection as balanced with other rights and interests. Admittedly, while the right to data protection has been explicitly recognized as a distinct fundamental right in the CFREU, this is not the case in the context of the ECHR, where the ECtHR has interpreted the right to privacy in Article 8 ECHR as encompassing informational privacy.Footnote 135 The rich debate on the relation between the right to privacy and the right to data protection, and how this impacts secondary law like the GDPR and Convention 108+, clearly exceeds the scope of this chapter.Footnote 136

4.4 Blueprint for a Future Research Agenda

The field of affective computing, and more specifically the technologies capable of detecting, classifying, and responding to emotions – referred to in this chapter as emotional AI – holds promise in many application sectors: for patient well-being in the health sector, for road safety, for consumer satisfaction in the retail sector, and so forth. But, just like most (if not all) other forms of artificial intelligence, emotional AI brings with it a number of challenges and calls for assessing whether the existing legal frameworks are capable of accommodating these developments properly. Due to its manipulative capabilities, its potentially harmful affective impact, and potential profiling errors, emotional AI puts pressure on a whole range of constitutional rights, such as the right to respect for private and family life, non-discrimination, and freedom of thought, conscience, and religion. Moreover, the deployment of emotional AI poses challenges to individual autonomy and human dignity as values underpinning the entirety of international human rights law, as well as to the underlying rationality-based paradigm embedded in law.

Despite the constitutional protection already offered at the European level, some scholars argue, in particular in the context of the ECHR, that this framework does not offer sufficient protection in light of the manipulative capabilities of emotional AI. They suggest (contemplating or introducing) novel rights such as the right to the future tense; the right to cognitive liberty (or, alternatively, the right to mental self-determination); the right to not be measured, analyzed, or coached; the right to cognitive sovereignty; and the right not to be manipulated.

At the same time, it should be noted that the field of constitutional law (in this chapter meant to cover the field of European human rights law) is a very dynamic area that is further shaped through case law, along with societal, economic, and technological developments. The way in which the ECtHR has given a multifaceted interpretation of the right to privacy in Article 8 ECHR is a good example of this.

This motivates the relevance of further research into the scope of existing constitutional rights and secondary sub-rights, in order to understand whether there is effectively a need to introduce new constitutional rights. A possible blueprint for IACL’s Research Group ‘Algorithmic State, Society and Market – Constitutional Dimensions’ could include

  • empirical research into the effects of fine-grained, subconscious, and personalised levels of algorithmic persuasion based on affective computing (in general or for specific categories of vulnerable groups, like childrenFootnote 137);

  • interdisciplinary research into the rise of new practices, such as the trading or renting of machine learning models for emotion classification, which may escape the traditional legal protection frameworks;Footnote 138

  • doctrinal research into the scope and limits of existing constitutional rights at European level in light of affective computing; Article 9 ECHR and Article 8 CFREU seem particularly interesting from that perspective;

  • comparative research, on the one hand, within the European context into constitutional law traditions and interpretations at the national level (think of Germany, where the right to human dignity is explicitly recognised in Article 1 Grundgesetz, versus Belgium or France, where this is not the case), and on the other hand, within the global context (comparing, for instance, the fundamental rights orientated approach to data protection in the EU and the more market-driven approach in other jurisdictions such as the US and AustraliaFootnote 139); and

  • policy research into the level of jurisdiction, and type of instrument, best suited to tackle the various challenges that emotional AI brings with it. (Is there, for instance, a need for a type of ‘Oviedo Convention’ in relation to (emotional) AI?)

At the beginning of this chapter, reference was made to the CoE’s Declaration on the Manipulative Capabilities of Algorithmic Processes of February 2019.Footnote 140 In that Declaration, the Committee of Ministers invites member States to

initiat[e], within appropriate institutional frameworks, open-ended, informed and inclusive public debates with a view to providing guidance on where to draw the line between forms of permissible persuasion and unacceptable manipulation. The latter may take the form of influence that is subliminal, exploits existing vulnerabilities or cognitive biases, and/or encroaches on the independence and authenticity of individual decision-making.

Aspiring to deliver a modest contribution to this much-needed debate, this chapter has set the scene and hopefully offers plenty of food for thought for future activities of the IACL Research Group ‘Algorithmic State, Society and Market – Constitutional Dimensions’.

5 Algorithmic Law: Law Production by Data or Data Production by Law?

Mariavittoria Catanzariti
5.1 Introduction

Online human interactions are a continuous matching of data that affects both our physical and virtual lives. How data are coupled and aggregated is the result of what algorithms constantly do through a sequence of computational steps that transform input into output. In particular, machine learning techniques are based on algorithms that identify patterns in datasets. This chapter explores how algorithmic rationality may fit into Weber’s conceptualization of legal rationality. It questions the idea that technical disintermediation may achieve the goal of algorithmic neutrality and objective decision-making.Footnote 1 It argues that such rationality is represented by surveillance purposes in the broadest meaning. Algorithmic surveillance reduces the complexity of reality by calculating the probability that certain facts will happen on the basis of repeated actions. Algorithms shape human behaviour, codifying situations and facts, stigmatizing groups rather than individuals, and learning from the past: predictions may lead to static patterns that recall the idea of caste societies, in which the individual potential for change and development is far from preserved. The persuasive power of algorithms (so-called nudging) largely consists of small changes aimed at predicting social behaviours that are expected to be repeated over time. In the long run, this nudging builds a model of anti-social mutation, in which actions are pre-oriented. Against such a backdrop, the role of law and legal culture is relevant for individual emancipation and social change, in order to frame a model of data production by law.
This chapter is divided into four sections: the first part describes commonalities and differences between legal bureaucracy and algorithms, the second part examines the linkage between a data-driven model of law production and algorithmic rationality, the third part shows the different perspective of the socio-legal approach to algorithmic regulation, and the fourth section questions the idea of law production by data as a product of legal culture.

5.2 Bureaucratic Algorithms

‘On-life’ dimensions represent the threshold for a sustainable data-driven rationality.Footnote 2 As stated in the White Paper on AI, ‘today 80% of data processing and analysis that takes place in the cloud occurs in data centres and centralized computing facilities, and 20% in smart connected objects, such as cars, home appliances or manufacturing robots, and in computing facilities close to the user (“edge computing”)’. Through the unceasing growth of categorizations and classifications, algorithms develop mechanisms of social control by connecting the dots. This entails that our actions mostly depend on, or are somehow affected by, the usable form in which the algorithmic code is rendered. In order to enhance their rational capability of calculating every possible action, algorithms aim at reducing human discretion and at structuring behaviours and decisions similarly to bureaucratic organizations. Algorithms act as normative systems that formalize certain patterns. As Max Weber pointed out, the modern capitalist enterprise is mainly based on calculation.
For its existence, it requires justice and an administration whose operation can at least in principle be rationally calculated on the basis of general rules – in the same way in which the foreseeable performance of a machine is calculated.Footnote 3 This entails that, on the one hand, like bureaucracy, algorithms use impersonal laws requiring obedience that impede free, unpredictable choices.Footnote 4 According to the Weberian bureaucratic ideal types, the separation between the administrative body and the material means of the bureaucratic enterprise is quintessential to the most perfect form of bureaucratic administration: political expropriation towards specialized civil servants.Footnote 5 Nonetheless, the impersonality of legal rules does not in any case entail a lack of responsibility, by virtue of the principle of the division of labour and the hierarchical order on which modern bureaucracy is based:Footnote 6 civil servants’ responsibility is to obey impersonal rules, or pretend they are impersonal, whereas exclusive and personal responsibility for his actions belongs to the political boss.Footnote 7 Bureaucracy is characterized by the objective fulfilment of duties, ‘regardless of the person’, based on foreseeable rules and independent of human considerations.Footnote 8

On the contrary, the risk of algorithmic decision-making is that no human actor takes responsibility for the decision.Footnote 9 The supervision and attribution of specialized competences from the highest bureaucratic levels to the lowest ones (Weber uses the example of ‘procurement’)Footnote 10 ensures that the exercise of authority is compliant with precise competences and technical qualities.Footnote 11 Standardization, rationalization, and formalization are common to both bureaucratic organizations and algorithms. Bureaucratic administration can be considered economic insofar as it is fast, precise, continuous, specialized, and avoids possible conflicts.Footnote 12 Testing algorithms as legal rational means raises a double question: (1) whether, through artificial intelligence and isocratic forms of administration, the explainability of algorithmic processes improves institutional processes, and in what respect with regard to staff competence and individual participation; and (2) whether algorithms take on some of the role of processing institutional and policy complexity much more effectively than humans.Footnote 13

According to Aneesh, ‘bureaucracy represents an “efficient” ideal-typical apparatus characterized by an abstract regularity of the exercise of authority centred on formal rationality’.Footnote 14 In fact, algorithms ‘are trained to infer certain patterns based on a set of data. In such a way actions are determined in order to achieve a given goal’.Footnote 15 The socio-technical nature of public administration consists in the ability to share data: this is the enabler of artificial intelligence for rationalization. Like bureaucracy, algorithms would appear compatible with three Weberian rationales: the Zweckverein (purpose union), the ideal type of voluntary associated action; the Anstalt (institution), the ideal type of institutions, rational systems achieved through coercive measures; and the Verband (social group), the ideal type of common action aiming at an agreement for a common purpose.Footnote 16 According to the first rationale, algorithms are used to smoothly guide a predictable type of social behaviour through data extraction on an ‘induced’ and mostly accepted voluntary basis;Footnote 17 as for the second, the induction of needs is achieved through forms of ‘nudging’, such as the customization of contractual forms and services based on profiling techniques and without meaningful mechanisms of consent; finally, legitimacy is based on social agreement on their utility to speed up and cheapen services (automation theory) or also to improve them (augmentation system).Footnote 18

However, unlike bureaucracy, technology directly legitimizes action by presenting users with the bare option ‘can/cannot’. Legitimacy is embedded within the internal rationality of technology. As Pasquale observes, ‘authority is increasingly expressed algorithmically’.Footnote 19 Moreover, as with the rise of bureaucratic action, technologies have been thought to be controlled through the exercise of judicial review, so as not to undermine civil liberties and equality. As a matter of fact, algorithmic systems are increasingly being used as part of the continuous process of Entzauberung der Welt (disenchantment of the world) – the achievement of rational goals through organizational measures – with potentially significant consequences for individuals, organizations, and societies as a whole.

There are essentially four algorithmic rational models of machine learning relevant for law-making: Neural Networks, algorithms that learn from examples through neurons organized in layers; Tree Ensemble methods, which combine more than one learning algorithm to improve the predictive power of any of the single learning algorithms they combine; Support Vector Machines, which use a subset of the training data, called support vectors, to represent the decision boundary; and Deep Neural Networks, which can model complex non-linear relationships with multiple hidden layers.Footnote 20
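The ensemble idea in this taxonomy (several weak learners whose combined vote outperforms each of them individually) can be made concrete with a minimal, purely illustrative sketch. The dataset, thresholds, and decision-stump classifiers below are invented for this example and are not drawn from the chapter or from any real system:

```python
# Illustrative sketch of a "tree ensemble": three one-split decision stumps,
# each individually imperfect, whose majority vote classifies a toy dataset
# better than any single stump. All numbers are made up for illustration.

# Toy dataset: ((feature_1, feature_2, feature_3), label)
data = [
    ((1, 8, 3), 0), ((2, 7, 9), 0), ((3, 4, 2), 0), ((6, 6, 4), 0),
    ((7, 2, 8), 1), ((8, 3, 2), 1), ((9, 1, 9), 1), ((4, 4, 7), 1),
]

def stump(i, t, above):
    """A decision stump: predict 1 when feature i is above (or below) threshold t."""
    return lambda x: 1 if ((x[i] > t) == above) else 0

# Three weak learners, each looking at a single feature.
stumps = [stump(0, 5, True), stump(1, 5, False), stump(2, 5, True)]

def accuracy(clf):
    """Fraction of the toy dataset the classifier labels correctly."""
    return sum(clf(x) == y for x, y in data) / len(data)

def ensemble(x):
    """Majority vote over the three stumps."""
    return 1 if sum(s(x) for s in stumps) >= 2 else 0

for k, s in enumerate(stumps):
    print(f"stump {k}: accuracy {accuracy(s):.3f}")   # 0.750, 0.875, 0.750
print(f"ensemble: accuracy {accuracy(ensemble):.3f}")  # 1.000
```

On this contrived data, each stump misclassifies some points, yet the majority vote classifies everything correctly. This is the core of the "predictive power" claim made for ensemble methods, here reduced to its simplest possible form.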

Opaqueness and automation are their main common features, consisting of the secrecy of the algorithmic code and very limited human input.Footnote 21 This typical rationality is blind, as algorithms – Zuboff notes – inform operations given the interaction of these two aspects. Nonetheless, explainability and interpretability are also linked to the potential of algorithmic legal design as a rational means.Footnote 22 Rational algorithmic capability is linked to the most efficient use of data and of the inferences based on them. However, the development of data-driven techniques in the algorithmic architecture determines a triangulation among market, law, and technology. To unleash the full potential of data, rational means deployed to create wider data accessibility and sharing for private and public actors are now being devised in many areas of our lives. However, it should be borne in mind that the use of algorithms as a tool for speeding up the efficiency of the public sector cannot be examined separately from the risk of algorithmic surveillance based on indiscriminate access to private-sector data.Footnote 23 This is because the entire chain of services depends upon more or less overarching access to private-sector data. Access to those data requires a strong interaction between public actors’ political power and private actors’ economic and technological capability. This dynamic is pervasive insofar as it dominates our daily life, from market strategy to economic supply. Furthermore, once the ‘sovereigns’ of the nation-states and their borders have been overcome, data flows re-articulate space in an endless way. The paradox of creating space without having a territory is one of the rationales of the new computational culture that is building promises for the future.

5.3 Law Production by Data

Law production is increasingly subjected to a specialized rationality.Footnote 24 Quantitative knowledge feeds the aspiration of the state bureaucracy’s ‘rationality’, since it helps dress the exercise of public powers in an aura of technical neutrality and impersonality, apparently leaving no room for the discretion of individual power.Footnote 25 Behind the appearance of the Weberian bureaucratic principle sine ira et studio – which refers to the exclusion of affective, personal, non-calculable, and non-rational factors in the fulfilment of civil servants’ dutiesFootnote 26 – the use of classification and measurement techniques affecting human activities generates new forms of power that standardize behaviours in order to forecast the expectations, performances, and conduct of agents.Footnote 27 As Zuboff rightly highlights, ‘instrumentarian power reduces the human experience to measurable observable behaviour while remaining steadfastly indifferent to the meaning of that experience’.Footnote 28 However, even though the production of law through customized and tailored solutions can be a legitimate goal of computational law, it is not the whole story. The social context may change while the law is in force, but technology reflects changing social needs more visibly than the law and apparently provides swifter answers.Footnote 29 The law, by contrast, should filter daily changes, including technological ones, into its own language while regulating a mutable framework. To compete with other prescriptive systems, the law may be used either as an element of computational rationality or as a tool that is itself computable for achieving specific results. In the first case, the law guides and constrains the action of rational agents through the legal design of algorithms, as an external constraint. In the second case, regulatory patterns are conveyed by different nudges that use the law in a predetermined way to achieve a given goal.
Depending on which of these models is chosen, there is a potential risk for the autonomy of the law with respect to algorithmic rationality. Both the autonomy of the law and the principle of certainty applicable to individuals are at stake. This is an increasingly relevant challenge, since the whole of human existence is fragmented through data.

Against these two drawbacks, the law may develop its internal rationality in a third way: as the product of a legal culture that copes with social challenges and needs. Essentially, legal culture is the way in which society reflects upon itself through the doctrinal and conceptual systems elaborated by lawyers, through interpretation, and through models of reasoning.Footnote 30 This entails the law being a rational means not only due to its technical linguistic potentialFootnote 31 but also due to its technical task of producing social order.Footnote 32 As Weber notes, the superiority of bureaucratic legal rationality over other rational systems is technical.

Nonetheless, not all times reflect a good legal culture, as it can be strongly affected by political and social turmoil. In the age of datification, all fragments of daily life are translated into data, and it is technically possible to shape different realities on demand, including information politics and the market. The creation of propensities and assumptions through algorithms as the basis of a pre-packaged concept of the law – driven by colonizing factors – breaks off the spontaneous process through which legal culture surrounds the law. As a result, the effects of algorithmic legal predictions conflict with the goal of legal rationality, which is to establish certain hypotheses and to cluster factual situations within them. The production of legal culture entails the law being the outcome of specific knowledge, and normative meanings being the result of a contextual Weltanschauung. This aspect has nothing to do either with legitimacy or with effectiveness, but rather with the way in which the law relies on society. In particular, the capability to produce social consequences that are not directly addressed by the law, by suggesting certain social behaviours and by activating standardized decisions on a large scale, is such a powerful tool that it has been considered the core of algorithmic exception states.Footnote 33 The idea of exception is explained by the continuous confusion between the rule of causality and the rule of correlation.Footnote 34 Such a blurring between causes and effects, evidence and probabilities, causal inferences and variables affects database structures, administrative measures presented in the form of algorithmic code, and ultimately rules.Footnote 35 Algorithms lack adaptability because they are based on a causal model that cannot replicate the inferential process of humans to which the general character of the law refers.
Human causal intuitions handle uncertainty differently from machine learning techniques.Footnote 36

Data is disruptive for its capability to blur the threshold between what is inside and what is outside the law. The transformation of human existence into data is at the crossroads of the most relevant challenges for law and society. Data informs the functioning of legal patterns, but it can also be a component of law production. A reflection on the social function of the law in the context of algorithmic rationality is useful in order to understand what type of data connections are created for regulatory purposes within an ‘architecture of assumptions’, to quote McQuillan. Decoding algorithms sometimes allows one to interpret such results, even though the plurality and complexity of societal patterns cannot be reduced to the findings of data analysis or the inferential interpretations generated by automated decision-making processes. The growing amount of data, despite being increasingly the engine of law production, does not reflect the complexity of social reality, which instead involves possible causal interactions between technology, reality, and regulatory patterns, and alternative compositions of them, depending upon uncertain variables. Datification, on which advanced technologies are generally based, has profoundly altered the mechanisms of production of legal culture, which cannot easily be reduced to data aggregation or data analysis. Relevant behaviours and social changes nourish the inferences that can be made from data streams: although they can be the output of the law, they will never be the input of legal culture. Between the dry facts and their causal explanation lies a very dense texture, elaborated by specialized jurists, legal scholars, and judges. Furthermore, globalization strongly shapes common characters across different legal traditions, no longer identifiable with an archetypal idea of state sovereignty.
This depends upon at least two factors: on the one hand, the increasing cooperation between private and public actors in data access and information management beyond national borders; on the other hand, the increasing production of data from different sources. Nonetheless, not much attention has been paid to the necessity of safeguarding the space of legal culture against the overproduction of law by data. The regulation of technology, combined with the legal design of technology, tends to create a misleading overlap between the two, because technological feasibility is becoming the natural substitute for legal rationales. Instead, I argue that the autonomous function of legal culture should be reclaimed and preserved as the theoretical grid for data accumulation. What legal culture calls into question is the reflexive social function of the law, which data-driven law erases immediately by producing a computational output. In addition, the plurality of interconnected legal systems cannot be reduced to data. The increasing production of law resulting from data does not reflect the complexity of social reality. How data, and the technologies based on them, affect the rise of legal culture and the production of data-driven laws does not depend on data alone. According to a simple definition of legal culture as ‘one way of describing relatively stable patterns of legally oriented social behaviour and attitudes’,Footnote 37 one may think of data-driven law as technologically oriented legal conduct.

The ‘commodification of “reality” and its transformation into behavioural data for analysis and sales’,Footnote 38 defined by Zuboff as surveillance capitalism, has made private human experience a ‘free raw material’Footnote 39 that can be elaborated and transformed into behavioural predictions feeding production chains and business. Data extraction allows the capitalistic system to know everything about everyone. It is a ‘one-way process, not a relationship’, which produces identity fragmentation and attributes an exchange value to single fragments of identity itself.Footnote 40 Algorithmic surveillance indeed produces a twofold phenomenon: on the one hand, it forges the extraction process itself, which is predetermined to be predictive; on the other hand, it determines effects that are not fully explainable, despite all the accurate proxies input into the system. These qualities define operational variables that are processed at such high speed that it is hard for humans to monitor them.Footnote 41

In the light of an unprecedented transformation that is radically shaping the development of personality as well as common values, the role of the law should be not only to guarantee ex post legal remedies but also to reconfigure the dimension of human beings, technology, and social action within integrated projects of coexistence with regulatory models. When an individual is subject to an automated decision-making process – one that determines better or worse chances of well-being, greater or lesser opportunities to find a good job, or, in the case of predictive policing, a threat to the presumption of innocence – the social function of the law is necessary to cope with the increasing complexity of relevant variables and to safeguard freedom. Power relationships striving to impose subjugation vertically, along lines of command and obedience, are replaced by a new ‘axiomatic’ one: the ability continuously to un-code and re-code the lines along which information, communication, and production intertwine, combining differences rather than forcing unity.

5.4 The Socio-legal Approach

The current socio-legal debate on the application of algorithms to legal frameworks is very much focused on issues related to data-driven innovation. Whereas the internal approach is still dominant in many regulatory areas, the relationship between law and technology requires an external perspective that takes different possibilities into account. As the impact of artificial intelligence on the law produces social and cultural patterns, a purely internal legal approach cannot yield a comprehensive understanding. Moreover, whereas the law produces binding effects depending on whether certain facts happen or not, algorithms are performative, in the sense that the effect they aim to produce is encompassed in the algorithmic code. The analysis of both the benefits and the risks of algorithmic rationality has societal relevance for the substantial well-being of individuals. On the one hand, the lack of an adequate sectoral regulatory framework requires a cross-cutting analysis to highlight potential shortcomings in the existing legal tools and their interrelationships. In addition, operational solutions should be proactive in outlining concrete joined-up policy actions, which also consider the role of soft-law solutions. On the other hand, the potential negative impact of biased algorithms on rights protection and non-discrimination risks establishing a legal regime for algorithmic rationality that does not meet societal needs. In order to address the interplay between societal needs, rights, and algorithmic decision-making, it is relevant to pinpoint several filters on the use of AI technology in daily life.

For example, a social filter sets limits on the manner in which technology is applied, on the basis of the activities of people and organizations. A well-known recent example of a social filter is the opposition of taxi drivers and their backing organizations to transport platforms and services. An institutional filter sets institutionally determined limits on the ways in which technology can be applied; this type of institutional system includes the corporate governance model, the education system, and the labour market system. A normative filter sets regulatory and statute-based limitations on the manner in which technology can be applied. For example, the adoption of self-driving vehicles in road traffic will be slow until the related questions of responsibility have been conclusively settled in legislation. Last but not least, an ethical filter sets restrictions on the ways in which technology is applied.

A further step requires identifying a changing legal paradigm that progressively shifts attention from the idea of a right to a reasonable explanation of the algorithm, as a form of transparency, to the right to reasonable inferences (through an extensive interpretation of the notion of personal data that includes decisional inferences), or towards an evolutionary interpretation of the principle of good administration.Footnote 42 The evolutionary interpretation of the principle of good administration has placed the algorithmic ‘black box’ within a more fruitful path, oriented towards the legality and responsibility of the decision maker in the algorithmic decision-making process. This is particularly relevant in the field of preventive surveillance, for example, as it is mainly a public service whose technological methods can be interpreted in the light of the principle of good administration.

More broadly, the rationale of AI in the digital single market should inter alia guarantee: (1) better, cost-efficient services; (2) unified cross-border public services, with increased efficiency and improved transparency; (3) the participation of individuals in the decision-making process; and (4) improved use of AI in the private sector, with the potential to enhance business and competitiveness.Footnote 43

In order to achieve these objectives, it is necessary to evaluate the social impact, as well as the risks and opportunities, entailed by the interaction between public and private actors in accessing data through the use of algorithmic rationality combined with legal rationality. However, the optimization of organizational processes in terms of efficiency, on the one hand, and the degree of users’ satisfaction, on the other, are not sufficient factors for facing the impact of algorithms on rights. The law preserving individual chances of emancipation is at the centre of this interaction, constituting the beginning and the end of the causal chain, since both the production of law for protecting rights and the violation of rights significantly alter this relationship. This aspect is significant, for instance, in the field of machine learning carried out on the basis of the mass collection of data flows, from which algorithms are able to learn. The ability of machine learning techniques to model human behaviour, to codify reality, and to stigmatize groups increases the risk of entrenching static social situations, undermining the free and self-determined development of personality. Such a risk is real irrespective of whether algorithms are used to align a legal system with a predetermined market model or to reach a precise outcome of economic policy. In both cases, algorithms exceed the primary function of the law, which is to match the provision of general and abstract rules with concrete situations through adaptive solutions. Such an adaptation process is missing in the algorithmic logic, because the algorithmic code is unchangeable.

Law as a social construction is able to address specific situations and, at the same time, to change in its interpretation or according to social needs. Indeed, law should advocate an emancipatory function for human beings, who are not subject to personal powers. If applied to algorithmic decision-making in the broadest context, the personality of laws may result in tailored and fragmented pictures corresponding to ‘social types’ built on profiling techniques. This is the reason why law production by data processed through algorithms cannot be the outcome of any legal culture, as it would be a pre-packaged solution detached from the institutional and political context surrounding causes and effects. Nonetheless, the increasingly tailored production of data-driven law through algorithmic rationality cannot cross such a threshold in a way that enables decision-making processes – at every level of daily life – to disregard autonomy, case-by-case evaluation, and freedom.

The alignment of legal requirements and algorithmic operational rules must always be demonstrated ex post both at a technical level and at a legal level in relation to the concrete case.

5.5 Data Production by Law

Against the backdrop of data-driven law, legal rationality should be able to frame a model based instead on data production by law. A real challenge that should be borne in mind, however, is that algorithmic bureaucracy does not need a territory as legal bureaucracy does.Footnote 44 Algorithmic systems are ubiquitous, along with the data that feed machine learning techniques. Whereas the bureaucratic state is a way to organize and manage the distribution of power over and within a territory, algorithms are not limited by territory. The fragmentation of sovereignty operated by data flows shows that virtual reality is a radical alternative to territorial sovereignty and cannot be understood as a mere assignment of sovereign powers over portions of data. The ubiquity of data requires a new description of regulatory patterns in the field of cross-border data governance, since data location – which would, under certain conditions, determine the application of one legal regime and the exclusion of another – is not necessarily a criterion meaningfully associated with the data flow. Data is borderless, as it can be scattered across different countries.Footnote 45 Although data can be accessed everywhere irrespective of where it is located, its regulation and legal effects are still anchored to the territoriality principle. Access to data does not depend on physical proximity; nor are the regulatory schemes arising from data flows intrinsically or necessarily connected to any particular territory. Connection with territory must justify jurisdictional concerns, yet it no longer has much to do with physical proximity. Such a disconnection between information and its un-territorial nature potentially generates conflicts of law and may produce competing claims of sovereign powers.Footnote 46 This is magnified by algorithmic systems, which have no forum loci because they are valid formulations regardless of the geographical space in which they are applied.
Furthermore, they gather data sets irrespective of borders or jurisdictions. Bureaucracy’s functioning depends much upon borders, as it works only within a limited territory.Footnote 47 By contrast, algorithms are unleashed from territories but can affect multiple jurisdictions, as the algorithmic code is territorially neutral. This may be potentially dangerous for two reasons: on the one hand, algorithms can transversally impact different jurisdictions, regardless of the legal systems and regulatory regimes involved; on the other hand, the disconnection of the algorithmic code from territory implies a production of law that does not emerge from legal culture. Even though legal culture is not necessarily bound to the concept of state sovereignty,Footnote 48 it is inherent to a territory as a political and social space. Weber rejected the vision of the modern judge as a machine into which ‘documents are input together with expenses’ and which spits out the sentence together with the motives mechanically inferred from the paragraphs. Indeed, there is space for individualizing assessment, in respect of which the general norms have a negative function in that they limit the official’s positive and creative activity.Footnote 49 This massive difference between legal rationality and algorithmic rationality requires rethinking the relationship between law, technology, and legal culture. Data production by law can be a balanced response to reconnect algorithmic codes to the boundaries of jurisdictions. Of course, many means of data production by law exist. A simple legal design of data production is not the optimal option. Matching the algorithmic production of data with legal compliance can be mechanically ensured through the application of certain patterns inserted in the algorithmic process. Instead, the impact of legal culture on the algorithmic production of data shapes a socio-legal context inspiring the legal application of rules on data production.

The experience of the Italian Administrative Supreme Court (Council of State) is noteworthy. After the leading case of 8 April 2019, n. 2270, which opened the path to administrative algorithmic decision-making, the Council of State confirmed its case law.Footnote 50 It upheld the lawfulness of automated decision-making in administrative law, setting out limits and criteria.Footnote 51 For the first time, it extended automated decision-making to both the discretionary and the binding activities of public administration. The use of algorithmic administrative decision-making is encompassed by the principle of good performance of administration pursuant to article 97 of the Italian Constitution. The Council stated that the fundamental need for protection posed by the use of the so-called algorithmic IT tool is transparency, owing to the principle that decisions must be reasoned.Footnote 52 It expressly denied algorithmic neutrality, holding that predictive models and criteria are the result of precise choices and values. Conversely, the dangers associated with the instrument are not overcome by the rigid and mechanical application of all the detailed procedural rules of Law no. 241 of 1990 (such as, for example, the notice of initiation of the proceeding).

The underlying innovative rationale is that the ‘multidisciplinary character’ of the algorithm requires not only legal but also technical, IT, statistical, and administrative skills, and does not exempt the administration from the need to explain and translate the ‘technical formulation’ of the algorithm into the ‘legal rule’ in order to make it legible and understandable.

Since the algorithm becomes a modality of the authoritative decision, it is necessary to determine specific criteria for its use. Surprisingly, the Council carried out an operation of legal blurring, affirming that knowability and transparency must be interpreted according to articles 13, 14, and 15 GDPR. In particular, the interested party must be informed of the possible execution of an automated decision-making process; in addition, the owner of the algorithm must provide meaningful information on the logic used, as well as the significance and the expected consequences of such processing for the interested party.

Additionally, the Council adopted three supranational principles: (1) the full knowability of the algorithm used and the criteria applied, pursuant to article 41 of the EU Charter (‘Right to good administration’), according to which everyone has the right to know of the existence of automated decision-making processes concerning him or her and, in that case, to receive meaningful information on the logic used; (2) the non-exclusivity of automated decision-making, according to which everyone has the right not to be subjected to solely automated decision-making (similarly to article 22 GDPR); and (3) the non-discrimination principle, as a result of the application of the principle of non-exclusivity, plus data accuracy, minimization of the risk of errors, and data security.Footnote 53 In particular, the data controller must use appropriate mathematical or statistical procedures for profiling, implementing adequate technical and organizational measures to ensure the correction of factors that lead to data inaccuracy, thus minimizing the risk of errors.Footnote 54 Input data should be corrected to avoid discriminatory effects in decision-making output. This operation requires the cooperation of those who instruct the machines that produce these decisions. The goal of a legal design approach is to filter data production through the prevention of potential algorithmic harms and the protection of individual rights, and to figure out which kinds of legal remedies are available and useful to individuals. The first shortcoming of such an endeavour is that – taking for granted the logic of garbage in/garbage out, according to which inaccurate inputs produce wrong outputs – a legal input is not a sufficient condition to produce a lawful output.
Instead, an integrated approach such as the one adopted by the Council of State relies on more complex criteria for assessing the lawfulness of algorithmic decision-making, also in respect of the actors involved. First, it is necessary to ensure the traceability of the final decision to the competent body, pursuant to the law conferring the power of the authoritative decision on the civil servants in charge.Footnote 55 Second, the comprehensibility of algorithms must cover all their aspects but cannot result in harm to IP rights. In fact, pursuant to art. 22, let. c, Law 241/90, holders of an IP right on software are considered counter-interested parties,Footnote 56 but the Consiglio di Stato does not specifically address the issue of holders of trade secrets.


While discussing similarities between bureaucratic and algorithmic rationality, I deliberately did not address the issue of secrecy. According to Weber, every power that aims at its own preservation is, in one of its features, a secret power. Secrecy is functional to all bureaucracies in securing the superiority of their technical tasks over other rational systems.Footnote 57 Secrecy is also the fuel of algorithmic reasoning, as its causal explanation is mostly secret. This common aspect, if taken for granted as a requirement of efficient rational decision-making, should be weighed very precisely in order to render algorithms compliant with the principle of legality.

This chapter has explored how algorithmic bureaucracy proves to be a valuable form of rationality so long as it does not totally eliminate human intermediation in the form of imputability, responsibility, and control.Footnote 58 To be sure, this may happen only under certain conditions, which can be summarized as follows: (1) technological neutrality for law production cannot be a space ‘where legal determinations are de-activated’Footnote 59 in such a way as to externalize control; (2) law production by data is not compatible with Weberian legal rationality; (3) the translation of technical rules into legal rules needs to be filtered through legal culture; (4) data production by law is the great challenge of algorithmic rationality; (5) algorithmic disconnection from territory cannot be replaced by algorithmic global surveillance; (6) the legal design of algorithmic functioning is not an exhaustive solution; (7) the linkage of automated decision-making to the principle of good administration is a promising trajectory along which concepts such as traceability, knowability, accessibility, readability, imputability, responsibility, and non-exclusivity of the automated decision have been developed in the public interest.

All these conditions underlie a regulatory idea that draws the role of lawyers from what Max Weber defined as die geistige Arbeit als Beruf (intellectual work as a vocation). In this respect, algorithmic rationality may be compatible with creative legal activity as long as a society is well equipped with good lawyers.Footnote 60 The transformation of law production by data into data production by law is a complex challenge that lawyers can drive if they do not give up being humanists in order to be only specialized experts.Footnote 61 From this perspective, algorithmic bureaucratic power has a good chance of becoming an ‘intelligent humanism’.Footnote 62 To accomplish this task, the law should re-appropriate its own instruments of knowledge production. This does not mean developing a simplistic categorization of legal compliance requirements for machine-learning techniques. Nor does it rely only on the formal application of legal rationality to the algorithmic process. In the long run, it should bring about increasing forms of data production by law. Data production by law denotes the capability of the law to pick and choose those data that are relevant to elaborate new forms of legal culture. How the law autonomously creates knowledge from experiences that impact on society is a reflexive process that needs institutions as well as individuals. The more this process is enshrined in a composite legal culture, the more chances the law has to recentre its own role in the development of democratic societies.

6 Human Rights and Algorithmic Impact Assessment for Predictive Policing

Céline Castets-Renard Footnote *
6.1 Introduction

Artificial intelligence (AI) constitutes a major form of scientific and technological progress. For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks, such as processing large quantities of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects.Footnote 1 For instance, algorithms, or so-called Algorithmic Decision Systems (ADS),Footnote 2 are increasingly involved in systems used to support decision-making in many fields,Footnote 3 such as child welfare, criminal justice, school assignment, teacher evaluation, fire risk assessment, homelessness prioritization, Medicaid benefits, immigration decisions or risk assessments, and predictive policing, among other things.

An Automated Decision(-making/-support) System (ADS) is a system that uses automated reasoning to facilitate or replace a decision-making process that would otherwise be performed by humans.Footnote 4 These systems rely on the analysis of large amounts of data from which they derive useful information to make decisions and to inferFootnote 5 correlations,Footnote 6 with or without artificial intelligence techniques.Footnote 7

Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. For instance, New York, Chicago, and Los Angeles use predictive policing systems built by private actors, such as PredPol, Palantir, and Hunchlab,Footnote 8 to assess crime risk and forecast its occurrence, in the hope of mitigating it. Most often, such systems predict the places where crimes are most likely to happen in a given time window (place-based), based on input data such as the location and timing of previously reported crimes.Footnote 9 Other systems analyze who will be involved in a crime, as either victim or perpetrator (person-based). Predictions can focus on variables such as places, people, groups, or incidents. The goal is also to deploy officers better in a time of declining budgets and staffing.Footnote 10 Such tools are mainly used in the United States, but European police forces have expressed an interest in using them to protect the largest cities.Footnote 11 Predictive policing systems and pilot projects have already been deployed,Footnote 12 such as PredPol, used by the Kent Police in the United Kingdom.

However, these predictive systems challenge fundamental rights and guarantees of the criminal procedure (Section 6.2). I will address these issues by taking into account the enactment of ethical norms to reinforce constitutional rights (Section 6.3),Footnote 13 as well as the use of a practical tool, namely Algorithmic Impact Assessment, to mitigate the risks of such systems (Section 6.4).

6.2 Human Rights Challenged by Predictive Policing Systems

In proactive policing, law enforcement uses data and analyzes patterns to understand the nature of a problem. Officers attempt to prevent crime and mitigate the risk of future harm. They refer to the power of information, geospatial technologies, and evidence-based intervention models to predict what and where something is likely to happen, and then deploy resources accordingly.Footnote 14

6.2.1 Reasons for Predictive Policing in the United States

There are many reasons why predictive policing systems have been specifically deployed in the United States. First, the high level of urban gun violence pushed the police departments of Chicago,Footnote 15 New York, Los Angeles, and Miami, among others, to take preventative action.

Second, it is an opportunity for American tech companies to deploy, within the national territory, products that have previously been developed and put into practice within the framework of international US military operations.

Third, beginning in 2007, within the context of the financial and economic crisis and the ensuing budget cuts in police departments, predictive policing tools were seen as a way ‘to do more with less’.Footnote 16 Concomitantly, the National Institute of Justice (NIJ), an agency of the US Department of Justice, awarded several police departments grants to conduct research and trial these new technologies.Footnote 17

Fourth, the emergence of predictive policing tools was prompted by a crisis of weakened public trust in law enforcement in numerous cities. Police violence, particularly towards young African Americans, led to a search for more ‘objective’ methods to improve the social climate and the conditions of law enforcement. Public outcry against the discrimination risks inherent in traditional methods has come from citizens, from social movements such as ‘Black Lives Matter’, and even, in an official capacity, from the US Department of Justice (DOJ) investigations into the actions of the Ferguson Police Department after the death of Michael Brown.Footnote 18 Following this incident, the goal was to find new and modern methods as unbiased as possible toward African Americans. The unconstitutionality of methods,Footnote 19 such as Stop-and-Frisk in New York and the Terry Stop,Footnote 20 based on the US Supreme Court’s decision in Terry v. Ohio, converged with the rise of new, seemingly perfect technologies. The Fourth Amendment of the US Constitution prohibits ‘unreasonable searches and seizures’ and states that ‘no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized’.

Fifth, privacy laws are less stringent in the United States than in the European Union, owing to the sectoral approach to protection taken in the United States. This normative difference can explain why the deployment of predictive policing systems was easier in the United States.

6.2.2 Case Studies: PredPol and Palantir

Multiple methods and tools are available for predicting crime. I propose a closer analysis of two tools offered by the PredPol and Palantir companies.

PredPol

PredPol is commercial software offered by the American company PredPol Inc.; it was initially tested by the LAPDFootnote 21 and eventually used in Chicago and in Kent County in the United Kingdom. The tool’s primary purpose is to predict, accurately and in real time, the locations and times at which crimes have the highest risk of occurring.Footnote 22 In other words, the tool identifies risk zones (hotspots) based on the same types of statistical models used in seismology. The input data include city and territorial police archives (reports, ensuing arrests, emergency calls), all applied in order to identify the locations where crimes occur most frequently, so as to ‘predict’ which locations should be prioritized. Here, the target is places, not people. The types of offenses covered include robberies, automobile thefts, and thefts in public places. A US patent on the invention of an ‘Event Forecasting System’Footnote 23 was approved on 3 February 2015 by the US Patent and Trademark Office (USPTO). The PredPol company claims that its product helps improve the allocation of resources in patrol deployment. Finally, the tool also incorporates the position of all patrols in real time, which allows departments not only to know where patrols are located but also to control their positions. Providing information on a variety of mobile tools, such as tablets, smartphones, and laptops, in addition to desktop computers, also marked a break from previously used methods.
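The ‘seismology-style’ risk scoring described above can be illustrated with a self-exciting model, in which each past crime temporarily raises the risk in its own grid cell before decaying. The following minimal Python sketch uses a grid size, decay rate, and function names of my own choosing; these are illustrative assumptions, not PredPol’s actual parameters or code.

```python
import math
from collections import defaultdict

# Illustrative sketch of grid-based, self-exciting hotspot scoring:
# each past event adds a contribution to its grid cell that decays
# exponentially with elapsed time. All constants are hypothetical.

CELL_SIZE_M = 150.0   # side of a square grid cell, in metres (assumed)
DECAY_PER_DAY = 0.2   # how fast a past event's influence fades (assumed)

def cell_of(x_m, y_m):
    """Map planar coordinates (metres) to a grid-cell index."""
    return (int(x_m // CELL_SIZE_M), int(y_m // CELL_SIZE_M))

def hotspot_scores(events, now_day):
    """events: list of (x_m, y_m, day) for past reported crimes.
    Each event adds exp(-DECAY_PER_DAY * age) to its cell's score,
    so recent events count more than old ones."""
    scores = defaultdict(float)
    for x, y, day in events:
        age = now_day - day
        if age >= 0:
            scores[cell_of(x, y)] += math.exp(-DECAY_PER_DAY * age)
    return dict(scores)

# Two recent, nearby events and one older, distant one.
past = [(100, 100, 9), (120, 140, 8), (900, 900, 1)]
ranked = sorted(hotspot_scores(past, now_day=10).items(),
                key=lambda kv: -kv[1])
# The cell holding the two recent events outranks the cell with the old one.
```

Production systems of this kind estimate background and decay parameters from historical data rather than fixing them by hand; the sketch only shows the shape of the computation, in which recent, clustered incidents dominate the patrol ranking.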

The patent’s claims do not specify the manner in which data are used, calculated, or applied. The explanation provided in the patent rests essentially on the processes used by the predictive policing system – in particular the organizational method (the three types of data (place, time, offense), the geographic division into cells, the transfer of information by a telecommunications system, the procedure for receiving historical data, access to GPS data, the link with legal information from penal codes, etc.) – rather than on any explanation of the technical aspects. The patent focuses more particularly on the various graphic interfaces and features available to users, such as hotspot maps (heatmaps), which display spatial-temporal smoothing models of historical crime data. The patent covers the use of the method in its entirety but does not extend to the predictive algorithm. The technical aspects are therefore not subject to ownership rights but are instead covered by trade secrets. Even though PredPol claims to be transparent about its approach, the focus is on the procedure rather than on the algorithm and the mathematical methods used, despite the publication of several articles by the inventors.Footnote 24 Some technical studiesFootnote 25 have been carried out by using publicly available data in cities such as Chicago and applying the data to models similar to PredPol’s. However, the tool remains opaque.

It is difficult to estimate the value that these forecasts add in comparison to historical hotspot maps. The few published works evaluating this approach concern not the quality of the forecasting but the crime statistics. Contrary to PredPol’s claims,Footnote 26 the difference in efficiency is ultimately modest, depending on both the quantity of data available on a timescale and the type of offense committed. The studies most often demonstrate that predicted crimes fall within the historically most criminogenic areas of the city. Consequently, the software teaches nothing to the most experienced police officers who may be using it. While the Kent Police Department was the first to introduce ‘predictive policing’ in Europe in 2013, it has been officially recognized that it is difficult to prove whether the system has truly reduced crime. It was finally stopped in 2018Footnote 27 and replaced by a new internal tool, the NDAS (National Data Analytics Solution) project, to reduce costs and achieve higher efficiency. A tool developed in one context will not necessarily be relevant in another criminogenic context, as populations, the geographic configurations of cities, and the organization of criminal groups differ.

Moreover, the software tends to systematically send patrols into neighbourhoods considered more criminogenic, which in the United States are mainly inhabited by African American and Latino/a populations.Footnote 28 Historical data certainly show high risk in these neighbourhoods, but most of the data were collected in the era of policies such as Terry stops and Stop-and-Frisk and were biased, discriminatory, and ultimately unconstitutional. The system, however, does not examine or question the trustworthiness of such data. Furthermore, the offenses chosen, primarily property crime (burglaries, car thefts), are the types of crime more likely to be committed by the poorest and most vulnerable populations, which frequently comprise the aforementioned minority groups. The results would naturally differ if white-collar crimes were considered; these are excluded from today’s predictive policing because of the difficulty of modelling them and the absence of significant data. The fact that law enforcement chooses to prevent certain types of offenses rather than others through automated tools is not socially neutral and discriminates against part of the population. The founders of PredPol and its developers responded to these critiques of bias in several articles published in 2017 and 2018, in which they largely emphasize the auditing of learning data.Footnote 29 High-quality learning data are essential to avoid and reduce bias. But if the data used by PredPol are biased, this shows that society itself is biased as a whole; PredPol merely reflects this fact, without being the origin of the discrimination. Consequently, the bias present in the tool is no greater than the bias already embedded in the data collected by police officers on the ground.

Palantir

Crime Risk Forecasting is a patent held by the company Palantir Technologies Inc., based in California. The device has been deployed in Los Angeles, New York, and New Orleans, but the contracts are often kept secret.Footnote 30 Crime Risk Forecasting is an ensemble of software and hardware constituting an ‘invention’ outlined in a US patent obtained on 8 September 2015.Footnote 31 The patent combines several components and features, including a database manager, visualization tools (notably interactive geographic cartography), and criminal forecasts. The goal is to assist police in predicting when and where crime will take place. The forecasts of criminal risk are established within a geographic and temporal grid, for example, cells of 250 square meters over an eight-hour police patrol.

The data include:

  • Crime history, classified by date, type, location, and more. The forecast can provide either a precise date and time, or a period of time over which risk is uniformly distributed. Similarly, the location can be more or less precise, either by address, GPS coordinates, or geographic zone. The offenses can be, for example, robberies, vehicle thefts (or thefts of belongings from within vehicles), and violence.

  • Historical information which is not directly connected to crime: weather, presence of patrols within the grid or in proximity, distribution of emergency service personnel.

  • Custody data indicating individuals who have been apprehended or who are in custody for certain types of crimes. These data can be used to decrease the crime risk within a zone or to increase it after the release of an accused or convicted individual.

Complex algorithms can be developed by aggregating methods that associate hot-spotting, histograms, criminology models, and learning algorithms. The possible combinations and the aggregation of multiple models and algorithms, together with the large number of variables, result in a highly complex system, with a considerable number of parameters to estimate and hyperparameters to optimize. The patent specifies neither how these parameters are optimized nor the expected quality of the forecasts. It is difficult to imagine any police force using this tool regularly without constant assistance from Palantir. Moreover, one may wonder: what are the risks of re-identification of victims from the historical data? What precautions are taken to anonymize data and prevent re-identification? And what about custody data, which are not only personal data but are, in principle, only to be processed by law enforcement and government criminal justice services? The features of these ADS thus remain opaque, and the data they process are equally unclear.
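The aggregation idea described in the patent – several component models each scoring a grid cell, combined into an ensemble – can be sketched in a few lines. The component names and weights below are purely hypothetical assumptions of ours; Palantir’s actual models, parameters, and optimization procedure are not disclosed.

```python
# Hypothetical ensemble: a weighted mean of per-cell risk scores produced
# by several component models (names and weights are illustrative only).
def ensemble_risk(cell_scores, weights):
    """cell_scores: {model_name: score in [0, 1]}; returns the weighted mean."""
    total_weight = sum(weights[m] for m in cell_scores)
    return sum(s * weights[m] for m, s in cell_scores.items()) / total_weight

weights = {"hotspot": 0.5, "histogram": 0.2, "criminology_model": 0.3}
risk = ensemble_risk(
    {"hotspot": 0.8, "histogram": 0.4, "criminology_model": 0.6}, weights
)  # patrols would be directed to the cells with the highest ensemble risk
```

Even this toy version makes the complexity problem visible: every added model brings its own parameters, and the weights themselves must be estimated and validated, which the patent does not explain.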

In this context, it would be a mistake to take predictive policing as a panacea to eradicate crime. Many concerns focus on inefficiency, the risk of discrimination, and the lack of transparency.

6.2.3 Fundamental Rights Issues

Algorithms are fallible human creations, embedded with errors and bias, much like human processes. More precisely, an algorithm is not neutral and depends notably on the data used. Many legal scholars have revealed bias and racial discrimination in algorithmic systems,Footnote 32 as well as their opacity.Footnote 33 When algorithmic tools are adopted by governmental agencies without adequate transparency, accountability, and oversight, their use can threaten civil liberties and exacerbate existing problems within those agencies. Most often, the data used to train automated decision-making systems come from the agency’s own databases, and existing bias in an agency’s decisions is carried over into new systems trained on that biased data.Footnote 34 For instance, much of the data used by predictive policing systems comes from the Stop-and-Frisk program in New York City and the Terry stop policy. These historical data (‘dirty data’)Footnote 35 create a discriminatory pattern: data from 2004 to 2012 showed that 83 per cent of the stops were of black and Hispanic individuals and 33 per cent white. The overrepresentation of black and Hispanic people among those stopped may lead an algorithm to associate typically black and Hispanic traits with stops that lead to crime prevention.Footnote 36 Despite its over-inclusivity, inaccuracy, and disparate impact,Footnote 37 such data continue to be processed.Footnote 38 Consequently, the algorithms will treat African Americans as a high-risk population (a ‘feedback loop’ or self-fulfilling prophecy),Footnote 39 as greater rates of police inspection lead to higher rates of reported crime, thereby reinforcing disproportionate and discriminatory policing practices.Footnote 40 Obviously, these tools may violate human rights protections in the United States as well as in the European Union, both before and after their deployment.
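The feedback-loop mechanism can be made concrete with a toy simulation. This is not any vendor’s model, and every number in it is invented: two areas have the same true crime rate, but one starts with more recorded crime, so it receives more patrols, records more crime, and thereby keeps attracting patrols.

```python
import random

random.seed(42)

TRUE_RATE = 0.3                 # identical underlying crime rate everywhere
recorded = {"A": 30, "B": 10}   # biased history: area A was over-policed

for day in range(200):
    total = recorded["A"] + recorded["B"]
    for area in ("A", "B"):
        # patrols are allocated in proportion to recorded (not true) crime
        patrols = round(10 * recorded[area] / total)
        # a crime is recorded only when a patrol is present to observe it
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
# Despite identical true rates, A's share of recorded crime stays inflated,
# "confirming" the biased prediction.
```

The simulation shows the self-fulfilling prophecy in miniature: nothing in the loop ever measures the true crime rate, so the initial bias in the records is preserved and legitimized rather than corrected.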

A priori, predictive policing activities can violate the fundamental rights of individuals if certain precautions are not taken. Though predictive policing tools are useful for the prevention of offenses and the management of police forces, they should not be accepted as sufficient grounds for stopping and/or questioning individuals. Several fundamental rights can be violated in the case of abusive, disproportionate, or unjustified use of predictive policing tools: the right to physical and mental integrity (Charter of Fundamental Rights of the European Union, art. 3); the right to liberty and security (CFREU, art. 6); the right to respect for private and family life, home, and communications (CFREU, art. 7); the right to freedom of assembly and of association (CFREU, art. 12); the right to equality before the law (CFREU, art. 20); and the right to non-discrimination (CFREU, art. 21). The risks of infringing these rights are greater when predictive policing tools target people rather than places. The fact remains that the mere identification of a high-risk zone does not in itself confer greater powers on the police, who, in principle, must continue to operate within the framework of crime prevention and the maintenance of order.

In the United States, due process (the Fifth and Fourteenth Amendments)Footnote 41 and equal treatment clauses (the Fourteenth Amendment) could be infringed. Moreover, predictive policing could constitute a breach of privacy or infringe on citizens’ rights to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures without a warrant based on a ‘probable cause’ (the Fourth Amendment). Similar provisions have been enacted in the State Constitutions. Despite the presence of these theoretical precautions, some infringements of fundamental rights have been revealed in practice.Footnote 42

A posteriori, these risks are higher when algorithms are embedded in systems used to support decision-making by police departments. Law enforcement may have to answer for the conditions of use of these tools on a case-by-case basis when decisions involving individuals are reached. For example, the NYPD was taken to court over the use of the Palantir Gotham tool and its technical features.Footnote 43 The lack of information on the existence and use of predictive tools, the nature of the data in question, and the conditions under which algorithmic results based on automated processing are applied were all contested on the basis of a lack of transparency and the resulting impossibility of exercising the defence’s right to due process (the Fifth and Fourteenth Amendments).Footnote 44 Additionally, the media,Footnote 45 academics,Footnote 46 and civil rights organizationsFootnote 47 have called out the bias and discrimination within these tools, which violate the Fourteenth Amendment principle of equal protection of the laws. In EU law, the Charter of Fundamental Rights likewise guarantees the right to an effective remedy and access to a fair trial (CFREU, art. 47), as well as the presumption of innocence and the right of defence (CFREU, art. 48). All of these rights can be threatened if the deployment of predictive policing tools is not coupled with sufficient legal and technical requirements.

The necessity of protecting fundamental rights must be reiterated in the algorithmic society. To achieve this, suitable tools must be deployed to ensure their proper enforcement. Certain ethical principles need to be put in place in order to effectively protect and reinforce fundamental rights. The goal is not to substitute ethical principles for human rights but to add new ethical considerations focused on the risks generated by ADS. These ethical principles must be accompanied by practical tools that provide designers and users with concrete information about what is expected when making or using automated decision-making tools. Algorithmic Impact Assessment (AIA) constitutes an interesting way to provide concrete governance of ADS. I argue that while the European constitutional and ethical framework is theoretically sufficient, other tools must be adopted to guarantee the enforcement of fundamental rights and ethical principles in practice, thereby providing a robust framework that puts human rights at the centre.

6.3 Human Rights Reinforced by Ethical Principles to Govern AI

Before considering the enactment of ethical principles to reinforce fundamental rights in the use of ADS, one needs to identify whether or not efficient legal provisions are already enacted.

6.3.1 Statutory Provisions in the European Law

At this time, very few statutory provisions in European law are capable of reinforcing the respect and protection of fundamental rights in the use of ADS. ADS are algorithmic processes that require data in order to perform. Predictive policing systems do not necessarily use personal data, but some of them do. In that case, if the personal data processed concern data subjects within the European Union, the General Data Protection Regulation (GDPR) may apply to the private companies involved. Moreover, police services are subject to the Data Protection Law Enforcement Directive. The GDPR provides for several rights in favour of the data subject, especially the right to receive ‘meaningful information about the logic involved’ (arts. 13–15) and the right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’ (art. 22),Footnote 48 in addition to the Data Protection Impact Assessment (DPIA) tool (art. 35).Footnote 49

However, these provisions fail to provide adequate protection against violations of human rights. First, several exceptions restrict the impact of these rights. Article 22 paragraph 1 is limited by paragraph 2, under which the right ‘not to be subject to an automated decision’ does not apply when consent has been given or a contract concluded. The right is also excluded where exceptions have been enacted by the member states.Footnote 50 For instance, French lawFootnote 51 provides an exception in favour of the governmental use of ADS. Consequently, Article 22 is insufficient per se to protect data subjects. Second, ADS can produce biased decisions without processing personal data, especially when a group is targeted in the decision-making process. Even if the GDPR attempts to address the profiling of data subjects and decisions that affect groups of people, for instance through collective representation, such provisions are insufficient to prevent group discrimination.Footnote 52 Third, other risks to fundamental rights have to be considered, such as the procedural guarantees related to the presumption of innocence and due process. The protection of such rights is not, or at least not directly, within the scope of the GDPR. Personal data protection regulations cannot address all the social and ethical risks associated with ADS. Consequently, such provisions are insufficient, and because other specific statutory provisions have not yet been enacted,Footnote 53 ethical guidelines could be helpful as a first step.Footnote 54

6.3.2 European Ethics Guidelines for Trustworthy AI

In the EU, the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) are a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This group was set up by the European Commission in June 2018 as part of the AI strategy announced earlier that year. The AI HLEG presented a first draft of the Guidelines in December 2018. Following further deliberations, the Guidelines were revised and published in April 2019, on the same day as a European Commission Communication on Building Trust in Human-Centric Artificial Intelligence.Footnote 55

The Guidelines are based on the fundamental rights enshrined in the EU Treaties, with reference to dignity, freedoms, equality and solidarity, citizens’ rights, and justice, including the right to a fair trial and the presumption of innocence. These fundamental rights sit at the top of the hierarchy of norms of many states and international texts. Consequently, they are non-negotiable, let alone optional. However, the concept of ‘fundamental rights’ is merged with the concept of ‘ethical purpose’ in these Guidelines, which creates a normative confusion.Footnote 56 According to the Expert Group, while fundamental rights legislation is binding, it still does not provide comprehensive legal protection in the use of ADS. Therefore, the AI ethics principles have to be understood both within and beyond these fundamental rights. Consequently, trustworthy AI should be (1) lawful – respecting all applicable laws and regulations; (2) ethical – respecting ethical principles and values; and (3) robust – both from a technical perspective and with regard to its social environment.

The key principles are the principle of respect for human autonomy, the principle of prevention of harm, the principle of fairness, and the principle of explicability.Footnote 57 However, an explanation as to why a model has generated a particular output or decision (and what combination of input factors contributed to that) is not always possible.Footnote 58 These cases are referred to as ‘black box’ algorithms and require special attention. In those circumstances, other explicability measures (e.g., traceability, auditability, and transparent communication on system capabilities) may be required, provided that the system as a whole respects fundamental rights.

In addition to the four principles, the Expert Group established a set of seven key requirements that AI systems should meet in order to be deemed trustworthy: (1) Human Agency and Oversight; (2) Technical Robustness and Safety; (3) Privacy and Data Governance; (4) Transparency; (5) Diversity, Non-Discrimination, and Fairness; (6) Societal and Environmental Well-Being; and (7) Accountability.

Such principles and requirements certainly push in the right direction, but they are not concrete enough to show ADS designers and users how to ensure respect for fundamental rights and ethical principles. Returning to predictive policing, the risks to fundamental rights have been identified but not yet addressed. The recognition of ethical principles adapted to ADS is useful for highlighting specific risks, but nothing more. It is insufficient to protect human rights, and such principles must be accompanied by practical tools to guarantee their respect on the ground.

6.4 Human Rights Reinforced by Practical Tools to Govern ADS

In order to identify solutions and practical tools, excluding instruments of self-regulation,Footnote 59 the ‘Trustworthy AI Assessment List’ proposed by the Expert Group can first be considered. Aiming to operationalize the ethical principles and requirements, the Guidelines present an assessment list that offers guidance on the practical implementation of each requirement. This assessment list will undergo a piloting process in which all interested stakeholders can participate, in order to gather feedback for its improvement. In addition, a forum to exchange best practices for the implementation of Trustworthy AI has been created. However, the goal of these Guidelines and the List is to regulate activities linked with AI technologies via a general approach. Consequently, the measures proposed are broad enough to cover many situations and different applications of AI, such as climate action and sustainable infrastructure, health and well-being, quality education and digital transformation, tracking and scoring of individuals, and lethal autonomous weapon systems (LAWS). Since our study concerns predictive policing activities, however, it is more relevant to consider specific, practical tools that regulate governmental activities and ADS.Footnote 60 In this respect, the Canadian government enacted, in February 2019, a Directive on Automated Decision-MakingFootnote 61 and a method for AIA.Footnote 62 These tools aim to offer governmental institutions a practical method to comply with fundamental rights, laws, and ethical principles. I argue that these methods are relevant, in theory, to assessing the activity of predictive policing.

6.4.1 Methods: Canadian Directive on Algorithmic Decision-Making and the Algorithmic Impact Assessment Tool

The Canadian government announced its intention to increasingly utilize artificial intelligence to make, or assist in making, administrative decisions, in order to improve the delivery of social and governmental services. The government is committed to doing so in a manner compatible with core administrative law principles such as transparency, accountability, legality, and procedural fairness, based on the Directive and an AIA. An AIA is a framework to help institutions better understand and reduce the risks associated with ADS and to provide the appropriate governance, oversight, and reporting/audit requirements that best match the type of application being designed. The Canadian AIA is a questionnaire designed to assist the administration in assessing and mitigating the risks associated with deploying an ADS. The AIA also helps identify the impact level of the ADS under the proposed Directive on Automated Decision-Making. The questions focus on the business processes, the data, and the systems used to make decisions.
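The logic of such a questionnaire can be sketched in a few lines of code. The questions, weights, and thresholds below are illustrative assumptions only; the actual Canadian AIA uses its own published question set and scoring grid, which this sketch does not reproduce.

```python
# Hypothetical mini-questionnaire: each "yes" answer adds its weight to a
# raw score, and thresholds map the score to an impact level (I-IV).
QUESTIONS = {
    "affects_rights_or_freedoms": 3,
    "decision_is_fully_automated": 2,
    "uses_personal_or_historical_police_data": 2,
    "impacts_are_hard_to_reverse": 3,
}
THRESHOLDS = [(9, "Level IV"), (6, "Level III"), (3, "Level II"), (0, "Level I")]

def impact_level(answers):
    """answers: {question: bool}; returns the highest matching impact level."""
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q))
    return next(name for cutoff, name in THRESHOLDS if score >= cutoff)

# A predictive policing tool would plausibly answer "yes" to all four:
level = impact_level({q: True for q in QUESTIONS})
```

Under these assumed weights, a predictive policing tool lands at the highest impact levels, which is consistent with the assessment of such systems discussed later in this chapter.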

The Directive took effect on 1 April 2019, with compliance required no later than 1 April 2020. It applies to any ADS developed or procured after 1 April 2020 and to any system, tool, or statistical model used to recommend or make an administrative decision about a client (the recipient of a service). Consequently, it does not apply to the criminal justice system or criminal proceedings. The Directive is divided into eleven parts and three appendices, covering Purpose, Authorities, Definitions, Objectives and Expected Results, Scope, Requirements, Consequences, Roles and Responsibilities of the Treasury Board of Canada Secretariat, Application, References, and Enquiries. The three appendices concern the Definitions (appendix A), the Impact Assessment Levels (appendix B), and the Impact Level Requirements (appendix C).

The objective of this Directive is to ensure that ADS are deployed in a manner that reduces risks to Canadians and federal institutions, leading to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law. The expected results of this Directive are as follows:

  • Decisions made by federal government departments are data-driven, responsible, and comply with procedural fairness and due process requirements.

  • Impacts of algorithms on administrative decisions are assessed, and negative outcomes are reduced, when encountered.

  • Data and information on the use of ADS in federal institutions are made available to the public, where appropriate.

Concerning the requirements, the Assistant Deputy Minister responsible for the program using the ADS, or any other person named by the Deputy Head, is responsible for the AIA, transparency, quality assurance, recourse, and reporting. This person must provide clients with any applicable recourse options available to challenge the administrative decision and must complete an AIA prior to the production of any ADS, using the AIA questionnaire to assess and mitigate the risks associated with deploying the system.

6.4.2 Application of These Methods to Predictive Policing Activities

Though such measures specifically concern the Government of Canada and do not apply to criminal proceedings, I propose to use this method both abroad and more extensively. It can be relevant for any governmental decision-making, especially for predictive policing activities. I will consider the requirements that should be respected by those responsible for predictive policing programs. Those responsible should be appointed to perform their work on the ground, for each predictive tool used, on a case-by-case basis.

The first step is to assess the impact against the ‘impact assessment levels’ provided in appendix B of the Canadian Directive.

Appendix B: Impact Assessment Levels

Level I – The decision will likely have little to no impact on:

  • the rights of individuals or communities,

  • the health or well-being of individuals or communities,

  • the economic interests of individuals, entities, or communities,

  • the ongoing sustainability of an ecosystem.

Level I decisions will often lead to impacts that are reversible and brief.

Level II – The decision will likely have moderate impacts on:

  • the rights of individuals or communities,

  • the health or well-being of individuals or communities,

  • the economic interests of individuals, entities, or communities,

  • the ongoing sustainability of an ecosystem.

Level II decisions will often lead to impacts that are likely reversible and short-term.

Level III – The decision will likely have high impacts on:

  • the rights of individuals or communities,

  • the health or well-being of individuals or communities,

  • the economic interests of individuals, entities, or communities,

  • the ongoing sustainability of an ecosystem.

Level III decisions will often lead to impacts that can be difficult to reverse, and are ongoing.

Level IV – The decision will likely have very high impacts on:

  • the rights of individuals or communities,

  • the health or well-being of individuals or communities,

  • the economic interests of individuals, entities, or communities,

  • the ongoing sustainability of an ecosystem.

Level IV decisions will often lead to impacts that are irreversible, and are perpetual.

At least level III would probably be reached for predictive policing activities, in consideration of the high impact on the freedoms and rights of individuals and communities highlighted previously.

Keeping levels III and IV in mind, the second step is to identify the corresponding risks and requirements. Appendix C defines these ‘requirements’, concerning in particular notice, explanation, and the human-in-the-loop process. The notice requirements focus on greater transparency, which is particularly relevant to addressing the opacity problem of predictive policing systems.

Appendix C: Impact Level Requirements

Requirement: Notice

  • Level I: None.

  • Level II: Plain language notice posted on the program or service website.

  • Levels III and IV: Publish documentation on relevant websites about the automated decision system, in plain language, describing: how the components work; how it supports the administrative decision; the results of any reviews or audits; and a description of the training data, or a link to the anonymized training data if these data are publicly available.

These provisions allow one to know whether the algorithmic system makes or merely supports the decision at levels III and IV. They also inform the public about the data used, notably from the start of the training process. This point is particularly relevant given the historical, biased data mainly used in predictive policing systems. These requirements could help address the discrimination problem.

Moreover, AIAs usually provide for a pre-procurement step that gives the public authority the opportunity to engage in public debate and proactively identify concerns, establish expectations, and draw on expertise and understanding from relevant stakeholders. This is also when the public and elected officials can push back against deployment before potential harms occur. In implementing AIAs, authorities should consider incorporating them into the consultation procedures they already use for procuring algorithmic systems or for assessing them pre-acquisition.Footnote 63 This would be one way to tackle the lack of transparency of predictive policing systems at levels III and IV.

Besides, other requirements concern the ‘explanation’.

Requirement: Explanation

  • Level I: In addition to any applicable legislative requirement, ensuring that a meaningful explanation is provided for common decision results. This can include providing the explanation via a Frequently Asked Questions section on a website.

  • Level II: In addition to any applicable legislative requirement, ensuring that a meaningful explanation is provided upon request for any decision that resulted in the denial of a benefit, a service, or other regulatory action.

  • Levels III and IV: In addition to any applicable legislative requirement, ensuring that a meaningful explanation is provided with any decision that resulted in the denial of a benefit, a service, or other regulatory action.

At levels III and IV, each regulatory action that impacts a person or a group requires a meaningful explanation. Concretely, if these provisions were made applicable to police services, the police departments using predictive policing tools would have to be able to explain the decisions made and the reasoning behind them, especially where personal data are used. The choice of the place or person targeted by predictive policing should also be explained.

Concerning the ‘human-in-the-loop for decisions’ requirement, levels III and IV impose human intervention during the decision-making process. This is also relevant for predictive policing activities, which require that police officers retain their free will and independent judgment. Moreover, the human decision has to prevail over the machine decision. This is crucial to preserving the legitimacy and autonomy of law enforcement authorities, as well as their responsibility.

Requirement: Human-in-the-loop for decisions

  • Levels I and II: Decisions may be rendered without direct human involvement.

  • Levels III and IV: Decisions cannot be made without having specific human intervention points during the decision-making process, and the final decision must be made by a human.

Furthermore, if infringements on human rights are to be prevented, additional requirements on testing, monitoring, and training have to be respected at all levels. Before going into production, the person in charge of the program has to develop appropriate processes to ensure that training data are tested for unintended biases and other factors that may unfairly impact outcomes. Moreover, this person has to ensure that the data used by the ADS are routinely tested to verify that they are still relevant, accurate, and up-to-date, and has to monitor the outcomes of the ADS on an ongoing basis to safeguard against unintended outcomes and to ensure compliance with legislation.

Finally, the ‘training’ requirement at level III concerns documentation on the design and functionality of the system. Training courses must be completed but, unlike at level IV, there is surprisingly no obligation to verify that they have been.

Taken together, these requirements are relevant to mitigating the risks of opacity and discrimination. However, they do not address the problem of efficiency. Such a criterion should also be considered in the future, as the example of predictive policing reveals a weakness regarding the efficiency and social utility of this kind of algorithmic tool. An ADS must not be presumed efficient as a matter of principle; public authorities should provide evidence of its efficiency.

6.5 Conclusion

Human rights are a representation of the fundamental values of a society and are universal. However, in an algorithmic society, even if the European lawmaker intends to reinforce the protection of these rights through ethical principles, I have shown that the current system is not sufficient to guarantee their respect in practice. Constitutional rights must be reinforced not only by ethical principles but even more by specific practical tools that take into account the risks involved in ADS, especially when the decision-making concerns sensitive issues such as predictive policing. Beyond the Ethics Guidelines for Trustworthy AI, I argue that the European lawmaker should consider enacting tools similar to the Canadian Directive on Automated Decision-Making and AIA policies, which must be made applicable to police services in order to make them accountable.Footnote 64 AIAs will not solve all of the problems that algorithmic systems might raise, but they do provide an important mechanism to inform the public and to engage policymakers and researchers in productive conversation.Footnote 65 Even if this tool is certainly not perfect, it constitutes a good starting point. Moreover, I argue that this policy should come from the European Union rather than its member states. The protection of human rights in an algorithmic society should be conceived globally, as a whole system integrating human rights. The final result would be a robust theoretical and practical framework in which human rights keep a central place.

7 Law Enforcement and Data-Driven Predictions at the National and EU Level: A Challenge to the Presumption of Innocence and Reasonable Suspicion?

Francesca Galli
7.1 Introduction

Technological progress could constitute a huge benefit for law enforcement and criminal justice more broadly.Footnote 1 In the security context,Footnote 2 the alleged opportunities and benefits of applying big data analytics are greater efficiency, effectiveness, and speed of law enforcement operations, as well as more precise risk analyses, including the discovery of unexpected correlations,Footnote 3 which could feed into profiles.Footnote 4

The concept of ‘big data’ refers to the growing ability of technology to capture, aggregate, and process an ever-greater volume and variety of data.Footnote 5 The combination of mass digitisation of information and the exponential growth of computational power allows for their increasing exploitation.Footnote 6

A number of new tools have been developed. Algorithms are merely an abstract and formal description of a computational procedure.Footnote 7 Besides, law enforcement can rely on artificial intelligence (i.e., the theory and development of computer systems capable of performing tasks which would normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages).Footnote 8 For the purposes of this contribution, these systems are relevant because they do not simply imitate the intelligence of human beings; they are meant to formulate and often execute decisions. The notion of an allegedly clever agent, capable of taking relatively autonomous decisions on the basis of its perception of the environment, is in fact pivotal to the current concept of artificial intelligence.Footnote 9 With machine learning, or 'self-teaching' algorithms, the knowledge in the system is the result of 'data-driven predictions': the automated discovery of correlations between variables in a data set, often used to make estimates of some outcome.Footnote 10 Correlations are relationships or patterns, and are thus more closely related to the concept of 'suspicion' than to the concept of 'evidence' in criminal law.Footnote 11 Data mining, or 'knowledge discovery from data', refers to the process of discovering noteworthy patterns in massive amounts of data.
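To make concrete what 'data-driven prediction' as the automated discovery of correlations means, consider the following deliberately simplified sketch: it scans every pair of variables in a small data set and flags those whose correlation exceeds a threshold, without any prior hypothesis. All variable names, figures, and the threshold are illustrative assumptions, not drawn from any real system.

```python
from itertools import combinations

def pearson(xs, ys):
    # Plain Pearson correlation coefficient between two variables.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_correlations(dataset, threshold=0.8):
    # Scan every pair of variables and report strong correlations --
    # 'patterns', not evidence of any causal or criminal link.
    return [
        (a, b, round(pearson(dataset[a], dataset[b]), 2))
        for a, b in combinations(dataset, 2)
        if abs(pearson(dataset[a], dataset[b])) >= threshold
    ]

# Hypothetical toy data: one value per city district.
data = {
    "reported_thefts": [12, 30, 45, 60, 80],
    "cctv_cameras":    [ 2,  5,  7, 10, 13],
    "avg_income":      [55, 40, 38, 30, 22],
}
print(flag_correlations(data))  # flags all three pairs of variables
```

What emerges from such a procedure are statistical associations, which is why the chapter treats them as closer to 'suspicion' than to 'evidence'.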

Such tools entail new scenarios for information gathering, as well as the monitoring, profiling, and prediction of individual behaviours, thus allegedly facilitating crime prevention.Footnote 12 The underlying assumption is that data could change public policy, addressing biases and fostering a data-driven approach in policy-making. Clearer evidence could support both evaluations of existing policies and impact assessments of new proposals.Footnote 13

Law enforcement authorities have already embraced the assumed benefits of big data, irrespective of criticism questioning the validity of crucial assumptions underlying criminal profiling.Footnote 14 In a range of daily operations and surveillance activities, such as patrol, investigation, and crime analysis, the outcomes of computational risk assessment increasingly form the foundation of criminal justice policies.Footnote 15 Existing research on the implications of 'big data' has mostly focused on privacy and data protection concerns.Footnote 16 However, potential gains in security also come at the expense of accountabilityFootnote 17 and could lead to the erosion of fundamental rights, emphasising coercive control.Footnote 18

This contribution first addresses the so-called rise of the algorithmic society and the use of automated technologies in criminal justice to assess whether and how the gathering, analysis, and deployment of big data are changing law enforcement activities. It then examines the actual or potential transformation of core principles of criminal law and whether the substance of legal protectionFootnote 19 may be weakened in a ‘data-driven society’.Footnote 20

7.2 The Rise of the Algorithmic Society and the Use of Automated Technologies in Criminal Justice
7.2.1 A Shift in Tools Rather than Strategy?

One could argue that the development of predictive policing is more a shift in tools than strategy. Prediction has always been part of policing, as law enforcement authorities attempt to predict where criminal activities could take place and the individuals involved in order to deter such patterns.Footnote 21

Law enforcement has over time moved towards wide-ranging monitoring and ever more preventative approaches. Surveillance technologies introduced in relation to serious crimes (e.g., interception of telecommunications) are increasingly used for the purpose of preventing and investigating 'minor' offences; at the same time, surveillance technologies originally used for public order purposes in relation to minor offences (e.g., CCTV cameras) are gradually employed for the prevention and investigation of serious crime.Footnote 22 On the one hand, serious crime including terrorism has had a catalysing effect on the criminal justice system, prompting increased use of surveillance techniques and technologies. The provisions subsequently introduced were initially regarded as exceptional and limited in scope, first to terrorism and then to organised crime. However, through a long-lasting normalisation process at the initiative of the legislator, specific measures have become institutionalised as part of the ordinary criminal justice system and tend to be applied beyond their original scope.Footnote 23 On the other hand, a parallel shift has occurred in the opposite direction. Video surveillance technologies, which are among the most obvious and widespread signs of the development of surveillance, were originally conceived by the private sector for security purposes. They have subsequently been employed for public order purposes and finally for the prevention of minor offences and/or petty crimes (such as street crime or small-scale drug dealing), without any significant change in the level of judicial scrutiny and on the basis of a simple administrative authorisation.
In such contexts, they were a tool to deter would-be criminals rather than an investigative means.Footnote 24 The terrorist threat has become an argument to justify an even more extensive deployment and use of video surveillance, as well as a broader use of the information gathered for the purposes of investigation.

Anticipative criminal investigations have a primarily preventive function, combined with evidence gathering for the purpose of eventual prosecution.Footnote 25 The extensive gathering, processing, and storage of data for criminal law purposes implies a significant departure from existing law enforcement strategies. Relentless storage combined with amplified memory capacity makes for a quantitative and qualitative jump compared to traditional law enforcement activities. The growth of available data over the last two centuries has been substantial, but the present explosion in data size and variety is unprecedented.Footnote 26

First, the amount of data that are generated, processed, and stored has increased enormously (e.g., internet data) because of the direct and intentional seizure of information on people or objects; the automated collection of data by devices or systems; and the volunteered collection of data via the voluntary use of systems, devices, and platforms. Automated and volunteered collection have increased exponentially due to the widespread use of smart devices, social media, and digital transactions.Footnote 27 The 'datafication'Footnote 28 of everyday activities, further driven by the 'Internet of Things',Footnote 29 leads to the virtually unnoticed gathering of data, often without the consent or even the awareness of the individual.

Second, new types of data have become available (e.g., location data). Irrespective of whether law enforcement authorities will eventually use these forms of data, much of the electronically available data reveals information about individuals that was not available in the past. There is also a vast amount of data available nowadays on people's behaviour.Footnote 30 Moreover, because of the combination of digitisation and automated recognition, data has become increasingly accessible, and persons can easily be monitored at a distance.

Third, the growing availability of real-time data fosters real-time analyses. Thus the increased use of predictive data analytics is a major development. Their underlying rationale is the idea of predicting a possible future with a certain degree of probability.

7.2.2 Interoperable Databases: A New Challenge to Legal Protection?

Although police have always gathered information about suspects, data can now be stored in interoperable databases,Footnote 31 furthering their surveillance potential.Footnote 32 The possibility of linking data systems and networks, combined with ever-greater processing power and data storage capacity, fosters systematic analysis.

Interoperability challenges existing modes of cooperation and integration in the EU Area of Freedom, Security and Justice (AFSJ), as well as the existing distribution of competences between the EU and Member States, between law enforcement authorities and intelligence services, and between public and private actors, which are increasingly involved in information-management activities. Moreover, large-scale information exchanges via interoperable information systems have progressively eroded the boundaries between law enforcement and intelligence services. They have also facilitated a reshuffling of responsibilities and tasks within the law enforcement community, for instance between security and migration actors. Furthermore, competent authorities have access to huge amounts of data in all types of public and private databases. Interoperable information systems function not only across national boundaries but also across the traditional public-private divide.

If, on the one hand, so-called big data policing partially constitutes a restatement of existing police practices, on the other hand, big data analytics bring fundamental transformations in police activities. There has also been an evolution in the distribution of roles, competences, and technological capabilities between intelligence services and law enforcement authorities. The means at the disposal of each actor for the prevention and investigation of serious crime are evolving, so that the distribution of tasks and competences has become blurred. Nowadays the distinction is not always clear, and this leads to problematic coordination and overlap.Footnote 33 Intelligence services have also been given operational tasks. Law enforcement authorities have resorted to ever more sophisticated surveillance technologies and have been granted much more intrusive investigative powers to use them. Faith in technological solutions and the inherent expansionary tendency of surveillance tools partially explain this phenomenon. Surveillance technologies, in fact, are used in areas or for purposes for which they were not originally intended.Footnote 34

Information sharing and exchange do not in themselves blur the institutional barriers between different law enforcement authorities, but the nature of large-scale information-sharing activities does give a new standing to intelligence activities in the law enforcement domain. The resources spent on, and the knowledge developed by, such large-scale information gathering and analysis are de facto turning police officers into intelligence actors or users of intelligence material.

In addition, EU initiatives enhancing access to information by law enforcement authorities have a direct impact on the functional borders in the security domain. With the much-debated interoperability regulations,Footnote 35 the intention of the Commission has been to improve information exchanges not only between police authorities but also between customs authorities and financial intelligence units and in interactions with the judiciary, public prosecution services, and all other public bodies that participate in a process that ranges from the early detection of security threats and criminal offences to the conviction and punishment of suspects. The Commission has portrayed obstacles to the functional sharing of tasks as follows: ‘Compartmentalization of information and lack of a clear policy on information channels hinder information exchange’,Footnote 36 whereas there is, allegedly, a need to facilitate the free movement of information between competent authorities within Member States and across borders.

In this context, a controversial aspect of interoperability is that systems and processes are linked with information systems that do not serve law enforcement purposes, including other state-held databases and ones held by private actors. With reference to the first category, the issue to address concerns the blurring of tasks between different law enforcement actors. In fact, a key aspect of the EU strategy on databases and their interoperability is an aim to maximise access to personal data, including access by police authorities to immigration databases, and to personal data related to identification. This blurring has an impact on the applicable legal regime (in terms of jurisdiction) and also in terms of legal procedure (e.g., administrative/criminal). In fact, the purpose for which data are gathered, processed, and accessed is crucial, not only because of data protection rules but because it links the information/data with a different stage of a procedure (either administrative or criminal) to which a set of guarantees are (or are not) attached, and thus has serious consequences for the rights of individuals (including access, appeal, and correction rights). Neither legal systems nor legal provisions are fully compatible either because they belong to administrative or criminal law or because of a lack of approximation between Member State systems. Such differences also have an impact on the potential use of information: information used for identification purposes (the focus of customs officers at Frontex), or only for investigation purposes with no need to reach trial (the focus of intelligence actors), or for prosecution purposes (the focus of police authorities). Eventually, of course, the actors involved in the process have different impacts on the potential secret use of data, with consequent transparency concerns.Footnote 37

7.2.3 A ‘Public-Private Partnership’

The information society has substantially changed the ways in which law enforcement authorities can obtain information and evidence. Beyond their own specialised databases, competent authorities have access to huge amounts of data in all types of public and private databases.Footnote 38

Nowadays the legal systems of most Western countries thus face significant changes in the politics of information control. The rise of advanced technologies has magnified the capability of new players to control both the means of communication and data flows. To an increasing extent, public authorities are sharing their regulatory competences with an indefinite number of actors by imposing preventive duties on the private sector, such as information gathering and sharing (e.g., on telecommunication companies for data retention purposes).Footnote 39 This trend is leading to a growing privatisation of surveillance practices. In this move, key players in the information society (producers, service providers, key consumers) are given law enforcement obligations.

Private actors are not just in charge of the operational enforcement of public authority decisions in security matters. They are often the only ones with the necessary expertise, and therefore they profoundly shape decision-making and policy implementation. Their choices are nevertheless guided by reasons such as commercial interest, and they are often unaccountable.

In the context of information sharing, and particularly in the area of interoperable information systems, technical platform integration (information hubs) functions across national boundaries and across the traditional public–private divide. Most of the web giants are established overseas, so that often private actors – voluntarily or compulsorily – transfer data to third countries. Companies do not just cooperate with public authorities but effectively and actively come to play a part in bulk collection and security practices. They identify, select, search, and interpret suspicious elements by means of ‘data selectors’. Private actors, in this sense, have become ‘security professionals’ in their own right.

Systematic government access to private sector data is carried out not only directly via access to private sector databases and networks but also through the cooperation of third parties, such as financial institutions, mobile phone operators, communication providers, and the companies that maintain the available databases or networks.

Personal data originally circulated in the EU for commercial purposes may be transferred by private intermediaries to public authorities, often also overseas, for other purposes, including detection, investigation, and prosecution. The significant blurring of purposes among the different layers of data-gathering – for instance, commercial profiling techniques and security – aims to exploit the ‘exchange value’ of individuals’ fragmented identities, as consumers, suspects of certain crimes, ‘good citizens’, or ‘others’.

In this context, some have argued that the most important shortcoming of the 2016 data protection reform is that it resulted in the adoption of two different instruments, a Regulation and a Directive.Footnote 40 This separation is a step backwards with regard to the objective envisaged by Article 16 TFEU, which instead promotes a cross-sectoral approach potentially leading to a comprehensive instrument embracing different policy areas (including the AFSJ) in the same way. It is a weakness because the level of protection envisaged by the 2016 Police Data Protection Directive is de facto lower than that of the Regulation, as data gathering for law enforcement and national security purposes is mostly exempted from general data protection laws, or constitutes an exemption under those provisions, even at the EU level.Footnote 41 Furthermore, what happens in practice mostly depends on the terms and conditions of the contractual clauses signed by individuals every time they subscribe as clients of service providers and media companies.

A further element of novelty is thus the linkage of separate databases, which increases their utility, since law enforcement authorities and private companies partially aggregate their data.Footnote 42 Such a link between criminal justice data and private data potentially provides numerous insights about individuals. Law enforcement and private companies have therefore embraced the idea of networking and sharing personal information. Law enforcement thus benefits from the growth of information gathered through private surveillance.

The nature and origins of the data available for security purposes are thus changing further. Public and private data are increasingly mixed. Private data-gathering tools play a broader role in security analyses, complementing data from law enforcement authorities' own sources.Footnote 43 An example is the use of social media analysis tools by the police together with intelligence services (e.g., in counter-terrorism matters). It is often not merely the data itself that is valuable but the fact of linking large amounts of data.

Having examined the use of surveillance technologies for preventive and investigative purposes, it would be interesting to focus on the next phase of criminal procedure – that is, the retention and use of information gathered via surveillance technologies for the prosecution during trials for serious crimes, including terrorism. In fact, a huge amount of information is nowadays retained by private companies such as network and service providers, but also by different CCTV operators. The question is under which circumstances such information can be accessed and used by different actors of criminal procedures (police officers, intelligence services, prosecutors, and judges) for the purposes of investigating and prosecuting serious crimes. The retention of data for investigation and prosecution purposes poses the question of the collaboration between public authorities and private companies and what kind of obligations one may impose upon the latter.

7.3 The Transformation of Core Principles of Criminal Law
7.3.1 Control People to Minimise Risk

Technology is pivotal in the development of regulatory legislation that seeks to control more and more areas of life.Footnote 44

In fact, predictive policing is grounded in, and further supports, a growing social desire to control people in order to minimise risk.Footnote 45 Sociologists such as Ulrich Beck have described the emergence of a 'risk society': industrial society produces a number of serious risks and conflicts – including those connected with terrorism and organised crime – and has thus modified the means and legitimisation of state intervention, putting risk and damage control at the centre of society as a response to the erosion of trust among people.Footnote 46

Along similar lines, Feeley and Simon have described a ‘new penology’ paradigm (or ‘actuarial justice’Footnote 47): a risk management strategy for the administration of criminal justice, aiming at securing at the lowest possible cost a dangerous class of individuals whose rehabilitation is deemed futile and impossible.Footnote 48 The focus is on targeting and classifying a suspect group of individuals and making assessments of their likelihood to offend in particular circumstances or when exposed to certain opportunities.

According to David Garland, the economic, technological, and social changes in our society during the past thirty years have reconfigured the response to crime and the sense of criminal justice leading to a ‘culture of control’ counterbalancing the expansion of personal freedom.Footnote 49 In his view, criminal justice policies thus develop from political actors’ desire to ‘do something’ – not necessarily something effective – to assuage public fear, shaped and mobilised as an electoral strategy.

The culture of control together with risk aversion sees technological developments as key enabling factors and is intimately linked to the rise of a surveillance society and the growth of surveillance technologies and infrastructures.

Koops has built upon pre-existing concepts of the culture of control and depicts the current emergence of what he calls a 'crime society', which combines risk aversion and surveillance tools with preventative and architectural approaches to crime prevention and investigation.Footnote 50 Technology supports and facilitates the crucial elements at the basis of a crime society, pushing a further shift towards prevention in the fight against crime.

Finally, the prediction of criminal behaviour is supposed to enable law enforcement authorities to reorganise and manage their presence more efficiently and effectively. However, there is very little evidence as to whether police have, in fact, increased efficiency and improved fairness in daily tasks, and the answer seems to depend very much on the type of predictive policing under evaluation.

7.3.2 Would Crime-Related Patterns Question Reasonable Suspicion and the Presumption of Innocence?

The emergence of the 'data-driven society'Footnote 51 allows for the mining of both content and metadata, allegedly inferring crime-related patterns and thus enabling the pre-emption, prevention, or investigation of offences. In the view of law enforcement authorities and policymakers, by running algorithms on massive amounts of data it is allegedly possible to predict the occurrence of criminal behaviour.Footnote 52 In fact, data-driven analysis differs from the traditional statistical method because its aim is not merely to test hypotheses but also to find relevant and unexpected correlations and patterns, which may be relevant for public order and security purposes.Footnote 53

For instance, a computer algorithm can be applied to data from past crimes, including crime types and locations, to forecast in which city areas criminal activities are most likely to develop.
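A minimal, hypothetical sketch of such place-based forecasting, assuming an abstract city grid and invented incident coordinates, might look as follows: past incidents are mapped to grid cells, and the historically busiest cells become the 'forecast'.

```python
from collections import Counter

def forecast_hotspots(incidents, cell_size=1.0, top_n=3):
    # Map each past incident (x, y) to a grid cell and count per cell.
    # The 'forecast' is simply the historically busiest cells.
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return [cell for cell, _ in counts.most_common(top_n)]

# Hypothetical past burglary coordinates on an abstract city grid.
past_crimes = [(0.2, 0.7), (0.8, 0.1), (0.5, 0.5),   # cell (0, 0)
               (2.1, 3.4), (2.9, 3.9),               # cell (2, 3)
               (5.5, 1.2)]                           # cell (5, 1)
print(forecast_hotspots(past_crimes, top_n=2))  # → [(0, 0), (2, 3)]
```

The design choice is worth noting: because the forecast merely ranks historical counts, it can only ever reflect where crime was previously recorded, never where it actually occurred but went unrecorded.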

The underlying assumption of predictive policing is that certain aspects of the physical and social environment encourage acts of wrongdoing. Patterns emerging from the data could allow individuals to be identified predictively as suspects, because past actions create suspicions about future criminal involvement. Moreover, there seems to be a belief that automated measures could provide better insight than traditional police practices, because of a general faith in predictive accuracy.

Yet a number of limits are inherent in predictive policing. It can be hard to obtain usable and accurate data to integrate into predictive policing systems.Footnote 54 As a consequence, notwithstanding big data's perceived objectivity, there is a risk of increased bias in the sampling process. Law enforcement authorities' focus on a certain ethnic group or neighbourhood can lead to the systematic overrepresentation of those groups and neighbourhoods in data sets, so that the use of a biased sample to train an artificial intelligence system can be misleading. The predictive model may reproduce the same bias which poisoned the original data set.Footnote 55 Artificial intelligence predictions could even amplify biases, thus fostering profiling and discrimination patterns. The same could happen with the linkage between law enforcement databases and private companies' data, which could increase errors exponentially, as the gathering of data for commercial purposes is surrounded by fewer procedural safeguards, leading to diminished data quality.Footnote 56 Existing data could be of limited value for predictive policing, possibly resulting in a sort of technology-led version of racial profiling.
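The feedback risk just described can be illustrated with a deliberately stylised simulation (all numbers hypothetical, and not a model of any deployed system): two areas have identical true offending rates, but a small artefact in the historical records gives one of them a head start, and a greedy allocation rule that always patrols the area with the most recorded crime then entrenches and amplifies that artefact.

```python
def simulate(true_rate, initial_records, days):
    # Each day, the single patrol is sent to the area with the highest
    # recorded count (a greedy 'predictive' allocation). Only patrolled
    # areas generate new records, so an early lead compounds.
    records = dict(initial_records)
    for _ in range(days):
        target = max(records, key=records.get)
        records[target] += true_rate[target]
    return records

true_rate = {"A": 1, "B": 1}   # identical true offending in both areas
start = {"A": 3, "B": 2}       # a small artefact in the historical data
print(simulate(true_rate, start, days=30))  # → {'A': 33, 'B': 2}
```

After thirty simulated days, area A's record dwarfs area B's even though true offending was identical: the overrepresentation dynamic described above, driven entirely by where the data were gathered rather than by any underlying difference.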

Could big data analyses strengthen social stratifications, reproducing and reinforcing the bias that is already present in data sets? Data are often extracted through observations, computations, experiments, and record-keeping. Thus the criteria used for gathering purposes could distort the results of data analyses because of their inherent partiality and selectivity. The bias may over time translate into discrimination and unfair treatment of particular ethnic or societal groups. The link between different data sets and the combined result of big data analyses may then well feed on each other.

Datafication and the interconnection of computing systems that grounds hyper-connectivity are transforming the concept of law, further interlinking it with other disciplines.Footnote 57 Moreover, the regulatory framework surrounding the use of big data analytics is underdeveloped compared with criminal law. Under extreme circumstances, big data analysis could unfortunately lead to judging individuals on the basis of correlations and inferences about what they might do, rather than what they have actually done.Footnote 58 The gathering, analysis, and deployment of big data are transforming not only law enforcement activities but also core principles of criminal law, such as reasonable suspicion and the presumption of innocence.

A reasonable suspicion of guilt is a precondition for processing information, which would eventually be used as evidence in court. Reasonable suspicion is, however, not relevant in big data analytics. Instead, in a ‘data-driven surveillance society’, criminal intent is somehow pre-empted, and this could, at least to a certain extent, erode the preconditions of criminal law in a constitutional democracy – especially when there is little transparency with reference to profiles inferred and matched with subjects’ data.Footnote 59

Such a major change goes even beyond the notorious 'shift towards prevention' in the fight against crime witnessed during the last decades.Footnote 60 First, the boundaries of what counts as dangerous behaviour are highly contentious, and problems arise with the assessment of future harm.Footnote 61 Second, 'suspicion' has replaced an objective 'reasonable belief' in most cases in order to justify police intervention at an early stage, without the need to envisage evidence-gathering with a view to prosecution.Footnote 62 Traditionally, 'reasonable grounds for suspicion' depend on the circumstances of each case. There must be an objective basis for that suspicion, based on facts, evidence, and/or intelligence which are relevant to the likelihood of finding an article of a certain kind. Reasonable suspicion should never be based on personal factors alone. It must rely on intelligence or information about an individual or his/her particular behaviour. The facts on which suspicion is based must be specific, articulable, and objective. Suspicion must relate to a criminal activity, not simply to a supposed criminal or group of criminals.Footnote 63 The mere description of a suspect, his/her physical appearance, or the fact that the person is known to have a previous conviction cannot, alone or in combination with each other, become grounds for searching such an individual. In its traditional conception, reasonable suspicion cannot be based on generalisations or stereotypical images of certain groups or categories of people as more likely to be involved in criminal activity. This has, at least partially, changed.

By virtue of the presumption of innocence, the burden of proof in criminal proceedings rests on the prosecutor and demands serious evidence, beyond reasonable doubt, that a criminal activity has been committed. This presumption presupposes that a person is innocent until proven guilty. By contrast, data-driven policing pushes law enforcement in the opposite direction. The presumption of innocence comes along with the notion of equality of arms in criminal proceedings, with the safeguard of privacy against unwarranted investigative techniques, and with the right to non-discrimination as a way to protect individuals against prejudice and unfair bias.

Do algorithms, in their current state, amount to 'risk forecasting' rather than actual crime prediction?Footnote 64 The identification of the future location of criminal activities may be possible by studying where and why past patterns have developed over time. However, forecasting the precise identity of future criminals is far less evident.

While suspicion based on correlation, instead of evidence, may successfully lead to the identification of areas where crime is likely to be committed (on the basis of property- and place-based predictive policing), it might be insufficient to point to the individual who is likely to commit such a crime (on the basis of person-focused technology).Footnote 65

7.3.3 Preventive Justice

Predictive policing could be seen as a feature of preventive justice. Policy-making and crime-fighting strategies are increasingly concerned with the prediction and prevention of future risks (in order, at least, to minimise their consequences) rather than the prosecution of past offences.Footnote 66 Zedner describes a shift towards a society ‘in which the possibility of forestalling risks competes with and even takes precedence over responding to wrongs done’,Footnote 67 and where ‘the post-crime orientation of criminal justice is increasingly overshadowed by the pre-crime logic of security’.Footnote 68 Pre-crime is characterised by ‘calculation, risk and uncertainty, surveillance, precaution, prudentialism, moral hazard, prevention and, arching over all of these, there is the pursuit of security’.Footnote 69 An analogy has been drawn with the precautionary principle developed in environmental law in relation to the duties of public authorities in a context of scientific uncertainty, which cannot be accepted as an excuse for inaction where there is a threat of serious harm.Footnote 70

Although such trends existed prior to September 11, the counter-terrorism legislation enacted since then has markedly expanded earlier trends towards the anticipation of risk. The aim of current counter-terrorism measures is mostly the preventive identification, isolation, and control of individuals and groups who are regarded as dangerous and purportedly represent a threat to society.Footnote 71 The risk of mass casualties resulting from a terrorist attack is thought to be so high that traditional due process safeguards are deemed unreasonable or unaffordable, and prevention becomes a political imperative.Footnote 72

Current developments, combined with preventive justice, lead to so-called predictive reasonable suspicion. In a model of preventive justice, and specifically in the context of speculative security,Footnote 73 individuals are targets of public authorities’ measures; information is gathered irrespective of whether and how it could be used to charge the suspect with a criminal offence or be relied upon in criminal proceedings and eventually at trial.

Law enforcement authorities can thus act not only in the absence of harm but even in the absence of suspicion. This creates a grey area for the safeguard of the rights of individuals who do not yet fall into an existing criminal law category but are already subject to measures which could lead to criminal law-like consequences. At the same time, individual rights (e.g., within the realm of private or administrative law) are not fully actionable or enforceable unless a breach has been committed. However, in order for information to become evidence in court, its gathering, sharing, and processing should respect criminal procedure standards. This is often at odds with the use of technologies in predictive policing.

7.4 Concluding Remarks

Law enforcement authorities and intelligence services have already embraced the assumed benefits of big data analyses. It is as yet difficult to assess how and to what extent big data are applied to the field of security, let alone whether their use fosters efficiency or effectiveness. This is due in part to the secrecy often surrounding law enforcement operations, the experimental nature of new means, and authorities’ understandable reluctance to disclose their functioning to the public. ‘Algorithms are increasingly used in criminal proceedings for evidentiary purposes and for supporting decision-making. In a worrying trend, these tools are still concealed in secrecy and opacity preventing the possibility to understand how their specific output has been generated’,Footnote 74 argues Palmiotto, addressing the Exodus case,Footnote 75 while questioning whether opacity represents a threat to fair trial rights.

However, there is still a great need for an in-depth debate about the appropriateness of using algorithmic and machine-learning techniques in law enforcement, and more broadly in criminal justice. In particular, there is a need to assess how the substance of legal protection may be weakened by the use of tools such as algorithms and artificial intelligence.Footnote 76

Moreover, given that big data, automation, and artificial intelligence remain largely under-regulated, the extent to which data-driven surveillance societies could erode core criminal law principles such as reasonable suspicion and the presumption of innocence ultimately depends on the design of the surveillance infrastructures. There is thus a need to develop a regulatory framework adding new layers of protection to fundamental rights and safeguards against the erroneous use of these tools.

Some improvements could be made to increase the procedural fairness of these tools. First, more transparent algorithms could increase their trustworthiness. Second, if designed to remove pre-existing biases in the original data sets, algorithms could also improve their neutrality. Third, where algorithms are in use, profiling and (semi-)automated decision-making should be regulated more tightly.Footnote 77

Most importantly, the ultimate decision should always be human. Careful implementation by the humans involved in the process could certainly mitigate the vulnerabilities of automated systems. It must remain for a human decision maker or law enforcement authority to decide how to act on any computationally suggested result.

For instance, correlation must not be erroneously interpreted as a causal link, so that ‘suspicion’ is not confused with ‘evidence’. Predictions made by big data analysis must never be sufficient for the purpose of initiating a criminal investigation.
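The correlation/causation point can be illustrated with synthetic data: two quantities driven by the same hidden confounder will correlate strongly although neither causes the other. The variable names and figures below are invented purely for illustration and describe no real data set.

```python
# Illustrative only: "camera_alerts" and "petty_thefts" are both driven by a
# hidden confounder ("foot_traffic"), so they correlate strongly even though
# neither causes the other. All data are synthetic.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
foot_traffic = [random.uniform(0, 100) for _ in range(500)]           # hidden confounder
camera_alerts = [t * 0.5 + random.gauss(0, 5) for t in foot_traffic]  # driven by traffic
petty_thefts = [t * 0.3 + random.gauss(0, 5) for t in foot_traffic]   # also driven by traffic

r = pearson(camera_alerts, petty_thefts)
print(round(r, 2))  # strongly positive, yet alerts do not cause thefts
```

A system trained only on the two observed series would ‘discover’ a robust association between alerts and thefts, which is exactly the kind of correlation that, without more, cannot be treated as evidence.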

Trust in algorithms, in both fully and partially automated decision processes, is grounded in their supposed infallibility. There is a tendency (as has been the case with the use of experts in criminal casesFootnote 78) among law enforcement authorities to follow them blindly. Rubberstamping algorithms’ advice could also become a device to minimise the responsibility of the decision maker.

Algorithm-based decisions require time, context, and skills to be adequate in each individual case. Yet, given the complexity of algorithms, judges and law enforcement authorities can at times hardly understand the underlying calculus, and it is thus difficult to question their accuracy, effectiveness, or fairness. This is linked with the transparency paradox surrounding the use of big data:Footnote 79 citizens become increasingly transparent to government, while the profiles, algorithms, and methods used by government organisations are hardly transparent or comprehensible to citizens.Footnote 80 This results in a shift in the balance of power between state and citizen, in favour of the state.Footnote 81


2 Fundamental Rights and the Rule of Law in the Algorithmic Society

1 Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books 2015).

2 One of the most prominent prophets of the idea of a new kind of progress generated through the use of technologies is surely Jeremy Rifkin. See his book The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (St. Martin’s Press 2014).

3 Marshall McLuhan and Quentin Fiore, The Medium Is the Massage (Ginko Press 1967).

4 Committee of Experts on Internet Intermediaries of the Council of Europe (MSI-NET), ‘Algorithms and Human Rights. Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications’ (2016) DGI(2017)12.

5 According to the European Parliament, ‘Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))’ (P8_TA(2017)0051, Bruxelles), ‘a robot’s autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence.’

6 According to Statista, Facebook is the biggest social network platform worldwide, with more than 2.7 billion monthly active users in the second quarter of 2020. During the last reported quarter, the company stated that 3.14 billion people were using at least one of the company’s core products (Facebook, WhatsApp, Instagram, or Messenger) each month. By contrast, as of the end of 2019, Twitter had 152 million monetizable daily active users worldwide.

7 Jack M. Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2017) 51 UCDL Rev 1149; Agnieszka M. Walorska, ‘The Algorithmic Society’ in Denise Feldner (ed), Redesigning Organizations Concepts for the Connected Society (Springer 2020); Giovanni De Gregorio, ‘From Constitutional Freedoms to the Power of the Platforms: Protecting Fundamental Rights Online in the Algorithmic Society’ (2018) 11 Eur J Legal Stud 65.

8 Frank Webster, Theories of the Information Society, 4th ed. (Routledge 2014).

9 Neil M. Richards, ‘The Dangers of Surveillance’ (2012) 126 Harv L Rev 1934.

10 Jonathan Zittrain, The Future of the Internet – And How to Stop It (Yale University Press 2008); Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105 Geo LJ 1147. Much of the recent anxiety can in large part be attributed to Nick Bostrom, Superintelligence (Oxford University Press 2014).

11 E. Morozov uses the expression ‘digital solutionism’ to name the idea that technological innovation should solve every social problem. Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (Public Affairs 2013).

12 Herbert Marcuse, One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society, 2nd ed. (Beacon Press 2019) 1.

13 Michael Veale and Lilian Edwards, ‘Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling’ (2018) 34 Computer Law Security Review 398.

14 European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’ (COM (2020) 65 final, Bruxelles).

15 Inga Kroener and Daniel Neyland, ‘New Technologies, Security and Surveillance’ in Kirstie Ball, Kevin Haggerty, and David Lyon (eds), Routledge Handbook of Surveillance Studies (Routledge 2012) 141.

16 Radha D’Souza, The Surveillance State: A Composition in Four Movements, (Pluto 2019); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Broadway Books 2016).

17 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs 2019).

18 Natalie Ram and David Gray, ‘Mass Surveillance in the Age of COVID-19’ (2020) 7 Journal of Law and the Biosciences 1.

19 In the broadest sense, algorithms are encoded procedures for solving a problem by transforming input data into a desired output. As we know, the excitement surrounding Big Data is largely attributable to machine learning. Paul Dourish, ‘Algorithms and Their Others: Algorithmic Culture in Context’ (2016) 3 Big Data & Society 1; Tarleton Gillespie, ‘The Relevance of Algorithms’ in Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot (eds), Media Technologies: Essays on Communication, Materiality, and Society (MIT Press 2014); Viktor Mayer-Schoenberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work, and Think (Houghton Mifflin Harcourt 2013).

20 This idea – sometimes abbreviated as XAI (explainable artificial intelligence) – means that machines could give access to data about their own deliberative processes, simply by recording them and making them available as data structures. See Wojciech Samek and Klaus-Robert Müller, ‘Towards Explainable Artificial Intelligence’ in Wojciech Samek et al. (eds), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (Springer 2019); Tim Miller, ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’ (2019) 267 Artificial Intelligence 1; Brent Mittelstadt, Chris Russell, and Sandra Wachter, Explaining Explanations in AI (ACM 2019).

21 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015); Tal Z. Zarsky, ‘Understanding Discrimination in the Scored Society’ (2014) 89 Wash L Rev 1375.

22 See, for example, the ‘Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC’, COM (2020) 825 final, 15.12.2020; the proposal for a Digital Markets Act (DMA), COM (2020) 842 final, 15.12.2020; and the proposal for an Artificial Intelligence Act, COM (2021) 206 final, 21.4.2021.

23 Danielle Keats Citron and Frank Pasquale, ‘The Scored Society: Due Process for Automated Predictions’ (2014) 89 Wash L Rev 1.

24 Giovanni Sartori, ‘Constitutionalism: A Preliminary Discussion’ (1962) 56 American Political Science Review 853.

25 Because constitutional theory is part of the legal system, this feature distinctively differentiates constitutional law from political philosophy or political sociology.

26 Giorgio Pino, Il costituzionalismo dei diritti struttura e limiti del costituzionalismo contemporaneo (il Mulino 2017).

27 Benjamin Constant, ‘De la liberté des anciens comparée à celle des modernes’ in Collection complète des ouvrages: publiés sur le gouvernement représentatif et la constitution actuelle ou Cours de politique constitutionelle (Plancher 1820).

28 Richard Bellamy, ‘Constitutionalism’ in Bertrand Badie, Dirk Berg-Schlosser, and Leonardo Morlino (eds), International Encyclopedia of Political Science, vol. 2 (SAGE 2011).

29 Andrea Buratti, Western Constitutionalism: History, Institutions, Comparative Law (Springer-Giappichelli 2019).

30 Martin Loughlin, Foundations of Public Law (Oxford University Press 2010).

31 Jean Bodin, On Sovereignty: Four Chapters from the Six Books of the Commonwealth (Cambridge University Press 1992).

32 Carl Schmitt, Political Theology: Four Chapters on the Concept of Sovereignty (University of Chicago Press 1985).

33 Absolutism was the crucible in which this modern concept of sovereignty was forged. As J. Bodin expressed, ‘sovereignty’ is ‘the greatest power of command’ and is ‘not limited either in power, charge, or time certain’. Jean Bodin, Les six livres de la république (Jacques du Puis 1576).

34 As Hart asserted: ‘In any society where there is law, there actually is a sovereign, characterized affirmatively and negatively by reference to the habit of obedience: a person or body of persons whose orders the great majority of the society habitually obey and who does not habitually obey any other person or persons.’ Herbert L. A. Hart, The Concept of Law, 1st ed. (Oxford University Press 1961).

35 Regarding this distinction, see James Bryce, ‘Flexible and Rigid Constitutions’ (1901) 1 Studies in History and Jurisprudence 145, 124.

36 In Marbury v. Madison, 5 U.S. (1 Cr.) 137, 173–180 (1803), Chief Justice Marshall gave the Constitution precedence over laws and treaties, providing that only laws ‘which shall be made in pursuance of the constitution’ shall be ‘the supreme law of the land’. For further information on this topic, see generally Bruce Ackerman, ‘The Rise of World Constitutionalism’ (1997) Virginia Law Review 771; Alec Stone Sweet, Governing with Judges: Constitutional Politics in Europe (Oxford University Press 2000); Ronald Dworkin, Freedom’s Law: The Moral Reading of the American Constitution (Oxford University Press 1999); Michaela Hailbronner, ‘Transformative Constitutionalism: Not Only in the Global South’ (2017) 65 The American Journal of Comparative Law 527; and Mark Tushnet, Advanced Introduction to Comparative Constitutional Law (Edward Elgar 2018). For a specific insight into the Italian experience, see Vittoria Barsotti et al., Italian Constitutional Justice in Global Context (Oxford University Press 2016), 263; Maurizio Fioravanti, ‘Constitutionalism’ in Damiano Canale, Paolo Grossi, and Basso Hofmann (eds), A Treatise of Legal Philosophy and General Jurisprudence, vol. 9 (Springer 2009).

37 In the view of Bodei, from ‘algorithmic capitalism’ (which will use artificial intelligence and robotics to increasingly link economics and politics to certain forms of knowledge) originates a new ‘occult’ power in which ‘the human logos will be more and more subject to an impersonal logos’. See Remo Bodei, Dominio e sottomissione. Schiavi, animali, macchine, Intelligenza Artificiale (il Mulino 2019).

38 Frank Pasquale, ‘Two Narratives of Platform Capitalism’ (2016) 35 Yale L & Pol’y Rev 309.

39 Lawrence Delevingne, ‘U.S. Big Tech Dominates Stock Market after Monster Rally, Leaving Investors on Edge’, Reuters (28 August 2020)

40 Apple Inc., AAPL.O;, Inc., AMZN.O; Microsoft Corp., MSFT.O; Facebook Inc., FB.O; and Google parent Alphabet Inc., GOOGL.O.

41 Nicolas Petit, Big Tech and the Digital Economy: The Moligopoly Scenario (Oxford University Press 2020).

43 Airbnb, for example, has developed market power to shape urban planning in smaller cities in the United States. Amazon has received offers from democratically elected mayors to assume political power when the company moves its headquarters to these cities. More importantly, Facebook has become one of the most important actors in political campaigns all over the world, not to mention the famous and controversial Cambridge Analytica case, which exposed the disruptive force of the social network on people’s lives in terms of political participation, data protection, and privacy. See Emma Graham-Harrison and Carole Cadwalladr, ‘Data Firm Bragged of Role in Trump Victory’ The Guardian (21 March 2018)

44 Frank Pasquale, ‘From Territorial to Functional Sovereignty: The Case of Amazon’ (accessed 6 December 2020); Denise Feldner, ‘Designing a Future Europe’ in Denise Feldner (ed), Redesigning Organizations: Concepts for the Connected Society (Springer 2020).

45 Andrea Simoncini, ‘Sovranità e potere nell’era digitale’ in Tommaso Edoardo Frosini et al. (eds), Diritti e libertà in internet (Le Monnier Università 2017) 19.

46 Wiener decided to call ‘the entire field of control and communication theory, whether in the machine or in the animal, by the name Cybernetics, which we form from the Greek κυβερνήτης or steersman’. See Norbert Wiener, Cybernetics or Control and Communication in the Animal and the Machine (2nd reissued edn, MIT Press 2019) 18.

47 Ugo Pagallo, ‘Algo-Rhythms and the Beat of the Legal Drum’ (2018) 31 Philosophy & Technology 507.

48 Nicol Turner Lee, ‘Detecting Racial Bias in Algorithms and Machine Learning’ (2018) 16 Journal of Information, Communication and Ethics in Society 252; Jack M. Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2017) 51 UCDL Rev 1149; Ryan Calo, ‘Privacy, Vulnerability, and Affordance’ (2016) 66 DePaul L Rev 591; Oreste Pollicino and Laura Somaini, ‘Online Disinformation and Freedom of Expression in the Democratic Context’ in Sandrine Boillet Baume, Véronique Martenet Vincent (eds), Misinformation in Referenda (Routledge 2021).

49 O’Neil, Weapons of Math Destruction.

50 ‘Robotics Ethics’ (SHS/YES/COMEST-10/17/2 REV, Paris); Jon Kleinberg et al., ‘Discrimination in the Age of Algorithms’ (2018) 10 Journal of Legal Analysis 113; McKenzie Raub, ‘Bots, Bias and Big Data: Artificial Intelligence, Algorithmic Bias and Disparate Impact Liability in Hiring Practices’ (2018) 71 Ark L Rev 529.

51 Pasquale, The Black Box Society.

52 Daniel A. Farber, ‘Another View of the Quagmire: Unconstitutional Conditions and Contract Theory’ (2005) 33 Fla St UL Rev 913, 914.

53 Coglianese and Lehr, ‘Regulating by Robot’.

54 Andrew McStay, Emotional AI: The Rise of Empathic Media (SAGE 2018).

55 Vian Bakir and Andrew McStay, ‘Empathic Media, Emotional AI, and the Optimization of Disinformation’ in Megan Boler and Elizabeth Davis (eds), Affective Politics of Digital Media (Routledge 2020) 263.

56 Coglianese and Lehr, ‘Regulating by Robot’, 1152.

57 Andrea Simoncini, ‘Amministrazione digitale algoritmica. Il quadro costituzionale’ in Roberto Cavallo Perin and Diana-Urania Galletta (eds), Il diritto dell’amministrazione pubblica digitale (Giappichelli 2020) 1.

58 Andrea Simoncini, ‘Profili costituzionali dell’amministrazione algoritmica’ (2019) Riv trim dir pubbl 1149.

59 Andrew Guthrie Ferguson, The Rise of Big Data: Surveillance, Race and the Future of Law Enforcement (New York University Press 2017).

60 Lyria Bennett Moses and Janet Chan, ‘Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability’ (2018) 28 Policing and Society 806; Gavin J. D. Smith, Lyria Bennett Moses, and Janet Chan, ‘The Challenges of Doing Criminology in the Big Data Era: Towards a Digital and Data-Driven Approach’ (2017) 57 British Journal of Criminology 259; Wim Hardyns and Anneleen Rummens, ‘Predictive Policing as a New Tool for Law Enforcement? Recent Developments and Challenges’ (2018) 24 European Journal on Criminal Policy and Research 201.

61 Karen Yeung, ‘Algorithmic Regulation: A Critical Interrogation’ (2018) 12 Regulation & Governance 505.

62 Albert Meijer and Martijn Wessels, ‘Predictive Policing: Review of Benefits and Drawbacks’ (2019) 42 International Journal of Public Administration 1031.

63 Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (W. W. Norton & Company 2015).

64 David Lyon, Surveillance after September 11 (Polity 2003).

65 Surveillance consists of the ‘collection and processing of personal data, identifiable or not, for the purpose of influencing or controlling those to whom they belong’. Surveillance is a necessary correlative of a risk-based new idea of state power. See David Lyon, Surveillance Society: Monitoring Everyday Life (Open University Press 2001).

66 J. Guelke et al., ‘SURVEILLE Deliverable 2.6: Matrix of Surveillance Technologies’ (2013) Seventh Framework Programme Surveillance: Ethical Issues, Legal Limitations, and Efficiency, FP7-SEC-2011-284725.

67 Joined cases C-293/12 and C-594/12, Digital Rights Ireland (C-293/12) and Seitlinger (C-594/12), EU:C:2014:238, at 37.

68 Andrea Simoncini and Samir Suweis, ‘Il cambio di paradigma nell’intelligenza artificiale e il suo impatto sul diritto costituzionale’ (2019) 8 Rivista di filosofia del diritto 87.

69 According to Statista, in 2019 around 50 per cent of the Italian population accessed information from the Internet.

70 Brent Daniel Mittelstadt et al., ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3 Big Data & Society 1.

71 Holger Pötzsch, ‘Archives and Identity in the Context of Social Media and Algorithmic Analytics: Towards an Understanding of iArchive and Predictive Retention’ (2018) 20 New Media & Society 3304.

72 Andreas Kappes et al., ‘Confirmation Bias in the Utilization of Others’ Opinion Strength’ (2020) 23 Nature Neuroscience 130.

73 Users tend to aggregate in communities of interest, causing reinforcement of confirmation bias, segregation, and polarization. Erik Longo, ‘Dai big data alle “bolle filtro”: nuovi rischi per i sistemi democratici’ (2019) XII Percorsi costituzionali 29.

74 Lawrence Lessig, Code. Version 2.0 (Basic Books 2006) 2.

75 This is the reason we use the ‘hybrid’ image, coming from the world of the automotive industry: a ‘hybrid’ vehicle means it uses both classical combustion engines and electric power.

3 Inalienable Due Process in an Age of AI: Limiting the Contractual Creep toward Automated Adjudication

1 Frank Pasquale, “A Rule of Persons, Not Machines: The Limits of Legal Automation” (2019) 87 Geo Wash LR 1, 28–29.

2 Stephanie Wykstra, “Government’s Use of Algorithm Serves Up False Fraud Charges” Undark (2020)

3 Owen Bowcott, “Court Closures: Sale of 126 Premises Raised Just £34m, Figures Show” The Guardian (London, Mar 8 2018)

4 Rory van Loo, “Corporation as Courthouse” (2016) 33 Yale J on Reg 547.

5 Frank Pasquale and Glyn Cashwell, “Prediction, Persuasion, and the Jurisprudence of Behaviorism” (2018) 68 U Toronto LJ 63.

6 Julie Cohen, Between Truth and Power (Oxford University Press 2019).

7 Jathan Sadowski and Frank Pasquale, “The Spectrum of Control: A Social Theory of the Smart City” (2015) 20(7) First Monday; Pasquale (Footnote n 1).

8 For one aspect of the factual foundations of this hypothetical, see Social Security Administration, Fiscal Year 2019 Budget Overview (2018) 17–18: “We will study and design successful strategies of our private sector counterparts to determine if a disability adjudicator should access and use social media networks to evaluate disability allegations. Currently, agency adjudicators may use social media information to evaluate a beneficiary’s symptoms only when there is an OIG CDI unit’s Report of Investigation that contains social media data corroborating the investigative findings. Our study will determine whether the further expansion of social media networks in disability determinations will increase program integrity and expedite the identification of fraud.”

9 Frank Pasquale, “Six Horsemen of Irresponsibility” (2019) 79 Maryland LR 105 (discussing exculpatory clauses).

10 For rival definitions of the rule of law, see Pasquale, “A Rule of Persons” (Footnote n 1). The academic discussion of “due process” remains at least as complex as it was in 1977, when the Nomos volume on the topic was published. See, e.g., Charles A. Miller, “The Forest of Due Process Law” in J. Roland Pennock and John W. Chapman (eds), Nomos XVII: Due Process (NYU Press 1977).

11 Pennock, “Introduction” in Pennock and Chapman, Nomos XVII: Due Process (Footnote n 10).

12 419 US 565, 581 (1975). In rare cases, the hearing may wait until the threat posed by the student is contained: “Since the hearing may occur almost immediately following the misconduct, it follows that as a general rule notice and hearing should precede removal of the student from school. We agree with the District Court, however, that there are recurring situations in which prior notice and hearing cannot be insisted upon. Students whose presence poses a continuing danger to persons or property or an ongoing threat of disrupting the academic process may be immediately removed from school. In such cases, the necessary notice and rudimentary hearing should follow.”

13 Henry J. Friendly, “Some Kind of Hearing” (1975) 123 U Pa LR 1267 (listing 11 potential requirements of due process).

14 Kiel Brennan-Marquez and Stephen E. Henderson, “Artificial Intelligence and Role-Reversible Judgment” (2019) 109 J Crim L and Criminology 137.

15 Under the Mathews balancing test, “Identification of the specific dictates of due process generally requires consideration of three distinct factors: First, the private interest that will be affected by the official action; second, the risk of an erroneous deprivation of such interest through the procedures used, and the probable value, if any, of additional or substitute procedural safeguards; and finally, the Government’s interest, including the function involved and the fiscal and administrative burdens that the additional or substitute procedural requirement would entail.” Mathews v. Eldridge 424 US 319, 335 (1976). For an early critique, see Jerry L Mashaw, “The Supreme Court’s Due Process Calculus for Administrative Adjudication in Mathews v. Eldridge: Three Factors in Search of a Theory of Value” (1976) 44 U Chi LR 28.

16 Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant, “Of, for and by the People: The Legal Lacuna of Synthetic Persons” (2017) 25 Artificial Intelligence and Law 273. For a recent suggestion on how to deal with this problem, by one of the co-authors, see Mihailis Diamantis, “Algorithms Acting Badly: A Solution from Corporate Law” SSRN (accessed 5 Mar 2020)

17 Dana Remus and Frank S. Levy, “Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law” SSRN (Nov 30 2016); Brian S. Haney, “Applied Natural Language Processing for Law Practice” SSRN (Feb 14 2020) (“The state-of-the-art in legal question answering technology is far from providing any more valuable insight than a simple Google search … [and] legal Q&A is not a promising application of NLP in law practice.”).

18 Frank A. Pasquale and Glyn Cashwell, “Four Futures of Legal Automation” (2015) 63 UCLA LR Discourse 26.

19 See Cohen (Footnote n 6). See also Karen Yeung, “Algorithmic Regulation: A Critical Interrogation” (2018) 12 Regulation and Governance 505.

20 Ellen Dannin, “Red Tape or Accountability: Privatization, Public-ization, and Public Values” (2005) 15 Cornell JL & Pub Pol’y 111, 143 (“If due process requirements governing eligibility determinations for government-delivered services appear to produce inefficiencies, lifting them entirely through reliance on private service delivery may produce unacceptable inequities.”); Jon D. Michaels, Constitutional Coup: Privatization’s Threat to the American Republic (Harvard University Press 2017).

21 Frank Blechschmidt, “All Alone in Arbitration: AT&T Mobility v. Concepcion and the Substantive Impact of Class Action Waivers” (2012) 160 U Pa LR 541.

22 Danielle Keats Citron, “Technological Due Process” (2008) 85 Wash U LR 1249.

23 Danielle Keats Citron and Frank Pasquale, “The Scored Society: Due Process for Automated Predictions” (2014) 89 Wash LR 1; Frank Pasquale and Danielle Keats Citron, “Promoting Innovation While Preventing Discrimination: Policy Goals for the Scored Society” (2015) 89 Wash LR 1413. See also Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms” (2014) 55 Boston Coll LR 93; Kate Crawford and Jason Schultz, “AI Systems as State Actors” (2019) 119 Colum LR 1941.

24 Rashida Richardson, Jason M. Schultz, and Vincent M. Southerland, “Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems” AI Now Institute (September 2019)

25 Monika Zalnieriute, Lyria Bennett Moses and George Williams, “The Rule of Law and Automation of Government Decision-Making” (2019) 82 Modern Law Review 425 (report on automated decision-making). In the UK, see Simon Deakin and Christopher Markou (eds), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (Bloomsbury Professional, forthcoming); Jennifer Cobbe, “The Ethical and Governance Challenges of AI” (Aug 1 2019) In continental Europe, see the work of COHUBICOL and scholars at Bocconi and Florence, among many other institutions.

26 Cary Coglianese and David Lehr, “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era” (2017) 105 Geo LJ 1147, 1189–90. Note that such inspections may need to be in-depth, lest automation bias lead to undue reassurance. Harmanpreet Kaur et al., “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning” CHI 2020 Paper (accessed Mar 9 2020) (finding “the existence of visualizations and publicly available nature of interpretability tools often leads to over-trust and misuse of these tools”).

27 Andrew D. Selbst and Julia Powles, “Meaningful Information and the Right to Explanation” (2017) 7(4) International Data Privacy Law 233; Gianclaudio Malgieri and Giovanni Comandé, “Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation” (2017) 7(4) International Data Privacy Law 243. But see State v. Loomis 881 NW2d 749 (Wis 2016), cert denied, 137 S Ct 2290 (2017) (“[W]e conclude that if used properly, observing the limitations and cautions set forth herein, a circuit court’s consideration of a COMPAS risk assessment at sentencing does not violate a defendant’s right to due process,” even when aspects of the risk assessment were secret and proprietary.)

28 Electronic Privacy Information Center (EPIC), “Algorithms in the Criminal Justice System: Pre-Trial Risk Assessment Tools” (accessed Mar 6 2020) (“Since the specific formula to determine ‘risk assessment’ is proprietary, defendants are unable to challenge the validity of the results. This may violate a defendant’s right to due process.”).

29 For intellectual history of shifts toward preferring the convenience and reliability of numerical forms of evaluation, see Theodore Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton University Press 1995); William Deringer, Calculated Values: Finance, Politics, and the Quantitative Age (Harvard University Press 2018).

30 Antinore v. State, 371 NYS2d 213 (NY App Div 1975); Gorham v. City of Kansas City, 590 P2d 1051 (Kan 1979); Richard Wallace, Comment, “Union Waiver of Public Employees’ Due Process Rights” (1986) 8 Indus Rel LJ 583; Ann C. Hodges, “The Interplay of Civil Service Law and Collective Bargaining Law in Public Sector Employee Discipline Cases” (1990) 32 Boston Coll LR 95.

31 The problem of “rights sacrifice” is not limited to the examples in this paragraph. See also Dionne L. Koller, “How the United States Government Sacrifices Athletes’ Constitutional Rights in the Pursuit of National Prestige” 2008 BYU LR 1465, for an example of outsourcing decision-making to venues without the robustness of traditional due process protections.

32 Peter J. Walker, “Private Firms Earn £500m from Disability Benefit Assessments” The Guardian (Dec 27 2016); Dan Bloom, “Privately-Run DWP Disability Benefit Tests Will Temporarily Stop in New ‘Integrated’ Trial” The Mirror (Mar 2 2020)

33 Robert Pear, “On Disability and on Facebook? Uncle Sam Wants to Watch What You Post” New York Times (Mar 10 2019); see also Footnote n 8.

34 Catharine A. Mackinnon, Toward a Feminist Theory of the State (Harvard University Press 1989).

35 G. A. Cohen, “Where the Action Is: On the Site of Distributive Justice” (1997) 26(1) Philosophy & Public Affairs 3–30.

36 Daniel A. Farber, “Another View of the Quagmire: Unconstitutional Conditions and Contract Theory” (2006) 33 Fla St LR 913, 914–15.

37 Footnote Ibid., 915 (“Most, if not all, constitutional rights can be bartered away in at least some circumstances. This may seem paradoxical, but it should not be: having a right often means being free to decide on what terms to exercise it or not.”).

40 Frank Pasquale, “Secret Algorithms Threaten the Rule of Law” MIT Tech Review (June 1 2017); Frank Pasquale, Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015).

41 318 F3d 365 (1st Cir 2003).

42 Carrascalao v. Minister for Immigration [2017] FCAFC 107; (2017) 347 ALR 173. For an incisive analysis of this case and the larger issues here, see Will Bateman, “Algorithmic Decision-Making and Legality: Public Law Dimensions” (2019) 93 Australian LJ.

43 Chad M. Oldfather, “Writing, Cognition, and the Nature of the Judicial Function” (2008) 96 Geo LJ 1283.

44 Cary Coglianese and David Lehr, “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era” (2017) 105 Geo LJ 1147.

45 Mark Andrejevic, Automated Media (Routledge 2020).

4 Constitutional Challenges in the Emotional AI Era

* The chapter is based on the keynote delivered by P. Valcke at the inaugural conference ‘Constitutional Challenges in the Algorithmic Society’ of the IACL Research Group on ‘Algorithmic State, Market & Society – Constitutional Dimensions’, which was held from 9 to 11 May 2019 in Florence (Italy). It draws heavily from the PhD thesis of D. Clifford, entitled ‘The Legal Limits to the Monetisation of Online Emotions’ and defended at KU Leuven – Faculty of Law on 3 July 2019, to which the reader is referred for a more in-depth discussion.

1 For some illustrations, see B. Doerrfeld, ‘20+ Emotion Recognition APIs That Will Leave You Impressed, and Concerned’ (Article 2015) accessed 11 June 2020; M. Zhao, F. Adib and D. Katabi, ‘EQ-Radio: Emotion Recognition using Wireless Signals’ (Paper 2016) accessed 11 June 2020; CB Insights, ‘Facebook’s Emotion Tech: Patents Show New Ways for Detecting and Responding to Users’ Feelings’ (Article 2017) accessed 11 June 2020; R. Murdoch et al., ‘How to Build a Responsible Future for Emotional AI’ (Research Report 2020) accessed 11 June 2020. Gartner predicts that by 2022, 10 per cent of personal devices will have emotion AI capabilities, either on-device or via cloud services, up from less than 1 per cent in 2018: Gartner, ‘Gartner Highlights 10 Uses for AI-Powered Smartphones’ (Press Release 2018) accessed 11 June 2020.

2 Committee of Ministers, ‘Declaration by the Committee of Ministers on the Manipulative Capabilities of Algorithmic Processes’ (Declaration 2019) accessed 11 June 2020, para. 8.

3 Footnote Ibid, para. 6.

4 A. McStay, Emotional AI: The Rise of Empathic Media (SAGE 2018) 3.

5 For more details, see, e.g., J. Stanley, ‘The Dawn of Robot Surveillance’ (Report 2019) accessed 11 June 2020 21–25.

6 Particular examples include uses in health care or pseudo-health care (e.g., to detect mood for the purposes of improving mental well-being), road safety (e.g., to detect drowsiness and inattentiveness), employee safety, and assessments of job applicants and people suspected of crimes. See more e.g., A. Fernández-Caballero et al., ‘Smart Environment Architecture for Emotion Detection and Regulation’ [2016] 64 J Biomed Inform 55; Gartner, ‘13 Surprising Uses For Emotion AI Technology’ (Article 2018) accessed 11 June 2020; C. Jee, ‘Emotion Recognition Technology Should Be Banned, Says an AI Research Institute’ (Article 2019) accessed 11 June 2020; J. Jolly, ‘Volvo to Install Cameras in New Cars to Reduce Road Deaths’ (Article 2019) accessed 11 June 2020; Stanley (Footnote n 6) 21–24; D. Clifford, ‘The Legal Limits to the Monetisation of Online Emotions’ (PhD thesis, KU Leuven, Faculty of Law 2019) 12.

7 Clifford (Footnote n 7) 10.

8 Clifford (Footnote n 7) 103.

9 See, e.g., C. Burr, N. Cristianini, and J. Ladyman, ‘An Analysis of the Interaction between Intelligent Software Agents and Human Users’ [2018] MIND MACH 735; C. Burr and N. Cristianini, ‘Can Machines Read Our Minds?’ [2019] 29 MIND MACH 461.

10 L. Stark and K. Crawford, ‘The Conservatism of Emoji: Work, Affect, and Communication’ [2015] 1 SM+S, 1, 8.

11 E. Hatfield, J. Cacioppo, and R. Rapson, ‘Emotional Contagion’ [1993] Curr Dir Psychol Sci 96.

12 See, e.g., A. Kramer, J. Guillory, and J. Hancock, ‘Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks’ (Research Article 2014) accessed 11 June 2020. There are also data to suggest that Facebook had offered advertisers the ability to target advertisements to teenagers based on real-time extrapolation of their mood: N. Tiku, ‘Facebook’s Ability to Target Insecure Teens Could Prompt Backlash’ (Article 2017) accessed 11 June 2020.

13 See, e.g., L. Stark, ‘Algorithmic Psychometrics and the Scalable Subject’ (2018) accessed 11 June 2020; Guardian, ‘Cambridge Analytica Files’ accessed 11 June 2020.

14 Stark (Footnote n 14).

15 Declaration by the Committee of Ministers on the Manipulative Capabilities of Algorithmic Processes (Footnote n 3), para. 8.

16 For more information, see Transparency declaration: one of the co-authors serves as CAHAI’s vice-chair.

17 For example, political micro-targeting, fake news. See more Clifford (Footnote n 7) 13.

18 A well-known video fragment illustrating this (described by Sunstein in C. Sunstein, ‘Fifty Shades of Manipulation’ [2016] 1 J. Behavioral Marketing 213) is Mad Men’s Don Draper delivering his Kodak pitch. See, e.g., T. Brader, Campaigning for Hearts and Minds: How Emotional Appeals in Political Ads Work (University of Chicago Press 2006); E. Mogaji, Emotional Appeals in Advertising Banking Services (Emerald Publishing Ltd 2018).

19 See, e.g., Article 9 of the Parliament and Council Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation, or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) [2010] OJ L 95.

20 See, e.g., Parliament and Council Directive 2019/2161 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules [2019] OJ L 328.

21 Article 102 of the Treaty on the Functioning of the European Union. See Consolidated Version of the Treaty on the Functioning of the European Union [2012] OJ C 326.

22 In particular, Parliament and Council Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119. Also see Parliament and Council Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on Privacy and Electronic Communications) [2002] OJ L 201.

23 Hatfield (Footnote n 12).

24 See, e.g., Kramer (Footnote n 13).

25 Guardian, ‘Cambridge Analytica Files’ (Footnote n 14); Stark (Footnote n 14).

26 Stark (Footnote n 14).

27 Tiku (Footnote n 13).

28 Clifford (Footnote n 7) 112.

29 Stark and Crawford (Footnote n 11) 1, 8.

30 S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs 2019).

31 Stark (Footnote n 14).

32 The Norwegian Consumer Council, ‘Deceived by Design’ (Report 2018) accessed 11 June 2020 7.

34 In accordance with the General Data Protection Regulation (Footnote n 23): CNIL, ‘Deliberation of the Restricted Committee SAN-2019-001 of 21 January 2019 Pronouncing a Financial Sanction against GOOGLE LLC’ (Decision 2019) accessed 11 June 2020.

35 Deceptive Experiences to Online Users Reduction Act 2019 accessed 11 June 2020; M. Kelly, ‘Big Tech’s “Dark Patterns” Could Be Outlawed under New Senate Bill’ (Article 2019) accessed 11 June 2020.

36 Landsec, ‘Archived Privacy Policy Piccadilly Lights’ accessed 11 June 2020.

37 According to the Archived Privacy Policy Piccadilly Lights (Footnote n 37), the data collection ended in September 2018.

38 A. McStay and L. Urquhart, ‘“This Time with Feeling?” Assessing EU Data Governance Implications of Out of Home Appraisal Based Emotional AI’ [2019] 24 First Monday 10 accessed 11 June 2020.

39 Council of Europe, Committee on Culture, Science, Education and Media, Rapporteur Mr Jean-Yves LE DÉAUT, ‘Technological Convergence, Artificial Intelligence and Human Rights’ (Report 2017) accessed 11 June 2020 para. 26.

40 European Parliament and Council Directive 2005/29/EC concerning unfair business-to-consumer commercial practices in the internal market [2005] OJ L 149, Annex I. D. Clifford, ‘Citizen-Consumers in a Personalised Galaxy: Emotion Influenced Decision-Making, a True Path to the Dark Side?’, in L. Edwards, E. Harbinja, and B. Shaffer (eds) Future Law: Emerging Technology, Regulation and Ethics (Edinburgh University Press 2020).

41 D. Clifford, ‘The Emergence of Emotional AI Emotion Monetisation and Profiling Risk, Nothing New?’, Ethics of Data Science Conference (Paper 2020, forthcoming).

42 T. Maroney, ‘Law and Emotion: A Proposed Taxonomy of an Emerging Field’ [2006] 30 Law Hum Behav 119.

43 M. Hütter and S. Sweldens, ‘Dissociating Controllable and Uncontrollable Effects of Affective Stimuli on Attitudes and Consumption’ [2018] 45 J Consum Res 320, 344.

44 M. Lee, ‘Understanding Perception of Algorithmic Decisions: Fairness, Trust, and Emotion in Response to Algorithmic Management’ [2018] 5 BD&S 1, 2.

45 Clifford (Footnote n 7) 82.

46 In particular, such technologies allow for the development of inter alia content, formats, and products, or indeed entire campaigns that are optimized (i.e., at least at face value) and tailored by emotion insights. Clifford (Footnote n 7).

47 Sunstein (Footnote n 19).

48 See, e.g., H. Micklitz, L. Reisch and K. Hagen, ‘An Introduction to the Special Issue on “Behavioural Economics, Consumer Policy, and Consumer Law”’ [2011] 34 J Consum Policy 271 accessed 11 June 2020; R. Calo, ‘Digital Market Manipulation’ [2014] 82 Geo Wash L Rev 995; D. Citron and F. Pasquale, ‘The Scored Society: Due Process for Automated Predictions’ [2014] 89 Wash L Rev 1 accessed 11 June 2020; H. Micklitz, A. Sibony, and F. Esposito (eds), Research Methods in Consumer Law (Edward Elgar 2018).

49 Sunstein (Footnote n 19).

50 Clifford (Footnote n 42).

51 See, e.g., the case of Antović and Mirković v. Montenegro concerning the installation of video surveillance equipment in auditoriums at a university, in which the ECtHR emphasized that video surveillance of employees at their workplace, whether covert or not, constituted a considerable intrusion into their ‘private life’ (Antović and Mirković v. Montenegro App no 70838/13 (ECtHR, 28 February 2018) para. 44). See also, e.g., Liberty and Others v. the United Kingdom App no 58243/00 (ECtHR, 1 July 2008); Vukota-Bojić v. Switzerland App no 61838/10 (ECtHR, 18 October 2016); Bărbulescu v. Romania App no 61496/08 (ECtHR, 5 September 2017).

52 R. Calo, ‘The Boundaries of Privacy Harm’ (2011) 86 Indiana Law J 1131, 1147.

53 One should also note surveillance can have a chilling effect even if it is private or public; see N. Richards, ‘The Dangers of Surveillance’ [2013] 126 Harv Law Rev 1934, 1935: ‘[W]e must recognize that surveillance transcends the public/private divide. Public and private surveillance are simply related parts of the same problem, rather than wholly discrete. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers.’

54 Stanley (Footnote n 6) 35–36.

55 O. Lynskey, The Foundations of EU Data Protection Law (1st ed., OUP 2016) 202, 218.

56 R. Warner and R. Sloan, ‘Self, Privacy, and Power: Is It All Over?’ [2014] 17 Tul J Tech & Intell Prop 8.

57 J. Rachels, ‘Why Privacy Is Important’ [1975] 4 Philos Public Aff 323, 323–333. The author goes on to discuss how we behave differently depending on who we are talking to, and this has been argued as dishonest or a mask by certain authors; but the author disagrees, saying that these ‘masks’ are, in fact, a crucial part of the various relationships and are therefore not dishonest. See also H. Nissenbaum, ‘Privacy as Contextual Integrity’ [2004] 79 Wash L Rev 119.

58 Clifford (Footnote n 7) 124; Clifford (Footnote n 42).

59 Calo (Footnote n 53) 1142–1143.

60 For example, practical use cases such as the ones in health care or pseudo-health care shed light on the potential for inaccuracy to have damaging effects on the physical and mental well-being of the individual concerned. For details, see, e.g., Clifford (Footnote n 42).

62 AI Now Institute, New York University, ‘AI Now Report 2018’ (Report 2018) 14.

63 In this regard, it is interesting to refer to the work of Barret, who views the focus on basic emotions as misguided, as such categories fail to capture the richness of emotional experiences. L. Barrett, ‘Are Emotions Natural Kinds?’ [2006] 1 Perspectives on Psychological Science 28, as cited by R. Markwica, Emotional Choices: How the Logic of Affect Shapes Coercive Diplomacy (Oxford University Press 2018) 18, 72.

64 Clifford (Footnote n 42).

65 For more details, see Stanley (Footnote n 6) 38–39.

66 AI Now Report 2018 (Footnote n 63) 14. For a discussion in the context of emotion detection, see also A. McStay, ‘Empathic Media and Advertising: Industry, Policy, Legal and Citizen Perspectives (the Case for Intimacy)’ [2016] 3 BD&S 1, 3–6.

67 B. Koops, ‘On Decision Transparency, or How to Enhance Data Protection after the Computational Turn’ in M. Hildebrandt and K. de Vries (eds), Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology (Routledge 2013) 199.

68 Koops (Footnote n 68) 199.

69 Parliamentary Assembly, ‘Technological Convergence, Artificial Intelligence and Human Rights’ (Recommendation 2102 2017) accessed 11 June 2020 para. 9.1.5. In relation to bio-medicine, reference can be made to the 1997 Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (also known as ‘Oviedo Convention’). At EU level, see, for example, in the area of robotics and AI the European Parliament resolution on civil law rules on robotics: Parliament resolution with recommendations to the Commission 2015/2103(INL) on Civil Law Rules on Robotics [2015] OJ C 252.

70 P. Bernal, Internet Privacy Rights: Rights to Protect Autonomy (1st ed., Cambridge University Press 2014).

71 E. Harbinja, ‘Post-Mortem Privacy 2.0: Theory, Law, and Technology’ [2017] 31 Int Rev Law Comput Tech 26, 29.

72 Clifford (Footnote n 7) 277.

73 J. Raz, The Morality of Freedom (Clarendon Press, 1986) 247.

74 P. Bernal, Internet Privacy Rights: Rights to Protect Autonomy (1st ed., Cambridge University Press 2014) 25–27; Raz (Footnote n 73) 382.

75 Raz (Footnote n 74) 420; Clifford (Footnote n 7).

76 Raz (Footnote n 74) 371; Clifford (Footnote n 7).

77 Raz (Footnote n 74).

78 Bernal (Footnote n 74) 26; Clifford (Footnote n 7).

79 For example, in Pretty v. United Kingdom, the ECtHR found that Article 8 ECHR included the ability to refuse medical treatment and that the imposition of treatment on a patient who has not consented ‘would quite clearly interfere with a person’s physical integrity in a manner capable of engaging the rights protected under art 8(1) of the Convention’. Pretty v. United Kingdom App no 2346/02 (ECtHR, 29 April 2002) para. 63.

80 Clifford (Footnote n 7) 104.

81 See more, e.g., Clifford (Footnote n 7) 104–105.

82 Footnote Ibid, 110.

83 K. Ziegler, ‘Introduction: Human Rights and Private Law − Privacy as Autonomy’ in K. Ziegler (ed), Human Rights and Private Law: Privacy as Autonomy (1st ed., Hart Publishing 2007). This view is shared by Yeung in her discussion of population-wide manipulation; see K. Yeung, ‘A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility within a Human Rights Framework (DRAFT)’ (Council of Europe 2018) accessed 11 June 2020 29.

84 See, e.g., D. Feldman, ‘Human Dignity as a Legal Value: Part 1’ [1999] Public Law 682, 690.

85 For details, see R. van Est and J. Gerritsen, ‘Human Rights in the Robot Age: Challenges Arising from the Use of Robotics, Artificial Intelligence, and Virtual and Augmented Reality’ (Rathenau Instituut Expert report written for the Committee on Culture, Science, Education and Media of the Parliamentary Assembly of the Council of Europe 2017) accessed 11 June 2020 27–28.

86 C. O’Mahony, ‘There Is No Such Thing as a Right to Dignity’ [2012] 10 Int J Const Law 551.

87 O’Mahony (Footnote n 87) 557–558.

88 O’Mahony (Footnote n 87) 552.

89 Its value is emphasized in a number of international treaties and national constitutional documents. For details, see, e.g., O’Mahony (Footnote n 87) 552–553.

90 See, for instance, Pretty v. United Kingdom (2346/02) [2002] ECHR 423 (29 April 2002), where the ECtHR held that the ‘very essence of the Convention is respect for human dignity and human freedom’ (para. 65). The Universal Declaration of Human Rights – on which the ECHR is based – provides that ‘[a]ll human beings are born free and equal in dignity and rights’ (Article 1). For details, see R. van Est and J. Gerritsen (Footnote n 86) 27–28.

91 Except Protocol No. 13 to the Convention for the Protection of Human Rights and Fundamental Freedoms concerning the abolition of the death penalty in all circumstances.

92 R. van Est and J. Gerritsen (Footnote n 86) 27–28.

93 Article 1 of the Charter provides that human dignity is inviolable and shall be respected and protected. See also, e.g., A. Barak, ‘Human Dignity as a Framework Right (Motherright)’, in A. Barak, Human Dignity: The Constitutional Value and the Constitutional Right (Cambridge University Press, 2015) 156–169.

94 Case C-377/98 Netherlands v. European Parliament and Council of the European Union [2001] ECR I-7079 paras 70–77.

95 O’Mahony (Footnote n 87) 560.

96 See, e.g., O’Mahony (Footnote n 87); Feldman (Footnote n 85).

97 O’Mahony (Footnote n 87) 574.

98 Feldman (Footnote n 85) 688.

101 For details about interaction between discrimination and dignity, see, e.g., AI Now Report 2018 (Footnote n 63) 14; Feldman (Footnote n 85) 688.

102 Although this report focuses on the employment of technologies in the context of law enforcement, certain insights are relevant both for private and public sectors. European Union Agency for Fundamental Rights, ‘Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement’ (Paper 2019) accessed 11 June 2020.

103 Footnote Ibid, 20.

105 Footnote Ibid, 33. Academic researchers have also argued that facial recognition technologies are to be treated as ‘the Plutonium of AI’, ‘nuclear-level threats’, ‘a menace disguised as a gift’, and an ‘irresistible tool for oppression’, which shall be banned entirely and without further delay in both public and private sectors. L. Stark, ‘Facial Recognition Is the Plutonium of AI’ (Article 2019) accessed 11 June 2020; E. Selinger and W. Hartzog, ‘What Happens When Employers Can Read Your Facial Expressions?’ (Article 2019) accessed 11 June 2020; W. Hartzog, ‘Facial Recognition Is the Perfect Tool for Oppression’ (Article 2018) accessed 11 June 2020; E. Selinger, ‘Amazon Needs to Stop Providing Facial Recognition Tech for the Government’ (Article 2018) accessed 11 June 2020; E. Selinger, ‘Why You Can’t Really Consent to Facebook’s Facial Recognition’ (Article 2019) accessed 11 June 2020. It remains to be seen whether legislators will adopt specific rules on face recognition technologies. Although the European Commission apparently contemplated a temporary five-year ban on facial recognition, the final version of its White Paper on Artificial Intelligence of 19 February 2020 no longer draws such a hard line (COM (2020) 65 final); see J. Espinoza, ‘EU Backs Away from Call for Blanket Ban on Facial Recognition Tech’ (Article 2020) accessed 15 June 2020. California recently adopted a Bill, referred to as the Body Camera Accountability Act, which (if signed into law) would ban the use of facial recognition software in police body cameras. See R. Metz, ‘California Lawmakers Ban Facial-Recognition Software from Police Body Cams’ (Article 2019) accessed 11 June 2020.

106 See, e.g., Stanley (Footnote n 6); Barrett (Footnote n 64); Feldman (Footnote n 85).

107 R. van Est and J. Gerritsen (Footnote n 86) 23.

108 Declaration by the Committee of Ministers on the Manipulative Capabilities of Algorithmic Processes (Footnote n 3), para. 9.

110 See, e.g., Yeung (Footnote n 84); Zuboff (Footnote n 31); J. Bublitz, ‘My Mind Is Mine!? Cognitive Liberty as a Legal Concept’ in E. Hildt and A. Franke (eds), Cognitive Enhancement: An Interdisciplinary Perspective (Springer Netherlands 2013).

111 Yeung (Footnote n 84) 79–80.

112 Zuboff (Footnote n 31).

113 Footnote Ibid., 332.

114 Footnote Ibid., 344.

116 Footnote Ibid, 332, 336–337.

118 Footnote Ibid, 332; J. Searle, Making the Social World: The Structure of Human Civilization (Oxford University Press 2010).

119 Bublitz (Footnote n 111).

120 J. Bublitz, ‘Freedom of Thought in the Age of Neuroscience’ [2014] 100 Archives for Philosophy of Law and Social Philosophy 1; Clifford (Footnote n 7) 286.

121 Footnote Ibid, 25.

122 R. van Est and J. Gerritsen (Footnote n 86) 43–45; Clifford (Footnote n 7) 287.

123 R. van Est and J. Gerritsen (Footnote n 86) 43.

124 For reference see R. van Est and J. Gerritsen (Footnote n 86) 43–44.

125 Footnote Ibid, 44.

126 Footnote Ibid, 43–45.

127 Rathenau Institute argues that such principles could be developed by the Council of Europe. R. van Est and J. Gerritsen (Footnote n 86) 26.

128 Yeung (Footnote n 84) 79.

129 Footnote Ibid, 79.

130 Yeung (Footnote n 84); H. Nissenbaum, Privacy in Context: Technology, Policy and the Integrity of Social Life (Stanford Law Books 2010) 83.

131 Yeung (Footnote n 84) 79–80.

132 Parliamentary Assembly (Footnote n 70).

133 Clifford (Footnote n 7) 287. This reminds us of the discussion about the positioning of consumer rights as fundamental rights; see, e.g., S. Deutch, ‘Are Consumer Rights Human Rights? (Includes Discussion of 1985 United Nations Guidelines for Consumer Protection)’ [1994] 32 Osgoode Hall Law Journal. For a general criticism of the creation of new human rights, see Ph. Alston, ‘Conjuring up New Human Rights: A Proposal for Quality Control’ [1984] 78 The American Journal of International Law 607.

134 Clifford (Footnote n 7) 287.

135 See, for instance, Satakunnan Markkinapörssi OY and Satamedia OY v. Finland [2017] ECHR 607, para. 137, in which the ECtHR derived a (limited form of) right to informational self-determination from Article 8 ECHR.

136 For further reference, see Clifford (Footnote n 7) 124–133, and references there to M. Brkan, ‘The Essence of the Fundamental Rights to Privacy and Data Protection: Finding the Way through the Maze of the CJEU’s Constitutional Reasoning’ [2019] 20 German Law Journal; O. Lynskey, ‘Deconstructing Data Protection: The “Added-Value” of a Right to Data Protection in the EU Legal Order’ [2014] 63 International & Comparative Law Quarterly 569; H. Hijmans, The European Union as Guardian of Internet Privacy (Springer International Publishing 2016); G. González Fuster, The Emergence of Personal Data Protection as a Fundamental Right of the EU (Springer International Publishing 2014); G. González Fuster and R. Gellert, ‘The Fundamental Right of Data Protection in the European Union: In Search of an Uncharted Right’ [2012] 26 International Review of Law, Computers & Technology 73; J. Kokott and C. Sobotta, ‘The Distinction between Privacy and Data Protection in the Jurisprudence of the CJEU and the ECtHR’ [2013] International Data Privacy Law; S. Gutwirth, Y. Poullet, P. De Hert, C. de Terwangne, and S. Nouwt (eds) Reinventing Data Protection? (Springer 2009).

137 See, in this regard, for instance, V. Verdoodt, ‘Children’s Rights and Commercial Communication in the Digital Era’, KU Leuven Centre for IT & IP Law Series, n 10, 2020.

138 See, for instance, M. Veale, R. Binns, and L. Edwards, ‘Algorithms That Remember: Model Inversion Attacks and Data Protection Law’ [2018] 376 Philosophical Transactions of the Royal Society A.

139 Clifford (Footnote n 7) 331.

140 Declaration by the Committee of Ministers on the Manipulative Capabilities of Algorithmic Processes (Footnote n 3), para. 9.

5 Algorithmic Law: Law Production by Data or Data Production by Law?

1 Massimo Airoldi and Daniele Gambetta, ‘Sul mito della neutralità algoritmica’, (2018) 4 The Lab’s Quarterly, 29.

2 Luciano Floridi, The Onlife Manifesto. Being Human in a Hyperconnected Era (Springer, 2015).

3 Max Weber, Economia e società, (Edizioni di Comunità, 1st ed., 1974), 687.

4 Chiara Visentin, ‘Il potere razionale degli algoritmi tra burocrazia e nuovi idealtipi’, The Lab’s Quarterly, 47–72, 57, 58.

5 Max Weber, ‘Politics as a Vocation’, in Hans Gehrt (ed.) and C. Wright Mills (trans.), From Max Weber: Essays in Sociology (Oxford University Press, 1946), 77; Economia e società, 685.

6 Max Weber, Economia e società, 260, 262.

7 Max Weber, ‘Politics as a vocation’, 88.

8 Max Weber, Economia e società, 278.

9 Karen Yeung, ‘Why Worry about Decision-Making by Machine?’, in Karen Yeung and Martin Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019), 24. However, there is considerable debate on digital personhood and responsibility; see G. Teubner, ‘Digital Personhood: The Status of Autonomous Software Agents in Private Law’, (2018) Ancilla Iuris, 35. According to the Robotic Charter of the EU Parliament, in the event that a robot can make autonomous decisions, the traditional rules are not sufficient to establish liability for damage caused by a robot, as they do not make it possible to determine which person is responsible for compensation or to demand that this person repair the damage caused.

10 Max Weber, Economia e società, 269.

13 Thomas Vogl, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright, ‘Algorithmic Bureaucracy. Managing Competence, Complexity, and Problem Solving in the Age of Artificial Intelligence’ (2019),

14 A. Aneesh, ‘Technologically Coded Authority: The Post-Industrial Decline in Bureaucratic Hierarchies’ (2002) Stanford University Papers,

15 European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, ‘The output of the AI system does not become effective unless it has been previously reviewed and validated by a human’, 21.

16 Furio Ferraresi, ‘Genealogie della legittimità. Città e stato in Max Weber’, (2014) 5 Società Mutamento Politica, 143, 146.

17 On the concept of data extraction, see Deborah De Felice, Giovanni Giuffrida, Giuseppe Giura, Vilhelm Verendel, and Calogero G. Zarba, ‘Information Extraction and Social Network Analysis of Criminal Sentences. A Sociological and Computational Approach’, (2013) Law and Computational Social Science, 243–262, 251.

18 Michael Veale and Irina Brass, ‘Administration by Algorithm?’, in Karen Yeung and Martin Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019), 123–125; Anthony J Casey and Anthony Niblett, ‘A Framework for the New Personalization of Law’ (2019) 86 University of Chicago Law Review 333, 335.

19 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Society (Harvard University Press, 2015), 8.

20 Riccardo Guidotti et al., ‘A Survey of Methods for Explaining Black Box Models’, ACM Computing Surveys, February 2018, 1, 18.

21 T. Zarsky, ‘The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making’, (2016) 41 Science, Technology, & Human Values, 119.

22 Riccardo Guidotti et al. at Footnote n. 19, 5.

23 Ira Rubinstein, ‘Big Data: The End of Privacy or a New Beginning?’, (2013) 3 International Data Privacy Law, 74.

24 Marta Infantino, Numera et impera. Gli indicatori giuridici globali e il diritto comparato (Franco Angeli, 2019), 29.

25 Enrico Campo, Antonio Martella, and Luca Ciccarese, ‘Gli algoritmi come costruzione sociale. Neutralità, potere e opacità’, (2018) 4 The Lab’s Quarterly, 7.

26 Max Weber, Economia e società, 278.

27 David Beer, ‘The Social Power of Algorithms’, (2017) 20 Information, Communication & Society, 1–13.

28 Shoshana Zuboff, The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power (Public Affairs, 2019), 376–377.

29 Karen Yeung and Martin Lodge, ‘Introduction’, in Karen Yeung and Martin Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019), 5.

30 Giovanni Tarello, Cultura giuridica e politica del diritto (Il Mulino, 1988), p. 24, 25.

33 According to Dan McQuillan, ‘Algorithmic States of Exception’, (2015) 18 European Journal of Cultural Studies, 564, 569: ‘While tied to clearly constituted organisational and technical systems, the new operations have the potential to create social consequences that are unaddressed in law.’

36 For an in-depth analysis of causality and correlation, see Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect (Penguin Books, 2018), 27.

37 David Nelken, ‘Using the Concept of Legal Culture’, p. 1.

38 Lionel Ching Kang Teo, ‘Are All Pawns in a Simulated Reality? Ethical Conundrums in Surveillance Capitalism’, 10 June 2019,

39 Shoshana Zuboff, The Age of Surveillance Capitalism, The Definition (Penguin Books, 2015).

40 Shoshana Zuboff, ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’, (2015) 30 Journal of Information Technology, 75–89.

41 Frederik Z. Borgesius, Discrimination, Artificial Intelligence, and Algorithmic Decision-Making (Council of Europe, Strasbourg, 2018), 10; Karen Yeung, at Footnote n. 8, 25.

42 Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, and Fosca Giannotti, ‘A Survey of Methods for Explaining Black Box Models’, ACM Computing Surveys, February 2018, 1 ff.

43 European Commission, A European Strategy for Data, COM(2020) 66 final, 19 February 2020,

44 Max Weber, Economia e società, 253.

45 Jennifer Daskal, ‘Data Un-territoriality’, (2015) 125 The Yale Law Journal, 326.

46 Andrew Keane Woods, ‘Litigating Data Sovereignty’, (2018) 128 The Yale Law Journal, 328.

47 Footnote Ibid., 203, 204, 205.

48 David Nelken, ‘Using the Concept of Legal Culture’, (2004) 29 Australian Journal of Legal Philosophy, 4: ‘Given the extent of past and present transfer of legal institutions and ideas, it is often misleading to try and relate legal culture only to its current national context.’

49 Max Weber, Economia e società, 281–282.

50 See Nicolò Muciaccia, ‘Algoritmi e procedimento decisionale: alcuni recenti arresti della giustizia amministrativa’, (2020) 10, 344.

51 Consiglio di Stato, sec VI, 13 December 2019, n. 8472, n. 8473, n. 8474. Against the application of algorithmic decision-making to administrative proceedings, see T.A.R. Lazio Roma, sec. III bis, 27 May 2019, n. 6606 and T.A.R. Lazio Roma, sec. III bis, 13 September 2019, n. 10964.

52 Consiglio di Stato, sec. VI, 8 April 2019, n. 2270. See Gianluca Fasano, ‘Le decisioni automatizzate nella pubblica amministrazione: tra esigenze di semplificazione e trasparenza algoritmica’, (2019) 3 Medialaws.

53 See Enrico Carloni, ‘AI, algoritmi e pubblica amministrazione in Italia’, (2020) 30 Revista de los Estudios de Derecho y Ciencia Política.

54 The Consiglio di Stato recalls Recital 71 of the GDPR.

55 Similarly, see T.A.R. Lazio Roma, sec. III bis, 28 May 2019, n. 6686; Consiglio di Stato, sec VI, 4 February 2020, n. 881.

56 Consiglio di Stato, sec. VI, 2 January 2020, n. 30.

57 Max Weber, Economia e società, 257, 276; Massimo Cacciari, Il lavoro dello spirito, (Adelphi, 2020).

58 On the idea of adapting technology, see Luciano Gallino, Tecnologia e democrazia. Conoscenze tecniche e scientifiche come beni pubblici (Einaudi, 2007), 132, 195.

59 Dan McQuillan at n. 32, 570.

60 Anthony T. Kronman, Education’s End: Why Our Colleges and Universities Have Given Up on the Meaning of Life (Yale University Press, 2007), 205; Margherita Ramajoli, ‘Quale cultura per l’amministrazione pubblica?’, in Beatrice Pasciuta and Luca Loschiavo (eds.), La formazione del giurista. Contributi a una riflessione (Roma Tre-press, 2018), 103.

61 According to Roderick A. Macdonald and Thomas B. McMorrow, ‘Decolonizing Law School’, (2014) 51 Alberta Law Review, 717: ‘The process of decolonizing law school identified by the authors is fundamentally a process of moving the role of human agency to the foreground in designing, building, and renovating institutional orders that foster human flourishing.’

62 David Howarth, ‘Is Law a Humanity (Or Is It More Like Engineering)?’, (2004) 3 Arts & Humanities in Higher Education, 9.

6 Human Rights and Algorithmic Impact Assessment for Predictive Policing

* Support from the Artificial and Natural Intelligence Toulouse Institute (ANITI), ANR-3IA, and the Civil Law Faculty of the University of Ottawa is gratefully acknowledged. I also thank law student Roxane Fraser and the attendees at the Conference on Constitutional Challenges in the Algorithmic Society for their helpful comments, and especially Professor Ryan Calo, Chair of the Panel. This text has been written in 2019 and does not take into account the EC proposal on AI published in April 2021.

1 Preamble section of the Montréal Declaration, accessed 23 May 2019.

2 Guido Noto La Diega, ‘Against Algorithmic Decision-Making’ (2018) accessed 23 May 2019.

3 AINow Institute, ‘Government Use Cases’ accessed on 22 December 2019.

4 AINow Institute, ‘Algorithmic Accountability Policy Toolkit’ (October 2018) accessed 23 May 2019 [Toolkit].

5 Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI’ (2018) Columbia Business Law Review, accessed 11 March 2019.

6 European Parliamentary Research Service (EPRS) Study, ‘Panel for the Future of Science and Technology, Understanding Algorithmic Decision-Making: Opportunities and Challenges, March 2019’ (PE 624.261), 21 [PE 624.261].

7 See, for instance, Florian Saurwein, Natascha Just and Michael Latzer, ‘Governance of Algorithms: Options and Limitations’ (2015) vol. 17 (6) info 35–49 accessed 21 January 2020.

8 Toolkit, supra note 4.

9 PE 624.261, supra note 6.

10 Walter L. Perry et al., ‘Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations’ (2013) accessed 29 November 2018.

11 Lubor Hruska et al., ‘Maps of the Future, Research Project of the Czech Republic’ (2015) accessed 23 May 2019 [Maps].

12 Don Casey, Phillip Burrell, and Nick Sumner, ‘Decision Support Systems in Policing’ (2018 (4 SCE)) European Law Enforcement Research Bulletin accessed 23 May 2019.

13 James Harrison, ‘Measuring Human Rights: Reflections on the Practice of Human Rights Impact Assessment and Lessons for the Future’ (2010) Warwick School of Law Research Paper 2010/26 accessed 23 May 2019.

14 National Institute of Justice, ‘Overview of Predictive Policing’ (9 June 2014) accessed 23 May 2019 [NIJ].

15 ‘Tracking Chicago Shooting Victims’ Chicago Tribune (16 December 2019) accessed 16 December 2019.

16 Andrew Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (2017), 21.

17 NIJ, supra note 14.

18 US Department of Justice, ‘Investigation of the Ferguson Police Department’ (2015) accessed 23 May 2019 [US DJ].

19 Floyd v. City of New York (2013) 739 F Supp 2d 376.

20 Terry v. Ohio (1968) 392 US 1.

21 Issie Lapowsky, ‘How the LAPD Uses Data to Predict Crime’ (22 May 2018) accessed 23 May 2019.

22 ‘PredPol Predicts Gun Violence’ (2013) accessed 23 May 2019.

23 US Patent No. 8,949,164 (Application filed on 6 September 2012) accessed 23 May 2019.

24 George O. Mohler, ‘Marked Point Process Hotspot Maps for Homicide and Gun Crime Prediction in Chicago’ (2014) 30(3) International Journal of Forecasting, 491–497; ‘Does Predictive Policing Lead to Biased Arrests? Results from a Randomized Controlled Trial’ (2018) 5(1) Statistics and Public Policy 1–6, 10.1080/2330443X.2018.1438940, accessed 23 May 2019 [Mohler].

25 Ismael Benslimane, ‘Étude critique d’un système d’analyse prédictive appliqué à la criminalité: PredPol®’ CorteX Journal accessed 23 May 2019.

26 Mohler, supra note 24.

27 BBC News, ‘Kent Police Stop Using Crime Predicting Software’ (28 November 2018) accessed 23 May 2019.

28 See the problem of algorithmic biases with COMPAS: Jeff Larson et al., ‘How We Analyzed the COMPAS Recidivism Algorithm’, ProPublica (2016), accessed 12 August 2018.

29 P. Jeffrey Brantingham, ‘The Logic of Data Bias and Its Impact on Place-Based Predictive Policing’ (2017) 15(2) Ohio State Journal of Criminal Law 473.

30 For instance, Ali Winston, ‘Palantir Has Secretly Been Using New Orleans to Test Its Predictive Policing Technology’ (27 February 2018) accessed 23 May 2019. However, New Orleans ended its Palantir predictive policing program in 2018, after public opposition to the secret nature of the agreement: Ali Winston, ‘New Orleans Ends Its Palantir Predictive Policing Program’ (15 March 2018) accessed 23 May 2019.

31 Crime Risk Forecasting, US Patent 9,129,219 (8 September 2015) accessed 23 May 2019.

32 Anupam Chander, ‘The Racist Algorithm?’ (2017) 115 Michigan Law Review 1023–1045.

33 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015).

34 Kristian Lum and William Isaac, ‘To Predict and Serve?’ (7 October 2016) 13(5) Significance 14–19, accessed 23 May 2019.

35 Rashida Richardson, Jason Schultz, and Kate Crawford, ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice’ (2019) accessed 15 February 2019.

36 Solon Barocas and Andrew D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671–732; Joshua Kroll et al., ‘Accountable Algorithms’ (2017) 165 U Pa L Rev 633.

37 Solon Barocas and Andrew D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671–732; Alexandra Chouldechova, ‘Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments’ (2016) accessed 12 August 2018.

38 NYCLU, ‘Stop and Frisk Data’ (14 March 2019) accessed 23 May 2019.

39 US DJ, supra note 18.

40 PE 624.261, supra note 6.

41 Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249.

42 David Robinson and Logan Koepke, ‘Stuck in a Pattern: Early Evidence on “Predictive Policing” and Civil Rights’ Upturn (August 2016) accessed 23 May 2019.

43 Brennan Center for Justice at New York University School of Law v. NYPD, Case n. 160541/2016, 22 December 2017 (FOIL request under New York’s Freedom of Information Law). The judge approved the request and granted access to the Palantir Gotham system used by the NYPD.

44 State of Wisconsin v. Loomis, 371 Wis 2d 235, 2016 WI 68, 881 N W 2d 749 (13 July 2016).

45 For example, Ben Dickson, ‘What Is Algorithmic Bias?’ (26 March 2018) accessed 23 May 2019.

46 For example, AINow Institute

47 For example, Vera Eidelman, ‘Secret Algorithms Are Deciding Criminal Trials and We’re Not Even Allowed to Test Their Accuracy’ (ACLU 15 September 2017) accessed 23 May 2019.

48 Margot E. Kaminski, ‘The Right to Explanation, Explained’ (2018) Berkeley Technology Law Journal 34(1).

49 Margot E. Kaminski and Gianclaudio Malgieri, ‘Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations’ (2019), U of Colorado Law Legal Studies Research Paper No. 19–28, available at SSRN.

50 Céline Castets-Renard, ‘Accountability of Algorithms: A European Legal Framework on Automated Decision-Making’ (2019) Fordham Intell. Prop., Media & Ent. Law Journal 30(1).

51 Loi n 78-17 ‘Informatique et Libertés’, enacted on 6 January 1978 and modified by the Law n 2018–493, enacted on 20 June 2018.

52 However, we also have to consider antidiscrimination directives: Directive 2000/43/EC against discrimination on grounds of race and ethnic origin; Directive 2000/78/EC against discrimination at work on grounds of religion or belief, disability, age, or sexual orientation; Directive 2006/54/EC equal treatment for men and women in matters of employment and occupation; Directive 2004/113/EC equal treatment for men and women in the access to and supply of goods and services.

53 The situation is similar in the United States, except for the adoption of NYC Local Law n 2018/049 concerning automated decision systems used by local agencies. In the state of Idaho, Bill n 118, concerning pretrial risk assessment algorithms and the risk to civil rights of automated pretrial tools in criminal justice, was enacted on 4 March 2019.

54 See Luciano Floridi et al., ‘AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds & Machines 689–707.

55 European Commission, ‘Communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Building Trust in Human-Centric Artificial Intelligence’ COM (2019) 168 final.

56 B. Wagner and S. Delacroix, ‘Constructing a Mutually Supportive Interface between Ethics and Regulation’ (14 June 2019).

57 Lilian Edwards and Michael Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’ (2018) IEEE Security & Privacy 16(3) accessed 5 December 2018.

58 Paul B. de Laat, ‘Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?’ (2017) Philosophy & Technology 1–17.

59 European Parliamentary Research Service (EPRS), ‘Panel for the Future of Science and Technology, A Governance Framework of Algorithmic Accountability and Transparency’ April 2019 (PE 624.262) [PE 624.262]. I exclude self-regulation solutions, such as ethics committees, because they may in fact be a way to manage public image and avoid government regulation. See Ben Wagner, Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping? (Amsterdam University Press, 2018); Karen Yeung et al., AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing (Oxford University Press, 2019); Luciano Floridi, ‘Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical’ (2019) Philosophy & Technology 32(2).

60 For instance, Marion Oswald et al., ‘Algorithmic Risk Assessment Policing Models: Lessons from the Durham HART Model and “Experimental” Proportionality’ (2017) Information & Communications Technology Law accessed 23 May 2019.

61 Directive on Automated Decision-Making (2019).

62 Government of Canada, Algorithmic Impact Assessment (8 March 2019) accessed 23 May 2019.

63 PE 624.262, supra note 60.

64 See a similar recommendation in EPRS Study PE 624.262, supra note 60.

7 Law Enforcement and Data-Driven Predictions at the National and EU Level: A Challenge to the Presumption of Innocence and Reasonable Suspicion?

1 See, e.g., H Fenwick (ed), Developments in Counter-Terrorist Measures and Uses of Technology (Routledge 2012). See also, on policing more specifically, National Institute of Justice, Research on the Impact of Technology on Policing Strategy in the 21st Century. Final Report, May 2016, accessed 27 July 2020; J Byrne and G Marx, ‘Technological Innovations in Crime Prevention and Policing. A Review of the Research on Implementation and Impact’ (2011) 20(3) Cahiers Politiestudies 17–40.

2 B Hoogenboom, The Governance of Policing and Security: Ironies, Myths and Paradoxes (Palgrave Macmillan 2010).

3 J Chan, ‘The Technology Game: How Information Technology Is Transforming Police Practice’ (2001) 1 Journal of Criminal Justice 139.

4 D Broeders et al., ‘Big Data and Security Policies: Serving Security, Protecting Freedom’ (2017) WRR-Policy Brief 6.

5 For instance, data acquisition is a kind of data processing architecture for big data, which has been understood as the process of gathering, filtering, and cleaning data before the data are put in a data warehouse or any other storage solution. See K Lyko, M Nitzschke, and A-C Ngonga Ngomo, ‘Big Data Acquisition’ in JM Cavanillas et al. (eds), New Horizons for a Data-Driven Economy. A Roadmap for Usage and Exploitation of Big Data in Europe (Springer 2015).

6 S Brayne, ‘The Criminal Law and Law Enforcement Implications of Big Data’ (2018) 14 Annual Review of Law and Social Science 293.

7 RK Hill, ‘What an Algorithm Is’ (2016) 29 Philosophy and Technology 35–59; TH Cormen et al., Introduction to Algorithms (3rd ed., The MIT Press 2009).

8 K Yeung for the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT), A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility within a Human Rights Framework, Council of Europe study DGI(2019)05, September 2019, accessed 27 July 2020.

9 On the role of algorithms and automated decisions in security governance, as well as numerous concerns associated with the notion of ‘algorithmic regulation’, see L Amoore and R Raley, ‘Securing with Algorithms: Knowledge, Decision, Sovereignty’ (2017) 48(1) Security Dialogue 3; C Aradau and T Blancke, ‘Governing Others: Anomaly and the Algorithmic Subject of Security’ (2018) 3(1) European Journal of International Security 1.

10 See M Oswald et al., ‘Algorithmic Risk Assessment Policing Models: Lessons from the Durham HART Model and “Experimental” Proportionality’ (2018) 27(2) Information & Communications Technology Law 223; P MacFarlane, ‘Why the Police Should Use Machine Learning – But Very Carefully’, The Conversation, 21 August 2019, accessed 27 July 2020; D Lehr and P Ohm, ‘Playing with the Data: What Legal Scholars Should Learn about Machine Learning’ (2017) 51 UCDL Rev 653; ‘Reinventing Society in the Wake of Big Data. A Conversation with Alex “Sandy” Pentland’, The Edge, accessed 27 July 2020.

11 Although crime prevention should be rational and based on the best possible evidence. See BC Welsh and DP Farrington, ‘Evidence-Based Crime Prevention’ in BC Welsh and DP Farrington (eds), Preventing Crime (Springer 2007).

12 See BJ Koops, ‘Technology and the Crime Society. Rethinking Legal Protection’ (2009) 1(1) Law, Innovation and Technology 93.

13 M Leese, ‘The New Profiling’ (2014) 45(5) Security Dialogue 494.

14 For an in-depth study, see GG Fuster, Artificial Intelligence and Law Enforcement. Impact on Fundamental Rights. Study Requested by the LIBE Committee. Policy Department for Citizens’ Rights and Constitutional Affairs, PE 656.295, July 2020.

15 A Završnik, ‘Criminal Justice, Artificial Intelligence Systems, and Human Rights’ (2020) 20 ERA Forum 567; P Hayes et al., ‘Algorithms and Values in Justice and Security’ (2020) 35 Artificial Intelligence and Society 533.

16 C Kuner, F Cate, O Lynskey, C Millard, N Ni Loideain, and D Svantesson, ‘An Unstoppable Force and an Immoveable Object? EU Data Protection Law and National Security’ (2018) 8 International Data Privacy Law 1; O Lynskey, ‘Criminal Justice Profiling and EU Data Protection Law’ (2019) 15 International Journal of Law in Context 162; R Bellanova, ‘Digital, Politics and Algorithms. Governing Digital Data through the Lens of Data Protection’ (2017) 20(3) European Journal of Social Theory 329; J Hernandez Ramos et al., ‘Towards a Data-Driven Society: A Technological Perspective on the Development of Cybersecurity and Data Protection Policies’ (2020) 18(1) IEEE Security and Privacy 28.

17 F Doshi-Velez and M Kortz, ‘Accountability of AI Under the Law: The Role of Explanation’ (2017) Berkman Klein Center Working Group on Explanation and the Law, Berkman Klein Center for Internet & Society working paper, accessed 25 August 2020.

18 A Braga et al., ‘Moving the Work of Criminal Investigators Towards Crime Control’ in New Perspectives in Policing (Harvard Kennedy School 2011); The European Commission for the Efficiency of Justice (CEPEJ, Council of Europe), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, adopted at the 31st plenary meeting of the CEPEJ (Strasbourg, 3–4 December 2018), accessed 20 July 2020; Council of Europe’s MIS-NET, ‘Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications’, accessed 2 August 2020.

19 The fundamental right to effective judicial protection has been one of the pillars of European integration, codified by the Treaty of Lisbon in Article 47 of the EU Charter of Fundamental Rights and Article 19(1) TEU. The CJEU has insisted on access for individuals to domestic judicial review of any acts that may affect their interests. The CJEU has thus sought to ensure not only the subjective legal protection of these individuals but also the objective legality of domestic administrative action implementing EU law, as well as unity and consistency in the application of EU law across different jurisdictions. However, the specific requirements stemming from the right to effective judicial protection are not always clear. Effective judicial protection is largely a judge-made concept: there has been no comprehensive legislative harmonisation of the domestic procedural provisions applied to implement EU law. See M Safjan and D Dusterhaus, ‘A Union of Effective Judicial Protection: Addressing a Multi-level Challenge through the Lens of Article 47 CFREU’ (2014) 33 Yearbook of European Law 3; R Barents, ‘EU Procedural Law and Effective Judicial Protection’ (2014) 51 Common Market Law Review 1437, 1445 ff.

20 S Lohr, ‘The Promise and Peril of the “Data-Driven Society”’, New York Times, 25 February 2013, accessed 27 July 2020.

21 AG Ferguson, ‘Policing Predictive Policing’ (2017) 94(5) Washington University Law Review 1115, 1128–1130.

22 C Cocq and F Galli, ‘The Catalysing Effect of Serious Crime on the Use of Surveillance Technologies for Prevention and Investigation Purposes’ (2013) 4(3) NJECL 256.

23 O Gross, ‘Chaos and Rules’ (2003) 112 Yale Law Journal 1011, 1090; D Dyzenhaus, ‘The Permanence of the Temporary’ in RJ Daniels et al. (eds), The Security of Freedom (University of Toronto Press 2001).

24 For example, A Bauer and F Freynet, Vidéosurveillance et vidéoprotection (PUF 2008); EFUS, Citizens, Cities and Video Surveillance, towards a Democratic and Responsible Use of CCTV (EFUS 2010), 183–184; Vidéo-surveillance Infos, ‘Dispositif de sécurité au stade de France: ergonomie et évolutivité’ (14 October 2011).

25 See, e.g., MFH Hirsch Ballin, Anticipative Criminal Investigations. Theory and Counter-terrorism Practice in the Netherlands and the United States (TMC Asser Press 2012).

26 R Van Brakel and P De Hert, ‘Policing, Surveillance and Law in a Pre-crime Society: Understanding the Consequences of Technology-Based Strategies’ (2011) 3(20) Cahiers Politiestudies Jaargang 163.

27 G González Fuster and A Scherrer, ‘Big Data and Smart Devices and Their Impact on Privacy’, Study for the European Parliament, Directorate General for Internal Policies, Policy Department C: Citizens’ Rights and Constitutional Affairs, Civil Liberties, Justice and Home Affairs, PE 536.455, Sept 2015.

28 ‘Datafication’ indicates the increasing reliance on data-driven technologies.

29 The Internet of Things is the interconnection via the Internet of computing devices embedded in everyday objects, enabling them to send and receive data. See J Davies and C Fortuna (eds), The Internet of Things: From Data to Insight (Wiley 2020).

30 S Lohr (n 20).

31 See J Ballaschk, Interoperability of Intelligence Networks in the European Union: An Analysis of the Policy of Interoperability in the EU’s Area of Freedom, Security and Justice and Its Compatibility with the Right to Data Protection, PhD thesis, University of Copenhagen 2015 (still unpublished); F Galli, ‘Interoperable Databases: New Cooperation Dynamics in the EU AFSJ?’ in D Curtin and FB Bastos (eds), Special Issue, (2020) 26(1) European Public Law 109–130.

32 KF Aas et al. (eds), Technologies of Insecurity. The Surveillance of Everyday Life (Routledge 2009); see P De Hert and S Gutwirth, ‘Interoperability of Police Databases within the EU: An Accountable Political Choice’ (2006) 20(1–2) International Review of Law, Computers and Technology 21–35; V Mitsilegas, ‘The Borders Paradox’ in H Lindahl (ed), A Right to Inclusion and Exclusion? (Hart 2009), at 56.

33 See J Vervaele, ‘Terrorism and Information Sharing between the Intelligence and Law Enforcement Communities in the US and the Netherlands: Emergency Criminal Law?’ (2005) 1(1) Utrecht Law Review 1.

34 C Cocq and F Galli (n 22).

35 Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa, OJ L 135/27, 22.5.2019; Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration, OJ L 135/85, 22.5.2019.

36 In May 2004, the European Commission issued a Communication to the Council of Europe and the European Parliament aiming at enhancing access to information by law enforcement agencies.

37 M Ananny and K Crawford, ‘Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability’ (2018) 20 New Media and Society 973; Eleni Kosta and Magda Brewczyńska, ‘Government Access to User Data’ in RM Ballardini, P Kuoppamäki, and O Pitkänen (eds), Regulating Industrial Internet through IPR, Data Protection and Competition Law (Kluwer Law Intl 2019), ch 13.

38 See FH Cate and JX Dempsey (eds), Bulk Collection: Systematic Government Access to Private-Sector Data (Oxford University Press 2017).

39 V Mitsilegas, ‘The Transformation of Privacy in an Era of Pre-emptive Surveillance’ (2015) 20 Tilburg Law Review 35–57; HE De Busser, ‘Privatisation of Information and the Data Protection Reform’ in S Gutwirth et al. (eds), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges (Springer 2013).

40 P Hustinx, ‘EU Data Protection Law: The Review of Directive 95/46/EC and the Proposed General Data Protection Regulation’ in M Cremona (ed), New Technologies and EU Law (Oxford University Press 2017).

41 See Recital no. 19 and art. 2(d), GDPR.

42 Interesting examples are the data sets of the EU-US Passenger Name Records and Terrorism Financing Programs. See R Bellanova and M De Goede, ‘The Algorithmic Regulation of Security: An Infrastructural Perspective’ (2020) Regulation and Governance.

43 AG Ferguson, The Rise of Big Data Policing (NYU Press 2017).

44 K Brennan-Marquez, ‘Big Data Policing and the Redistribution of Anxiety’ (2018) 15 Ohio State Journal of Criminal Law 487; J Byrne and D Rebovich, The New Technology of Crime, Law and Social Control (Criminal Justice Press 2007).

45 S Leman-Langlois, Technocrime: Technology, Crime, and Social Control (Willan Publishing 2008).

46 U Beck, Risk Society: Towards a New Modernity (Sage 1992), 21.

47 O Gandy, Race and Cumulative Disadvantage: Engaging the Actuarial Assumption, The B. Aubrey Fisher Memorial Lecture, University of Utah, 18 October 2007.

48 MM Feeley and J Simon, ‘The New Penology’ (1992) 30(4) Criminology 449.

49 D Garland, The Culture of Control (Oxford University Press 2001).

50 Koops (n 12).

51 A Pentland, ‘The Data-Driven Society’, October 2013, 79, accessed 27 July 2020.

52 H-B Kang, ‘Prediction of Crime Occurrence from Multi-modal Data Using Deep Learning’ (2017) 12(4) PLoS ONE.

53 M Hildebrandt, ‘Criminal Law and Technology in a Data-Driven Society’ in Oxford Handbook of Criminal Law (Oxford University Press 2018).

54 AG Ferguson, The Rise of Big Data Policing. Surveillance, Race, and the Future of Law Enforcement (NYU Press 2017).

55 K Lum and W Isaac, ‘To Predict and Serve?’ (2016) 13(5) Significance 14.

56 AG Ferguson, ‘Big Data and Predictive Reasonable Suspicion’ (2015) 163(2) University of Pennsylvania Law Review 327.

57 M Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Elgar 2015).

58 Yet individuals also make discriminatory choices, and there is no evidence that artificial intelligence systems would necessarily do worse.

59 P Nemitz, ‘Constitutional Democracy and Technology in the Age of AI’ (2018) 376 Philosophical Transactions of the Royal Society.

60 See F Galli, The Law on Terrorism. The United Kingdom, France and Italy Compared (Bruylant 2015).

61 See K Sugman Stubbs and F Galli, ‘Inchoate Offences. The Sanctioning of an Act Prior to and Irrespective of the Commission of Any Harm’ in F Galli and A Weyembergh (eds), EU Counter-terrorism Offences (Ed de l’Université Libre de Bruxelles 2011), 291. Child and Hunt concisely point out the lack of justification for the existence of the special part inchoate offences. See J Child and A Hunt, ‘Risk, Pre-emption, and the Limits of the Criminal Law’ in K Doolin et al. (eds), Whose Criminal Justice? State or Community? (Waterside Press 2011), 51.

62 Proactive/anticipative criminal investigations have a primary preventive function, combined with evidence gathering for the purpose of an eventual prosecution. See MFH Hirsch Ballin (n 25).

63 Ferguson (n 56).

64 Walter L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations (Rand 2013).

65 Ferguson (n 56).

66 L Zedner, ‘Fixing the Future?’ in S Bronnit et al. (eds), Regulating Deviance (Hart Publishing 2008).

67 L Zedner, ‘Pre-crime and Post-criminology?’ (2007) 11 Theoretical Criminology 261.

70 See E Fisher, ‘Precaution, Precaution Everywhere’ (2002) 9 Maastricht Journal of European and Comparative Law 7. The analogy is made by L Zedner, ‘Preventive Justice or Pre-punishment?’ (2007) 60 CLP 174, 201.

71 L Amoore and M de Goede (eds), Risk and the War on Terror (Routledge 2008); L Amoore, ‘Risk before Justice: When the Law Contests Its Own Suspension’ (2008) 21(4) Leiden Journal of International Law 847; C Aradau and R van Munster, ‘Governing Terrorism through Risk: Taking Precautions, (Un)knowing the Future’ (2007) 13(1) European Journal of International Relations 89; U Beck, ‘The Terrorist Threat: World Risk Society Revisited’ (2002) 19(4) Theory, Culture and Society 39.

72 A Ashworth and L Zedner, ‘Prevention and Criminalization: Justifications and Limits’ (2012) 15 New Crim LR 542. By contrast, with reference to automated decision-making, see also DK Citron and F Pasquale, ‘The Scored Society: Due Process for Automated Predictions’ (2014) 89 Washington Law Review 1.

73 See M De Goede, Speculative Security (University of Minnesota Press 2012).

74 F Palmiotto, ‘Algorithmic Opacity as a Challenge to the Rights of the Defense’, Robotic & AI Law Society, blog post, 6 September 2019.

75 C Anesi et al., ‘Exodus, gli affari dietro il malware di stato che spiava gli italiani’, Wired, 18 November 2019, accessed 27 July 2020.

76 A Sachoulidou, ‘The Transformation of Criminal Law in the Big Data Era: Rethinking Suspects’ and Defendants’ Rights using the Example of the Right to Be Presumed Innocent’, EUI Working Paper, MWP, RSN 2019/35.

77 D Spiegelhalter, ‘Should We Trust Algorithms?’, Harvard Data Science Review, accessed 27 July 2020.

78 PW Grimm, ‘Challenges Facing Judges Regarding Expert Evidence in Criminal Cases’ (2018) 86(4) Fordham Law Review 1601.

79 N Richards and H King, ‘Three Paradoxes of Big Data’ (2013) 66 Stanford Law Review Online 41.

80 According to Palmiotto, there is a risk of transforming the criminal justice system into a ‘system of machinery’ in which individuals pursue only what machines are as yet incapable of pursuing. See F Palmiotto, ‘The Blackbox on Trial. The Impact of Algorithmic Opacity on Fair Trial Right in Criminal Proceedings’ in M Ebers and M Cantero-Gamito (eds), Algorithmic Governance and Governance of Algorithms (Springer 2020).

81 See F Pasquale, The Black Box Society (Harvard University Press 2015); S Zuboff, The Age of Surveillance Capitalism (Public Affairs 2019).