
Paths to Digital Justice: Judicial Robots, Algorithmic Decision-Making, and Due Process

Published online by Cambridge University Press:  15 September 2020

Pedro RUBIM BORGES FORTES*
Affiliation:
Federal University of Rio de Janeiro

Abstract

The paths to digital justice concern the challenge facing contemporary digital societies of automating decision-making through software, algorithms, and information technology without losing its human quality and the guarantees of due process. In this context, this article reflects on the possibility of establishing judicial robots in substitution for human judges, by examining whether artificial intelligence and algorithms may support judicial decision-making independently and without human supervision. The point of departure for this analysis is the experience of criminal justice systems with software that assesses the likelihood of recidivism of criminal defendants. Algorithmic decision-making may improve the public good in support of judicial decision-making, but an analysis of current technology and of our standards of due process of law recommends caution before concluding that robots may replace human judges and satisfy our expectations of explainability and fairness in adjudication.

Type
Law and Artificial Intelligence in Asia
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2020

1. INTRODUCTION

This article explores the theme of the paths to digital justice, developing a line of research pioneered by Hazel Genn, with a particular focus on the challenges of contemporary societies and the potential demand for automated decision-making through judicial robots.Footnote 1 On the one hand, information technology opens new avenues for conflict resolution, especially through the increased capacity to process massive amounts of information and to provide automated responses to a large number of claims that would otherwise be too costly to pursue and would probably be left unresolved. On the other hand, artificial intelligence provides innovative opportunities for case management within the setting of complex litigation, as software may be programmed and trained to identify identical, similar, and analogous cases in a way that reduces the volume of litigation and provides speedy decisions.Footnote 2

The paths to digital justice provide various elements for our reflection, especially for our institutional imagination about the possibilities brought by information technology, big data, and algorithmic decision-making. Not surprisingly, contemporary scholarship invites us to speculate about and consider the establishment of judicial robots—that is, artificial intelligence producing the decisions currently made by human judges, such as judgments, sentences, and interlocutory decisions, among others.Footnote 3 In this context, one research question that emerges is the following: How may algorithms support juridical decision-making? Inevitably, this reflection also invites us to ask whether algorithmic decision-making could eventually substitute for judicial decision-making. Nowadays, we already imagine and speculate that algorithms may eventually replace human beings in deciding juridical controversies.

This is more than just a science-fiction story: criminal judges already use software to evaluate the potential recidivism of criminal defendants. The Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) is a risk-assessment tool widely used in the US that has supported judicial decisions in large numbers of concrete cases.Footnote 4 The algorithm was developed to assess potential recidivism and to support judicial decision-making on the imprisonment or release of a criminal defendant based on information technology.Footnote 5 Additionally, contemporary societies have developed various technological tools that are functionally equivalent to judicial decisions and may substitute for judges not by placing robots in robes, but by providing a low-cost, speedy, and informal alternative through online dispute resolution, for instance.Footnote 6 In this case, Internet users already seek new pathways to digital justice through online platforms that may reduce the demand for the traditional justice system. Instead of filing a claim at a small-claims court, individuals simply file their complaints on these digital platforms, which may eventually prove more efficient than traditional courts due to lower costs, speedier procedures, and technologically informed outcomes.Footnote 7

This essay explores the potential of judicial robots and algorithmic decision-making, based on these recent experimental pathways to digital justice and the quest for due process of law. Algorithms are powerful tools and it is difficult to imagine our future without them,Footnote 8 but, if crucial decisions “were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans.”Footnote 9 Therefore, their incorporation into decision-making processes requires careful analysis and algorithmic auditing for control of procedure and fairness.Footnote 10 Instead of demonizing algorithmic decision-making and proposing its ban, this essay investigates its potential for improving the public good and for satisfying our expectations of explainability and fairness in adjudication. Importantly, this essay addresses themes that are still under development; it is exploratory, navigating somewhat uncharted waters and raising more questions than it answers.

In addition to this introduction, Section 2 investigates COMPAS, explaining the role of this new technology for risk assessment, the critique of discriminatory bias made by ProPublica, and the defence of the precision and objectivity of the tool. Section 3 discusses the important judicial precedent of State v. Loomis Footnote 11 and points of concern related to due process of law when algorithms support decision-making, regarding asymmetry of information, individuation, and fairness in adjudication. Section 4 explores the possibilities and limitations of the pathways to digital justice: the potential for application to specific repetitive tasks, the strong limitations of explainability, and the essential role of institutions in setting the relevant rules of the game. Section 5 offers some inconclusive remarks.

2. JUDICIAL ROBOTS? LESSONS FROM COMPAS

Our point of departure for imagining judicial robots is inevitably the experience of COMPAS in the contemporary US. Its developers defined COMPAS as “an automated decision-support software package that integrates risk and needs assessment with several other domains, including sentencing decisions, treatment and case management, and recidivism outcomes.”Footnote 12 Presented as fourth-generation (4G) correctional assessment technology, COMPAS incorporated insights from a series of different explanatory theories of criminality, such as “low self-control theory, strain theory or social exclusion, social control theory (bonding), routine activities-opportunity theory, sub-cultural or social learning theories, and a strengths or good lives perspective.”Footnote 13 As part of this comprehensive approach, the software requires information related to these theoretically relevant factors and eight criminogenic predictive factors, including professional history and educational skills; safe housing and financial conditions; and emotional, social, and familial support.Footnote 14 The information is gathered through the databases of the criminal justice system, by integrating sentencing decisions, institutional processing, case management, treatments, and outcomes as support for correctional authorities.Footnote 15

Interestingly, COMPAS is a prodigious example of the mathematical turn in legal analysis,Footnote 16 as the probability of recidivism is measured through a risk scale developed as part of a regression model trained to predict new offences in a probation sample.Footnote 17 The system calculates a recidivism-risk decile score, by translating into numbers information related to criminal involvement, non-compliance, violence, criminal association, substance abuse, financial difficulties, vocational or educational problems, family criminality, social environment, leisure, residential instability, social isolation, criminal attitudes, and criminal personality.Footnote 18 Importantly, one of the original goals of the software developers consisted precisely in reducing subjectivity, inconsistency, bias, stereotyping, and vulnerability.Footnote 19 In this context, COMPAS emerged as a technological tool with strong internal consistency and predictive validity in comparison with other similar risk-predictive instruments.Footnote 20
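To make the mechanics described above concrete, the following minimal sketch shows how a regression model trained on historical records can be converted into a 1–10 decile risk score. It is not the proprietary COMPAS formula; the features, data, and thresholds are entirely hypothetical.

```python
# Minimal illustration of a regression-based recidivism risk decile score.
# This is NOT the proprietary COMPAS formula; features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: rows are past defendants, columns are
# criminogenic factors (e.g. prior offences, age at first arrest, unemployment).
X_train = rng.normal(size=(1000, 3))
y_train = rng.integers(0, 2, size=1000)          # 1 = reoffended within two years

model = LogisticRegression().fit(X_train, y_train)

# Decile cut-points are taken from the distribution of scores in the training sample.
train_probs = model.predict_proba(X_train)[:, 1]
cutpoints = np.quantile(train_probs, np.linspace(0.1, 0.9, 9))

def decile_score(features):
    """Map a defendant's predicted probability onto a 1-10 decile scale."""
    p = model.predict_proba(np.asarray(features).reshape(1, -1))[:, 1][0]
    return int(np.searchsorted(cutpoints, p) + 1)

print(decile_score([1.2, -0.3, 0.8]))            # e.g. a score in the "high risk" band
```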

Therefore, we may discuss what we stand to gain and lose in this transformation of traditional decision-making into algorithmically supported decision-making. Initially, there is a problem concerning how human actors deal with these new technologies. Our perception of them is strongly influenced by our contact with anthropomorphic robots in popular culture.Footnote 21 Because of images of the personification of robots in science fiction, our imagination may assume that new technologies are analogous to human decision-making processes. In this sense, unsurprisingly, our society may consider that algorithms reason like the human mind and are ultimately superior because of their stronger processing power. However, contemporary algorithms are designed and programmed to pursue specific tasks and are not capable of general intelligence.Footnote 22 In other words, artificial intelligence remains task-oriented, and technological tools are designed for their particular and specific objects rather than to think, reflect, and decide in general.Footnote 23 Therefore, the typical mistake of imagining that artificial intelligence is simply a superior manifestation of human intelligence should be avoided. Not only should we not blindly trust algorithmic decisions, but we should also critically assess and evaluate data processing, systemic calibration, and legitimacy regarding transparency/opacity and the justification of decisions.

COMPAS was also recently challenged in the court of public opinion, when ProPublica published a critical piece entitled “Machine Bias,” in which it accused COMPAS of being “biased against blacks.”Footnote 24 According to ProPublica, there is empirical evidence that the algorithm behind COMPAS leads to significant racial disparities by making mistakes with Black and White defendants in very different ways.Footnote 25 First, ProPublica provides anecdotal evidence on how Black defendants were considered to pose a high risk of recidivism in comparison to White defendants, even when their personal record and the characteristics of the case did not seem to indicate dangerousness or a clear probability of committing another crime. For instance, an 18-year-old Black woman who rode a small child’s bicycle and was arrested for theft was considered high-risk—Brisha Borden scored 8 on the risk-assessment scale—while a 41-year-old White man previously convicted of armed robbery who was also arrested for petty theft was considered low-risk—Vernon Prater scored 3 on the risk-assessment scale.Footnote 26 Second, ProPublica considers these risk-assessment tools remarkably unreliable in forecasting future criminal behaviour. In concrete terms, only 20% of the people predicted to commit violent crimes actually did so.Footnote 27 In terms of general crimes, the algorithm was accurate in 61% of the cases, but ProPublica criticized this percentage as being just “somewhat more accurate than a coin flip.”Footnote 28 Third, in terms of racial disparities, the mathematical formula was considered biased because it produced markedly more false positives for Black defendants and false negatives for White defendants: Black defendants were wrongly labelled as future criminals at almost twice the rate of White defendants, and White defendants were mislabelled as low-risk more often than Black defendants.Footnote 29 Fourth, the company responsible for the software does not disclose the mathematical formula and the calculations used for the risk scores, so defendants and the public are unable to understand the reasons for the disparities.Footnote 30 Therefore, only the results are shared with a defendant’s attorney, and defendants rarely have an opportunity to challenge their assessments.Footnote 31
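ProPublica’s third criticism turns on group-wise error rates rather than overall accuracy. The short sketch below, using invented toy data rather than ProPublica’s dataset, shows how false-positive and false-negative rates are computed separately for each group, which is the measurement underlying the claim of “machine bias.”

```python
# Group-wise error rates of a binary risk classifier (invented toy data,
# not ProPublica's dataset): how "machine bias" is measured.
import numpy as np

# 1 = labelled high-risk; 1 = actually reoffended; one entry per defendant.
group      = np.array(["black", "black", "black", "white", "white", "white"])
predicted  = np.array([1, 1, 0, 0, 0, 1])
reoffended = np.array([1, 0, 0, 0, 1, 1])

for g in ["black", "white"]:
    m = group == g
    fp = np.sum((predicted[m] == 1) & (reoffended[m] == 0))   # labelled high-risk, did not reoffend
    fn = np.sum((predicted[m] == 0) & (reoffended[m] == 1))   # labelled low-risk, but reoffended
    fpr = fp / max(np.sum(reoffended[m] == 0), 1)
    fnr = fn / max(np.sum(reoffended[m] == 1), 1)
    print(f"{g}: false-positive rate={fpr:.2f}, false-negative rate={fnr:.2f}")
```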

In response to ProPublica, the software developers wrote an article in which they criticized ProPublica’s piece and strongly rejected the conclusion that the software discriminated against Black defendants.Footnote 32 In their review of the empirical evidence used by ProPublica, relating to a sample of pre-trial defendants in Broward County, Florida, the software developers pointed to several statistical and technical errors, especially the failure to take into account the different base rates of recidivism for Blacks and Whites.Footnote 33 As Hannah Fry puts it, their explanation reveals that the algorithm leads to biased outcomes because reality is biased:

unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at prediction across the board and makes false positives and false negative mistakes at the same rate for every group of defendants.Footnote 34

In other words, the reason for the racial disparity comes from the fact that rates of arrest are not equivalent across racial groups and the algorithm simply reproduces the predictable consequences of a deeply unbalanced society: “until all groups are arrested at the same rate, this kind of bias is a mathematical certainty.”Footnote 35
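Fry’s point can be stated more formally. Using the standard confusion-matrix quantities, and offered here only as a sketch of the underlying arithmetic rather than anything drawn from the COMPAS documentation, the following relation shows why equal predictive accuracy across groups with different base rates forces unequal error rates:

```latex
% Let p denote a group's base rate of recidivism. From the confusion-matrix
% definitions PPV = TP/(TP+FP), TPR = TP/(TP+FN), FPR = FP/(FP+TN):
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\mathrm{TPR}.
\]
```

If two groups share the same PPV (equally accurate predictions of reoffending) but have different base rates p, this identity cannot hold with identical FPR and TPR for both groups, so at least one error rate, the false-positive rate or the false-negative rate (1 − TPR), must differ between them.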

Another important argument in defence of the risk-assessment tool is that COMPAS consists simply of software to support judicial decision-making regarding the probabilities of reoffending and the risks of recidivism, and was not designed “to make absolute predictions about success or failure.”Footnote 36 This is an important point, because ProPublica’s critique seems to suggest that COMPAS should emulate the accurate predictions of magical oracles, as in the film Minority Report, for instance.Footnote 37 These risk-assessment tools follow a mathematical technique developed by Ernest Burgess, a professor at the University of Chicago who in 1928 built a tool to measure the probability of criminal behaviour that proved superior to human intuition.Footnote 38 Nowadays, the best algorithms use the technique of random forests based on decision trees, but predictions are based on patterns from data and are often only marginally more accurate than random guessing.Footnote 39 In the end, COMPAS should not be compared to magical oracles or mechanisms for perfect prediction, but to the concrete alternative of human judgment without the support of this risk-assessment tool. In this context, COMPAS may have two advantages over this alternative: the consistency of always giving exactly the same answer for the same set of circumstances; and the efficiency of processing the data better and making better predictions.Footnote 40
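For readers unfamiliar with the technique mentioned above, the sketch below shows what a random forest of decision trees looks like in code. It is a generic illustration trained on synthetic data, not the model of any risk-assessment vendor; the point is simply that such a model is an ensemble of many simple trees whose votes are combined into a probability.

```python
# Generic random-forest classifier on synthetic data -- an illustration of the
# technique mentioned in the text, not any vendor's actual risk model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))                 # five hypothetical predictive factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# 200 decision trees, each trained on a bootstrap sample of the data;
# the forest's prediction is the majority vote (or averaged probability).
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("held-out accuracy:", round(forest.score(X_te, y_te), 2))
print("probability of reoffending for one hypothetical defendant:",
      round(forest.predict_proba(X_te[:1])[0, 1], 2))
```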

This debate is relevant for our reflection on the capacity of algorithms to support judicial decision-making. A related debate involves algorithmic decision-making and due process of law. One point of concern related to due process is the idealization of algorithms as either god-like or devil-like artefacts; a realistic perspective on these technological tools is necessary. Likewise, algorithmic decision-making has famously been labelled a “black box,” and we should consider the capacity for justification, explanation, and transparency of these complex processes. Moreover, due process of law also implies a discussion of the potential of general artificial intelligence and the existence of judicial robots as substitutes for human judges. These points are discussed in the next section.

3. ALGORITHMIC DECISION-MAKING AND DUE PROCESS OF LAW

Not only was COMPAS criticized in public opinion, but it was also challenged in court. In State v. Loomis, Eric Loomis challenged the state of Wisconsin’s use of proprietary, closed-source risk-assessment software as part of his sentencing to six years in prison, alleging that it violated his right to due process of law.Footnote 41 Basically, Loomis presented three arguments against the use of COMPAS during sentencing:

(1) it violates a defendant’s right to be sentenced based upon accurate information, in part because the proprietary nature of COMPAS prevents him from assessing its accuracy; (2) it violates a defendant’s right to an individualized sentence; and (3) it improperly uses gendered assessments in sentencing.Footnote 42

This judicial challenge echoed a concern voiced by then-Attorney General Eric Holder in his speech at the 57th Annual Meeting of the National Association of Criminal Defense Lawyers in 2014: although he acknowledged the best intentions of the programmers developing these risk-assessment algorithms, he warned that such tools could undermine the quest for individualized and equal justice.Footnote 43 In his own words,

By basing sentencing decisions on static factors and immutable characteristics—like the defendant’s education level, socioeconomic background, or neighborhood—they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.Footnote 44

Importantly, these algorithms were originally developed for decisions on the probation and release of criminal defendants, and the extrapolation of algorithmic decision-making into sentencing required a much more careful process.Footnote 45

In its judgment, the State Supreme Court of Wisconsin decided that the use of COMPAS in sentencing did not violate due process of law, namely the defendant’s rights to an individualized sentence, to be sentenced on the basis of accurate information, and to impartiality (the absence of discriminatory bias).Footnote 46 According to the court, there was no violation of due process of law because the defendant had access to the COMPAS score and the respective report, and had an opportunity to refute, supplement, and explain the COMPAS risk-assessment score.Footnote 47 Some criticized the court for failing to account for the fact that this software was developed by Northpointe—a for-profit company with a multimillion-dollar contract with the state of Wisconsin and a “biased party that cannot be relied upon to determine the accuracy of the risk assessment score.”Footnote 48 According to this view, the company has a strong conflict of interest and refuses to explain the value given to and the breakdown of each factor, hiding the details of the algorithm by alleging that it is proprietary and that the secrecy of the code is a core part of its business.Footnote 49 Therefore, access to the score and the respective report would be insufficient for the protection of the defendant’s due-process rights, as access to the source code would be a necessary means to investigate any potential misinformation or miscalculation of risk-assessment scores.Footnote 50

Even examination of the algorithmic source code may not be sufficient for the constitutional analysis of the judicial use of the risk-assessment tool. Lawrence Lessig popularized the notion that code is law and that we should legally examine the normativity embedded in algorithms and the commands derived from the mathematical formulas behind software.Footnote 51 Nowadays, however, a fixation on the unconstitutionality of a source code may reflect a misunderstanding of how contemporary algorithms function, as a normative analysis of the code “is unlikely to reveal any explicit discrimination.”Footnote 52 With the advent of a new generation of artificial intelligence and machine learning in which algorithmic decision-making depends on the training data,Footnote 53 an examination of fairness depends on the inputs given to the algorithm, and a criminal defendant “should be asking to see the data used to train the algorithm and the weights assigned to each input factor” instead of only the source code.Footnote 54 In this context, for instance, the racial discrimination attributed to COMPAS by ProPublica could turn out to be the result of geographical discrimination or of an implicit and unintentional bias arising from the use of a ZIP code—which may be a proxy for race, especially in racially segregated areas.Footnote 55
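The proxy problem can also be illustrated with a short, hypothetical check: even when a protected attribute such as race is excluded from the inputs, a feature like the ZIP code may carry much of the same information. The sketch below, with invented data and variable names, simply measures how well the supposedly neutral feature predicts the protected attribute.

```python
# Hypothetical proxy check: does an apparently neutral feature (ZIP code)
# predict a protected attribute (race) that was excluded from the model?
# Invented data; this is an illustration of the idea, not an audit of COMPAS.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000

# In a segregated city, neighbourhood composition correlates with race.
zip_code = rng.integers(0, 20, size=n)                    # 20 hypothetical ZIP areas
p_black = np.where(zip_code < 8, 0.8, 0.2)                # composition varies by area
race_black = rng.random(n) < p_black

# Predict the protected attribute from the "neutral" feature alone.
X = np.eye(20)[zip_code]                                  # one-hot encoded ZIP codes
auc = cross_val_score(LogisticRegression(max_iter=1000), X, race_black,
                      cv=5, scoring="roc_auc").mean()
print(f"ZIP code predicts race with ROC-AUC of about {auc:.2f}")
# A score well above 0.5 signals a proxy: excluding race from the inputs
# does not prevent the model from reconstructing it indirectly.
```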

The State Supreme Court of Wisconsin did not strike down the use of COMPAS in sentencing, but accepted that judicial application of these algorithmic risk-assessment tools may be problematic and issued warning labels to other judges with the following cautionary notes:

(1) the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighted or how risk scores are to be determined; (2) risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed; (3) some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism; (4) risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations.Footnote 56

These procedural safeguards to inform and alert judges are, however, normally ineffective means of transforming judicial behaviour regarding these technological tools, because warning labels ignore judges’ inability to evaluate the tools, the weight of the concerns behind the criticisms, and the professional pressures to defer to COMPAS when sentencing criminal defendants.Footnote 57

Even if the court’s opinion indicates prudence about the enthusiasm for algorithmic risk assessment in sentencing, the problems are not limited simply to questions that may be answered by careful judicial reflection.Footnote 58 Without adequate information on algorithmic processing, judges are unable to properly calibrate their interpretations of COMPAS and to modulate their consideration of risk-assessment tools.Footnote 59 In practical terms, defying algorithmic recommendations may be challenging and unusual for individual judges, especially because of the role that “heuristics” and “anchoring” play in supporting judicial decision-making.Footnote 60 Even if supporters of risk-assessment tools claim that these evaluations improve the quality of sentencing by making it more transparent and rational, these warning labels indicate the potential problems of algorithmic-based sentencing and suggest “considerable caution in assessing the qualitative value of these new technologies.”Footnote 61

Particularly in the case of COMPAS, we may also be dealing with a special case of the “black box” effect—that is, a case in which legal rules and/or judicial decisions keep the algorithmic process secret—a legal black box—and in which the code and/or the type of artificial intelligence may keep the algorithmic process unknown even to the software developers—a technical black box.Footnote 62 The opacity of the COMPAS algorithm comes from the proprietary character of legally protected source code that remains unknown to the defendant, her defence attorney, and the criminal judge.Footnote 63 Additionally, even if the court demanded transparency and the publication of the algorithmic formula as open source to everyone as a prerequisite for the use of risk-assessment tools in criminal sentencing, there is the possibility that decisional rules would emerge in ways that no one—not even the software developers—is able to explain in terms of why and how certain algorithmic decisions are made.Footnote 64 For instance, the machine-learning algorithms of an artificial neural network (ANN) learn through a complex layered structure, and their decisional rules are not programmed a priori and are usually unintelligible to humans.Footnote 65 Even if we may imagine that these algorithms are less biased than human judges, machine-learning algorithms operate according to the data used in their training and reproduce the discrimination present in input data representative of our biased world.Footnote 66 A prodigious example of this problem comes from Microsoft’s bot Tay—launched on Twitter to behave like a regular young woman—which learned to express obscene vulgarities and hateful insults against minorities in less than one day.Footnote 67 Therefore, more than just being facially neutral, algorithms may need to be constantly retrained and affirmatively corrected against biases, so that they incorporate equal opportunity in their design and do not align with unfair societal tendencies.Footnote 68 In the case of COMPAS, the parties could investigate whether the algorithm “has affirmatively been trained against the racism of the world.”Footnote 69 Without this sort of “algorithmic affirmative action,” algorithms may arguably be more biased than human judges.Footnote 70

Likewise, COMPAS should be evaluated according to the same standards expected from other actors in the criminal justice system. One of the complexities of the mathematical turn in legal analysis is the magic spell of the translation of words into numbers. Consequently, we fail to critically assess the normativity embedded in algorithmic commands because of a belief in the power of science, technology, and the objectivity of mathematical formulas.Footnote 71 However, criminal sentencing is not an easy task and may not be reduced to the output of a risk-assessment tool.Footnote 72 If we analogize COMPAS with an expert witness, the expected standard in the criminal justice system involves cross-examination. In this context, a defendant should have an equivalent right to interrogate the algorithm, especially due to the potential discrimination hidden in the data.Footnote 73 Here, one relevant safeguard for criminal defendants could be to demand transparency and publicity of the algorithmic process, so that defendants and their attorneys may challenge algorithmic decision-making in court.Footnote 74 Additionally, the General Data Protection Regulation (GDPR) already protects individuals against unfair algorithmic decision-making by establishing a right to information and a right to opt out of automated decision-making by demanding human intervention.Footnote 75 However, State v. Loomis and the controversy over COMPAS show how difficult it is to strike the right balance in terms of the level of information and freedom from automation.

The next section discusses the possibilities and limitations of the application of robots in judgments, algorithmic decision-making, and the pathways of digital justice.

4. PATHWAYS TO DIGITAL JUSTICE: POSSIBILITIES AND LIMITATIONS

The case-study of COMPAS invites reflection on the future pathways of digital justice, the positioning of algorithms, the search for general artificial intelligence, data processing as reproduction, and the “black box.” Statistically, technology may be very efficient, as shown by the extremely low accident rate of autonomous cars.Footnote 76 However, the current state of computer science reveals that machine-learning algorithms may not learn to develop patience and planning skills. For instance, one comprehensive piece of research on an ANN algorithm playing Atari games found that it could play 29 out of 49 games at the human level, beating professional players at most of them.Footnote 77 However, human players are far better than machine-learning algorithms in 20 of the tested games.Footnote 78 In Ms Pac-Man, for instance, the ANN algorithm achieves only 12% of the score of a professional player, largely because the game rewards patience and caution, and planning strategically how to protect Ms Pac-Man from the ghost attacks is beyond the capacity of this machine-learning algorithm.Footnote 79 This comprehensive study reveals that the algorithm fails on Atari games that require planning.Footnote 80 One important lesson for our speculative reflection on the development of judicial robots is that contemporary artificial intelligence may not produce its decisions with prudence, which seems an essential quality for adjudication.

Another important discussion emerging from the Atari game-playing study is how much a computer may learn from scratch, which is a central question in understanding how far we are from creating general artificial intelligence.Footnote 81 In contrast to the human brain and its spontaneous understanding of different contexts, the ANN algorithm is trained for the performance of specific tasks, developing specialized artificial intelligence based on that particular training and remaining unable to perform tasks for which it did not receive specific training.Footnote 82 There is much debate about whether general artificial intelligence will be possible in the future; perhaps ANN algorithms will remain confined to the performance of specific tasks and will not develop a general capacity for intelligence like that of humans.Footnote 83 Because nobody today seems to know how to create a robot with general intelligence, smart technology is confined to its specific domains, now and for the foreseeable future.Footnote 84 There is much speculation about the possibility of artificial intelligence replacing human judgment, but the strong potential seems to lie in support for repetitive judicial activities rather than in a judicial robot fully replacing a human judge. For instance, reported cases of robot lawyers consist of automated systems that support repetitive tasks, such as Ross—a technological researcher of documents and cases that assists lawyers through natural language processing—and Donotpay—a bot that functions as a digital assistant for appeals against parking tickets through document-assembly production.Footnote 85 Therefore, the path of digital justice seems to point more towards technological support for decision-making than to robots in robes making automated decisions without human intervention.

One important point for reflection comes from the fact that algorithmic decision-making normally reproduces patterns from the past. The case-study of COMPAS demonstrates that the outputs of the risk-assessment tool are based on the historical experience gathered through big data and representative of a large set of past decisions taken in real cases. Because algorithms are designed to find and recreate the patterns in the data sets on which they were trained, they learn to reproduce the status quo bias.Footnote 86 In addition to the potential for machine bias revealed by ProPublica, there is an even deeper problem related to path dependency in adjudication.Footnote 87 In other words, ANN algorithms are trained to produce outputs based on existing inputs, and judicial robots would arguably produce sentences based only on existing precedent. Therefore, in this context, judicial robots would probably be unable to produce fair counter-hegemonic decisions that depart from precedent. The current state of computer science suggests that an algorithm would probably not be able to produce a watershed decision like that in Brown v. Board of Education, for instance.Footnote 88 In constitutional terms, therefore, algorithmic decision-making may contain a hidden conservative bias and a tendency to reproduce the status quo that would be problematic in terms of the counter-majoritarian role of courts and rights protection according to the democratic rule of law.Footnote 89

Another important point of concern related to artificial intelligence and judicial decision-making is the necessary explainability of outcomes as part of rights protection under the democratic rule of law. Nowadays, understanding the reasons for a particular application of code that resulted in an injustice may be very difficult.Footnote 90 Our societies should monitor and prevent algorithmic injustice, by demanding more responsibility from those who collect data, build systems, and apply these rules.Footnote 91 Lack of transparency is part of the problem, as tech companies keep their algorithms secret.Footnote 92 One potential response for the normative control of algorithms consists of auditing to ensure the safety and fairness of algorithmic decision-making.Footnote 93 In comparison to the call for full publicity and open-source artificial intelligence, an algorithmic audit may be performed by a controlled and discreet professional group of experts who may intervene to correct the decisional rules without revealing the code and other proprietary information to the public at large.Footnote 94 Auditing poses a series of challenges, because sophisticated algorithms may not reveal their true effects during testing, they may be able to circumvent the recommendations of auditors by building links in data sets, and public authorities may not be able to oversee their development and keep pace with the tech industry.Footnote 95 The quest for transparency and explanation is especially difficult in the case of machine learning, when algorithms are programmed to reprogram their own code and even software designers may not be aware of the processing, which also varies according to the data used to train the algorithm.Footnote 96 Because judicial decisions are supposed to contain justifications and logical explanations of their rationale, software developers need to create explanatory technology that can provide explanations of the technological reasoning behind algorithmic decision-making before these technologies are applied to sentencing.Footnote 97 The ethical and legal requirements for transparency are not limited to publicity, but relate more to potential independent scrutiny, a shared system of justification that is comprehensible to others, and a system of accountability with checks and balances for correcting errors.Footnote 98
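One simple form of such an audit, sketched below with an invented stand-in for a proprietary model, queries the system with pairs of inputs that differ only in a sensitive feature (or its proxy) and measures how much the output changes. Experts can run this kind of counterfactual test without ever reading the source code; the scorer, features, and data here are hypothetical.

```python
# Hypothetical counterfactual audit of a black-box risk scorer: flip only the
# sensitive feature (or its proxy) and observe how the score shifts.
# The scorer below stands in for a proprietary model we cannot inspect.
import numpy as np

rng = np.random.default_rng(7)

def black_box_score(features):
    """Stand-in for a proprietary scorer: we can query it but not read its code."""
    priors, zip_area, age = features
    return 1 / (1 + np.exp(-(0.9 * priors + 0.6 * (zip_area < 8) - 0.02 * age)))

# Audit sample: hypothetical defendants described by (priors, zip_area, age).
sample = [(rng.integers(0, 6), rng.integers(0, 20), rng.integers(18, 70))
          for _ in range(1000)]

shifts = []
for priors, zip_area, age in sample:
    original = black_box_score((priors, zip_area, age))
    # Counterfactual: move the defendant to a demographically different area,
    # keeping every other input fixed.
    flipped = black_box_score((priors, (zip_area + 10) % 20, age))
    shifts.append(flipped - original)

print(f"mean absolute score shift when only the area changes: {np.mean(np.abs(shifts)):.3f}")
# A large average shift shows the score depends heavily on where the defendant
# lives (a potential proxy for race) rather than only on individual conduct.
```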

A crucial point for reflection is our personal standpointFootnote 99 on whether we would accept being judged by a judicial robot or would prefer to be judged by a human being. Some commentators sympathize with the idea of having an algorithm working with judges to support their work and help them to overcome their cognitive limitations, systematic bias, and random error.Footnote 100 On the other hand, others consider that there are some activities that are essentially human and that digital systems should not perform, even if the outcome may be technically better than the product of a human mind.Footnote 101 Outsourcing the activity of judging to a robot may be problematic also from a constitutional perspective in terms of the impermissible delegation of powers.Footnote 102 When imagining the future of digital justice and the potential for judicial robots and for algorithmic decision-making, we should think empirically and carefully collect the necessary data for assessing the potential pathways to justice.Footnote 103 We need to understand the demands of human beings and how artificial intelligence may reduce the asymmetries of power and information that they experience with the traditional judicial system.

There is great potential for information technology as an enabler of access to justice, facilitating the aggregation of repetitive claims and enabling collective actors to protect relevant social interests more efficiently.Footnote 104 However, electronic gatekeepers may also create obstacles and limitations for individuals seeking to protect their rights, such as mandatory electronic mediation as a prerequisite for accessing courts.Footnote 105 Importantly, we should consider empirically the potential avenues for digital justice and how human actors will interact with the multiple doors of the judicial system in order to design the pathways to digital justice and evaluate possibilities and limitations. In this sense, it seems very difficult to imagine a judicial robot replacing a Supreme Court Justice, but it will soon be equally difficult to imagine a Supreme Court Justice not working with the support of electronic clerks that assist with legal research and document-assembly production. At the other extreme, artificial intelligence with human supervision may be adopted to initiate dialogues towards mediation and other forms of alternative dispute resolution. Especially in the case of small claims of reduced complexity, low cost, and repetitive application of the law, there is potential for electronic arbitration requested by a defendant and binding only on the defendant, not on the plaintiff.Footnote 106 In these cases, an important distinction may come from the different uses of technology. In COMPAS, the probability of recidivism supports deliberation in a criminal judgment. In contrast, information technology supports civil-liability judgments normally by sorting similar cases and aggregating them in preparation for a comprehensive judicial decision. Most tort cases are decided on the basis of objective (strict) liability, without the need for the detailed examination of fault and subjective responsibility typical of criminal judgments.

Finally, the role of institutions is essential for enabling organizations and setting the relevant rules of the game. The COMPAS case-study reveals the immediate need for innovative legal education and judicial training in the use of technological tools in adjudication. The adoption of algorithmic decision-making also requires the development of a code of ethics for artificial intelligence. Software developers emerge as the new “Philosopher Kings,” and their way of handling ethics, explaining themselves, and managing accountability and dialogue is critical for ethics in digital societies.Footnote 107 However, the university education of the artificial-intelligence tribes normally excludes learning about the human condition, and no mandatory courses “teach students how to detect bias in data sets, how to apply philosophy to decision-making, or the ethics of inclusivity.”Footnote 108 Importantly, the relationship between ethics and law shapes the development of new technologies, as legal judgments are useful for ethical considerations and vice versa.Footnote 109 In this sense, the scrutiny of COMPAS by courts is welcome and necessary, because artificial intelligence may potentially transform fundamental tenets of our legal system.Footnote 110 The application of a code of ethics depends on the institution behind it,Footnote 111 so the judiciary should establish its own institutional guidelines and code of ethics for using artificial intelligence according to the concrete demands for digital justice. Additionally, a set of constitutional rules—like a Bill of Rights for artificial intelligence—should be incorporated into our legal system. An algorithmic Bill of Rights seemed like science fiction in the Three Laws of Robotics of Isaac Asimov, but the idea of a set of basic norms and fundamental laws for artificial intelligence has inspired contemporary scholars to propose a constitutional regime for the regulation of the relationship between humans and robots.Footnote 112 Nowadays, the definition of rules for algorithmic decision-making, judicial robots, and due process of law emerges as an inevitable part of the constitutional rules for artificial intelligence and the path to digital justice.

5. AN UNFINISHED STORY: INCONCLUSIVE REMARKS

The story of the pathways to digital justice is still unfinished. Platforms must provide fair and efficient channels for dispute resolution to gain trust and survive in the online environment.Footnote 113 Once they do so through their resolution centres, citizens will start to ask why the small-claims court is so inconvenient and inefficient in comparison.Footnote 114 The evolution of digital justice indicates the substitution of physical settings for virtual ones, the emergence of models based on sharing data instead of confidentiality, and the potential shift from human intervention to automated processes of decision-making.Footnote 115 Access to justice may be enhanced through algorithms that can provide responses to a large number of disputes.Footnote 116 In these legal borderlands of law and technology, the role of judicial robots, the scope of algorithmic decision-making, and the protective safeguards to due process of law are still open and will depend on the social demands for pathways to justice.Footnote 117

This essay has explored the possibilities and limits of an unfinished story and closes with inconclusive remarks. One crucial question is whether to substitute judicial robots for human judges, and the answer depends on concrete social demands, the stage of development of artificial intelligence, and the decisional rules of the game. Perhaps an interesting analogy comes from navigation: the US Navy had abandoned training in celestial navigation due to the advent of GPS, but brought it back to the Naval Academy in 2015 to help sailors navigate the high seas.Footnote 118 Even if we rely on information technology as a support for decision-making, we should therefore not abandon our core competency of human judgment, lest we be left “without a rich sense of where we are and where we’re going.”Footnote 119 Interestingly, some tech giants rely on human judgment for their own internal dispute resolution. For instance, Facebook decided to curate its trending topics with a team of human journalists. After the company was accused of a progressive bias for excluding some conservative stories,Footnote 120 robots replaced humans as judges of trending topics, and consequently fake news may now enter the newsfeed more easily without human supervision.Footnote 121 In the end, the path to justice in digital societies will come not only from the mathematical logic of algorithms, but also from our social experiences and from how we reconcile the efficiency and precision of algorithmic decision-making with the constitutional safeguards of due process of law.

Footnotes

This article was originally presented at the RCSL Conference in Oñati in 2019 in a panel organized by Ji Weidong and Håkan Hydén. I am grateful to them for the organization of the session and for the invitation to publish in this Special Issue. Stefan Larsson, Bregjie Dijksterhuis, and Alfons Bora provided interesting feedback on the paper presentation that helped to improve its quality. This article is part of a research project developed at the Laboratory of Institutional Studies (LETACI) at the National Law School, Federal University of Rio de Janeiro, and I am grateful to Dean Carlos Bolonha for his invaluable support. David Restrepo Amariles and Sabine Gless provided valuable feedback that improved the quality of this article. For their collaboration during the editorial process, the author is thankful to Matthias Vanhullebusch and the anonymous reviewers of the Journal. Errors are all mine.

*

Visiting Professor of the Doctoral Programme of the National Law School at the Federal University of Rio de Janeiro and Public Prosecutor at the Attorney General’s Office of Rio de Janeiro. DPhil (Oxford), JSM (Stanford), LLM (Harvard), MBE (COPPE), LLB (UFRJ), BA (PUC). Correspondence to Pedro Fortes, Faculdade Nacional de Direito (FND), PPGD-UFRJ, Rua Moncorvo Filho, n. 8, Centro, 20211-340, Rio de Janeiro, RJ, Brazil. E-mail address: pfortes@stanfordalumni.org.

1. On the metaphor of paths to justice, see Genn & Beinart (1999); Genn & Paterson (2001).

2. For instance, a recent news report from Brazil narrates that the Attorney General’s Office of Rio de Janeiro uses robots for the identification of repetitive claims that may lead to the filing of a collective action for consumer protection with the support of artificial intelligence. See Casemiro, Luques, & Martins (2018).

4. Brennan, Dietrich, & Ehret (2009).

5. Ibid., pp. 22–3.

6. See generally Cortés (2010); Cortés (2018).

7. Katsh & Rabinovich-Einy (2017).

8. Even the most critical authors discussing automated inequality and racial oppression proposed humanizing rather than banning technology; see Eubanks (2017), p. 212; Noble (2018), pp. 179–81.

9. Rees (2018), p. 89.

10. O’Neil (2017), pp. 211–2.

11. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

12. Brennan, Dietrich, & Ehret, supra note 4, pp. 22–3.

13. Ibid., p. 23.

15. Ibid., p. 24.

17. Brennan, Dietrich, & Ehret, supra note 4, p. 25.

19. Ibid., pp. 21–2.

20. Ibid., pp. 31–2.

21. Turner (2019), pp. 4–6.

22. Ibid., pp. 6–7.

23. Ibid.; see also Restrepo Amariles (2020).

25. Ibid., p. 3.

26. Ibid., pp. 1–2.

27. Ibid., p. 3.

30. Ibid., p. 4.

31. Ibid., p. 6.

32. Dieterich, Mendoza, & Brennan (2016), p. 1.

33. Ibid., pp. 1–2.

34. Fry (2018), p. 68, emphasis in original.

35. Ibid., p. 69.

36. Flores, Bechtel, & Lowenkamp (2016), p. 46.

37. On predictive policing, see Clegg (2017), pp. 131–4.

38. Fry, supra note 34, p. 55.

39. Ibid., pp. 56–8.

40. Ibid., p. 59.

41. 881 N.W.2d 749 (Wis. 2016).

42. Ibid., p. 757.

43. Eric Holder, US Attorney General, Speech at the National Association of Criminal Defense Lawyers 57th Annual Meeting (1 August 2014).

45. Freeman (2016), p. 83.

46. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

47. Ibid., p. 761.

48. Freeman, supra note 45, p. 92.

49. Ibid., pp. 93–4.

50. Ibid., p. 94.

51. Lessig (2006).

52. Israni (2017), pp. 2–3.

53. See generally Alpaydin (2016).

54. Israni, supra note 52, p. 2.

56. State v. Loomis, 881 N.W.2d 749 (Wis. 2016), 763–4.

57. Harvardlawreview.org (2017).

58. Ibid., pp. 8–9.

60. Ibid., p. 12.

61. Ibid., p. 13.

62. Pasquale (2015); Liu, Lin, & Chen (2019), p. 138.

63. Liu, Lin, & Chen, supra note 62.

64. Ibid., p. 17.

65. Ibid., p. 18.

66. Israni, supra note 52, p. 2.

67. Susskind (2018), p. 37.

68. Israni, supra note 52, pp. 2–3.

69. Ibid., p. 3.

73. Liu, Lin, & Chen, supra note 62, p. 135.

74. Ibid., pp. 140–1.

75. Ibid., p. 141; on the GDPR, see generally Voigt & von dem Bussche (2017).

76. For instance, the first accident of a car equipped with Chauffeur—Google’s technology for autonomous driving—occurred after seven years and 1.45 million miles of testing on public roads; see Burns (2018), p. 303.

77. Sumpter (2018), p. 219.

81. Ibid., p. 221.

82. Ibid., p. 223.

83. Ibid., p. 238.

84. Polson & Scott (2018), p. 7.

85. See e.g. Remus & Levy (2017), p. 501.

86. Polson & Scott, supra note 84, pp. 234–5.

87. O’Neil, supra note 10, p. 204.

88. Brown v. Board of Education of Topeka, 347 U.S. 483 (1954).

89. Or perhaps this argument may contain a hidden conservative bias in imagining that algorithms should act like humans—that is, an anthropomorphic perspective could shape our views of limitations of algorithmic decision-making.

90. Susskind, supra note 67, p. 293.

91. Ibid., pp. 293–4.

92. Ibid., p. 355.

95. Ezrachi & Stucke (2016), pp. 230–1; on the subject of institutional capacities, see also Leal & Arguelhes (2016), pp. 192–213.

96. Boddington (2017), pp. 20–1.

97. Pasquale (2017), p. 1252.

98. Boddington, supra note 96, pp. 20–1.

99. On standpoint, see Twining (2009); Twining (2019); Fortes (2019).

100. Fry, supra note 34, pp. 76–8.

101. Susskind, supra note 67, p. 361.

102. Gless & Wohlers, supra note 3.

103. On the need for empirical evidence to assess the impact of artificial intelligence, see Boddington, supra note 96, p. 9.

104. On the need for more efficient enforcement of collective actions, see Fortes, supra note 99.

105. See Maclean & Dijksterhuis (2019).

106. For the potential of algorithmic dispute resolution, see Barnett & Treleaven (2018).

107. Boddington, supra note 96, p. 21.

108. Webb (2019), p. 61. On the relevance of experimentalism and institutional innovation in contemporary legal education, see Falcão & Delfino (2019).

109. Boddington, supra note 96, p. 25.

110. Ibid., p. 26.

111. Ibid., p. 47.

112. Balkin (2017); Hosanagar (2019), pp. 205–24.

113. Katsh & Rabinovich-Einy, supra note 7, p. 54.

114. Ibid., p. 153.

115. Ibid., pp. 162–3.

116. Ibid., p. 180.

117. On the legal borderlands, see Fortes & Kampourakis (2019). Future research should examine, for instance, algorithmic decision-making in relation to civil justice and its potential for adjudication of mass torts especially through online dispute resolution (ODR) developed with artificial intelligence. Likewise, civil liability for judicial errors caused by robots is an important theme for future research.

118. Navy.mil (2015).

119. Frischmann & Selinger (2018), p. 31.

120. Webb, supra note 108.

121. Mcintyre (2018), pp. 94–7, 118–9.

REFERENCES

Alpaydin, Ethem (2016) Machine Learning, Cambridge: MIT Press.
Angwin, Julia, Larson, Jeff, Mattu, Surya, & Kirchner, Lauren (2016) “Machine Bias: There’s Software Used across the Country to Predict Future Criminals: And It’s Biased against Blacks,” www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed 10 October 2019).
Balkin, Jack (2017) “Data Law and Policy: The Three Laws of Robotics in the Age of Big Data.” 78 Ohio State Law Journal 1217–41.
Barnett, Jeremy, & Treleaven, Philip (2018) “Algorithmic Dispute Resolution: The Automation of Professional Dispute Resolution Using AI and Blockchain Technology.” 61 The British Computer Society 399–408.
Boddington, Paula (2017) Towards a Code of Ethics for Artificial Intelligence, Cham: Springer.
Brennan, Tim, Dietrich, William, & Ehret, Beate (2009) “Evaluating the Predictive Validity of the Compas Risk and Needs Assessment System.” 36 Criminal Justice and Behavior 21–40.
Burns, Lawrence (2018) Autonomy: The Quest to Build the Driverless Car and How It Will Reshape Our World, London: William Collins.
Casemiro, Luciana, Luques, Ione, & Martins, Gabriel (2018) “MP do Rio Usa Robô de Inteligência Artificial na Justiça para Barrar Abusos de Empresas,” https://oglobo.globo.com/economia/defesa-do-consumidor/mp-do-rio-usa-robo-de-inteligencia-artificial-na-justica-para-barrar-abusos-de-empresas-23134722 (accessed 15 October 2019).
Clegg, Brian (2017) Big Data: How the Information Revolution is Transforming Our Lives, London: Icon Books.
Cortés, Pablo (2010) Online Dispute Resolution for Consumers in the European Union, Abingdon: Routledge.
Cortés, Pablo (2018) The Law of Consumer Redress in an Evolving Digital Market: Upgrading from Alternative to Online Dispute Resolution, Cambridge: Cambridge University Press.
Dieterich, William, Mendoza, Christina, & Brennan, Tim (2016) COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity, Wheat Ridge, CO: Northpointe Inc.
Eubanks, Virginia (2017) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, New York: St. Martin’s Press.
Ezrachi, Ariel, & Stucke, Maurice (2016) Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy, Cambridge: Harvard University Press.
Falcão, Joaquim, & Delfino, Pedro (2019) “Experimentalismo e Análise Institucional no Curso FGV Direito Rio: Um Projeto em Construção.” 5 Revista de Estudos Institucionais 1–19.
Flores, Anthony, Bechtel, Kristin, & Lowenkamp, Christopher (2016) “False Positives, False Negatives, and False Analyses: A Rejoinder to ‘Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks’.” 80 Probation 38–46.
Fortes, Pedro Rubim Borges (2015) “How Legal Indicators Influence a Justice System and Judicial Behavior: The Brazilian National Council of Justice and ‘Justice in Numbers’.” 47 The Journal of Legal Pluralism and Unofficial Law 39–55.
Fortes, Pedro Rubim Borges (2019) “O Fenômeno da Ilicitude Lucrativa.” 5 Revista de Estudos Institucionais 104–32.
Fortes, Pedro Rubim Borges, & Kampourakis, Ioannis (2019) “Exploring Legal Borderlands: Introducing the Theme.” 5 Revista de Estudos Institucionais 639–55.
Freeman, Katherine (2016) “Algorithmic Injustice: How The Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis.” 18 North Carolina Journal of Law & Technology 75–106.
Frischmann, Brett, & Selinger, Evan (2018) Re-Engineering Humanity, Cambridge: Cambridge University Press.
Fry, Hannah (2018) Hello World: Being Human in the Age of Algorithms, New York: W. W. Norton & Company.
Genn, Hazel, & Beinart, Sarah (1999) Paths to Justice: What People Do and Think About Going to Law, Oxford: Hart.
Genn, Hazel, & Paterson, Alan (2001) Paths to Justice Scotland: What People in Scotland Do and Think about Going to Law, London: Bloomsbury.
Gless, Sabine, & Wohlers, Wolfgang (2019) “Subsumtionsautomat 2.0 Künstliche Intelligenz Statt Menschlicher Richter?,” in Böse, M., Schumann, K. H., & Toepel, F., eds., Festschrift zum 70. Geburtstag von Professor Dr. Dr. hc mult. Urs Kindhäuser. Baden-Baden: Nomos Verlagsgesellschaft mbH & Co. KG.
Harvardlawreview.org (2017) “State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessment in Sentencing,” https://harvardlawreview.org/2017/03/state-v-loomis/ (accessed 15 October 2019).
Hosanagar, Kartik (2019) How Algorithms Are Shaping Our Lives and How We Can Stay in Control, New York: Viking.
Israni, Ellora (2017) “Algorithmic Due Process: Mistaken Accountability and Attribution in State v. Loomis,” https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-attribution-in-state-v-loomis-1 (accessed 15 October 2019).
Katsh, M. Ethan, & Rabinovich-Einy, Orna (2017) Digital Justice: Technology and the Internet of Disputes, Oxford: Oxford University Press.
Leal, Fernando, & Arguelhes, Diego Werneck (2016) “Dois Problemas de Operacionalização do Argumento de ‘Capacidades Institucionais’.” 2 Revista de Estudos Institucionais 192–213.
Lessig, Lawrence (2006) Code, New York: Basic Books.
Liu, Han-Wei, Lin, Ching-Fu, & Chen, Yu-Jie (2019) “Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization, and Accountability.” 27 International Journal of Law and Information Technology 122–41.
Maclean, Mavis, & Dijksterhuis, Bregjie, eds. (2019) Digital Family Justice: From Alternative Dispute Resolution to Online Dispute Resolution, Oxford: Hart.
Mcintyre, Lee (2018) Post-Truth, Cambridge: MIT Press.
Navy.mil (2015) “Charting a New Course: Celestial Navigation Returns to USNA,” https://www.navy.mil/submit/display.asp?story_id=91555 (accessed 15 October 2019).
Noble, Safiya Umoja (2018) Algorithms of Oppression: How Search Engines Reinforce Racism, New York: New York University Press.
O’Neil, Cathy (2017) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, London: Penguin.
Pasquale, Frank (2015) The Black Box Society, Cambridge: Harvard University Press.
Pasquale, Frank (2017) “Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society.” 78 Ohio State Law Journal 1243–55.
Polson, Nicholas, & Scott, James (2018) AIQ: How Artificial Intelligence Works and How We Can Harness Its Power for a Better World, London: Bantam Press.
Rees, Martin (2018) On the Future: Prospects for Humanity, Princeton, NJ: Princeton University Press.
Remus, Dana, & Levy, Frank (2017) “Can Robots Be Lawyers: Computers, Lawyers, and the Practice of Law.” 30 Georgetown Journal of Legal Ethics 501–58.
Restrepo Amariles, David (2014) “The Mathematical Turn: L’Indicateur Rule of Law dans la Politique de Développement de la Banque Mondiale,” in Frydman, B. & van Waeyenbergen, A., eds., Gouverner par les Standards et les Indicateurs: de Hume au Rankings, Bruxelles: Bruylant, 193–234.
Restrepo Amariles, David (2020) “Algorithmic Decisions Systems: Using Automation and Machine Learning in the Public Administration,” in Barfield, W., ed., Cambridge Handbook of Law and Algorithms, Cambridge: Cambridge University Press, forthcoming.
Sumpter, David (2018) Outnumbered: From Facebook and Google to Fake News and Filter Bubbles: The Algorithms that Control our Lives, London: Bloomsbury.
Susskind, Jamie (2018) Future Politics: Living Together in a World Transformed by Tech, Oxford: Oxford University Press.
Turner, Jacob (2019) Robot Rules: Regulating Artificial Intelligence, London: Palgrave Macmillan.
Twining, William (2009) General Jurisprudence: Understanding Law from a Global Perspective, Cambridge: Cambridge University Press.
Twining, William (2019) Jurist in Context: A Memoir, Cambridge: Cambridge University Press.
Voigt, Paul, & von dem Bussche, Axel (2017) The EU General Data Protection Regulation (GDPR): A Practical Guide, Cham: Springer.
Webb, Amy (2019) The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity, New York: Public Affairs.