
3 - Inalienable Due Process in an Age of AI: Limiting the Contractual Creep toward Automated Adjudication

from Part I - Algorithms, Freedom, and Fundamental Rights

Published online by Cambridge University Press: 01 November 2021

Edited by Hans-W. Micklitz (European University Institute, Florence), Oreste Pollicino (Bocconi University), Amnon Reichman (University of California, Berkeley), Andrea Simoncini (University of Florence), Giovanni Sartor (European University Institute, Florence), and Giovanni De Gregorio (University of Oxford)

Summary

If states begin to impose such contractual bargains for automated administrative determinations, the ‘immoveable object’ of inalienable due process rights will clash with the ‘irresistible force’ of legal automation and libertarian conceptions of contractual ‘freedom.’ This chapter explains why legal values must cabin (and often trump) efforts to ‘fast track’ cases via statistical methods, machine learning (ML), or artificial intelligence. Section 3.2 explains how due process rights, while flexible, should include four core features in all but the most trivial or routine cases: the ability to explain one’s case, a judgment by a human decision maker, an explanation for that judgment, and an ability to appeal. Section 3.3 demonstrates why legal automation threatens those rights. Section 3.4 critiques potential bargains for legal automation, and concludes that the courts should not accept them. Vulnerable and marginalized persons should not be induced to give up basic human rights, even if some capacious and abstract versions of utilitarianism project they would be ‘better off’ by doing so.

Publisher: Cambridge University Press
Print publication year: 2021

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

3.1 Introduction

Automation is influencing ever more fields of law. The dream of disruption has permeated the US and British legal academies and is making inroads in Australia and Canada, as well as in civil law jurisdictions. The ideal here is law as a product, simultaneously mass producible and customizable, accessible to all and personalized, openly deprofessionalized.Footnote 1 This is the language of idealism, so common in discussions of legal technology – the Dr. Jekyll of legal automation.

But the shadow side of legal tech also lurks behind many initiatives. Legal disruption’s Mr. Hyde advances the cold economic imperative to shrink the state and its aid to the vulnerable. In Australia, the Robodebt system of automated benefit overpayment adjudication clawed back funds from beneficiaries on the basis of flawed data, false factual premises, and misguided assumptions about the law. In Michigan, in the United States, a similar program (aptly named “MIDAS,” for Michigan Integrated Data Automated System) “charged more than 40,000 people, billing them about five times the original benefits” – and it was later discovered that 93 percent of the charges were erroneous.Footnote 2 Meanwhile, global corporations are finding the automation of dispute settlement a convenient way to cut labor costs. This strategy is particularly tempting on platforms, which may facilitate millions of transactions each day.

When long-standing appeals to austerity and business necessity are behind “access to justice” initiatives to promote online dispute resolution, some skepticism is in order. At the limit, jurisdictions may be able to sell off their downtown real estate, setting up trusts to support a rump judicial system.Footnote 3 To be sure, even online courts require some staffing. But perhaps an avant-garde of legal cost cutters will find some inspiration from US corporations, which routinely decide buyer versus seller disputes in entirely opaque fashion.Footnote 4 In China, a large platform has charged “citizen juries” (who do not even earn money for their labor but, rather, reputation points) to decide such disputes. Build up a large enough catalog of such encounters, and a machine learning system may even be entrusted with deciding disputes based on past markers of success.Footnote 5 A complainant may lose credibility points for nervous behavior, for example, or gain points on the basis of long-standing status as someone who buys a great deal of merchandise or pays taxes in a timely manner.
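
To make the mechanism concrete, consider a deliberately simplified sketch (written in Python, with invented features, weights, and thresholds; it describes no actual platform’s system) of how such markers might be combined into a single dispute score:

```python
# A deliberately simplified sketch of a platform dispute "credibility
# score." The features, weights, and threshold are invented; no actual
# platform's system is described here.

from dataclasses import dataclass

@dataclass
class Complainant:
    nervous_behavior_flags: int  # e.g., inferred from chat or video signals
    monthly_purchases: float     # long-standing status as a heavy buyer
    timely_tax_payments: bool    # an off-platform "reputation" signal

def credibility_score(c: Complainant) -> float:
    """Combine behavioral and status proxies into a single score.

    A deployed system would learn such weights from past outcomes; the
    point is that the proxies, not the merits of the dispute, do the work.
    """
    score = 50.0
    score -= 5.0 * c.nervous_behavior_flags        # lose points for "nervousness"
    score += min(c.monthly_purchases / 100, 20.0)  # gain points for buying a lot
    score += 10.0 if c.timely_tax_payments else 0.0
    return score

def decide(c: Complainant, threshold: float = 60.0) -> str:
    return "uphold complaint" if credibility_score(c) >= threshold else "reject"

# A nervous but otherwise reliable complainant loses on the proxies alone.
print(decide(Complainant(nervous_behavior_flags=3,
                         monthly_purchases=250.0,
                         timely_tax_payments=True)))  # -> reject
```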

As these informal mechanisms become more common, they will test the limits of due process law. As anyone familiar with the diversity of administrative processes will realize, there is enormous variation at present in how much opportunity a person is given to state their case, to demand a written explanation for a final (or intermediate) result, and to appeal. A black lung benefits case differs from a traffic violation, which in turn differs from an immigration case. Courts permit agencies a fair amount of flexibility to structure their own affairs. Agencies will, in all likelihood, continue to pursue an agenda of what Julie Cohen has called “neoliberal managerialism” as they reorder their processes of investigation, case development, and decision-making.Footnote 6 That will, in turn, bring in more automated and “streamlined” processes, which courts will be called upon to accommodate.

While judicial accommodations of new agency forms are common, they are not automatic. At some point, agencies will adopt automated processes that courts can only recognize as simulacra of justice. Think, for instance, of an anti-trespassing robot equipped with facial recognition, which could instantly identify and “adjudicate” a person overstepping a boundary and text that person a notice of a fine. Or a rail ticket monitoring system that would instantly convert notice of a judgment against a person into a yearlong ban on the person buying train tickets. Other examples might be less dramatic but also worrisome. For example, consider the possibility of “mass claims rejection” for private health care providers seeking government payment for services rendered to persons with government-sponsored health insurance. Such claims processing programs may simply compare a set of claims to a corpus of past denied claims, sort new claimants’ documents into categories, and then reject them without human review.
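
To see how little machinery such “mass claims rejection” requires, consider a minimal sketch with invented claims text, using off-the-shelf TF-IDF similarity as a stand-in for whatever comparison method a real system might employ:

```python
# A minimal sketch, with invented claims data, of "mass claims rejection":
# new claims are compared to a corpus of past denied claims and rejected
# on text similarity alone, with no human review of the merits.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_denied_claims = [
    "home visit billed without physician signature",
    "duplicate claim for same procedure on same date",
    "service code not covered under plan",
]
new_claims = [
    "claim for home visit, physician signature attached separately",
    "annual wellness exam, preventive benefit",
]

vectorizer = TfidfVectorizer()
denied_matrix = vectorizer.fit_transform(past_denied_claims)
new_matrix = vectorizer.transform(new_claims)

# Reject any claim that merely *resembles* a past denial. Note the failure
# mode: the first claim is rejected because it mentions a signature issue,
# even though it states that the signature was attached.
REJECT_THRESHOLD = 0.3
for claim, sims in zip(new_claims, cosine_similarity(new_matrix, denied_matrix)):
    decision = "REJECT" if sims.max() >= REJECT_THRESHOLD else "route to human"
    print(f"{decision}: {claim!r} (max similarity to past denials: {sims.max():.2f})")
```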

In past work, I have explained why legislators and courts should reject most of these systems, and should always be wary of claims that justice can be automated.Footnote 7 And some initial jurisprudential stirrings are confirming that normative recommendation. For example, there has been a backlash against red-light cameras, which automatically cite drivers for failing to obey traffic laws. And even some of those who have developed natural language processing for legal settings have cautioned that such tools should not be used in anything like a trial setting. These concessions are encouraging.

And yet there is another danger lurking on the horizon. Imagine a disability payment scheme that offered something like the following “contractual addendum” to beneficiaries immediately before they began receiving benefits:

The state has a duty to husband resources and to avoid inappropriate payments. By signing below, you agree to the following exchange. You will receive $20 per month extra in benefits, in addition to what you are statutorily eligible for. In exchange, you agree to permit the state (and any contractor it may choose to employ) to review all your social media accounts, in order to detect behavior indicating you are fit for work. If you are determined to be fit for work, your benefits will cease. This determination will be made by a machine learning program, and there will be no appeal.Footnote 8

There are two diametrically opposed ways of parsing such a contract. For many libertarians, the right to give up one’s rights (here, to a certain level of privacy and appeals) is effectively the most important right, since it enables contracting parties to eliminate certain forms of interference from their relationship. By contrast, for those who value legal regularity and due process, this “addendum” is anathema. Even if it is possible for the claimant to re-apply after a machine learning system has stripped her of benefits, the process offends the dignity of the claimant. A human being must pass judgment on whether such a grave step is to be taken.

These divergent approaches are mirrored in two lines of US Supreme Court jurisprudence. On the libertarian side, the Court has handed down a number of rulings affirming the “right” of workers to sign away certain rights at work, or at least the ability to contest their denial in court.Footnote 9 Partisans of “disruptive innovation” may argue that startups need to be able to impose one-sided terms of service on customers, so that investors will not be deterred from financing them. Exculpatory clauses have spread like kudzu, beckoning employers with the jurisprudential equivalent of a neutron bomb: the ability to leave laws and regulations standing, without any person capable of enforcing them.

On the other side, the Supreme Court has also made clear that the state must be limited in the degree to which it can structure entitlements when it is seeking to avoid due process obligations. A state cannot simply define an entitlement to, say, disability benefits, by folding into the entitlement itself an understanding that it can be revoked for any reason, or no reason at all. On this dignity-centered approach, the “contractual addendum” posited above is not merely one innocuous add-on, a bit of a risk the claimant must endure in order to engage in an arm’s-length exchange for $20. Rather, it undoes the basic structure of the entitlement, which included the ability to make one’s case to another person and to appeal an adverse decision.

If states begin to impose such contractual bargains for automated administrative determinations, the “immoveable object” of inalienable due process rights will clash with the “irresistible force” of legal automation and libertarian conceptions of contractual “freedom.” This chapter explains why legal values must cabin (and often trump) efforts to “fast track” cases via statistical methods, machine learning (ML), or artificial intelligence. Section 3.2 explains how due process rights, while flexible, should include four core features in all but the most trivial or routine cases: the ability to explain one’s case, a judgment by a human decision maker, an explanation for that judgment, and the ability to appeal. Section 3.3 demonstrates why legal automation often threatens those rights. Section 3.4 critiques potential bargains for legal automation and concludes that the courts should not accept them. Vulnerable and marginalized persons should not be induced to give up basic human rights, even if some capacious and abstract versions of utilitarianism project they would be “better off” by doing so.

3.2 Four Core Features of Due Process

Like the rule of law, “due process” is a multifaceted, complex, and perhaps even essentially contested concept.Footnote 10 As J. Roland Pennock has observed, the “roots of due process grow out of a blend of history and philosophy.”Footnote 11 While the term itself is a cornerstone of the US and UK legal systems, it has analogs in both common law and civil law systems around the world.

While many rights and immunities have been evoked as part of due process, it is important to identify a “core” conception of it that should be inalienable in all significant disputes between persons and governments. We can see this grasping for a “core” of due process in some US cases, where the interest at stake was relatively insignificant but the court still decided that the person affected by government action had to have some opportunity to explain himself or herself and to contest the imposition of a punishment. For example, in Goss v. Lopez, students who were accused of misbehavior were suspended from school for ten days. The students claimed they were due some kind of hearing before suspension, and the Supreme Court agreed:

We do not believe that school authorities must be totally free from notice and hearing requirements if their schools are to operate with acceptable efficiency. Students facing temporary suspension have interests qualifying for protection of the Due Process Clause, and due process requires, in connection with a suspension of 10 days or less, that the student be given oral or written notice of the charges against him and, if he denies them, an explanation of the evidence the authorities have and an opportunity to present his side of the story.Footnote 12

This is a fair encapsulation of some core practices of due process, which may (as the stakes rise) become supplemented by all manner of additional procedures.Footnote 13

One of the great questions raised by the current age of artificial intelligence (AI) is whether the notice and explanation of the charges (as well as the opportunity to be heard) must be discharged by a human being. So far as I can discern, no ultimate judicial authority has addressed this particular issue in the due process context. However, given that the entire line of case law arises in the context of humans confronting other humans, it takes no great stretch of the imagination to find such a requirement immanent in the enterprise of due process.

Moreover, legal scholars Kiel Brennan-Marquez and Stephen Henderson argue that “in a liberal democracy, there must be an aspect of ‘role-reversibility’ to judgment. Those who exercise judgment should be vulnerable, reciprocally, to its processes and effects.”Footnote 14 The problem with robot or AI judges is that they cannot experience punishment the way that a human being would. Role-reversibility is necessary for “decision-makers to take the process seriously, respecting the gravity of decision-making from the perspective of affected parties.” Brennan-Marquez and Henderson derive this principle from basic principles of self-governance:

In a democracy, citizens do not stand outside the process of judgment, as if responding, in awe or trepidation, to the proclamations of an oracle. Rather, we are collectively responsible for judgment. Thus, the party charged with exercising judgment – who could, after all, have been any of us – ought to be able to say: This decision reflects constraints that we have decided to impose on ourselves, and in this case, it just so happens that another person, rather than I, must answer to them. And the judged party – who could likewise have been any of us – ought to be able to say: This decision-making process is one that we exercise ourselves, and in this case, it just so happens that another person, rather than I, is executing it.

Thus, for Brennan-Marquez and Henderson, “even assuming role-reversibility will not improve the accuracy of decision-making, it still has intrinsic value.”

Brennan-Marquez and Henderson are building on a long tradition of scholarship that focuses on the intrinsic value of legal and deliberative processes, rather than their instrumental value. For example, applications of the US Supreme Court’s famous Mathews v. Eldridge calculus have frequently failed to take into account the effects of abbreviated procedures on claimants’ dignity.Footnote 15 Bureaucracies, including the judiciary, have enormous power. They owe litigants a chance to plead their case to someone who can understand and experience, on a visceral level, the boredom and violence portended by a prison stay, the “brutal need” resulting from the loss of benefits (as put in Goldberg v. Kelly), the sense of shame that liability for drunk driving or pollution can give rise to. And as the classic Morgan v. United States held, even in complex administrative processes, the one who hears must be the one who decides. It is not adequate for persons to play mere functionary roles in an automated judiciary, gathering data for more authoritative machines. Rather, humans must take responsibility for critical decisions made by the legal system.

This argument is consistent with other important research on the dangers of giving robots legal powers and responsibilities. For example, Joanna Bryson, Mihailis Diamantis, and Thomas D. Grant have warned that granting robots legal personality raises the disturbing possibility of corporations deploying “robots as liability shields.”Footnote 16 A “responsible robot” may deflect blame or liability from the business that set it into the world. This is dangerous because the robot cannot truly be punished: it lacks human sensations of regret or dismay at loss of liberty or assets. It may be programmed to look as if it is remorseful upon being hauled into jail, or to frown when any assets under its control are seized. But these are simulations of human emotion, not the thing itself. Emotional response is one of many fundamental aspects of human experience that is embodied. And what is true of the robot as an object of legal judgment is also true of robots or AI as potential producers of such judgments.

3.3 How Legal Automation and Contractual Surrender of Rights Threaten Core Due Process Values

There is increasing evidence that many functions of the legal system, as it exists now, are very difficult to automate.Footnote 17 However, as Cashwell and I warned in 2015, the legal system is far from a stable and defined set of tasks to complete. As various interest groups jostle to “reform” legal systems, the range of procedures needed to finalize legal determinations may shrink or expand.Footnote 18 There are many ways to limit existing legal processes, or simplify them, in order to make it easier for computation to replace or simulate them. The clauses mentioned previously – forswearing appeals of judgments generated or informed by machine learning or AI – would make non-explainable AI far easier to implement in legal systems.

This type of “moving the goalposts” may be accelerated by extant trends toward neoliberal managerialism in public administration.Footnote 19 This approach to public administration is focused on throughput, speed, case management, and efficiency. Neoliberal managerialists urge the public sector to learn from the successes of the private sector in limiting spending on disputes. One option here is simply to outsource determinations to private actors – a move widely criticized elsewhere.Footnote 20 I am more concerned here with a contractual option: to offer to beneficiaries of government programs an opportunity for more or quicker benefits, in exchange for an agreement not to pursue appeals of termination decisions, thereby accepting their automated resolution.

I focus on the inducement of quicker or more benefits, because it appears to be settled law (at least in the US) that such restrictions of due process cannot be embedded into benefits themselves. A failed line of US Supreme Court decisions once attempted to restrict claimants’ due process rights by insisting that the government can create property entitlements with no due process rights attached. On this reasoning, a county might grant someone benefits with the explicit understanding that they could be terminated at any time without explanation: the “sweet” of the benefits could include the “bitter” of sudden, unreasoned denial of them. In Cleveland Board of Education v. Loudermill (1985), the Court finally discarded this line of reasoning, forcing some modicum of reasoned explanation and process for termination of property rights.

What is less clear now is whether side deals might undermine the delicate balance of rights struck by Loudermill. In the private sector, companies have successfully routed disputes with employees out of process-rich Article III courts, and into stripped-down arbitral forums, where one might even be skeptical of the impartiality of decision-makers.Footnote 21 Will the public sector follow suit? Given some current trends in the foreshortening of procedure and judgment occasioned by public sector automation, the temptation will be great.

These concerns are a logical outgrowth of a venerable literature critiquing rushed, shoddy, and otherwise improper automation of legal decision-making. In 2008, Danielle Keats Citron warned that states were cutting corners by deciding certain benefits (and other) claims automatically, on the basis of computer code that did not adequately reflect the complexity of the legal code it claimed to have reduced to computation.Footnote 22 Virginia Eubanks’s Automating Inequality has identified profound problems in governmental use of algorithmic sorting systems. Eubanks tells the stories of individuals who lose benefits, opportunities, and even custody of their children, thanks to algorithmic assessments that are inaccurate or biased. Eubanks argues that complex benefits determinations are not something well-meaning tech experts can “fix.” Instead, the system itself is deeply problematic, constantly shifting the goal line (in all too many states) to throw up barriers to access to care.

A growing movement for algorithmic accountability is both exposing and responding to these problems. For example, Citron and I coauthored work setting forth some basic procedural protections for those affected by governmental scoring systems.Footnote 23 The AI Now Institute has analyzed cases of improper algorithmic determinations of rights and opportunities.Footnote 24 And there is a growing body of scholarship internationally exploring the ramifications of computational dispute resolution.Footnote 25 As this work influences more agencies around the world, it is increasingly likely that responsible leadership will ensure that a certain baseline of due process values applies to automated decision-making.

Though they are generally optimistic about the role of automation and algorithms in agency decision-making, Coglianese and Lehr concede that one “due process question presented by automated adjudication stems from how such a system would affect an aggrieved party’s right to cross-examination. … Probably the only meaningful way to identify errors would be to conduct a proceeding in which an algorithm and its data are fully explored.”Footnote 26 This type of examination is central to Keats Citron’s concept of technological due process. It would require something like a right to an explanation of the automated profiling at the core of the decision.Footnote 27
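
What such an exploration might minimally demand is suggested by the following sketch of a hypothetical linear eligibility score, with invented features and weights, in which each per-feature contribution becomes a discrete item a claimant could contest:

```python
# A minimal sketch of the kind of artifact "technological due process"
# might demand: a per-feature breakdown of an automated score, so that a
# claimant can contest each input and each weight. The model, features,
# and weights here are hypothetical.

WEIGHTS = {"reported_income": -0.8, "dependents": 0.5, "months_disabled": 0.3}
BIAS = 1.0

def score_and_explain(x: dict) -> tuple[float, dict]:
    """Return the score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * x[name] for name in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions

total, parts = score_and_explain(
    {"reported_income": 2.1, "dependents": 2, "months_disabled": 6})

print(f"eligibility score = {total:.2f}")
for feature, contribution in sorted(parts.items(), key=lambda kv: kv[1]):
    # Each line is a discrete, contestable claim: the input value may be
    # wrong, or the weight itself may be arbitrary or discriminatory.
    print(f"  {feature}: {contribution:+.2f}")
```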

3.4 Due Process, Deals, and Unraveling

However, all such protections could be undone. The ability to explain oneself, and to hear reasoned explanations in turn, is often framed as being needlessly expensive. This expense of legal process (or administrative determinations) has helped fuel a turn to quantification, scoring, and algorithmic decision procedures.Footnote 28 A written evaluation of a person (or comprehensive analysis of future scenarios) often requires subtle judgment, exactitude in wording, and ongoing revision in response to challenges and evolving situations. A pre-set formula based on limited, easily observable variables is far easier to calculate.Footnote 29 Moreover, even if individuals are due certain explanations and hearings as part of law, they may forgo them in some contexts.

This type of rights waiver has already been deployed in some contexts. Several states in the United States allow unions to waive the due process rights of public employees.Footnote 30 We can also interpret some Employee Retirement Income Security Act (ERISA) jurisprudence as an endorsement of a relatively common situation in the United States: employees effectively signing away a right to a more substantive and searching review of adverse benefit scope and insurance coverage determinations via an agreement to participate in an employer-sponsored benefit plan. The US Supreme Court has gradually interpreted ERISA to require federal courts to defer to plan administrators, echoing the deference due to agency administrators, and sometimes going beyond it.Footnote 31

True, Loudermill casts doubt on arrangements for government benefits premised on the beneficiary’s sacrificing due process protections. However, a particularly innovative and disruptive state may decide that the opinion is silent as to the baseline of what constitutes the benefit in question, and leverage that ambiguity. Consider a state that guaranteed health care to a certain category of individuals, as a “health care benefit.” Enlightened legislators further propose that the disabled, or those without robust transport options, should also receive assistance with respect to transportation to care. Austerity-minded legislators counter with a proviso: to receive transport assistance in addition to health assistance, beneficiaries need to agree to automatic adjudication of a broad class of disputes that might arise out of their beneficiary status.

The automation “deal” may also arise out of long-standing delays in receiving benefits. For example, in the United States, there have been many complaints by disability rights groups about the delays encountered by applicants for Social Security Disability Benefits, even when they are clearly entitled to them. On the other side of the political spectrum, some complain that persons who are adjudicated as disabled, and then regain the capacity to work, are able to keep benefits for too long. This concern (and perhaps some mix of cruelty and indifference) motivated British policy makers who promoted “fit for work” reviews by private contractors.Footnote 32

It is not hard to see how the “baseline” of benefits might be defined narrowly, and all future benefits would be conditioned in this way. Nor are procedures the only constitution-level interest that may be “traded away” for faster access to more benefits. Privacy rights may be on the chopping block as well. In the United States, the Trump administration proposed reviews of the social media of persons receiving benefits.Footnote 33 The presumption of such review is that a picture of, say, a self-proclaimed depressed person smiling, or a self-proclaimed wheelchair-bound person walking, could alert authorities to potential benefits fraud. And such invasive surveillance could again feed into automated review, triggered by such “suspicious activity” much as “suspicious activity reports” trigger investigations at US fusion centers.
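
A hypothetical sketch of such a pipeline (the labels and rules below are invented for illustration, not drawn from the actual proposal) shows how crude the underlying “contradiction” logic can be:

```python
# A hypothetical sketch of the surveillance-to-flag pipeline described
# above: posts are auto-labeled, labels are matched against the claimed
# condition, and any mismatch becomes a "suspicious activity" flag
# feeding automated review. All rules and labels are invented.

CONTRADICTION_RULES = {
    # claimed condition -> post labels treated as "contradicting" it
    "depression": {"smiling", "party"},
    "mobility_impairment": {"walking", "hiking"},
}

def flag_posts(claimed_condition: str, post_labels: list[set]) -> list[int]:
    """Return the indices of posts an automated reviewer would flag.

    The obvious failure mode: depressed people smile, and many wheelchair
    users can walk short distances, so each flag is weak evidence at
    best -- yet downstream automation may treat it as proof of fraud.
    """
    suspect = CONTRADICTION_RULES.get(claimed_condition, set())
    return [i for i, labels in enumerate(post_labels) if labels & suspect]

posts = [{"smiling", "family"}, {"reading"}, {"party", "birthday"}]
print(flag_posts("depression", posts))  # -> [0, 2]
```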

What is even more troubling about these dynamics is the way in which “preferences” to avoid surveillance or preserve procedural rights might themselves become new data points for suspicion or investigation. A policymaker may wonder about the persons who refuse to accept the new due-process-lite “deal” offered by the state: What have they got to hide? Why are they so eager to preserve access to a judge and the lengthy process that may entail? Do they know some discrediting fact about their own status that we do not, and are they acting accordingly? Reflected in the economics of information as an “adverse selection problem,” this kind of speculative suspicion may become widespread. It may also arise as a byproduct of machine learning: those who refuse to relinquish privacy or procedural rights may, empirically, turn out to be more likely to pose problems for the system, or to face non-renewal of benefits, than those who trade away those rights. Black-boxed flagging systems may silently incorporate such data points into their own calculations.
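
A minimal, hypothetical sketch shows how easily such a signal can enter a flagging model once training data correlate waiver refusal with later disputes; the feature names and weights below are invented:

```python
# A minimal, hypothetical sketch of a black-boxed flagging model that has
# silently absorbed the refusal to waive rights as a risk signal. The
# feature names and weights are invented for illustration.

def fraud_risk_score(features: dict) -> float:
    # Weights as a model might learn them if, in the training data,
    # non-waivers happened to correlate with later disputes.
    weights = {
        "prior_overpayment": 2.0,
        "income_volatility": 1.0,
        "declined_social_media_waiver": 1.5,  # the troubling feature
        "declined_fast_track_deal": 1.2,      # likewise
    }
    return sum(weights[k] * float(v) for k, v in features.items() if k in weights)

applicant = {
    "prior_overpayment": 0,
    "income_volatility": 0.4,
    "declined_social_media_waiver": True,  # merely retaining a right...
    "declined_fast_track_deal": True,
}
score = fraud_risk_score(applicant)
# ...lifts the score from 0.4 to 3.1 and tips the case into investigation.
print(f"risk score {score:.1f} -> {'investigate' if score > 2.0 else 'pay'}")
```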

The “what have you got to hide” rationale leads to a phenomenon termed “unraveling” by economists of information. This dynamic has been extensively analyzed by the legal scholar Scott Peppet. The bottom line of Peppet’s analysis is that every individual decision to reveal something about oneself may also create social circumstances that pressure others to disclose as well. For example, if only a few persons tout their grade point average (GPA) on their resumes, that disclosure may merely be an advantage for them in the job-seeking process. However, once 30 percent, 40 percent, 50 percent, or more of job-seekers include their GPAs, human resources personnel reviewing the applications may wonder about the motives of those who do not. If they assume the worst about non-revealers, it becomes rational for all but the very lowest GPA holders to reveal their GPA. Those at, say, the thirtieth percentile reveal their GPA to avoid being confused with those in the twentieth or tenth percentile, and so on.
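
The dynamic can be made precise with a small simulation, under the standard unraveling assumption that employers impute to non-disclosers the average GPA of those who remain silent:

```python
# A small simulation of the unraveling dynamic, under the standard
# assumption that employers impute to non-disclosers the average GPA of
# whoever remains silent. Anyone above that imputed average gains by
# revealing, so the silent pool shrinks round by round.

def unravel(gpas: list[float]) -> list[set]:
    silent = set(range(len(gpas)))
    rounds = []
    while True:
        imputed = sum(gpas[i] for i in silent) / len(silent)
        revealers = {i for i in silent if gpas[i] > imputed}
        if not revealers:  # fixed point: no one gains by revealing
            return rounds
        silent -= revealers
        rounds.append(revealers)

gpas = [2.0, 2.5, 3.0, 3.3, 3.6, 3.9]
for n, revealed in enumerate(unravel(gpas), 1):
    shown = sorted(gpas[i] for i in revealed)
    print(f"round {n}: students with GPAs {shown} disclose")
# Only the very lowest GPA holder ends up silent -- and by then the
# silence itself is informative.
```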

This model of unraveling parallels similar arguments in feminist theory. For example, Catharine MacKinnon insisted that the “personal is political,” in part because any particular family’s division of labor helped either reinforce or challenge dominant patterns.Footnote 34 A mother may choose to quit work and stay home to raise her children, while her husband works fifty hours a week, and that may be an entirely ethical choice for her family. However, it also helps reinforce patterns of caregiving and expectations in that society which track women into unpaid work and men into paid work. It does not merely accommodate but also promotes gendered patterns of labor.Footnote 35 Like a path through a forest trod ever clearer of debris, it becomes the natural default.

This inevitably social dimension of personal choice also highlights the limits of liberalism in addressing due process trade-offs. Civil libertarians may fight the direct imposition of limitations of procedural or privacy rights by the state. However, “freedom of contract” may itself be framed as a civil liberties issue. If a person in great need wants immediate access to benefits, in exchange for letting the state monitor his social network feed (and automatically terminate benefits if suspect pictures are posted), the bare rhetoric of “freedom” also pulls in favor of permitting this deal. We need a more robust and durable theory of constitutionalism to preempt the problems that may arise here.

3.5 Backstopping the Slippery Slope toward Automated Justice

As the spread of plea bargaining in the United States shows, there is a clear and present danger of the state using its power to make an end-run around protections established in the constitution and guarded by courts. When a prosecutor confronts a defendant with the choice between a potential hundred-year sentence at trial and a plea of five to eight years, the coercion is obvious. By comparison, given the sclerotic slowness of much of the US administrative state, giving up rights in order to accelerate receipt of benefits is likely to seem to many liberals a humane (if tough) compromise.

Nevertheless, scholars should resist this “deal” by further developing and expanding the “unconstitutional conditions” doctrine. Daniel Farber deftly explicates the basis and purpose of the doctrine:

[One] recondite area of legal doctrine [concerns] the constitutionality of requiring waiver of a constitutional right as a condition of receiving some governmental benefit. Under the unconstitutional conditions doctrine, the government is sometimes, but by no means always, blocked from imposing such conditions on grants. This doctrine has long been considered an intellectual and doctrinal swamp. As one recent author has said, “[t]he Supreme Court’s failure to provide coherent guidance on the subject is, alas, legendary.”Footnote 36

Farber gives several concrete examples of the types of waivers that have been allowed over time. “[I]n return for government funding, family planning clinics may lose their right to engage in abortion referrals”; a criminal defendant can trade away the right to a jury trial for a lighter sentence. Farber is generally open to the exercise of this right to trade one’s rights away.Footnote 37 However, even he acknowledges that courts need to block particularly oppressive or manipulative exchanges of rights for other benefits. He offers several rationales for such blockages, including one internal to contract theory and another based on public law grounds.Footnote 38 Each is applicable to many instances of “automated justice.”

Farber’s first normative ground for unconstitutional conditions challenges to waivers of constitutional rights is the classic behavioral economics concern about situations “where asymmetrical information, imperfect rationality, or other flaws make it likely that the bargain will not be in the interests of both parties.”Footnote 39 This rationale applies particularly well to scenarios where black-box algorithms (or secret data) are used.Footnote 40 No one should be permitted to accede to an abbreviated process when the foundations of its decision-making are not available for inspection. The problem of hyperbolic discounting also looms large. A benefits applicant in brutal need of help may not be capable of fully thinking through the implications of trading away due process rights. Bare concern for survival occludes such calculations.
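
A toy calculation with the standard hyperbolic discount function V = A/(1 + kt) illustrates the worry; the dollar values and discount rates below are invented:

```python
# A toy illustration of hyperbolic discounting, V = A / (1 + k*t).
# All amounts, delays, and discount rates are invented for illustration.

def hyperbolic_value(amount: float, delay_months: float, k: float) -> float:
    """Present value of `amount` received `delay_months` from now."""
    return amount / (1 + k * delay_months)

# The $20 extra arrives immediately, so it is not discounted at all.
immediate_cash = hyperbolic_value(20.0, delay_months=0, k=0.5)

# Suppose the option to appeal a wrongful termination is worth $2,000,
# but would only be exercised roughly three years from now.
appeal_moderate_k = hyperbolic_value(2000.0, delay_months=36, k=0.5)
appeal_desperate_k = hyperbolic_value(2000.0, delay_months=36, k=5.0)

print(f"$20 now:                   {immediate_cash:.2f}")
print(f"appeal right, moderate k:  {appeal_moderate_k:.2f}")   # ~105.26
print(f"appeal right, desperate k: {appeal_desperate_k:.2f}")  # ~11.05
# For an applicant in brutal need (high k), the distant appeal right is
# already "worth" less than the immediate $20 -- the bargain looks
# rational at the moment of signing even when it is ruinous in expectation.
```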

The second normative foundation concerns the larger social impact of the rights-waiver bargain. For example, Farber observes, “when the agreement would adversely affect the interests of third parties in some tangible way,” courts should be wary of it. The unraveling dynamic described above offers one example of this type of adverse impact on third parties from rights sacrifices. Though it may not be immediately “tangible,” it has happened in so many other scenarios that it is critical for courts to consider whether particular bargains may pave the way to a future where the “choice” to trade away a right is effectively no choice at all, because the cost of retaining it is a high level of suspicion generated by exercising (or merely retaining the right to exercise) the right.

Under this second ground, Farber also mentions that we may “block exchanges that adversely affect the social meaning of constitutional rights, degrading society’s sense of its connection with personhood.” Here again, a drift toward automated determination of legal rights and duties seems particularly apt for targeting. The right of due process at its core means something more than a bare redetermination by automated systems. Rather, it requires some ability to identify a true human face of the state, as Brennan-Marquez and Henderson’s work (discussed previously) suggests. Soldiers at war may hide their faces, but police do not. We are not at war with the state; rather, it is supposed to be serving us in a humanly recognizable way. The same is true a fortiori of agencies dispensing benefits and other forms of support.

3.6 Conclusion: Writing, Thinking, and Automation in Administrative Processes

Claimants worried about the pressure to sign away rights to due process may have an ally within the administrative state: persons who now hear and decide cases. AI and ML may ease their workload, but could also be a prelude to full automation. Two contrasting cases help illuminate this possibility. In Albathani v. INS (2003), the First Circuit affirmed the Board of Immigration Appeals’ policy of “affirmance without opinion” (AWO) of certain rulings by immigration judges.Footnote 41 Though “the record of the hearing itself could not be reviewed” in the ten minutes which the Board member, on average, took to review each of more than fifty cases on the day in question, the court found it imperative to recognize “workload management devices that acknowledge the reality of high caseloads.” However, in a similar Australian administrative context, a judge ruled against a Minister in part due to the rapid disposition of two cases involving more than seven hundred pages of material. According to the judge, “43 minutes represents an insufficient time for the Minister to have engaged in the active intellectual process which the law required of him.”Footnote 42

In the short run, decision-makers at an agency may prefer the Albathani approach. As Chad Oldfather has observed in his article “Writing, Cognition, and the Nature of the Judicial Function,” unwritten, and even visceral, snap decisions have a place in our legal system.Footnote 43 They are far less tiring to generate than a written record and reasoned elaboration of how the decision-maker applied the law to the facts. However, in the long run, as thought and responsibility for review shrink toward a vanishing point, it becomes difficult for decision-makers to justify their own interposition in the legal process. A “cyberdelegation” to cheaper software may be proper then.Footnote 44

We must connect current debates on the proper role of automation in agencies to requirements for reasoned decision-making. It is probably in administrators’ best interests for courts to actively ensure thoughtful decisions by responsible persons. Otherwise, administrators may ultimately be replaced by the types of software and AI now poised to take over so many other roles performed by humans. The temptation to accelerate, abbreviate, and automate human processes is, all too often, a prelude to destroying them.Footnote 45

Footnotes

1 Frank Pasquale, “A Rule of Persons, Not Machines: The Limits of Legal Automation” (2019) 87 Geo Wash LR 1, 28–29.

2 Stephanie Wykstra, “Government’s Use of Algorithm Serves Up False Fraud Charges” Undark (2020) https://undark.org/2020/06/01/michigan-unemployment-fraud-algorithm.

3 Owen Bowcott, “Court Closures: Sale of 126 Premises Raised Just £34m, Figures Show” The Guardian (London, Mar 8 2018) www.theguardian.com/law/2018/mar/08/court-closures-people-facing-days-travel-to-attend-hearings.

4 Rory van Loo, “Corporation as Courthouse” (2016) 33 Yale J on Reg 547.

5 Frank Pasquale and Glyn Cashwell, “Prediction, Persuasion, and the Jurisprudence of Behaviorism” (2018) 68 U Toronto LJ 63.

6 Julie Cohen, Between Truth and Power (Oxford University Press 2019).

7 Jathan Sadowski and Frank Pasquale, “The Spectrum of Control: A Social Theory of the Smart City” (2015) 20(7) First Monday https://firstmonday.org/ojs/index.php/fm/article/view/5903/4660; Pasquale (Footnote n 1).

8 For one aspect of the factual foundations of this hypothetical, see Social Security Administration, Fiscal Year 2019 Budget Overview (2018) 17–18: “We will study and design successful strategies of our private sector counterparts to determine if a disability adjudicator should access and use social media networks to evaluate disability allegations. Currently, agency adjudicators may use social media information to evaluate a beneficiary’s symptoms only when there is an OIG CDI unit’s Report of Investigation that contains social media data corroborating the investigative findings. Our study will determine whether the further expansion of social media networks in disability determinations will increase program integrity and expedite the identification of fraud.”

9 Frank Pasquale, “Six Horsemen of Irresponsibility” (2019) 79 Maryland LR 105 (discussing exculpatory clauses).

10 For rival definitions of the rule of law, see Pasquale, “A Rule of Persons” (Footnote n 1). The academic discussion of “due process” remains at least as complex as it was in 1977, when the Nomos volume on the topic was published. See, e.g., Charles A. Miller, “The Forest of Due Process Law” in J. Roland Pennock and John W. Chapman (eds), Nomos XVII: Due Process (NYU Press 1977).

11 Pennock, “Introduction” in Pennock and Chapman, Nomos XVII: Due Process (Footnote n 10).

12 419 US 565, 581 (1975). In rare cases, the hearing may wait until the threat posed by the student is contained: “Since the hearing may occur almost immediately following the misconduct, it follows that as a general rule notice and hearing should precede removal of the student from school. We agree with the District Court, however, that there are recurring situations in which prior notice and hearing cannot be insisted upon. Students whose presence poses a continuing danger to persons or property or an ongoing threat of disrupting the academic process may be immediately removed from school. In such cases, the necessary notice and rudimentary hearing should follow.”

13 Henry J. Friendly, “Some Kind of Hearing” (1975) 123 U Pa LR 1267 (listing 11 potential requirements of due process).

14 Kiel Brennan-Marquez and Stephen E. Henderson, “Artificial Intelligence and Role-Reversible Judgment” (2019) 109 J Crim L and Criminology 137.

15 Under the Mathews balancing test, “Identification of the specific dictates of due process generally requires consideration of three distinct factors: First, the private interest that will be affected by the official action; second, the risk of an erroneous deprivation of such interest through the procedures used, and the probable value, if any, of additional or substitute procedural safeguards; and finally, the Government’s interest, including the function involved and the fiscal and administrative burdens that the additional or substitute procedural requirement would entail.” Mathews v. Eldridge 424 US 319, 335 (1976). For an early critique, see Jerry L Mashaw, “The Supreme Court’s Due Process Calculus for Administrative Adjudication in Mathews v. Eldridge: Three Factors in Search of a Theory of Value” (1976) 44 U Chi LR 28.

16 Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant, “Of, for and by the People: The Legal Lacuna of Synthetic Persons” (2017) 25 Artificial Intelligence and Law 273. For a recent suggestion on how to deal with this problem, by one of the co-authors, see Mihailis Diamantis, “Algorithms Acting Badly: A Solution from Corporate Law” SSRN (accessed 5 Mar 2020) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3545436.

17 Dana Remus and Frank S. Levy, “Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law” SSRN (Nov 30 2016) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2701092; Brian S. Haney, “Applied Natural Language Processing for Law Practice” SSRN (Feb 14 2020) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3476351 (“The state-of-the-art in legal question answering technology is far from providing any more valuable insight than a simple Google search … [and] legal Q&A is not a promising application of NLP in law practice.”).

18 Frank A. Pasquale and Glyn Cashwell, “Four Futures of Legal Automation” (2015) 63 UCLA LR Discourse 26.

19 See Cohen (Footnote n 6). See also Karen Yeung, “Algorithmic Regulation: A Critical Interrogation” (2018) 12 Regulation and Governance 505.

20 Ellen Dannin, “Red Tape or Accountability: Privatization, Public-ization, and Public Values” (2005) 15 Cornell JL & Pub Pol’y 111, 143 (“If due process requirements governing eligibility determinations for government-delivered services appear to produce inefficiencies, lifting them entirely through reliance on private service delivery may produce unacceptable inequities.”); Jon D. Michaels, Constitutional Coup: Privatization’s Threat to the American Republic (Harvard University Press 2017).

21 Frank Blechschmidt, “All Alone in Arbitration: AT&T Mobility v. Concepcion and the Substantive Impact of Class Action Waivers” (2012) 160 U Pa LR 541.

22 Danielle Keats Citron, “Technological Due Process” (2008) 85 Wash U LR 1249.

23 Danielle Keats Citron and Frank Pasquale, “The Scored Society: Due Process for Automated Predictions” (2014) 89 Wash LR 1; Frank Pasquale and Danielle Keats Citron, “Promoting Innovation While Preventing Discrimination: Policy Goals for the Scored Society” (2015) 89 Wash LR 1413. See also Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms” (2014) 55 Boston Coll LR 93; Kate Crawford and Jason Schultz, “AI Systems as State Actors” (2019) 119 Colum LR 1941.

24 Rashida Richardson, Jason M. Schultz, and Vincent M. Southerland, “Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems” AI Now Institute (September 2019) https://ainowinstitute.org/litigatingalgorithms-2019-us.html.

25 Monika Zalnieriute, Lyria Bennett Moses and George Williams, “The Rule of Law and Automation of Government Decision-Making” (2019) 82 Modern Law Review 425 (report on automated decision-making). In the UK, see Simon Deakin and Christopher Markou (eds), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (Bloomsbury Professional, forthcoming); Jennifer Cobbe, “The Ethical and Governance Challenges of AI” (Aug 1 2019) www.youtube.com/watch?v=ujZUCSQ1_e8. In continental Europe, see the work of COHUBICOL and scholars at Bocconi and Florence, among many other institutions.

26 Cary Coglianese and David Lehr, “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era” (2017) 105 Geo LJ 1147, 1189–90. Note that such inspections may need to be in-depth, lest automation bias lead to undue reassurance. Harmanpreet Kaur et al., “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning” CHI 2020 Paper (accessed Mar 9 2020) www-personal.umich.edu/~harmank/Papers/CHI2020_Interpretability.pdf (finding “the existence of visualizations and publicly available nature of interpretability tools often leads to over-trust and misuse of these tools”).

27 Andrew D. Selbst and Julia Powles, “Meaningful Information and the Right to Explanation” (2017) 7(4) International Data Privacy Law 233; Gianclaudio Malgieri and Giovanni Comandé, “Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation” (2017) 7(4) International Data Privacy Law 243. But see State v. Loomis 881 NW2d 749 (Wis 2016), cert denied, 137 S Ct 2290 (2017) (“[W]e conclude that if used properly, observing the limitations and cautions set forth herein, a circuit court’s consideration of a COMPAS risk assessment at sentencing does not violate a defendant’s right to due process,” even when aspects of the risk assessment were secret and proprietary.)

28 Electronic Privacy Information Center (EPIC), “Algorithms in the Criminal Justice System: Pre-Trial Risk Assessment Tools” (accessed Mar 6 2020) https://epic.org/algorithmic-transparency/crim-justice/ (“Since the specific formula to determine ‘risk assessment’ is proprietary, defendants are unable to challenge the validity of the results. This may violate a defendant’s right to due process.”).

29 For intellectual history of shifts toward preferring the convenience and reliability of numerical forms of evaluation, see Theodore Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton University Press 1995); William Deringer, Calculated Values: Finance, Politics, and the Quantitative Age (Harvard University Press 2018).

30 Antinore v. State, 371 NYS2d 213 (NY App Div 1975); Gorham v. City of Kansas City, 590 P2d 1051 (Kan 1979); Richard Wallace, Comment, “Union Waiver of Public Employees’ Due Process Rights” (1986) 8 Indus Rel LJ 583; Ann C. Hodges, “The Interplay of Civil Service Law and Collective Bargaining Law in Public Sector Employee Discipline Cases” (1990) 32 Boston Coll LR 95.

31 The problem of “rights sacrifice” is not limited to the examples in this paragraph. See also Dionne L. Koller, “How the United States Government Sacrifices Athletes’ Constitutional Rights in the Pursuit of National Prestige” 2008 BYU LR 1465, for an example of outsourcing decision-making to venues without the robustness of traditional due process protections.

32 Peter J. Walker, “Private Firms Earn £500m from Disability Benefit Assessments” The Guardian (Dec 27 2016) www.theguardian.com/society/2016/dec/27/private-firms-500m-governments-fit-to-work-scheme; Dan Bloom, “Privately-Run DWP Disability Benefit Tests Will Temporarily Stop in New ‘Integrated’ Trial” The Mirror (Mar 2 2020) www.mirror.co.uk/news/politics/privately-run-dwp-disability-benefit-21617594.

33 Robert Pear, “On Disability and on Facebook? Uncle Sam Wants to Watch What You Post” New York Times (Mar 10 2019) www.nytimes.com/2019/03/10/us/politics/social-security-disability-trump-facebook.html; see also Footnote n 8.

34 Catharine A. MacKinnon, Toward a Feminist Theory of the State (Harvard University Press 1989).

35 G. A. Cohen, “Where the Action Is: On the Site of Distributive Justice” (1997) 26(1) Philosophy & Public Affairs 3–30.

36 Daniel A. Farber, “Another View of the Quagmire: Unconstitutional Conditions and Contract Theory” (2006) 33 Fla St LR 913, 914–15.

37 Ibid., 915 (“Most, if not all, constitutional rights can be bartered away in at least some circumstances. This may seem paradoxical, but it should not be: having a right often means being free to decide on what terms to exercise it or not.”).

40 Frank Pasquale, “Secret Algorithms Threaten the Rule of Law” MIT Tech Review (June 1 2017) www.technologyreview.com/s/608011/secret-algorithms-threaten-the-rule-of-law/; Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015).

41 318 F3d 365 (1st Cir 2003).

42 Carrascalao v. Minister for Immigration [2017] FCAFC 107; (2017) 347 ALR 173. For an incisive analysis of this case and the larger issues here, see Will Bateman, “Algorithmic Decision-Making and Legality: Public Law Dimensions” (2019) 93 Australian LJ.

43 Chad M. Oldfather, “Writing, Cognition, and the Nature of the Judicial Function” (2008) 96 Geo LJ 1283.

44 Cary Coglianese and David Lehr, “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era” (2017) 105 Geo LJ 1147.

45 Mark Andrejevic, Automated Media (Routledge 2020).
