
Part II - Automated States

Published online by Cambridge University Press:  16 November 2023

Zofia Bednarz
Affiliation:
University of Sydney
Monika Zalnieriute
Affiliation:
University of New South Wales, Sydney

Money, Power, and AI: Automated Banks and Automated States, pp. 93–170
Publisher: Cambridge University Press
Print publication year: 2023
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

5 The Automated Welfare State: Challenges for Socioeconomic Rights of the Marginalised

Terry Carney
Footnote *

More recently, administrative agencies have introduced ‘new public analytics’ approaches, using data-driven technologies and risk models to reshape how commonplace administrative decisions are produced.Footnote 1

5.1 Introduction

Artificial intelligence (AI) is a broad church. Automated decision-making (ADM), a subset of AI, is the form of technology most commonly encountered in public administration of the social services – a generic term that here covers income support (social security) as well as the funding or provision of services, such as disability support funding under Australia’s National Disability Insurance Scheme (NDIS). ‘New public analytics’ is a label that nicely captures how ADM is deployed as the contemporary form of public administration.Footnote 2

ADM has long been an integral aid to the work of hard-pressed human administrators exercising their delegated social security powers in Centrelink (the specialist service delivery arm of the federal government department called Services Australia). Early digitisation of social security benefits administration not only resulted in considerable efficiency gains but also provided the guide-rails that protected against the more egregious errors and declines in decision-making quality as staffing was drastically reduced in scale and shed higher-level skills and experience. Automation as such has not been the issue; the issue is a more recent one of a breakneck rush into a ‘digital first future’Footnote 3 and the abysmal failure of governance, design, ethics, and legal rectitude associated with the $1.8 billion robodebt catastrophe.Footnote 4 As Murphy J observed in his reasons approving the class action settlement, this was a ‘shameful chapter in the administration of the Commonwealth social security system and a massive failure of public administration [which] should have been obvious to the senior public servants charged with overseeing the Robodebt system and to the responsible Minister at different points’; a verdict echoed by the Royal Commissioner in her July 2023 Report.Footnote 5

ADM is only a technology. Like all new technologies, it attracts extremes of dystopian and utopian evaluative tropes, though a mature assessment often occupies a more ambiguous middle ground.Footnote 6 As with other new technological challenges to law, the answers may call for innovative new approaches rather than extension of existing remedies. Robodebt was ultimately brought to heel by judicial review and class actions, but the much vaunted ‘new administrative law’ machinery of the 1970sFootnote 7 was seriously exposed. Merits review failed because government ‘gamed it’Footnote 8 while the other accountability mechanisms proved toothless.Footnote 9 So radical new thinking is called for.Footnote 10 AI for its part ranges in form from computational aids (or automation) to neural network ‘machine learning’ systems. Even agreed taxonomies of AI are still in development, including recently by the Organisation for Economic Co-operation and Development (OECD), with its four-fold schema of context; data and input; AI model; and task and output.Footnote 11

The focus of this chapter on social security and social services is apt, because Services Australia (as the former Department of Human Services is now called) was envisaged by the Digital Transformation Agency (‘DTA’, formerly ‘Office’) as ‘the first department to roll out intelligent technologies and provide new platforms to citizenry, in accordance with the then DTA’s roadmap for later adoption by other agencies’.Footnote 12 The chapter is concerned with the associated risks of digital transformation in the social services, which take three main forms. First, the risk arising from the heightened vulnerabilities of social services clients.Footnote 13 Second, the risk flowing from inadequate design, consultation, and monitoring of ADM initiatives in the social services.Footnote 14 And finally, the heightened risk associated with particular ADM technologies.

The next section of the chapter (Section 5.2) reviews selected ADM/AI examples in social services in Australia and elsewhere. To draw out differences in the levels of risk of various initiatives, it takes as a loose organising principle Henman’sFootnote 15 observation that the risks and pitfalls of AI increase along a progression – lowest where it involves recognising ‘patterns’, higher where individuals are ‘sorted’ into categories, and highest where AI is used to make ‘predictions’. Section 5.3 discusses the harm inflicted on vulnerable clients of social services when ADM and AI risks are inadequately appreciated, and some options for better regulation and accountability. It questions both the capacity of traditional judicial and administrative machinery to hold AI to account, and the relevance and durability of traditional legal ‘values’ in the face of the transformational power of this technology to subordinate and remake law and social policy to instead reflect AI values and processes.

Restoration of trust in government is advanced in a short conclusion (Section 5.4) as being foundational to risk management in the social services. Trust is at the heart of the argument made for greater caution, more extensive co-design, and enhanced regulatory oversight of ADM in the social services.

5.2 Issues Posed by Automation and ADM

Three issues in particular stand out for social services in Australia. First, the comprehensibility or otherwise of the system for citizens engaging with it. Second, the compatibility or otherwise of ADM with case management. Finally, the risks and benefits of ‘predictive’ ADM in the social services.

5.2.1 Comprehensibility Issues
5.2.1.1 Early Centrelink Adoption of Digitisation and Decision Aids

Prior to robodebt, Centrelink clients’ concerns mainly centred on the intelligibility of digitised social security records and communications, and the ability to understand the automation of rate calculations or the scoring of eligibility tools. The ADEX and MultiCal systems for debt calculations generate difficult-to-comprehend and acronym-laden print-outs of the arithmetic, because these systems were designed for convenience of internal data entry rather than ease of consumer comprehension.

The combination of deeply unintelligible consumer documentation and time-poor administrators often leaves too little time to detect less obvious keying or other errors. Internal review officer reconsiderations instead often focus on very basic sources of error such as couple status.Footnote 16 While external merits tribunal members do have the skills and expertise to penetrate the fog,Footnote 17 this rectifies only a very small proportion of such errors (only 0.05 per cent in the case of robodebts), and only for those with the social capital or resources to pursue their concern.

Lack of transparency in communications with run-of-the-mill social security clients remains problematic for want of investment in ‘public facing’ front-end interfaces (or correspondence templates) that would convert an almost 100 per cent digital environment into understandable information for the public. Instead, new investment was initially directed to pilots to classify and file supporting documents for claims processing.Footnote 18 Only in recent years were expressions of interest sought for general customer experience upgrades of the MyGov portal,Footnote 19 reinforced by the allocation of $200 million in the 2021–2022 budget for enhancements to provide a ‘simpler and more tailored experience for Australians based on their preferences and interactions’, including virtual assistants or chatbots.Footnote 20

Comprehensibility of debt calculations and other routine, high-incidence transactions to ordinary citizens surely should be the first reform priority. Transparency to citizens certainly hinges on it. Availability of accurate information to recipients of ADM-based welfare is fundamental to individual due process. This was demonstrated by the contrast between Australia’s failure to explain adequately the basis of yearly income variations under its unlawful ‘robodebt’ calculations and the way case officers in the Swedish student welfare program provided explanations and an immediate opportunity to rectify inaccurate information.Footnote 21 Even review bodies such as the Administrative Appeals Tribunal (AAT) would benefit from greater comprehensibility of the basis of decisions, freeing time to concentrate on substantive issues rather than picking their way through the morass of computer print-outs and multiple acronyms simply to create an accessible narrative of the issues in dispute.Footnote 22

5.2.2 ADM Case Management Constraints
5.2.2.1 The (Aborted) NDIS Chatbot

The NDIS is seen as a pathbreaker for digitisation in disability services.Footnote 23 But the National Disability Insurance Agency (NDIA) was obliged to abort the roll-out of its sophisticated chatbot, Nadia.

Nadia was designed to assume responsibility for aspects of client interaction and case management. The chatbot was built as a machine learning cognitive computing interface, involving ‘data mining and pattern recognition to interact with humans by means of natural language processing’.Footnote 24 It was to have an ability to read and adjust to the emotions being conveyed, including by lightening the interaction, for example by referencing a person’s favourite sporting team. However, it did not proceed beyond piloting. As a machine learning system it needed ongoing access to a large training set of actual NDIS clients to develop and refine its accuracy. Rolling it out was correctly assessed as carrying too great ‘a potential risk, as one incorrect decision may disrupt a person’s ability to live a normal life’.Footnote 25

This risk of error is a serious one, not only for the person affected but also for public confidence in administration. Given the sophistication required of ‘human’ chatbots, it must presently be doubted whether a sufficient standard of performance and avoidance of risk can be attained for vulnerable social security or disability clients. As Park and HumphreyFootnote 26 suggest, the ability to give human-like cues to end users means that the chatbot ‘need[s] to be versatile and adaptable to various conditions, including language, personality, communication style and limits to physical and mental capacities’. This inability of ADM to bridge the ‘empathy gap’ is why it is so strongly argued that such administrative tasks should remain in the hands of human administrators.Footnote 27 Even smartphone digital reporting proved highly problematic for vulnerable income security clients such as young single parents under the (now abolished) ParentsNext program.Footnote 28 So it was surely hoping too much to expect better outcomes in the much more challenging NDIS case management environment.

Such issues are not confined to Australia or to case management software of course. Ontario’s ‘audit trail’ welfare management software, deployed to curb a perceived problem of over-generosity, was found to have ‘decentred’ or displaced caseworkers from their previous role as authoritative legal decision-makers.Footnote 29 The caseworkers responded by engaging in complicated work-arounds to regain much of their former professional discretion. As Raso concluded, ‘[s]oftware that requires individuals to fit into pre-set menu options may never be sophisticated enough to deliver complex social benefits to a population as diverse as [Ontario’s welfare] recipients’.Footnote 30

A US federal requirement to automate verification of Medicaid remuneration of disability caregivers provides yet another example. The state of Arkansas adopted an inflexibly designed and user-unfriendly service app (with optional geo-location monitoring). This proved especially problematic for clients receiving ‘self-directed’ care: care workers were unable to step outside the property boundaries on an errand, or to accompany the person, without triggering a ‘breach’ of the service being provided. Unlike Virginia, Arkansas had neglected to take advantage of the ability to exempt self-directed care or to remove problematic optional elements.Footnote 31

5.2.2.2 NDIA’s Aborted ADM Assessment and Planning Reforms

In 2021 public attention was drawn to an NDIA proposal to replace caseworker evaluations with objective rating ‘scores’ when assessing eligibility for the NDIS, scores which were also to serve as a basis for providing indicative packages of funding support. This was shelved on 9 July 2021,Footnote 32 at least in that form.Footnote 33 The measure was designed to address inequities around access and the size of packages: the stated policy objective was to improve equity of access between different disability groups and between those with and those without access to a good portfolio of recent medical reports, as well as to reduce staffing overheads and processing time.Footnote 34 Subjective assessments of applicant-provided medical reports were to have been replaced by objective ‘scores’ from a suite of functional incapacity ‘tools’. Rating scores were designed not only to improve the consistency of NDIS access decisions, but also to generate one of 400 personas/presumptive budgets.Footnote 35

The rating tool and eligibility leg of this reform was not true ADM. That aspect mirrored the historical reform trajectory for the Disability Support Pension (DSP) and Carer Allowance/Payments (CA/CP). Originally, eligibility for the DSP (then called the Invalid Pension, IP) was based on showing that an applicant experienced an actual, real-life 85 per cent ‘incapacity for work’.Footnote 36 In the 1990s this was transformed from an enquiry about the real human applicant into an abstraction – assessing the theoretical ability or otherwise of people with that class of functional impairment to perform any job anywhere in the country – and requiring minimum scores under impairment tables rating functional impairment (leaving extremely narrow fields/issues for subjective classification of severity). These and associated changes significantly reduced the numbers found eligible for these payments.Footnote 37 Similar changes were made for CA and CP payments. The proposed NDIS assessment tools, distilled from a suite of existing measures and administered by independent assessors (as for the DSP), followed the disability payment reform pathway. The risks here were twofold: first, that the tool would not adequately reflect the legislative test; second, that the scoring basis would not be transparent or meaningful to clients of the NDIS and their families and advisers.Footnote 38

The reform did however have a genuine ADM component in its proposed case planning function. The assessment tool was intended not only to determine eligibility for NDIS access but also to then generate one of 400 different ‘template’ indicative funding packages. This leg of the proposed reform was criticised as robo-planning which would result in lower rates of eligibility, smaller and less appropriate packages of support, and loss of individualisation (including loss of personal knowledge reflected in medical reports no longer to be part of the assessment) along with a substantial reduction of human engagement with case planners.Footnote 39

This was a true deployment of ADM in the social services, highlighting Henman’s middle-range risks around ADM categorisation of citizens, as well as risks from devaluing professional casework skills, as further elaborated in the next section.

5.2.3 Predictive ADM

Risks associated with ADM are arguably most evident when it is predictive in character.Footnote 40 This is illustrated by the role predictive tools play in determining the level and adequacy of employment services for the unemployed in Australia,Footnote 41 and the way compliance with the allocated program of assistance to gain work is tied to retention of eligibility for or the rate of unemployment payments.Footnote 42 The accuracy or otherwise of the prediction is key to both experiences.

5.2.3.1 Predictive ADM Tools in Employment Services and Social Security

Predictive ADM tools to identify those at greatest risk of long-term unemployment operate by allocating people to homogeneous bands according to predictors of unemployment duration (‘statistical profiling’). These statistical profiling predictions are much more accurate than random allocation, but still misclassify some individuals. They also fail to identify or account for the causal reasons for membership of risk bands.Footnote 43 Human assessments are also liable to misclassify, but professional caseworkers lay claim to richer understandings of causal pathways, which may or may not be borne out in practice.

Predictive tools are constructed in two main ways. As an early pioneer, Australia’s Job Seeker Classification Instrument (JSCI) was developed and subsequently adjusted using logistic regression.Footnote 44 Other international designs are constructed using machine learning which interrogates very large data sets to achieve higher accuracy of prediction, as in the Flemish tool.Footnote 45 As with all ADM predictive tools, reflection and reinforcement of bias is an issue: ‘[b]y definition, high accuracy models trained on historical data to satisfy a bias preserving metric will often replicate the bias present in their training data’.Footnote 46
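To make the mechanics concrete, the following is a minimal illustrative sketch (in Python) of statistical profiling as described above: a logistic regression fitted to historical jobseeker records and used to stream people into risk bands. It is a toy with invented, synthetic variables, not the actual JSCI or Flemish instrument.

# Toy statistical profiling sketch: logistic regression over hypothetical
# jobseeker attributes, NOT the actual JSCI or Flemish tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic 'historical' data: four illustrative predictors and whether the
# person in fact remained unemployed for more than 12 months.
X = np.column_stack([
    rng.integers(18, 65, n),   # age
    rng.integers(0, 2, n),     # minority status (0/1)
    rng.integers(0, 30, n),    # years with the single previous employer
    rng.integers(0, 4, n),     # highest education band
])
y = rng.integers(0, 2, n)      # 1 = long-term unemployed (synthetic label)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Allocate jobseekers to homogeneous risk bands by predicted probability,
# mirroring how profiling tools stream people into service levels.
risk = model.predict_proba(X)[:, 1]
bands = np.digitize(risk, bins=[0.33, 0.66])   # 0 = low, 1 = medium, 2 = high risk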

While there is a large literature on the merits or otherwise of possible solutions for unacceptable bias and discrimination in AI, statistical profiling poses its own quite nuanced ethical challenges. Membership of a racial minority, for instance, is associated with longer durations of unemployment. But the contribution of minority status to allocation to a statistical profile band can be either bitter or sweet. Sweet, if placement in that band opens a door to voluntarily obtaining access to employment services and training designed to counteract that disadvantage (positive discrimination). Bitter, if band placement leads to the involuntary imposition of requirements to participate in potentially punitive, victim-blaming programs such as work for the dole. This risk dilemma is real. Thus, a study of the Flemish instrument found that jobseekers not born in the country were 2.6 times more likely to be wrongly classified as at high risk of long-term unemployment.Footnote 47
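As a purely illustrative sketch, a disparity of the kind reported for the Flemish instrument can be quantified by comparing false positive rates – the share of people wrongly classified as high risk – across groups. The arrays below are invented placeholders, not data from any actual tool.

# Hypothetical illustration: group-wise false positive rates for a profiling
# classifier. A foreign/native ratio of about 2.6 would mirror the disparity
# reported for the Flemish tool.
import numpy as np

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])   # 1 = actually long-term unemployed
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 0])   # 1 = classified as high risk
group = np.array(["native", "foreign", "native", "foreign", "native",
                  "native", "foreign", "foreign", "native", "foreign"])

def false_positive_rate(truth, pred):
    negatives = truth == 0
    return float((pred[negatives] == 1).mean()) if negatives.any() else float("nan")

fpr = {g: false_positive_rate(y_true[group == g], y_pred[group == g])
       for g in np.unique(group)}
print(fpr, fpr["foreign"] / fpr["native"])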

Nor is the issue confined to the more obvious variables. It arises even with superficially more benign correlations, such as the disadvantage actually suffered from having a long duration of employment for a single employer prior to becoming unemployed. Its inclusion in the predictive algorithm is more acceptable if this results in accessing programs to help counter the disadvantage, such as projecting the human capital benefits of past loyalty to the previous employer compared to likely future sunk costs associated with other applicants with more varied employment histories. But its inclusion is ethically more problematic if it only exposes the person to greater likelihood of incurring income support or other sanctions. Other examples of predictive legal analytics also show that the normative aspect of the law is often supplanted by causal inference drawn from a data set, which may or may not reflect the relevant legal norms.Footnote 48

To a considerable degree, the contribution of statistical profiling hinges on the way it is used. The lack of engagement with causal factors and the arbitrariness or bias of some variables constituting the algorithm is magnified where caseworkers are left with little scope for overriding the initial band allocation. This is the case with Australia’s JSCI, a risk compounded by lack of transparency of the algorithm’s methodology.Footnote 49 These risks are lessened in employment services systems which leave caseworkers in ultimate control, drawing on assistance from a profiling tool. That is the way the tools are used in Germany, Switzerland, Greece, and Slovenia.Footnote 50

This analysis of the risks associated with predictive tools in employment services is consistent with findings from other areas of law. For example, decisions grounded in pre-established facts, such as aspects of aggravating and mitigating criminal sentencing considerations, may be more amenable to computation, overcoming perceived deficiencies of ‘instinctive synthesis’ sentencing law.Footnote 51 Distinguishing between administrative decisions as either rule-based or discretionary may also prove useful, because ADM applied to discretionary decisions may result in a failure to lawfully exercise the discretion.Footnote 52 Discretionary tasks high in complexity and uncertainty arguably fare better under human supervision and responsibility, such as that of a caseworker.Footnote 53

For its part, Australia mitigates the risk of JSCI predictive errors in two ways. First, an employment services assessment may be conducted by a contracted health or allied health professional in certain circumstances – where it is shown that there are special barriers to employment, a significant change of circumstances, or other indications of barriers to employment participation.Footnote 54 The weakness of this is that it occurs only in exceptional circumstances, rather than as part of routine caseworker fine-tuning of the overly crude and harsh streaming recommendations resulting from application of the JSCI. So it is essentially confined to operating as a vulnerability modifier.

Second, a new payment system has been introduced to break the overly rigid nexus between the JSCI-determined stream and the level of remuneration paid to employment providers for a person in that stream. The old rigid payment regime was exposed as perverse both by academic researchFootnote 55 and by the government’s own McPhee Report.Footnote 56 Rather than encourage investment in assisting people with more complex needs, it encouraged the parking or neglect of such cases in order to concentrate on obtaining greater rewards from assisting those needing little if any help to return to work. The New Enhanced Services model decouples service levels and rates of payment to providers for achieved outcomes, ‘which provides some additional flexibility, so a participant with a High JSCI but no non-vocational barriers can be serviced in Tier 1 but still attract the higher outcome payments’.Footnote 57

The obvious question is why Australia’s employment services structure and JSCI instrument survived with so little refinement to its fundamentals for nearly two decades after risks were first raised.Footnote 58 Davidson argues convincingly that path-dependence and cheapness were two main reasons why it took until the McPhee Report to effect systemic change.Footnote 59 It is suggested here that another part of the answer lies in a lack of appetite for and difficulty of realising processes of co-design with welfare clients and stakeholders. Certainly, recent experience with co-design in Denmark demonstrates that it is possible to construct a more sophisticated and balanced system which avoids the worst of the adverse effects of statistical profiling and welfare conditionality.Footnote 60 The case for co-production with users is not new to Australian public administration.Footnote 61 Co-design is particularly favoured where risks of discrimination and exclusion are present.Footnote 62

In theory co-design of employment services should also be possible in Australia, but the history of top-down and very harsh ‘work-first’ welfare-to-work policiesFootnote 63 suggests that its realisation is unlikely.

5.3 Responding to the ‘Power’ of AI

[I]n our current digital society, there are three phenomena that simultaneously connect and disconnect citizens from government and impede millions of individuals from exercising their rights on equal terms: bureaucracy, technology, and power asymmetries.Footnote 64

ADM and AI technology in the social services carries a potential both to harm participants and to radically transform services by compressing the range of social policy options considered in program design, much in the same way these technologies can change the bases of legal accountability (Section 5.3.2).

The power of poorly conceived ADM and AI to inflict unacceptable harm on vulnerable citizens reliant on social services is well established. The challenge here lies in finding ways of mitigating that risk, as discussed below.

5.3.1 The Vulnerability Challenge in Social Services

Common to assessing all of these examples of automation and artificial intelligence in welfare is the impact on vulnerable clients. That vulnerability cannot be overstated. As Murphy J wrote in approving the robodebt class action settlement in Prygodicz:

It is fundamental that before the state asserts that its citizens have a legal obligation to pay a debt to it, and before it recovers those debts, the debts have a proper basis in law. The group of Australians who, from time to time, find themselves in need of support through the provision of social security benefits is broad and includes many who are marginalised or vulnerable and ill-equipped to properly understand or to challenge the basis of the asserted debts so as to protect their own legal rights. Having regard to that, and the profound asymmetry in resources, capacity and information that existed between them and the Commonwealth, it is self-evident that before the Commonwealth raised, demanded and recovered asserted social security debts, it ought to have ensured that it had a proper legal basis to do so. The proceeding revealed that the Commonwealth completely failed in fulfilling that obligation.Footnote 65

The pain and sufferingFootnote 66 from the abysmal failure of governance, ethics, and legal rectitude in the $1.8 billion robodebt catastropheFootnote 67 was ultimately brought to heel by judicial review and class actions. Yet as already mentioned, the much vaunted ‘new administrative law’ remedial machinery of the 1970s was seriously exposed. Merits review failed because government ‘gamed it’ by failing to further appeal over 200 adverse rulings that would have made the issue public.Footnote 68 Other accountability mechanisms also proved toothless.Footnote 69 Holding ADM to account through judicial remedies is rarely viable, though very powerful when it is apt.Footnote 70 Judicial review is costly to mount, gameable, and confined to those risks stemming from clear illegality. Robodebt was a superb but very rare exception to the rule, despite the November 2019 settlement victory in the AmatoFootnote 71 test case action, and the sizeable class action compensation settlement subsequently achieved in Prygodicz.Footnote 72 A test case launched prior to Amato was subject to government litigational gaming: that challenge was halted by the simple step of a very belated exercise of the statutory power to waive the ‘debt’. The same fate could have befallen Amato had the then government been less stubborn in refusing to pay interest on the waived debt.Footnote 73 For its part, the reasons approving the Prygodicz settlement make clear how remote is the prospect of establishing a government duty of care in negligence, much less proof of breach of any such duty.Footnote 74

Administrative law redress, whether by judicial or merits review, is predicated on an ‘after-the-event’ interrogation of the process of decision-making or the lawfulness (and, in the case of tribunal review, the merits) of the reasons for decisions, and is further undermined by the character of ADM and AI decision-making. This is because neither the decision-making processes followed nor the thinned-down or non-existent reasons generated by the ‘new technological analytics’Footnote 75 are sufficiently amenable to traditional doctrine.Footnote 76 For example, bias arising from the data and code underlying ADM, together with biases arising from any human deference to automated outputs, poses evidentiary challenges which may not be capable of being satisfied for the purpose of meeting the requirements of the rule against bias in judicial review.Footnote 77 The ability to bend traditional administrative law principles of due process, accountability, and proportionality to remedy the concerns posed by ADM thus appears to be quite limited.Footnote 78

Outranking all of these concerns, however, is that neither merits review nor judicial review is designed to redress systemic concerns as distinct from individual grievances. So radical new thinking is called for,Footnote 79 such as a greater focus on governmentality approaches to accountability.Footnote 80 To understand the gaps in legal and institutional frameworks, the use of ADM systems in administrative settings must be reviewed as a whole – from the procurement of data and the design of ADM systems to their deployment.Footnote 81 Systemic grievances are not simply the result of ‘mathematical flaws’ in digital systems; they are also the product of accountability deficiencies within the bureaucracy and of structural injustice.Footnote 82

One possible new direction is through ADM impact statement processes designed to help prevent systemic grievances. An example is Canada’s Directive, modelled on the GDPR and largely mimicking administrative law values.Footnote 83 While this certainly has merit, it is open to critique as paying but lip service to risk prevention because it relies on industry collaboration and thus has potential for industry ‘capture’ or other pressures.Footnote 84 Other alternatives include a mixture of ex ante and ex post oversight in the form of an oversight board within the administrative agency to circumvent the barrier of a costly judicial challenge,Footnote 85 and the crafting of sector-specific legal mechanisms.Footnote 86

There is also theoretical appeal in the more radical idea of turning to a governance frame that incorporates administrative accountability norms as its governance standard. The best known of these are Mashaw’s trinity of bureaucratic rationality, moral judgement, and professional treatment, and Adler’s additions of managerialist, consumerist, and market logics.Footnote 87

However, these innovative ideas presently lack remedial purchase. Incorporation of tools such as broadened impact assessments may give these norms and values some operational purchase, but the limitations of impact assessment would still remain.Footnote 88 A research impact framework for AI framed around concepts of public value and social value may hold greater promise.Footnote 89

Self-regulation against industry ethics codes, or against codes co-authored with regulators, has also proven to be a weak reed. Such codes are easily ‘subsumed by the business logics inherent in the technology companies that seek to self-impose ethical codes’,Footnote 90 or become a form of ‘ethics washing’.Footnote 91 As Croft and van RijswijkFootnote 92 detailed for industry behemoths such as Google, this inability to curb corporate power arises because that power is systemic. As James and Whelan recently concluded:

Codifying ethical approaches might result in better outcomes, but this still ignores the structural contexts in which AI is implemented. AI inevitably operates within powerful institutional systems, being applied to the ‘problems’ identified by those systems. Digital transformation reinforces and codifies neoliberal agendas, limiting capacities for expression, transparency, negotiation, democratic oversight and contestation … This can be demonstrated by juxtaposing the AI ethics discourse in Australia with how AI has been implemented in social welfare.Footnote 93

The Australian Human Rights Commission (AHRC) Report also delivered underwhelming support,Footnote 94 though academic work continues to boost the contribution to be made by ethics-based audits.Footnote 95

Consideration of how to mitigate risk of harm to vulnerable recipients of the social services cannot be divorced from meta-level impacts of ADM and AI technology on the character and design of law and social programs, as discussed below.

5.3.2 The Transformational Power of AI to Shape Social Policy and Law

Lawyers and social policy designers are rather accustomed to calling the shots in terms of setting normative and procedural standards of accountability (law) and formulating optimally appropriate social service programs (social policy). Digitisation, however, not only transforms the way individual citizens engage with the state and experience state power at the micro-level, but also transforms the nature of government services and modes of governance. The second of these, the transformation of governance by ADM and AI technologies,Footnote 96 is perhaps better known than the first.

Public law scholars have begun to recognise that it may not simply remain a question of how to tame ADM by rendering it accountable to traditional administrative law standards such as those of transparency, fairness, and merits review, but rather of how to avoid those values being supplanted by ADM’s values and ways of thinking. The concern is that ADM remakes law in its technological image rather than the reverse of making ADM conform to the paradigms of the law.Footnote 97

The same contest between ADM and existing paradigms is evident in other domains of government services. Contemporary advances in the design of social services, for instance, favour ideas such as personalisation, social investment, and holistic rather than fragmented services.Footnote 98 But each of these policy goals is in tension with ADM’s design logic of homogenisation and standardisation.Footnote 99 Personalisation of disability services through case planning meetings and devolution of responsibility for individual budgets to clients, in place of top-down imposition of standard packages of services, is one example of that tension, as recently exemplified in the NDIS.Footnote 100 The mid-2022 roll-out of algorithmic online self-management of employment services (PEPs) to all except complex or more vulnerable clients is another,Footnote 101 despite the introduction of a requirement for a digital protection framework under s 159A(7) and (9) of the Social Security Legislation Amendment (Streamlined Participation Requirements and Other Measures) Act 2022.

Initiatives across the health and justice systems provide two other settings where the same tension arises: ‘social prescribing’, designed to address the contribution of socioeconomic disadvantage to disability and health issues, such as by coordinating income support and health services,Footnote 102 and the integration of human services and justice systems through justice reinvestment or therapeutic ‘problem-solving’ courts.Footnote 103 In the case of social prescribing, the rigid ‘quantification’ of eligibility criteria for access to the disability pension, together with the strict segregation of social security and health services, compounds the issue. In the second instance, predictive criminal justice risk profiling tools threaten to undermine the central rationale of individualisation and flexibility in justice reinvestment interventions designed to build capacity and avoid further progression into criminality.Footnote 104

What is able to be built in social policy terms depends in no small part on the available materials from which it is to be constructed. Rule-based materials such as the algorithms and mechanisms of ADM are unsuited to building social programs reliant on the exercise of subjective discretionary choices. Just as the fiscal objective of reducing staff overheads to a minimum led to enactment of rules in place of former discretionary powers in Australian social security law,Footnote 105 government policies such as ‘digital first’ inexorably lead to push back against policies of individualisation and accommodation of complexity. Those program attributes call for expensive professional skills of human caseworkers or the less pricey discretionary judgments of human case administrators. ADM is far less costly than either, so in light of the long reign of neoliberal forms of governance,Footnote 106 it is unsurprising that social protection is being built with increasing amounts of ADM and AI,Footnote 107 and consequently is sculpted more in the image of that technology than of supposedly favoured welfare policies of personalisationFootnote 108 or those of social investment.Footnote 109

There are many possible longer-run manifestations should ADM values and interests gain the upper hand over traditional legal values. One risk is that ADM systems will create subtle behavioural biases in human decision-making,Footnote 110 changing the structural environment of decision-making. For example, the facility of ADM to ascertain and process facts may lead to lesser scrutiny of the veracity of those facts than would be the case in human decision-making. Abdicating the establishment of fact, and the value judgements underlying fact-finding, to ADM substitutes digital authority for human authority.Footnote 111 This raises questions of accountability where human actors develop automation bias as a result of failing to question outputs generated by an automated system.Footnote 112

Other manifestations are more insidious, including entrenchment of an assumption that data-driven decision-making is inherently neutral and objective rather than subjective and contested, or a disregard of the surveillance capitalism critique of business practices that procure and commodify citizen data for profit.Footnote 113 This criticism has been levelled at Nordic governmental digitalisation initiatives. The Danish digital welfare state, for example, has drawn academic scrutiny for an apparently immutable belief that data processing initiatives will create a more socially responsible public sector, overlooking the consequences of extensive data profiling using non-traditional sources such as information from individuals’ social networking profiles. The public sector’s embrace of private sector strategies of controlling consumers through data suggests a propensity for rule of law breaches through data maximisation, invasive surveillance, and eventual citizen disempowerment.Footnote 114

This is not the place to do other than set down a risk marker about the way ADM and AI may change both the architecture and values of the law and the very policy design of social service programs. That resculpting may be dystopian (less accommodating of human difference and discretion) or utopian in character (less susceptible to chance variability and irrelevant influences known as decisional ‘noise’). The reciprocal contest between the power of AI technology on the one hand and law and social policy on the other is, however, a real and present concern, as the NDIS example demonstrated.

5.4 Towards AI Trust and Empathy for Ordinary Citizens

Administration of social security payments and the crafting of reasonable and necessary supports under the NDIS are quintessentially examples of how law and government administration impact ‘ordinary’ citizens. As Raso has observed:

As public law scholars, we must evaluate how legality or governance functions within administrative institutions in everyday and effectively final decisions. As we develop theories of how it ought to function, we must interrogate how decision making is functioning.Footnote 115

It is suggested here that the principal impression to be drawn from this review of Australia’s recent experience of rolling out ADM in Raso’s ‘everyday’ domain of the ordinary citizen is one of failure of government administration. The history so far of Australian automation of welfare – most egregiously the robodebt debacle – demonstrates both a lack of government understanding that the old ways of policy-making are no longer appropriate and a serious erosion of public trust in government. Automation of welfare in Australia has not only imposed considerable harm on the vulnerable,Footnote 116 but has also destroyed an essential trust relationship between citizens and government.Footnote 117

Restoring trust is critical. Trust is one of the five overarching themes identified for consultation in February 2022 by the PM&C’s Digital Technology Taskforce and in the AHRC’s final report.Footnote 118 Restoration of trust in the NDIS was also one of the main themes of the recent Joint Parliamentary Committee report on independent assessments.Footnote 119 Consequently, if future automation is to retain fidelity to values of transparency, quality, and user interests, it is imperative that government engage creatively with the welfare community to develop the required innovative new procedures. A commitment to genuine co-design and collaborative fine-tuning of automation initiatives should be a non-negotiable first step, as stressed for the NDIS.Footnote 120 Ensuring empathy of government/citizen dealings is another.

Writing in Chapter 9 about the potential for the automated state, if wisely crafted and monitored, to realise administrative law values, Cary Coglianese observes that

[i]n an increasingly automated state, administrative law will need to find ways to encourage agencies to ensure that members of the public will continue to have opportunities to engage with humans, express their voices, and receive acknowledgment of their predicaments. The automated state will, in short, also need to be an empathic state.

He warns that ‘[t]o build public trust in an automated state, government authorities will need to ensure that members of the public still feel a human connection’. This calls for a creative new administrative vision able to honour human connection, because ‘[i]t is that human quality of empathy that should lead the administrative law of procedural due process to move beyond just its current emphasis on reducing errors and lowering costs’. That vision must also be one that overcomes exclusion of the marginalised and vulnerable.Footnote 121 Another contribution to building trust is to be more critical of the push for automated administration in the first place. An American ‘crisis of legitimacy’ in administrative agencies has been attributed to the way uncritical adoption of ADM leads to the loss of the very attributes that justify their existence, such as individualisation.Footnote 122 Framing the NDIS independent assessor episode in this way demonstrated a similar potential deterioration of citizen trust and legitimacy.

Building trust and empathy in social service administration and program design requires fully embracing not only the mainstream human condition but also the ‘outliers’ that AI standardisation excludes.Footnote 123 At the program design level this calls, at a minimum, for rejection of any AI or ADM that removes or restricts otherwise appropriate elements of personalisation, subjective human judgement, or the exercise of discretion relevant to advancing agreed social policy goals. This extends to AI outside the program itself, including sensitivity to indirect exclusion arising from the discriminatory impacts of poorly designed technological tools such as smartphones.Footnote 124

Half a century ago in the pre-ADM 1970s, the ‘new administrative law’ of merits review and oversight bodies was touted as the way to cultivate citizens’ trust in government administration and provide access to administrative justice for the ordinary citizen, though even then the shortfall of preventive avenues was recognised.Footnote 125 Overcoming the ability of government to game first-tier AAT review by keeping adverse rulings secret, and arming the tribunal with ways of raising systemic issues (such as a form of ‘administrative class action’), might go a small way to restoring trust and access to justice. But much more creative thinking and work is still to be done at the level of dealing with individual grievances as well.Footnote 126

In short, this chapter suggests that the conversation about the ADM implications for the socioeconomic rights of marginalised citizens in the social services has barely begun. Few remedies and answers currently exist either for program design or for individual welfare administration.

6 A New ‘Machinery of Government’? The Automation of Administrative Decision-Making

Paul Miller
Footnote *
6.1 Introduction: ADM and the Machinery of Government

The machinery of government comprises those structures, processes, and people that make up departments and agencies, and through which governments perform their functions. The term is perhaps best known in the context of ‘MoG changes’ – the frequent adjustments made to the way departments and agencies are structured, responsibilities and staff are grouped and managed,Footnote 1 and agencies are named.Footnote 2 For at least the last half century, the defining characteristic of the machinery of government has been public officials (the ‘bureaucrats’), structured into branches, divisions, and departments, operating pursuant to delegations, policies, and procedures, and providing advice, making and implementing decisions, and delivering services for and on behalf of the government. Characterising government as a ‘machine’ is a metaphor and, like the term ‘bureaucracy’, can convey a somewhat pejorative connotation: machines (even ‘well-oiled machines’) are cold, unfeeling, mechanical things that operate according to the dictates of their fixed internal rules and logic.

This chapter examines a change to the machinery of government that is increasingly permeating government structures and processes – the adoption of automated decision-making (ADM) tools to assist, augment, and, in some cases, replace human decision-makers. The ‘machinery of government’ metaphor has been extended to frame the discussion of this topic for three reasons. First, it more clearly focuses attention on the entire system that underpins any government administrative decision, and in which digital technology may play some role. Second, rather than assuming that new technologies must – because they are new – be unregulated, it situates new technology within the machinery of government, so that (at least as a starting point) the well-established laws and principles that already control and regulate that machinery can be analysed. Finally, this chapter aims to consider whether there might be lessons to be learnt from the past, when other significant changes have taken place in the machinery of government. For example, do the changes now taking place with the increasing digitisation of government decision-making suggest that we should consider a deeper examination and reform of our mechanisms of administrative review, in a similar way to what happened in Australia in the 1970s and 1980s in response to the upheavals then taking place?

This chapter outlines some of the key themes addressed in detail in the NSW Ombudsman’s 2021 special report to the NSW Parliament, titled ‘The new machinery of government: using machine technology in administrative decision-making’Footnote 3 (Machine Technology report). It provides brief context on the need for visibility of government use of ADM tools and the role of the Ombudsman, key issues at the intersection between automation and administrative law and practice, and broad considerations for agencies when designing and implementing ADM tools to support the exercise of statutory functions. The chapter concludes by asking whether the rise of ADM tools may also warrant a reconsideration of existing legal frameworks and institutional arrangements.

6.2 Context
6.2.1 The New Digital Age

We have entered a digital age, and it is widely accepted that governments must transform themselves accordingly.Footnote 4 In this context, government digital strategies often refer to a ‘digital transformation’ and the need for government to become ‘digital by design’ and ‘digital by default’.Footnote 5 It is unsurprising then that digital innovation has also begun to permeate the machinery of government, changing the ways public officials make decisions and exercise powers granted to them by Parliament through legislation.Footnote 6 ADM involves a broad cluster of current and future systems and processes that, once developed, run with limited or no human involvement, and whose output can be used to assist or even displace human administrative decision-making.Footnote 7 The technology ranges in complexity from relatively rudimentary to extremely sophisticated.

6.2.2 Government Use of ADM Tools

The use of simpler forms of ADM tools in public sector decision-making is not new. However, what is changing is the power, complexity, scale, and prevalence of ADM tools, and the extent to which they are increasingly replacing processes that have, up to now, been the exclusive domain of human decision-making. The Machine Technology report includes case studies of New South Wales (NSW) government agencies using AI and other ADM tools in administrative decision-making functions, including fines enforcement, child protection, and driver license suspensions. Such tools are also used in areas such as policing (at NSW State level) and taxation, social services and immigration (at Australian Commonwealth level). This rise of automation in government decision-making and service delivery is a global phenomenon.Footnote 8 Internationally, it has been observed that ADM tools are disproportionately used in areas that affect ‘the most vulnerable in society’ – such as policing, healthcare, welfare eligibility, predictive risk scoring (e.g., in areas such as recidivism, domestic violence, and child protection), and fraud detection.Footnote 9

As noted by the NSW Parliamentary Research Service, while there has been some international progress on increased transparency of ADM, no Australian jurisdiction appears to be working on creating a registry of ADM systems.Footnote 10 Additionally, in no Australian jurisdiction do government agencies currently have any general obligation to notify or report on their use of ADM tools. Nor does it appear that they routinely tell people if decisions are being made by or with the assistance of ADM tools. This lack of visibility means that currently it is not known how many government agencies are using, or developing, ADM tools to assist them in the exercise of their statutory functions, or which cohorts they impact. This is a substantial barrier to external scrutiny of government use of ADM tools.

6.2.3 The Risks of ‘Maladministration’

Clearly, there are many situations in which government agencies can use appropriately designed ADM tools to assist in the exercise of their functions in ways that are compatible with lawful and appropriate conduct. Indeed, in some instances automation may improve aspects of good administrative conduct – such as accuracy and consistency in decision-making – as well as mitigate the risk of individual human bias. However, if ADM tools are not designed and used in accordance with administrative law and associated principles of good administrative practice, their use could constitute or involve ‘maladministration’ (for example, unlawful, unreasonable, or unjust conduct).Footnote 11 This is where an agency’s conduct may attract the attention of the Ombudsman, whose role generally is to oversee government agencies and officials to ensure that they are conducting themselves lawfully, making decisions reasonably, and treating all individuals equitably and fairly. Maladministration can, of course, also potentially result in legal challenges, including a risk that administrative decisions or actions may later be held by a court to have been unlawful or invalid.Footnote 12

6.3 Administrative Law and ADM Technologies

There is an important ongoing discussion about the promises and potential pitfalls of the most highly sophisticated forms of AI technology in the public sector. However, maladministration as described above can arise when utilising technology that is substantially less ‘intelligent’ than many might expect. The case studies in the Machine Technology Report illustrate a range of issues relating to administrative conduct, for example, the automation of statutory discretion, the translation of legislation into code, and ADM governance. Only some aspects of the technologies used in those case studies would be described as AI. In any case, the focus from an administrative law and good conduct perspective is not so much on what the technology is, but what it does, and the risks involved in its use in the public sector.Footnote 13

Mistakes made when translating law into a form capable of execution by a machine will likely continue to be the most common source of unlawful conduct and maladministration in public sector use of ADM tools. While of course unaided human decision-makers can and do also make mistakes, the ramifications of automation errors may be far more significant. The likelihood of error may be higher, as the natural language of law does not lend itself easily to translation into machine code. The scale of error is likely to be magnified. The detection of error can be more difficult, as error will not necessarily be obvious to any particular person affected, and even where error is suspected, identifying its source and nature may be challenging even for the public authority itself. A machine itself is, of course, incapable of ever doubting the correctness of its own outputs. Rectifying errors may be more cumbersome, costly, and time-consuming, particularly if it requires a substantial rewriting of machine code, and especially where a third party vendor may be involved.
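As a purely hypothetical sketch (the ‘statute’ and figures here are invented), the following shows how easily a law-to-code translation error can arise and escape notice: a threshold expressed in natural language is encoded with the wrong boundary condition, the two versions agree for almost everyone, and yet the error is applied at scale.

# Hypothetical illustration of a law-to-code translation error; the 'statute'
# is invented. It provides that a person is eligible once they "have reached
# the age of 22".
def eligible_as_legislated(age: int) -> bool:
    return age >= 22   # correct reading: eligible from the 22nd birthday

def eligible_as_coded(age: int) -> bool:
    return age > 22    # off-by-one translation error: excludes 22-year-olds

# Spot checks on a handful of ages would almost always agree, yet every
# 22-year-old applicant is wrongly refused, and at scale.
disagreements = [a for a in range(18, 30) if eligible_as_legislated(a) != eligible_as_coded(a)]
print(disagreements)  # [22]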

6.3.1 The Centrality of Administrative Law and Principles of Good Administrative Practice

Some of the broader concerns about use of ADM tools by the private sector, in terms of privacy, human rights, ethics, and so on, also apply (in some cases with greater relevance) to the public sector.Footnote 14 However, the powers, decisions, and actions of government agencies and officials are constitutionally different from that of the general private sector.

Public authorities exercise powers that impact virtually all aspects of an individual’s life – there is ‘scarcely any field of human activity which is not in some way open to aid or hindrance by the exercise of power by some public authority’.Footnote 15 The inherently ‘public’ nature of such functions (for example, health, education, and transport) and the specific focus of some government service provision on groups of people likely to experience vulnerability mean that the government’s use of ADM tools will necessarily, and often significantly, impact most of society. Recipients of government services – unlike customers of private sector businesses – are also typically unable to access alternative providers or to opt out entirely if they do not like the way decisions are made and services are provided. Most importantly, governments do not just provide services – they also regulate the activity of citizens and exercise a monopoly over the use of public power and coercive force – for example, taxation, licensing, law enforcement, punishment, forms of detention, and so on. It is in the exercise of functions like these, which can affect people’s legal status, rights, and interests, that administrative decision-making principles raise particular issues unique to the public sector. Governments, by their nature, have a monopoly over public administrative power, and it is for this reason that the exercise of that power is controlled through public administrative law. Any use of ADM tools by government agencies must therefore be considered from an administrative law perspective – which is not to disregard or diminish the importance of other perspectives, such as broader ethicalFootnote 16 and human rightsFootnote 17 concerns.

This administrative law – the legal framework that controls government action – does not necessarily stand in the way of adopting ADM tools, but it will significantly control the purposes to which they can be put and the ways in which they can operate in any particular context. The ultimate aim of administrative law is good government according to law.Footnote 18 Administrative law is essentially principles-based and can be considered, conceptually at least, to be ‘technology agnostic’. This means that, while the technology used in government decision-making may change, the norms that underpin administrative law remain constant. The essential requirements of administrative law for good decision-making can be grouped into four categories: proper authorisation, appropriate procedures, appropriate assessment, and adequate documentation. Administrative law is more complex than this simple list may suggest, and there are more technically rigorous ways of classifying its requirements.Footnote 19 There are, of course, also myriad ways in which administrative decision-making can go wrong – some of the more obvious considerations and risks when ADM tools are used are highlighted below.

6.3.2 Proper Authorisation

When Parliament creates a statutory function, it gives someone (or more than one person) power to exercise that function. This person must be a ‘legal person’, which can be a natural person (a human being) or a legally recognised entity, such as a statutory corporation, legally capable of exercising powers and being held accountable for obligations.Footnote 20 Proper authorisation means there must be legal power to make the relevant decision, that the person making the decision has the legal authority to do so, and that the decision is within the scope of decision-making power (including, in particular, within the bounds of any discretion conferred by the power). The requirement for proper authorisation means that statutory functions are not, and cannot be, granted to or delegated to ADM systems,Footnote 21 but only to a legal subject (a someone) and not a legal object (a something).Footnote 22

However, a person who has been conferred (or delegated) the function may be able to obtain assistance in performing their statutory functions, at least to some extent.Footnote 23 This is recognised by the Carltona principle.Footnote 24 In conferring a statutory function on an administrator, Parliament does not necessarily intend that the administrator personally undertake every detailed component or step of the function. As a matter of ‘administrative necessity’, some elements of a function might need to be shared with others who are taken to be acting on the administrator’s behalf. The reasoning underlying the Carltona principle appears to be sufficiently general that it could extend to permit at least some uses of ADM tools. However, the principle is based on a necessity imperative,Footnote 25 and cannot be relied upon to authorise the shared performance of a function merely on the basis that it might be more efficient or otherwise desirable to do so.Footnote 26 While the Carltona principle may be extended in the future,Footnote 27 whether and how that might happen is not clear and will depend on the particular statutory function.Footnote 28

The Carltona principle is not the only means by which administrators may obtain assistance, whether from other people or other things, to help them better perform their functions. For example, depending on the particular function, administrators can (and in some cases should, or even must) draw upon others’ scientific, medical, or other technical expertise. Sometimes, this input can even be adopted as a component of the administrator’s decision for certain purposes.Footnote 29 It can be expected that, like the obtaining of expert advice and the use of traditional forms of technology, there will be at least some forms and uses of sophisticated ADM tools that will come to be recognised as legitimate tools administrators can use to assist them to perform their functions, within the implicit authority conferred on them by the statute. However, whether and the extent to which this is so will need to be carefully considered on a case-by-case basis, taking into account the particular statutory function, the proposed technology, and the broader decision-making context in which the technology will be used.

Additionally, if the function is discretionary, ADM tools must not be used in a way that would result in that discretion being fettered or effectively abandoned. By giving an administrator a discretion, Parliament has relinquished some element of control over individual outcomes, recognising that those outcomes cannot be prescribed or pre-ordained in advance by fixed rules. But at the same time, Parliament is also prohibiting the administrator from setting and resorting to its own rigid and pre-determined rules that Parliament has chosen not to fix.Footnote 30 This means that exercising a discretion that Parliament has given to an administrator is just as important as complying with any fixed rules Parliament has prescribed. Over time, administrative law has developed specific rules concerning the exercise of statutory discretions. These include the so-called rule against dictation and rules governing (and limiting) the use of policies and other guidance material to regulate the use of discretion. Such rules are best viewed as applications of the more general principle described above – that where a statute gives discretion to an administrator, the administrator must retain and exercise that discretion. Those given a discretionary statutory function must, at the very least, ‘keep their minds open for the exceptional case’.Footnote 31 Given this principle, some uses of ADM tools in the exercise of discretionary functions may be legally risky. This was the view of the Australian Administrative Review Council, which concluded that, while ‘expert systems’ might be used to assist an administrator to exercise a discretionary function, the exercise of the discretion should not be automated and any expert systems that are designed to assist in the exercise of discretionary functions should not fetter the exercise of that function by the administrator.Footnote 32 At least on current Australian authorities, ADM tools cannot be used in the exercise of discretionary functions if (and to the extent that) it would result in the discretion being effectively disregarded or fettered.Footnote 33 If the introduction of automation into a discretionary decision-making system has the effect that the administrator is no longer able to – or does not in practice – continue to exercise genuine discretion, that system will be inconsistent with the statute that granted the discretion, and its outputs will be unlawful.Footnote 34 In practice, this suggests that discretionary decisions cannot be fully automated by ADM tools.Footnote 35

6.3.3 Appropriate Procedures

Good administrative decision-making requires a fair process. Appropriate procedures means that the decision has followed a procedurally fair process, that the procedures comply with other obligations including under privacy, freedom of information, and anti-discrimination laws, and that reasons are given for the decision (particularly where it significantly affects the rights or interests of individuals). Generally, a fair process requires decisions to be made without bias on the part of the decision-maker (‘no-bias rule’) and following a fair hearing of the person affected (‘hearing rule’). ADM tools can introduce the possibility of a different form of bias known as ‘algorithmic bias’,Footnote 36 which arises when a machine produces results that are systemically prejudiced or unfair to certain groups of people. Although it is unclear whether the presence of algorithmic bias would necessarily constitute a breach of the no-bias rule, it may still lead to unlawful decisions (based on irrelevant considerations or contravening anti-discrimination laws) or other maladministration (involving or resulting in unjust or improperly discriminatory conduct). Having appropriate procedures also means providing, where required, accurate, meaningful, and understandable reasons to those who are affected by a decision, which can be challenging when ADM tools have made or contributed to the making of that decision.
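One simple way such a disparity might be surfaced in practice is sketched below. The decision records and group labels are entirely invented; the point is only that a routine comparison of outcome rates across groups can flag the kind of systemic pattern that warrants closer legal and procedural scrutiny.

```python
from collections import defaultdict

# Invented decision records for illustration: (group attribute, whether the tool refused the claim).
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
refusals = defaultdict(int)
for group, refused in records:
    totals[group] += 1
    refusals[group] += int(refused)

# A large gap in refusal rates does not, by itself, establish unlawful bias or a breach
# of the no-bias rule, but it is the kind of disparity that should trigger review.
for group in sorted(totals):
    rate = refusals[group] / totals[group]
    print(f"{group}: refusal rate {rate:.0%}")
```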

6.3.4 Appropriate Assessment

Appropriate assessment means that the decision answers the right question, is based on a proper analysis of relevant material and on the merits, and is reasonable in all the circumstances. Using ADM tools in the exercise of statutory functions means translating legislation and other guidance material (such as policy) into the form of machine-readable code. A key risk is the potential for errors in this translation process and, consequently, for unlawful decisions being made at scale. Any errors may mean that, even in circumstances where technology can otherwise be used consistently with principles of administrative law, doubts will arise about the legality and reliability of any decisions and actions of the public agency relying upon the automation process.Footnote 37 When designing and implementing ADM tools, it is also essential to ensure that their use does not result in any obligatory considerations being overlooked or extraneous considerations coming into play. While the use of automation may enhance the consistency of outcomes, agencies with discretionary functions must also be conscious of the duty to treat individual cases on their own merits.

6.3.5 Adequate Documentation

Agencies are required to properly document and keep records of decision-making. In the context of ADM tools, this means keeping sufficient records to enable comprehensive review and audit of decisions. Documentation relating to different ‘versions’Footnote 38 of the technology, and details of any updates or changes to the system, may be particularly important.

6.4 Designing ADM Tools to Comply with the Law and Fundamental Principles of Good Government

To better manage the risks of maladministration in the use of ADM tools, there are at least five broad considerations that government agencies must address when designing and implementing ADM systems to support the exercise of an existing statutory function.Footnote 39 Dealing with those comprehensively will assist compliance with the principles of administrative law and good decision-making practice.

6.4.1 Putting in Place the Right Team

Adopting ADM tools to support a government function should not be viewed as simply, or primarily, an information technology project. Legislative interpretation requires specialist skills, and the challenge involved is likely to be especially pronounced when seeking to translate law into what amounts to a different language – that is, a form capable of being executed by a machine.Footnote 40 Agencies need to establish a multidisciplinary design team that involves lawyers, policymakers, and operational experts, as well as technicians, with clearly defined roles and responsibilities.Footnote 41 It is clearly better for all parties (including for the efficiency and reputation of the agency itself) if ADM tools are designed with those who are best placed to know whether they are delivering demonstrably lawful and fair decisions, rather than having to try to ‘retrofit’ that expertise into the system later when it is challenged in court proceedings or during an Ombudsman investigation.Footnote 42 Interpreting a statute to arrive at its correct meaning can be a complex task, and one that can challenge both highly experienced administrative officials and lawyers.Footnote 43 Even legal rules that appear to be straightforwardly ‘black and white’, and therefore appropriate candidates for ADM use, can nonetheless have a nuanced scope and meaning. They may also be subject to administrative law principles – such as underlying assumptions (for example, the principle of legality)Footnote 44 and procedural fairness obligations – which would not be apparent on the face of the legislation.

6.4.2 Determining the Necessary Degree of Human Involvement

Government agencies using ADM tools need to assess the appropriate degree of human involvement in the decision-making processes – discretionary and otherwise – having regard to the nature of the particular function and the statute in question. What level of human involvement is necessary? This is not a straightforward question to answer. As noted earlier, any statutory discretion will require that a person (to whom the discretion has been given or delegated) makes a decision – including whether and how to exercise their discretion. Given that ADM tools do not have a subjective mental capacity, their ‘decisions’ may not be recognised by law as a decision.Footnote 45 Merely placing a ‘human-on-top’ of a process will not, of itself, validate the use of ADM tools in the exercise of a discretionary function.Footnote 46 The need for a function to be exercised by the person to whom it is given (or delegated) has also been emphasised in Australian Federal Court decisions concerning the exercise of immigration discretions, which have referred to the need for ‘active intellectual consideration’,Footnote 47 an ‘active intellectual process’,Footnote 48 or ‘the reality of consideration’Footnote 49 by an administrator when making a discretionary decision.Footnote 50 The ‘reality of consideration’ may look different in different administrative contexts, in proportion to the nature of the function being exercised and the consequences it has for those it may affect. However, the principle remains relevant to the exercise of all discretionary functions – some level of genuine and active decision-making by a particular person is required. In a 2022 Federal Court matter, it was held that a minister failed to personally exercise a statutory power as required. The NSW Crown Solicitors Office noted, ‘The decision emphasises that, whilst departmental officers can assist with preparing draft reasons, a personal exercise of power requires a minister or relevant decision-maker to undertake the deliberate task by personally considering all relevant material and forming a personal state of satisfaction.’Footnote 51 What matters is not just that there is the required degree of human involvement on paper – there must be that human involvement in practice.

When designing and implementing ADM tools, government agencies need also to consider how the system will work in practice and over time, taking into account issues like natural human biases and behaviours, and organisational culture. They must also recognise that those who will be making decisions supported by ADM tools in future will not necessarily be the people who were involved in their original conception, design, and implementation. The controls and mitigations that are needed to avoid ‘creeping control’ by ADM tools will need to be fully documented so they can be rigorously applied going forward.

There are several factors that are likely to be relevant to consider in determining whether there is an appropriate degree of human involvement in an ADM system. One is time – does the process afford the administrator sufficient time to properly consider the outputs of the tool and any other relevant individual circumstances of the case(s) in respect of which the function is being exercised? Does the administrator take this time in practice? Cultural acceptance is also important, particularly as it can change over time. Are there systems in place to overcome or mitigate automation-related complacency or technology bias, to scrutinise and raise queries about the output of the ADM tool, and to undertake further inquiries? If the administrator considers it appropriate, can they reject the output of the ADM tool? Is the authority of the administrator to question and reject the outputs respected and encouraged? Does it happen in practice?

Some other factorsFootnote 52 relevant to active human involvement include: an administrator’s access to the source material used by the ADM tool and other material relevant to their decision, the seniority and experience of the administrator in relation to the type of decision being made, whether the administrator is considered responsible for the decisions they make, and whether the administrator can make or require changes to be made to the ADM tool to better support their decision-making. Finally, an appreciation of the impact of the decision, including a genuine understanding of what their decision (and what a different decision) would mean in reality for the individuals who may be affected, is also likely to be relevant.Footnote 53 It is particularly important that the relevant administrator, and others responsible for analysing or working with the outputs of the technology, have a sufficient understanding of the technology and what its outputs actually mean in order to be able to use them appropriately.Footnote 54 This is likely to mean that comprehensive training, both formal and on-the-job, will be required on an ongoing basis.

6.4.3 Ensuring Transparency Including Giving Reasons

In traditional administrative decision-making, a properly prepared statement of reasons will promote accountability in at least two ways, which can be referred to as explainability and reviewability. The former enables the person who is affected by the decision to understand it, and provides a meaningful justification for the decision. The latter refers to the manner and extent to which the decision, and the process that led to the decision, can be reviewed. A review may be by the affected persons themselves, or by another person or body, such as an Ombudsman or a court, to verify that it was lawful, reasonable, and otherwise complied with norms of good decision-making. With ADM, these two aspects of accountability tend to become more distinct.

Agencies need to ensure appropriate transparency of their ADM tools, including by deciding what can and should be disclosed about their use to those whose interests may be affected. An explanation of an automated decision might include information about the ADM tool’s objectives, data used, its accuracy or success rate, and a meaningful and intelligible explanation, comprehensible to an ordinary person, of how the technology works. When a human makes a decision, the reasons given do not refer to brain chemistry or the intricate process that commences with a particular set of synapses firing and culminates in a movement of the physical body giving rise to vocalised or written words. Likewise, explaining how an ADM tool works in a technical way, even if that explanation is fully comprehensive and accurate, will not necessarily satisfy the requirement to provide ‘reasons’ for its outputs. Reasons must be more than merely accurate – they should provide a meaningful and intelligible ‘explanation’Footnote 55 to the person who is to receive them. Generally, this means they should be in plain English, and provide information that would be intelligible to a person with no legal or technical training. Of course, the statement of reasons should also include the usual requirements for decision notices, including details of how the decision may be challenged or reviewed, and by whom. If a review is requested or required, then further ‘reasons’ may be needed, which are more technical and enable the reviewer to ‘get under the hood’ of the ADM tool to identify any possible error.

Although provision of computer source code may not be necessary or sufficient as a statement of reasons, there should be (at least) a presumption in favour of proactively publishing specifications and source code of ADM technology used in decision-making. A challenge here may arise when government engages an external provider for ADM expertise.Footnote 56 Trade secrets and commercial-in-confidence arrangements should not be more important than the value of transparency and the requirement, where it exists, to provide reasons. Contractual confidentiality obligations negotiated between parties must also be read as being subject to legislation that compels the production of information to a court, tribunal, or regulatory or integrity body.Footnote 57 As a minimum, agencies should ensure that the terms of any commercial contracts they enter in respect of ADM technology will not preclude them from providing comprehensive details (including the source code and data sets) to the Ombudsman, courts, or other review bodies to enable them to review the agency’s conduct for maladministration or legal error.

6.4.4 Verification, Testing, and Ongoing Monitoring

It is imperative both to test ADM tools before operationalising them and to establish ongoing monitoring, audit, and review processes. Systems and processes need to be established up front to safeguard against inaccuracy and unintended consequences, such as algorithmic bias.Footnote 58 Agencies need to identify ways of testing that go beyond whether the ADM tool is performing according to its programming to consider whether the outputs are legal, fair, and reasonable. This means that the costs of ongoing testing, governance processes, maintenance of the system, and staff training need to be factored in from the outset when evaluating the costs and benefits of moving to an automated system. Ignoring or underestimating these future costs and focusing only on apparent up-front cost-savings (by simplistically comparing an ADM tool’s build and running costs against the expenses, usually wages, of existing manual processes) will present an inflated picture of the financial benefits of automation. It also ignores other qualitative considerations, such as decision-making quality and legal risks.
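What ‘testing beyond the programming’ might look like can be sketched as follows. The sketch assumes the agency holds a set of reference cases whose correct outcomes have been verified by qualified decision-makers or legal advisers; the cases, the placeholder decision logic (which repeats the boundary error from the earlier sketch), and the outcomes are all hypothetical.

```python
# Hypothetical reference cases: inputs plus the outcome a qualified human decision-maker
# has verified as legally correct. The same harness can be re-run after every change to
# the tool and at scheduled intervals once it is in operation.
reference_cases = [
    ({"age": 65, "income": 999.00}, "grant"),
    ({"age": 64, "income": 200.00}, "refuse"),
    ({"age": 80, "income": 1000.00}, "grant"),
]

def adm_tool(case: dict) -> str:
    # Placeholder for the real automated decision logic; here it repeats the
    # '<' boundary error from the earlier sketch, so one reference case will diverge.
    return "grant" if case["age"] >= 65 and case["income"] < 1000.00 else "refuse"

failures = [
    (case, expected, adm_tool(case))
    for case, expected in reference_cases
    if adm_tool(case) != expected
]
print(f"{len(failures)} of {len(reference_cases)} reference cases diverged:", failures)
```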

6.4.5 The Role of Parliament in Authorising ADM Tools

If the implementation of ADM tools would be potentially unlawful or legally risky, this raises the question: can and should the relevant statute be amended to expressly authorise the use of ADM tools? Seeking express legislative authorisation for the use of ADM tools not only reduces the risks for agencies, but gives Parliament and the public visibility of what is being proposed, and an opportunity to consider what other regulation of the technology may be required. There is a growing practice, particularly in the Australian Commonwealth Parliament, of enacting provisions that simply authorise, in very general terms, the use of computer programs for the purpose of certain statutory decisions. A potential risk of this approach is complacency, if agencies mistakenly believe that such a provision, of itself, means that the other risks and considerations related to administrative law and good practice (see Section 6.3) do not need to be considered. Perhaps more importantly, this approach of legislating only to ‘authorise’ the use of ADM tools in simple terms seems to be a missed opportunity. If legislation is going to be introduced to enable the use of ADM tools for a particular statutory process, that also presents an opportunity for public and Parliamentary debate on the properties that the process should be required to exhibit to meet legal, Parliamentary, and community expectations of good administrative practice. Whether or not these properties are ultimately prescribed as mandatory requirements in the legislation itself (or some other overarching statutory framework), they can guide comprehensive questions that should be asked of government agencies seeking legislative authorisation of ADM tools, as illustrated below.

Is It Visible?

What information does the public, and especially those directly affected, need to be told regarding the involvement of the ADM tool, how it works, its assessed accuracy, its testing schedule, and so on? Are the design specifications and source code publicly available – for example, as ‘open access information’ under freedom of information legislation? Is an impact assessment required to be prepared and published?Footnote 59

Is It Avoidable?

Can an individual ‘opt out’ of the automation-led process and choose to have their case decided through a manual (human) process?

Is It Subject to Testing?

What testing regime must be undertaken prior to operation, and at scheduled times thereafter? What are the purposes of testing (eg compliance with specifications, accuracy, identification of algorithmic bias)? Who is to undertake that testing? What standards are to apply (eg randomised control trials)? Are the results to be made public?

Is It Explainable?

What rights do those affected by the automated outputs have to be given reasons for those outcomes? Are reasons to be provided routinely or on request? In what form must those reasons be given and what information must they contain?

Is It Accurate?

To what extent must the predictions or inferences of the ADM tool be demonstrated to be accurate? For example, is ‘better than chance’ sufficient, or is the tolerance for inaccuracy lower? How and when will accuracy be evaluated?

Is It Subject to Audit?

What audit records must the ADM tool maintain? What audits are to be conducted (internally and externally), by whom and for what purpose?

Is It Replicable?

Must the decision of the ADM tool be replicable in the sense that, if exactly the same inputs were re-entered, the ADM tool will consistently produce the same output, or can the ADM tool improve or change over time? If the latter, must the ADM tool be able to identify why the output now is different from what it was previously?

Is It Internally Reviewable?

Are the outputs of the ADM tool subject to internal review by a human decision maker? What is the nature of that review (eg full merits review)? Who has standing to seek such a review? Who has the ability to conduct that review and are they sufficiently senior and qualified to do so?

Is It Externally Reviewable?

Are the outputs of the ADM tool subject to external review or complaint to a human decision maker? What is the nature of that review (eg merits review or review for error only)? Who has standing to seek such a review? If reviewable for error, what records are available to the review body to enable it to thoroughly inspect records and detect error?

Is It Compensable?

Are those who suffer detriment by an erroneous action of the ADM tool entitled to compensation, and how is that determined?

Is It Privacy Protective and Data Secure?

What privacy and data security measures and standards are required to be adhered to? Is a privacy impact assessment required to be undertaken and published? Are there particular rules limiting the collection, use and retention of personal information?

The properties suggested above are not exhaustive and the strength of any required properties may differ for different technologies and in different contexts. For example, in some situations, a process with a very strong property of reviewability may mean that a relatively weaker property of explainability will be acceptable.

6.5 Conclusion

Appropriate government use of ADM tools starts with transparency. The current lack of visibility means that it is not well known how many government agencies in NSW are using or developing ADM tools to assist in the exercise of administrative functions or what they are being used for. Nor is it possible to know who is impacted by the use of ADM tools, what validation and testing is being undertaken, whether there is ongoing monitoring for accuracy and bias, and what legal advice is being obtained to certify conformance with the requirements of administrative law.

Much of this chapter has focussed on how existing laws and norms of public sector administrative decision-making may control the use of ADM tools when used in that context. However, there are likely to be, at least initially, significant uncertainties and potentially significant gaps in the existing legal framework given the likely rapid and revolutionary changes to the way government conducts itself in the coming years. Government use of ADM tools in administrative decision-making may warrant a reconsideration of the legal frameworks, institutional arrangements, and rules that apply. It may be, for example, that existing administrative law mechanisms of redress, such as judicial review or complaint to the Ombudsman, will be considered too slow or individualised to provide an appropriate response to concerns about systemic injustices arising from algorithmic bias.Footnote 60 Modified frameworks may be needed – for example, to require the proactive external testing and auditing of systems, rather than merely reactive individual case review. If a statute is to be amended to specifically authorise particular uses of ADM tools, this creates an opportunity for Parliament to consider scaffolding a governance framework around that technology. That could include stipulating certain properties the system must exhibit in terms of transparency, accuracy, auditability, reviewability, and so on.

However, an open question is whether there is a need to consider more generally applicable legal or institutional reform, particularly to ensure that ADM tools are subject to appropriate governance, oversight, and review when used in a government context.Footnote 61 There may be precedent for this approach. The machinery of Australia’s modern administrative law – the administrative decisions tribunals, Ombudsman institutions, privacy commissions, and (in some jurisdictions) codified judicial review legislation – was largely installed in a short period of intense legislative reform, responding to what was then the new technology of modern government.Footnote 62 Ombudsman institutions (and other bodies which perform similar and potentially more specialised roles, including, for example, human rights commissions, anti-discrimination bodies, or freedom of information (FOI) and privacy commissions) have proven useful in many areas where traditional regulation and judicial enforcement are inadequate or inefficient. Ombudsman institutions also have the ability to not only respond reactively to individual complaints but also to proactively inquire into potential systemic issues, and to make public reports and recommendations to improve practices, policies, and legislation.Footnote 63 This ability to act proactively using ‘own motion’ powers may become increasingly relevant in the context of government use of ADM tools, partly because it seems less likely that complaints will be made about the technology itself – including if complainants are unaware of the role played by technology in the relevant decision. Rather, when people complain to bodies like the Ombudsman, the complaint is usually framed in terms of the outcome and impact on the individual. It must also be recognised that, if Ombudsman institutions are to perform this oversight role, there will be a need for capability growth. At present, it is likely they lack the in-house depth of technical skills and resources needed for any sophisticated deconstruction and interrogation of data quality and modelling, which may, at least in some cases, be required for effective scrutiny and investigation of ADM tools.Footnote 64

7 A Tale of Two Automated States Why a One-Size-Fits-All Approach to Administrative Law Reform to Accommodate AI Will Fail

José-Miguel Bello y Villarino
7.1 Introduction: Two Tales of the Automated State

In his 1967 book, which partially shares its title with this edited collection (The Automated State: Computer Systems as a New Force in Society),Footnote 1 Robert McBride anticipated that public authorities would be able to do ‘more’ thanks to the possibility of storing more detailed data combined with the increasing capacity of machines to process that data. He conjectured that this would create new legal problems. Fast forward half a century and the Automated State may (really) be on the brink of happening. AI can essentially change the state and the way it operates – note the ‘essentially’.

Public authorities, employing (or assisted by) machines at a large scale, could do more. What this ‘more’ is, is a matter of discussion,Footnote 2 but, broadly speaking, it can mean one of two things: (i) doing things that humans could do, but more efficiently or at a larger scale; or (ii) doing things that could not be done before, at all or at a reasonable cost.Footnote 3 Therefore, the rules that regulate the action of public authorities need to be adapted. This chapter deals with the normative question of the type of regulatory reform that we should aim for.

It can be anticipated that changes within the immediate horizon – three to five years – will be marginal, starting at the points of least resistance, that is, in tasks currently done by humans that could be easily automated. In these cases, the preferred regulatory option is likely to be the creation of some lex specialis for the situations when public authorities are using AI systems. This approach to automating the state and the necessary changes to administrative law are explored in the following section (Section 7.2).

The much bigger challenge for the regulation of the Automated State will come from structural changes in the way we design policy and decide on policy options. This is best illustrated with one example already in the making: digital twins, data-driven copies of existing real-life environments or organisms. Although attention has primarily focused on digital twins of living organisms,Footnote 4 promising work is being undertaken on digital twins of other real-life entities, such as factories or cities. One leading example is the work in Barcelona (Spain) to create a digital twin that will help make decisions on urban policy, such as traffic management or planning.Footnote 5

According to some reports, when one of the key planning initiatives of the local government – the superilles, which involved the creation of limited-traffic city-block islands – was run through the system to see the effects with and without its implementation, it showed that there was close to no improvement in air pollution levels, one of the drivers for the creation and implementation of the initiative.Footnote 6 In other words, the intervention failed to achieve one of its main goals. Does this matter for administrative law?

Section 7.3 considers these policy-oriented types of AI systems. The systems used to design policy and make decisions among policy options open the door to an intrinsically different automated state which may require completely new tools and approaches to regulate it. Although the word ‘automated’ could be misleading – it is better described by the periphrastic ‘AI-driven decision support system for policy design and creation’ – the outputs of these systems are within the scope of administrative law. They are part of processes that eventually generate administrative acts or decisions and, as such, can be the object of challenges on legal grounds in many jurisdictions.

A key part of that discussion is the problem of translating into law a procedure for legal administrative accountability for ‘objectives’ (a particular type of input for those AI systems) and ‘insights’ (outputs). AI systems are often developed to optimise a number of objectives set by humans or to autonomously find insights and interesting relations among the data fed into them. When these types of AI systems are used on data held by public authorities for policy-making purposes, they generate immediate challenges to administrative law: how do we regulate policy-making that is meant not to be about discretionary choices, but about data-driven optimisation?

Concepts such as ‘arbitrariness’ or ‘discretion’ mean very different things for public authorities’ decisions that apply the law to individuals or groups, covered in Section 7.2,Footnote 7 and for decisions about how best to use public resources at a policy level, explored in Section 7.3.Footnote 8 Distinguishing between legitimate political (or policy) choices and unreasonable decisions will be challenging if at a given stage of the decision-making process there is a system that considers one option preferable to another according to the parameters built into that system.

This type of problem may still be incipient, and the technology may still be very far from reliable. But if we reach a point where some policies can be shown to be Pareto superior to others (i.e., no indicator considered in the policy is worse off, and at least one is improved), is the choice of the Pareto inferior option still legitimate or fair? Will it be legal? How much deference should then be given to the choices of decision-makers?
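The parenthetical definition can be made concrete with a short sketch. The policy options and indicator scores below are invented, and real policy indicators are rarely this clean, but the test itself is mechanical: an option is Pareto superior if it is no worse on every indicator and strictly better on at least one.

```python
# Invented indicator scores for two policy options; higher is better on every indicator.
policy_a = {"air_quality": 0.62, "travel_time": 0.55, "cost_efficiency": 0.70}
policy_b = {"air_quality": 0.62, "travel_time": 0.60, "cost_efficiency": 0.70}

def pareto_superior(candidate: dict, other: dict) -> bool:
    # No indicator is worse, and at least one is strictly better.
    no_worse = all(candidate[k] >= other[k] for k in other)
    strictly_better = any(candidate[k] > other[k] for k in other)
    return no_worse and strictly_better

print(pareto_superior(policy_b, policy_a))  # True: B dominates A
print(pareto_superior(policy_a, policy_b))  # False: choosing A means choosing a Pareto inferior option
```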

To address some of these questions, in Section 7.4 I suggest some preparatory work for this scenario. I develop some heuristics – or rules of thumb – to distinguish between the two tales of the Automated State. On that basis, I explore whether democratic and liberal societies can create a new type of administrative law that can accommodate divergence of views and still ensure that the margin of discretion of policy choices is adjusted to this new reality.

7.2 The Administrative Law of AI Systems That Replace Bureaucrats

The use of AI for automating work currently done by humans – or creating systems that facilitate the performance of those tasks by humans – can be directly linked to previous investments by governments in information systems. These were generally associated with attempts to update the ways public organisations operated to enhance efficiency and policy effectiveness.Footnote 9 Those AI systems, if used for fully automated administrative tasks, could be ‘isolated from the organisational setting they originated from’Footnote 10 and, therefore, even legally considered as ‘individual artificial bureaucrats’.Footnote 11

In this context, the main consideration is that the system should be able to do its job properly. This view, therefore, naturally places the accent on testing the AI systems beforehand, particularly for impartiality and standardisation. This is something we are relatively familiar with and not conceptually dissimilar to the way Chinese imperial mandarins were subject to excruciating exams and tests before they could work for the emperor, or to the way the Spanish and French systems (and the countries in their respective areas of influence) still see the formalised gruelling testing of knowledge as a requisite to access a ‘proper’ bureaucrat position.

Therefore, administrative rules for the use of these AI systems are likely to focus on the systems themselves. As mentioned, the regulatory approach will then most likely emphasise ensuring that they are fit for purpose before starting operation, which is a type of legal reform already observed in several jurisdictions.

Commonly cited examples are the mechanisms already in place in Canada,Footnote 12 which focus on the risks of AI systems employed by public authorities; the proposed general approach in the European Union,Footnote 13 which extends to high-risk systems in the public and private sectors; or the light-touch intervention model, which creates some pre-checks for the use of certain AI systems by public authorities, such as the recently introduced rules in the state of New South Wales in AustraliaFootnote 14 – although with no concrete consequences, in this case, if the pre-check is not done properly.

Generally speaking, these approaches place the stress on the process (or its automated part) and not on the outputs. It is the system itself that must meet certain standards, defined on the basis of formal standards or specifications (in the EU case, as described in article 9 of the proposal), an impact assessment of some kind (the Canada model), or the considerations of ‘experts’ (the New South Wales, Australia model). At a higher level, this makes sense if what we are concerned about is the level of risk that could be generated by the system. The question here is ‘how bad can it go?’, and the law mandates that this check be undertaken beforehand.

In my opinion, this deviates from the views of administrative law that see the action of the public authorities as a materialisation of values such as equality and fairness.Footnote 15 Instead, this Weberian machine bureaucracy would stress impartiality and standardisation,Footnote 16 values more intrinsically attached to procedural elements.Footnote 17

In the classic model of Peters, in which the public administration is a manifestation of a combination of societal, political, and administrative cultures,Footnote 18 the direct connection here is to the administrative culture, and only collaterally to societal or political elements. That type of Automated State does not need to be fair; it needs to be accurate. Fairness is meant to be embedded in the policy it implements, and the legitimacy of outputs depends on whether the process correctly implements the policy.

However, as this approach incorporates elements of risk-based regulatory techniques, outputs are indeed considered in the process of conformity checks. Normally, most of these regulations of the use of AI in administrative law settings will mandate, or make a reference to, some kind of cost–benefit analysis of the social utility of the deployment and use of the system, in the way described by Sunstein.Footnote 19 The test to start employing automated systems in this context is one that compares an existing procedure in which humans participate against the efficiency, savings, reliability, risks of mistakes and harms, and other social and cultural aspects of the automated systems.

Probably the only real complication from a regulatory point of view for these systems is the decision to shift from one model to another. I have considered this problem with Vijeyarasa in relation to VioGén, a computer-based system used for the assessment of the level of risk of revictimisation of victims of gender-based violence in Spain.Footnote 20 If an AI-based system is considered to be ready to deliver an output better than a human qualitative assessment or one based on traditional statistics, what is the degree of outperformance compared to humans, or the level of reassurance necessary to make that shift, and how much capacity should be left to bureaucrats to override the system’s decisions? These are not easy questions, but they are not difficult to visualise: should the standard for accepting automation be performing better than an average bureaucrat? Better than the bureaucrats with the best track records? Or only when the expected risk of error is considered to be as low as possible? At similar levels of performance, should cost be considered?
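These questions about the acceptance threshold can be expressed, very roughly, as a choice of benchmark and margin. The error rates and margin below are invented purely to show the shape of the decision, not to suggest what the right standard is.

```python
# Invented figures for illustration only.
error_rate_system = 0.04           # observed error rate of the automated assessment
error_rate_avg_bureaucrat = 0.07   # average error rate of human assessors
error_rate_best_bureaucrat = 0.03  # error rate of the best-performing assessors

def meets_standard(system_error: float, benchmark_error: float, margin: float = 0.0) -> bool:
    # The system must beat the chosen benchmark by at least `margin` before the
    # shift to automation is accepted; the margin encodes the required reassurance.
    return system_error <= benchmark_error - margin

print(meets_standard(error_rate_system, error_rate_avg_bureaucrat, margin=0.01))  # True
print(meets_standard(error_rate_system, error_rate_best_bureaucrat))              # False
```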

These are decisions that administrative law could explicitly leave to the discretion of bureaucrats, settle through ex ante binding rules or principles, or leave to the judiciary to consider if a complaint is made. Again, not easy questions, but decisions that could be addressed within the principles that we are familiar with. In the end, the reasoning is not that dissimilar from a decision to externalise to a private provider a service hitherto delivered by the state.

To be clear, I am not suggesting that there is anything intrinsically wrong with focusing our (regulatory) attention on these issues. I believe, however, that this view encompasses a very narrow understanding of what AI systems could do in the public sector and the legal problems they can create. This approach is conceptualised in terms of efficiency and the hope that AI can finally deliver the (so far) unmet promise of the productivity revolution that was expected from the massive incorporation of computers on public officials’ desks.Footnote 21

From that perspective AI could be a key element of that Automated State. AI systems could be optimised to limit the variance between decisions with similar or equal relevant attributes. Consequently, AI-driven systems could be the best way to reach a reasonable level of impartiality, while fulfilling mundane tasks previously performed by humans.

Obviously, this cannot happen without maintaining or improving the rights of those individually or collectively affected by these automated decisions. Administrative law would need to ensure that the possible mistakes of these ‘approved and certified’ systems can be redressed. The legal system must allow affected parties to challenge outputs that they believe do not correctly implement policy. This could be, at least, on the basis of a possible violation of any laws relevant to that policy, a lack of coherence with its objectives, or an infringement of other relevant rights of the person or entity affected by the output of the system.

Therefore, the only reforms needed (if any) in administrative law for this Automated State are to: (i) create a path to pre-validate the system; (ii) create guidance on, or determine, when to change to such a system; and (iii) enable parties affected by its outputs to complain about and challenge these decisions.

Other chapters in this book look at this third point in more detail, but I see it as requiring affected parties to go ‘deeper’ into the automated (or machine-supported) decision. The affected party, alone or in conjunction with others affected by the same or similar decisions from that system, needs to be able to, at least: (i) explore why their decision can be distinguished from similar cases deserving a different administrative response; (ii) raise new distinguishing factors (attributes) not considered by the system; and (iii) challenge the whole decision system on the basis of the process of pre-certification of the system and its subsequent monitoring as the system learns.

Generally speaking, the type of legislative reform necessary to accommodate this change will not create excessive friction with the approaches to administrative law already in place in civil and common law systems.Footnote 22 Essentially, the only particularity is to be sure that the rights of the parties affected by administrative decisions do not get diluted because the administrative decision comes from a machine. The right to receive a reply, or to an intelligible explanation, or to appeal a decision considered illegal should be adapted, but not substantially changed.

Perhaps the concept of the ‘organ’ in civil law systems and the allocation of responsibility to the organ, which in practice makes administrative law a distinct area of law, with a different logic from the civil/criminal dichotomy still dominating the common law system,Footnote 23 could make the transition easier in those jurisdictions. The organ, not the bureaucrats or their service, is responsible for its outputs. However, certain rules about the burden of proof and the deference towards the state in continental systems could make it more difficult to interrogate the decision-making process of a machine.

Finally, in terms of administrative law, it is even possible to envisage a machine-driven layer of supervision or control that could monitor human action, that is, using AI to supervise the activity of public officials. One could imagine a machine-learning system which could continuously check administrative outputs created by human bureaucrats, alerting affected parties and/or bureaucrats when it detects decisions that do not appear to align with previous practice or with the application of the normative and legal framework. Such an Automated State could even increase the homogeneity and predictability of administrative procedures and their alignment with the regulatory regime,Footnote 24 therefore increasing trust in the public system.
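A minimal sketch of such a supervisory layer is set out below, on the assumption (mine, not a description of any existing system) that the machine can predict, from the case attributes and past practice, the outcome it would have expected; divergence is then a trigger for scrutiny rather than a finding of error.

```python
# Hypothetical records: the human decision and the outcome the supervisory model
# would have expected from the same attributes and previous practice.
decisions = [
    {"case_id": "A-101", "human": "grant",  "model": "grant"},
    {"case_id": "A-102", "human": "refuse", "model": "grant"},
    {"case_id": "A-103", "human": "grant",  "model": "grant"},
]

# Divergence does not mean the human is wrong; it is the cue for notification,
# more detailed reasons, or an automatic avenue of review, as discussed below.
flagged = [d["case_id"] for d in decisions if d["human"] != d["model"]]
print("Flagged for closer scrutiny:", flagged)
```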

In this scenario, the Automated State will not (for the time being) replace humans, but work alongside them and only reveal itself when there is a disparity of criteria between the output of the human bureaucrat and the automatic one. The existence of this Automated State cohabiting with a manual one may require different administrative rules for human-made decisions. When decisions diverge, possible options may involve an obligation to notify affected parties of this divergence and, perhaps, granting them an automatic appeal to other administrative entities, or requiring reconsideration by the decision-maker, or imposing on the human decision-maker an obligation to provide more detailed and explicit reasons. In this state of automation, the human administrative decision will not be fully acceptable unless it aligns with the expected one from the Automated State. And, yet, we can still address these situations with a lex specialis for the automated decision, remaining within the logic and mechanisms of ‘traditional’ administrative law.

Having covered the easier of the two transitions, it is now time to consider the other Automated State, the one that liberal-democratic legal systems could find most difficult to accommodate: the tale of the Automated State that designs or evaluates policy decisions.

7.3 Regulating the Unseen Automated State

As noted in the Introduction, AI can be harnessed by public authorities in ways that have not been seen before. The idea of a digital twin, for example, alters the logic behind the discretion in the decision of public authorities, as it makes it possible to envisage both states of the world, with and without a given decision.

If we take another step in the same direction, one could even assume that in the future the design and establishment of policy itself could be delegated to machines (cyber-delegation).Footnote 25 In this scenario, AI systems could be monitoring opportunities among existing data to suggest new policies or the modification of existing regulations in order to achieve certain objectives as defined by humans or other AI systems.

Yet, for the purpose of this chapter, we will remain at the level of the foreseeable future and only consider systems that may contribute to policy determination. The discussion below also assumes that the systems are correctly designed and operate as expected.

This type of automation of the state involves expert systems that are considered to provide higher levels of confidence about choices in the policy-making process. This view of the Automated State sees AI systems as engineered mechanisms ‘that generate[…] outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives’, in line with current thinking in the global standardisation process.Footnote 26

This corresponds to existing observations in governance theory that note that ‘the transfer of governmental decision-making authority to outside actors occurs along a continuum’.Footnote 27 A public authority generally decides on policy through an output generated by one of its employees (elected or appointed) or a committee of them. How to reach that policy decision could be left to the employees of that public authority, reached through a system of consultation, or fully deferred to a committee of experts.

Regardless of how the decision is reached, the essential element is that the decision process is oriented towards the achievement of an implicit or explicit set of human-defined objectives. Achieving these objectives is the raison d’être of the policy decision, even if, from a social point of view, the ultimate motivation, and, therefore, the legitimacy of the decision to set these precise objectives, may have been spurious (e.g., to unjustifiably favour a certain service provider over others). If the advice to the decision-maker is assisted by an AI system, however, that objective needs to be explicit as it is what the system will try to achieve and optimise in relation to other factors.

Allow me, however, to explain the consequences of this statement, before exploring these objectives. The state, as an agent, does not act on its own behalf. The existence of the modern liberal state is based on the founding principle that it does not act on its own interest, but as a human creation for the benefit of its society. The human-defined objectives are the reason for its existence, the state being a tool to achieve them.

Leaving aside whether this is actually the case – and setting aside the views of those who see the state as better described as a mechanism for the preservation of certain parts of that society, as well as more theoretical discussions about the role of the state – in this section I assume that decision-makers are honest about those objectives or boundary conditions.Footnote 28 As noted in the previous sections, what matters for systems that merely apply policy to reach outputs is to correctly reflect that policy in those outputs. Broader objectives, such as fairness through redistribution or equality of opportunities, must be embedded in the policy design, the outputs just being the automated application of that policy. Here, the policy is what is being created by the Automated State, so the system will design or propose a policy that optimises those objectives.

In societies that democratically elect their decision-makers, one can assume that some of these objectives can come from different sources, such as:

  1. Those determined by basic legal norms that constrain the action of public authorities. This is the case, for example, of constitutional rules, such as ‘no discrimination on grounds of age or socioeconomic status’, or a mandate to redress inequality derived from socioeconomic grounds, or a ‘right to access a no-fee system of quality education until the age of 16’.Footnote 29

  2. Those determined by the objectives hierarchically established at higher levels of decision-making. For example, one could consider the programme of a central government, or the priorities established at the ministerial level – and the principles explicated therein – as a restriction on the action of lower hierarchical levels, especially when materialised in formal directives. For example, in the fiscal context, one objective could be increasing the fight against fraud or, in the education context, improving the standardised results of students from disadvantaged backgrounds.

  3. Those that are determined by the specific decision-maker (organ or individual), who is formally in charge of making that decision. For example, in the tax context it could be accepting that more exhaustive detection of fraud would come at the cost of more administrative complaints from honest taxpayers who would be incorrectly identified. In the education context it could be a limit on the amount of resources that could be allocated to improving educational standards overall.

In all three cases the objective is the key element for the development of policy. An Automated State in which AI systems are designed to optimise these objectives will, in principle, derive its legitimacy and legality from those objectives. More importantly, the sequence of objectives listed above can be seen as hierarchical, with policymakers assisted by these AI systems bound by the objectives established at the superior levels. As an example, suppose there is a constitutional mandate to offer free education for all. A decision-maker at the lowest level of the hierarchy (district, local council, federated state, or national level) who sets the level of expenditure for public (government-paid) education at that level could not accept a recommendation from the Automated State suggesting, as the optimal intervention, one expected to deliver a significant improvement in overall academic standards for 99.9 per cent of the students at that administrative level but that would not offer free education to the 0.1 per cent living in the most remote communities. A proposal that would involve the exclusion of even one person would not be acceptable. Similarly, an option that improves the academic results for all at a given cost, but forces students from the most deprived backgrounds to separate from their families, would be a violation of a tier 1 objective, and, therefore, not acceptable either. A correct design of the AI system producing the recommendation should not even generate these options.

Obviously, not all objectives follow this neat hierarchical structure. Sometimes the systems could offer recommendations for policy options that are seen as trade-offs between objectives at the same level. At other times, there could be enough flexibility in the language of the boundary conditions that, at least formally speaking, it would not be necessary to build those boundary conditions into the system. This would allow systems to generate some proposals that would not be accepted under a stricter objective or a different reading of the wording of the objective.

For example, a system may be allowed by humans to suggest an education policy that is expected to achieve a significant improvement for 99.9 per cent of the students. In this case, policymakers tasked with creating a policy to improve standardised scores may decide to allow systems to consider this option if they know that they could meet the formal requirement of providing free education for all students through other means or policies. That could, for example, involve providing an untested remote self-learning programme to students for free. This would be feasible in policy settings where the boundary condition is just ‘providing no-fee education’ without the qualification of ‘(proven) quality’.

As we know, it is not unusual for general mandates to be unqualified, particularly at the constitutional level, with the qualifications derived by interpretation from other sources (human rights principles or authoritative interpretations from high-ranking courts). In any case, it is how humans decide to translate those mandates into system objectives that matters here.

Yet this kind of problem may still not be that different from what systems of administrative control are facing today. The level of discretion is still added into the systems by humans, and this concrete human choice (the decision to place other options within the scope of analysis) is still the one that could be controlled by courts, ombudsmen, or any other system of administrative checks.

A second type of problem appears when the system shows that certain options are superior to others but benefit some groups of people differently. For example, an option expected to improve the results of all students, raising the results of students from advantaged socioeconomic backgrounds by 10 per cent and those from disadvantaged backgrounds by the same 10 per cent, would not be generated as a recommendation by a system that is requested to produce only options also expected to redress inequality. However, the same system could recommend the next best option at the same cost, which is expected to improve the results of the first group by 7 per cent and the second by 9 per cent, as this option does redress inequality, which was a requirement set by humans for the system.

Favouring the latter proposal may seem absurd from a (human) rational point of view. The first option is clearly superior, as it would see all students better off overall in terms of academic performance. Yet only the second option would meet the objectives manifested in the boundary conditions. A correctly built system would respect the hierarchy of objectives. Given that redressing inequality is more likely to be a constitutional or general mandate and, therefore, to trump improving results – which is more likely to be an objective set at a lower hierarchical level – the first option would never be offered as a suggestion to the policymaker.
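The same logic can be shown in a few lines of code, using the percentages from the example above (everything else is assumed for illustration): a hard ‘must redress inequality’ condition removes the 10 per cent/10 per cent option from the set of recommendations even though no group does worse under it.

```python
# Options expressed as expected gains per group; percentages from the example above.
options = [
    {"name": "equal_gain", "advantaged": 0.10, "disadvantaged": 0.10},
    {"name": "narrowing_gap", "advantaged": 0.07, "disadvantaged": 0.09},
]

def redresses_inequality(option: dict) -> bool:
    """Constraint as encoded by humans: disadvantaged students must gain more."""
    return option["disadvantaged"] > option["advantaged"]

recommended = [o["name"] for o in options if redresses_inequality(o)]
print(recommended)  # ['narrowing_gap'] -- the Pareto-superior option is never shown
```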

In this case, a better approach would be to allow the Automated State to present the first option to policymakers, as long as the expected outputs are clear and the violation of the boundary condition is explicit. This would allow policymakers to simultaneously intervene in other ways to redress inequality. AI systems do not live in a policy vacuum, so it is important to design and use them in a way that allows for a broader human perspective.

A third type of problem could occur when the system is designed with an added level of complexity, presenting the options in terms of trade-offs between different objectives at the same level.Footnote 30 For example, the choice could be offered to the decision-maker as a set of policies, each expected to deliver overall improvements in educational standards for all students, with a bigger gain for those from disadvantaged backgrounds (i.e., meeting all the boundary conditions and objectives), but expressed in terms of cost (in monetary units) and level of overall improvement. It would then be up to the decision-maker to decide which of the many possible options is preferred. In this case, the main problem is one of allocation of resources, so this could initially be left to human discretion. However, as public resources are limited, if different AI systems are used to automate policy-making, setting a limit for one of these trade-offs would affect the level of trade-offs available to recommendation systems operating in other policy areas.

This can be grasped intuitively in the tax context. Imagine a public authority tasked with maximising tax revenue at the lowest cost within the legal boundaries. The system assessing anti-fraud policy may recommend an optimal level of investment in anti-fraud activity and identify the taxpayers who should be checked. Another system may be used to recommend possible media campaigns promoting compliance, suggesting an optimal level of investment and the type of campaigns expected to give the highest return. Yet it is possible that the level of resources available is not enough to follow both suggestions. A broader system could be created to optimise both systems considered together, but what cannot be done is to consider each system in isolation.
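A toy sketch of the point, with purely assumed figures and return curves, is the following: optimising each system in isolation can recommend spending that jointly exceeds the shared budget, whereas a broader optimisation searches over splits of that budget.

```python
BUDGET = 10.0  # shared budget, arbitrary units (assumed for illustration)

def fraud_return(spend: float) -> float:
    """Assumed diminishing returns on anti-fraud investment (illustrative only)."""
    return 6.0 * spend ** 0.5

def campaign_return(spend: float) -> float:
    """Assumed diminishing returns on compliance campaigns (illustrative only)."""
    return 4.0 * spend ** 0.5

# Optimised in isolation, each system might ask for, say, 8 units: infeasible,
# since 8 + 8 exceeds the shared budget.

# Joint optimisation instead searches over splits of the one budget.
splits = [(0.1 * i, BUDGET - 0.1 * i) for i in range(101)]
best_split = max(splits, key=lambda s: fraud_return(s[0]) + campaign_return(s[1]))
print(best_split)  # roughly (6.9, 3.1): more goes to the higher-return activity
```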

Looking at these three types of problems together gives us an idea of how this Automated State is different. For the systems discussed in Section 7.2, those that replace humans, I indicated that the most promising regulatory approach is one that focuses on the systems and on testing beforehand, and then shifts to monitoring of the outputs. As the bulk of the effects of each automated decision will be centred on a limited (even if large) number of individuals, the affected parties will have an incentive to raise their concerns about these decisions. This could allow for human (administrative or judicial) review of these decisions according to the applicable rules. The automated outputs could be compared with what humans could do, according to the applicable administrative law, in those circumstances. This process would confirm or modify the automated decision, and the automated systems could be refined to learn from any identified errors.

However, for the systems discussed in Section 7.3, which are used to do things that humans cannot do, especially in terms of policy design or supervision, it is impossible to proceed in this way. Any challenge to a concrete decision could not be assessed against what a human could have done. Any disagreement about the reliability of the system would be too complex to disentangle.

Yet there are aspects of the process that would still need to meet societal standards about the adequate use of resources, the fulfilment of superior principles of the state or, more generally, the state’s positive obligations to protect human rights, remove inequalities, and redress violations of the rights of individuals or groups. At the very least, three elements of how humans interact with the systems that generate these outputs could be considered.

First, humans must test the systems. To grant some legal value to the recommendations of these systems – for example, to demand more from policymakers who deviate from their recommendations – this type of Automated State must be tested in real-life, real-time conditions. In the next section I explain in more detail what I mean by this point. Suffice it to note here that systems tested only against data from the past may not perform well in the future, and their legal usefulness as a standard for the behaviour of policymakers may therefore be undermined.

Second, humans must set the objectives that the system is meant to optimise (and suggest ways to achieve) and the boundaries that the suggestions must not overstep. Which objectives and boundaries are incorporated into the system, and how they are hierarchically placed and balanced, can be explained and the legality of those choices controlled.

Third, humans must translate automated suggestions into policy. The AI system used for assessing the quality of teaching in the United States, discussed in Chapter 10 of this book,Footnote 31 is a perfect illustration of this point. Even if we trusted that the system was correctly evaluating the value of a teacher in terms of the improvement of their students’ results, the consequence attached to those findings is what really matters in the legal sphere. Policymakers using such a system to assess the quality of teaching could decide to fire the lowest-performing teachers – as was the case in Houston – or to invest more in the training of those teachers.

7.4 Preparing for the Two Tales of the Automated State

In the previous sections, I discussed the two different tales of the Automated State and the distinct legal implications that each tale involves. This, however, was an oversimplification. Going back to the VioGén system presented above, one can today see a system of implementation, typical of the first tale of the Automated State, as it assesses each individual woman based on her risk of revictimisation. The suggested assessment, if accepted by the human decision-maker, automatically triggers for that victim the implementation of the protection protocol linked to her level of risk. Yet VioGén could easily become a policy design tool. For example, it could be repurposed to collate all data for all victims and redeveloped into a system that allocates resources between women (e.g., levels of police surveillance, allocation of housing, allocation of educational programmes, suggestions about levels of monitoring of restraining orders for those charged with gender-based violence). If we consider every automated system a potential policy tool, we may be moving towards an excessive degree of administrative control of policy-making. As policymakers will have much more, and richer, data, administrative law could be used to question virtually any policy decision.

At the other extreme, one could think it better to revert to almost complete deference to the discretion of policymakers. If we think of policy-making as a black box driven by criteria of opportunity or the preferences of high-ranking elected officials, it is difficult to justify the need for a new type of administrative law for these situations, even if policymakers are better placed to assess the consequences of their decisions. One can, for example, imagine the decision of a public authority to approve a new urban planning policy after a number of houses are destroyed by floods. The new policy may be so different from previous practice that its effects in the case of another flood cannot be assessed by an AI-driven recommendation system. The system, however, can suggest several minor modifications that are expected to be enough to avoid a repetition of the situation. In this case, the ultimate purpose of the new policy may be to increase the resilience of housing in the event of new floods, but the real value of the initiative is to show that public authorities are reacting to social needs.

The expected evolution of the first type of the Automated State could also support deferring to the discretion of policymakers and ignoring the new tools of the Automated State from an administrative law perspective. As more decision-making is automated at the level of implementation, a reduction of variance should be expected. The real-world effects of these outputs could then be analysed in real time, and the outputs will speak for the policies they implement. Public office holders would then be accountable if they fail to modify policies that are generating undesirable outputs. The effects of a change in policy that is implemented through fully automated means will be the basis on which to judge that policy. Policy design will then refer not only to ‘design’, but also to the choice and design of the automated tools that implement it.

In my view, neither of these options is reasonable, so it is necessary to start developing new principles that acknowledge the legal relevance of these new tools in policy-making, without distancing ourselves excessively from the process. Absolute deference to policymakers’ choices, even if tempting, would be a reversal of the positive ‘erosion of the boundaries separating what lies inside a government and its administration and what lies outside them’ or, in other words, of the transition from ‘government’ to ‘governance’.Footnote 32

A way to illustrate this latter point would be to consider the French example, and its evolution from a black-box State to an administré-centred one.Footnote 33 This transition, induced – according to a leading French scholar – by Scandinavian-, German-, and EU-driven influences, has forced administrative law to go beyond traditional rights in French law (to an intelligible explanation, to receive a reply, to appeal a decision considered illegal) into a regime where the administré can be involved in the decision-making process and is empowered vis-à-vis the State.Footnote 34 It is not just the output, but also the logic behind the process that matters.

If the reasons for policy decisions matter, how can we then use the Automated State to demand better accountability for those decisions? Trusting this Automated State blindly or inextricably binding decision-makers to its decisions does not appear to be a good option, even if we have tested the AI systems according to the most stringent requirements. My suggestion is to develop a few principles or heuristics that could guide us in the process of reform of administrative law.

The first – and, from my point of view, the most essential for a technology without a historic track record of performance – is that systems designed to make predictions about the impacts of public actions in the future need to have been tested in real conditions. This Automated State could only be relied upon for the purpose of legal assessments of policy decisions if the predictions or suggestions of its systems have been proven reliable over a given number of years before the date of the decision.

Systems that are ‘refined’ and reliable only when tested against the past cannot be a legal basis to contest policy decisions. Only real-life experiments in policy design, without ‘the benefit of hindsight’, should matter. In these cases, deference should be paid to policymakers to the same degree as before. However, for learning and testing processes an adequate record of use should be kept – that is, systematically recording how the system was used (for testing purposes) in real-time conditions.

Secondly, we should be flexible about setting boundaries and objectives. Administrative rules should not impose designs that are excessively strict in terms of the hierarchy of objectives, as some of the objectives can be addressed by different policies at the same time, not all of them covered by the automated systems. For those cases, the systems should be designed to allow for the relaxation of the boundary conditions (objectives) in a transparent manner, so policymakers can assess the need for other interventions. In the education example above, a rigid translation of legal principles into data could blind us to policy options that could be adapted further to respect legal boundaries, or even be the reason to adapt those boundaries.

Finally, where decisions deviate from those suggested by legally reliable automated systems, (i) they should be justified by decision-makers in more detail than traditionally required; and (ii) the selected (non-recommended or Pareto-inferior) policy should also be assessed with the relevant AI systems before implementation. The results of that assessment, the policymaker’s justification, and all connected information should – in normal circumstances – be made publicly available. This would allow the improvement of systems, if necessary (e.g., by incorporating other considerations), and allow better administrative or judicial control of the decision in the future. Guidance could be drawn from decisions that override recommendations of environmental impact assessments, where an administrative culture that relied on discretion rather than law – for example in the English contextFootnote 35 – has traditionally been an obstacle to the effective judicial control of those decisions. Discretion should be accepted as an option as long as it is explicitly justified and, hopefully, used for developing better automated systems.

8 The Islamophobic Consensus: Datafying Racism in Catalonia

Aitor Jiménez and Ainhoa Nadia Douhaibi
8.1 Introduction

Catalonia is home to the largest Muslim communities of the Iberian Peninsula: roughly 8 per cent of its population (617,453 out of 7,739,758) follows the Islamic tradition. Despite the neofascist natalist rhetoric of far-right parties speaking about a ‘great replacement’ (Aduriz, 2022), the number of Muslim students is consistent with the total number of Muslims. There are 1,337,965 non-tertiary education students in Catalonia,Footnote 1 of whom approximately 101,721 (7.60 per cent) are Muslims.Footnote 2 However, here the statistical consistencies end. The majority of Muslims work in precarious jobs or have no jobs at all. Roughly 20 per cent of the migrant population is unemployed, compared to 8.19 per cent of the general population in Catalonia.Footnote 3 They live in impoverished and deprived zones with less access to public resources and green areas. Traditionally migrant neighbourhoods such as la Barceloneta, el Raval, or Poblenou in Barcelona are among those most affected by the touristification and gentrification unleashed by foreign investment firms.Footnote 4 With scarce jobs, skyrocketing rents, and rising living costs, thousands of families are forced to live in slums and industrial areas with extremely poor living conditions, exposed to violent evictions and fatal accidents.Footnote 5 But the socioeconomic sphere is just one of the areas in which the Muslim population faces discrimination.

Muslim communities are targeted in relation to their beliefs, culture, and ways of socialising. Despite their demands, 90 per cent of Muslim students do not enjoy the same right to religious classes in the public education system as their Christian-Catholic counterparts. Muslim communities often face fierce resistance from far-right organisations and public officers against their attempts to set up and/or build mosques.Footnote 6 However, the situation is even worse within the welfare and punitive systems. People of migrant origin, especially those from countries with Muslim majorities, are disproportionately present in the prison system. Despite being just 3.1 per cent of the population, people of Maghrebian background represent 16 per cent of the incarcerated population in Catalonia.Footnote 7 As a large number of academics and activists have pointed out, this is not a matter of rampant criminality among a very specific and identifiable segment of the population, but the consequence of racial profiling by police agencies and social services, which disproportionately target those produced as ‘enemies’.Footnote 8 These episodes of discrimination are not accidental, but rather functional elements of what we conceptualise as the Islamophobic Consensus.

From the early days of the Inquisition to the latest developments in automation, the social construction of the Muslim as a social enemy has helped to shape both Spanish identity and the Spanish state’s surveillance and repressive apparatuses. The subjectification of Muslims as a threat ranges from labelling them as job-stealers, and hence as a risk to the working class, to casting them as the ultimate enemy, the terrorist.Footnote 9 This racialisation process operates not only in relation to newcomers, but also towards the second and third generations of Muslims. As Suhaymah Manzoor-Khan has recently pointed out,Footnote 10 the pernicious characteristics attributed to ‘Muslim culture’ rapidly evolved into a racially inherited condition that passes through generations.

The second decade of the twenty-first century has witnessed the proliferation of heavily racialised surveillance and carceral geographies. As the anti-immigrant raids in the United Kingdom, the United States, and Australia show, bordering technologies now extend to every territory, every street, and every workplace.Footnote 11 The ‘exceptional’ and ‘temporary’ powers to surveil and to punish delegated to public authorities in order to fight the ‘war on terror’ are now well-established practices affecting every area of public life. In Catalonia, entire Muslim communities and mosques are targeted and surveilled by an expanding ‘preventive’ sociotechnical system.Footnote 12 An army of educators, social workers, and police officers is now entrusted with gathering information from endless data points and with reporting to their civil and police superiors the most subtle changes in individual and collective behaviour. For instance, teachers are taught by police agencies that everyday manifestations of religiosity, such as the adoption of ‘Islamic’ dress codes or collective prayer, could be indicators of ‘radicalisation’. This information is used to terrorise vulnerable communities, who are routinely threatened with criminalisation, family separation, and even deportation.

The system to prevent terrorism envisaged by the Spanish multiagency initiative on national security operates as a self-fulfilling prophecy mechanism. The risk assessment tools may flag mundane and often contradictory facts as threatening symbols of radicalisation. For instance, either exercising too much or leading an entirely sedentary life may lead those monitoring a young Muslim to believe that he is up to something.Footnote 13 In the same vein, young Muslims following strict religious routines may be taken to signal fundamentalist tendencies, while not following religious mandates may also be, in the eyes of the police services, a worrying nihilistic symptom of latent lone-wolf tendencies. These instruments, and the way they look at and produce Muslims, have a profound impact on the lives of thousands. Are these individuals appropriate candidates for welfare benefits? Will they be the subject of an investigation, either by social services or by any of the multiple police agencies? Will they be released on parole? Will they remain in prison? Will they be prosecuted on terrorism charges? A vast sociotechnical assemblage of analogue and digital technologies controls the lives of thousands of Muslim people in Catalonia.

However, these control and disciplinary technologies are not only aimed at limiting, cancelling, and governing subaltern people. Drawing on the structural comprehension of racism pinpointed by Eduardo Bonilla-Silva,Footnote 14 we argue that these technologies are part of what we here coin the Islamophobic Consensus, that is, the Southern European iteration of racial neoliberalism: a system of domination intended to reinforce structural gender, racial, and class inequalities through a sociotechnical system encompassing all sorts of surveillance and repressive legal, political, economic, educational, and military instruments. Some may argue that the Spanish surveillance state has not reached the full or high degrees of datafication or digitalisation seen in countries such as the Netherlands.Footnote 15 And perhaps digitalisation in Spain will never reach this level, given the characteristics of Southern European countries. However, as this chapter hypothesises, the vast surveillance apparatus deployed for gathering data on vulnerable populations, and the extensive use of actuarial and automated methods, are leading to a form of datafied surveillance state.Footnote 16

This chapter has two objectives. The first is to point to the necessity of building a non-Anglocentric theoretical framework from which to study the ideological and sociological foundations on which datafied forms of societal oppression stand. As we develop further below, the datafication techniques underpinning contemporary automated governmentalities build on long-term historical, epistemological, and ideological processes. In the case of Southern Europe, these techniques can be traced back to the sixteenth-century genocidal biopolitics deployed against Muslims, Jews, Roma, and Indigenous peoples.Footnote 17 We aim to fill an important gap in race, sociolegal, and critical data studies. Despite Spain and Catalonia’s long and influential history of surveillance and racial oppression, their institutional surveillance apparatuses remain largely unknown and understudied. As the chapter demonstrates, the data surveillance state does not rely on the same technologies, focus on the same subjects, or pursue the same objectives in every context. On the contrary, it draws on contextual genealogies of domination, specific socioeconomic structures, and distinctive forms of distributing power. The second objective is to provide an empirical analysis of the ways the Islamophobic ConsensusFootnote 18 is being operationalised in Catalonia, and with it to expose the overlapping racist mechanisms governing the lives of racialised black and brown young adults.

Drawing on empirical and archival research, the first part of the chapter analyses the surveillance-governmental apparatus deployed over Islamic communities in Catalunya. The second part frames the ideological, epistemological, and historical foundations of the Southern European path to racial neoliberalism, here labelled the Islamophobic Consensus. Drawing on surveillance and critical race studies, we synthesise the defining features that distinguish this model of domination from other iterations of neoliberal racism. The section then examines two dimensions of the Islamophobic Consensus: Islamophobia as an epistemology of domination and Islamophobia as a governmentality.

8.2 Datafying Islamophobia

Since 2016, Catalonia has been implementing the Catalan Protocol for prevention, detection and intervention in processes of Violent Extremism, or PRODERAE, in schools, local police stations, prisons, and social services. PRODERAE is part of the wider Special Counter Terrorism Policing Operational Program. Despite its relevance (and the persistent requests of the authors through official channels), most details of the PRODERAE remain unavailable to the public and hence hidden from democratic scrutiny due to ‘security reasons’.Footnote 19 However, a leak allowed us to access some documents and an unofficial recording of the PRODERAE training. On 18 May 2022, at the request of the Catalan parliamentary group Candidatura d’Unitat Popular, we also obtained information on the training given on these instruments to public servants across different services. Specifically, the scarce data provided by the Catalan authorities accounts for the number of attendees and the number of courses given. We have cross-referenced this documentation with the PRODERAE’s antecedent, the PRODERAI-CE Protocol de prevenció, detecció i intervenció de processos de radicalització islamista – Centres Educatius [Protocol for the prevention, detection and intervention of Islamist radicalisation processes – Education centres], widely used on young Muslims. While not fully accurate, this analysis provides a glimpse into the racist governmental strategies deployed over the Muslim population in Catalonia.

Both instruments evaluate and assess the risk attributed to individuals based on different elements, such as their individual behaviour; their social, economic, professional, and educational contexts; or the ways they engage with beliefs, politics, and religion. In this regard the instruments used in Catalonia are similar to other predictive and preemptive tools used in the European context, such as the Dutch Violent Extremism Risk AssessmentFootnote 20 and the British Structured Professional Guidelines for Assessing Risk of Extremist Offending.Footnote 21 Like the infamous British Prevent strategy,Footnote 22 the model proposed by the Spanish and Catalan authorities establishes a comprehensive although distributed surveillance regime over the population deemed at risk of radicalisation (the entire Muslim community).

The PRODERAI-CE differentiates four areas from which the risk of a given subject is evaluated: personal development, school context, family context, and social context. To obtain information the system relies on a vast array of agents, technologies, and points of data extraction that are amalgamated under the securitarian prism – members of the community, educators, social workers, police officers, and intelligence services. To that end, the Catalan government has deployed considerable effort and resources in providing training on the use of these tools to educators (3,118 since 2018), officers of the criminal justice system (CJS) including lawyers and social workers (2,013 since 2015), and police officers (30,902 since 2015). This has resulted in 667 thorough investigations, of which 250 were conducted by police intelligence services. Herein, the boundaries between welfare and policing, street surveillance and cyberwarfare, blur into a diffuse although perceptible regime of racialised social control.

Among the factors related to personal development, the instruments evaluate negatively ‘the difficulty of managing emotions’, ‘the difficulty of building a multiple identity’, the ‘proximity to radicalised peer groups’, and ‘low expectations of success’.Footnote 23 Elements such as dress code (hijab, niqab), personal appearance (beard), and dietary and leisure habits (halal, alcohol consumption) are surveilled with special interest. In the same vein, public servants are instructed to monitor closely religious beliefs and political attitudes towards specific issues. Besides the above elements, school educators are asked to pay special attention to ‘the lack of bonds between peers’ and ‘the difficulty of (the teacher) establishing bonds with students’,Footnote 24 as these elements are considered risk indicators.

With regard to the family environment, ‘low family participation and involvement in school activities’ and ‘the [lack of] sense of belonging’ are also treated as elements to consider in measuring potential radicalisation processes.Footnote 25 In terms of social context, the instruments evaluate negatively ‘the influence of social networks’, or whether the individual belongs to ‘socioeconomically disadvantaged contexts’. Another element that may trigger an alarm is a ‘lack of attachment to the social environment’.Footnote 26 The information collected by public servants is transferred to the Territorial Evaluation and Monitoring Board, where police officers and education inspectors decide on the credibility of the indicated risk. This could eventually lead to further investigation, wiretapping, raids, detentions, and deportations.

Given the opacity, secrecy, and lack of transparencyFootnote 27 guiding the Spanish and Catalan authorities’ operations with regard to cases of alleged radicalisation, it is extremely difficult for researchers, activists, and even politicians to access critical information. What data-gathering tools, both analogue and digital, are currently being used? How is the data gathered across services being stored, processed, and analysed, and by whom? Are these data sets feeding ADM systems used in the public sector? Who is entrusted with overseeing these data-intensive tasks? Have these instruments and technological tools passed any form of auditing or impact assessment? We have asked the Spanish and Catalan authorities these and other questions, but have not received any response whatsoever. However, we can infer some of this information from: (1) the documentation related to RisCanvi, the risk assessment tool used in the Catalan prison system to assess the potential recidivism of inmates in order to determine parole; and (2) the well-documented usage of tools for preventing ‘radicalisation’ in the United Kingdom and the Netherlands.

RisCanvi is an automated tool used by prison authorities, psychologists, criminologists, and social workers in the Catalan prison system. So far only one official report has been published,Footnote 28 which is consistent with the lack of transparency around other instruments and areas; however, the report and several academic works published by its designers give a glimpse of the system. The tool provides a recidivism risk score that helps professionals decide whether inmates can be paroled. To do so, it takes into account forty-five variables encompassing behavioural, sociodemographic, biographical, educational, economic, and social data. For instance, the system will measure whether an inmate belongs to a vulnerable group, their criminal history (and that of their peers), addictions, sexual behaviour, and so on. While the tool is necessarily overseen by humans, officers rarely disagree with the ‘algorithmic score’ (in only 1 per cent of cases), which, given the 82 per cent false-positive rate,Footnote 29 leads to a situation of unfairness. The weight of each variable in the final score has not been revealed; however, given the known items we can infer that automated discrimination may be taking place. For instance, the tool negatively weighs a vulnerable economic situation, employment status, and the criminal history of family and peers, among others. Items like these have been used in other toolsFootnote 30 as proxies to punish race and poverty, reinforcing social prejudices against vulnerable collectives. In addition, RisCanvi has been built upon historical data gathered by the prison system, a fact which raises important problems. As we have demonstrated elsewhere,Footnote 31 classism and racism run rampant across the Spanish and Catalan criminal justice systems. Racialised and poor subjects are more likely to be stopped, detained, arrested, and prosecuted. Hence, the ‘dirty’ data setFootnote 32 feeding the system nurtures a discriminatory feedback loop.
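Although RisCanvi’s actual items, weights, and thresholds remain undisclosed, a purely hypothetical sketch can show how an additive actuarial scale of this kind can penalise poverty and proximity rather than conduct; none of the variables, weights, or figures below are RisCanvi’s.

```python
# All items, weights, and the threshold below are invented for illustration only.
WEIGHTS = {
    "prior_violent_offence": 2.0,
    "unemployed": 1.0,               # socioeconomic proxy
    "family_criminal_history": 1.0,  # penalises peers' conduct, not the inmate's
    "vulnerable_group": 1.0,         # closely tracks race and poverty
}
HIGH_RISK_THRESHOLD = 3.0

def risk_score(items: dict) -> float:
    """Simple additive scale: sum the weights of the items marked as present."""
    return sum(WEIGHTS[k] for k, present in items.items() if present)

# Two inmates with the same offending history but different social positions.
well_off = {"prior_violent_offence": True, "unemployed": False,
            "family_criminal_history": False, "vulnerable_group": False}
marginalised = {"prior_violent_offence": True, "unemployed": True,
                "family_criminal_history": True, "vulnerable_group": True}

for label, inmate in [("well-off", well_off), ("marginalised", marginalised)]:
    score = risk_score(inmate)
    print(label, score, "HIGH RISK" if score >= HIGH_RISK_THRESHOLD else "low risk")
# Identical offending history, yet only the marginalised inmate crosses the
# threshold; if human reviewers override the score in roughly 1 per cent of
# cases, almost all such classifications stand.
```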

Britain’s violent extremism prevention programme, popularly known as Prevent, is part of the United Kingdom’s national counter-terrorism strategy, CONTEST. It was launched in 2006 under the then-governing Labour Party.Footnote 33 Its reach has expanded from police and prisons to childcare, elementary and high schools, tertiary education institutions, and even the National Health Service (NHS). The Extremism Risk Guidelines 22+ (known as ERG 22+), developed by Her Majesty’s Prison and Probation Service in 2011, is the inductive instrument that gathers the ‘radicalisation signals’ and backs up the programme with a risk assessment framework. The ERG22+ is presented as ‘a structured professional judgement (SPJ) tool that assesses individuals along 22 factors that are grouped into three domains; Engagement, Intent and Capability’.Footnote 34 This has been replicated in the PRODERAE-PRODERAI-CE training, which uses terminology such as ‘identity, meaning and belonging’, ‘us and them thinking’, ‘overidentification with a group, cause or ideology’, ‘the need to redress justice’, or ‘the need to defend against threats’.

Many scholars have highlighted how the UK’s automated tools associate Muslims with terrorism, putting the entire Muslim population on the spot.Footnote 35 Moreover, recent research highlights that community surveillance is becoming universal surveillance.Footnote 36 For instance, NHS public servants are now legally obliged to comply with their policing tasks, not only over ‘suspicious communities’, but also by looking for unpredicted new patterns of extremism in the entire patient population.Footnote 37

In fact, as Heath-KellyFootnote 38 points out, the implementation in the national health and education systems belongs to modalities of calculation derived from automated and big data tools that enable mass surveillance methods. She even argues that this kind of surveillance inductively produces the terrorist profile.Footnote 39 Consequently, the outcome of this approach is the production of Islamophobic data associated with Muslim (pre)criminality. Even if cases are dismissed, the details of the people flagged remain in the UK’s police databases for seven years.Footnote 40 Prevent has been the target of profound critique in numerous reports from antiracist and anticolonial grassroots movements (Islamic Human Rights Commission, Cage UK), as well as international human rights organisations such as the Transnational Institute and Amnesty International.Footnote 41 One of the latest reports pointed not only to its Islamophobic and discriminatory nature, but also to its ineffectiveness.Footnote 42 Despite the wide critique, the UK Home Office has only stated that it ‘can find no evidence to support these claims’.Footnote 43

Despite the limited information available, the PRODERAE and PRODERAI show important theoretical and operational flaws worth highlighting. First and foremost, both instruments are aimed at preventing radicalisation, yet there is a striking lack of theoretical consensus on its definition.Footnote 44 Radicalisation takes shape when protocols such as PRODERAE are applied; the protocol is thus a tool for producing ‘dangerous subjects’. The second problem is that many of the hidden indicators are expressions of religious practice. Changing dress codes or adopting a more visibly Muslim appearance, such as wearing a hijab, putting henna on one’s hands, respecting prayer hours, demanding a halal menu, speaking or expressing opinions based on Islamic precepts, or even expressing social discontent or pointing out Islamophobic or racist practices, can all be read as indicators of radicalisation.

The tools analysed are mired in vagueness and abstraction, if not blatant contradictions. The factors and indicators that guide their implementation are left to the arbitrary interpretation of public officers. For instance, playing too many violent video games may indicate ‘military training’, while not playing video games at all may be a symptom of rejection of ‘westernisation’; in consequence, both playing and not playing video games become a cause of suspicion. In the same vein, many of the ‘radicalisation symptoms’ indicated by the tools, such as trouble navigating multiple identities or swift changes in appearance, friends, and habits, are most often processes inherent to the personal development of teenagers and young adults, and not ‘strange’ or ‘deviant’ as the tools make them out to be. These tools embrace a hyper-individualistic approach, making individuals responsible for the consequences of complex socio-structural problems. For instance, individuals are accused of separatism and cultural isolation, ignoring the endemic economic crisis that, along with the racial division of labour, nurtures a growing racialised geography and school segregation. To illustrate, the chances of being flagged as a risky subject rise dramatically when students rely too much on ‘cultural and religious’ peers, because, as the document states, ‘the school has difficulties in promoting an inclusive environment’.Footnote 45 The tools, far from helping the school to better understand these difficulties, seem to present them as elements of suspicion. As we can see, the pernicious consequences of the racial neoliberal project are datafied and hidden under an aura of false technological neutrality, only to be weaponised against its victims.

Finally, as multiple scholars have warned, predictive and preemptive tools used across the public sector (welfare, CJS, policing, and surveillance) entail considerable risks, especially for already vulnerable and racialised populations.Footnote 46 This has been demonstrated in recent scandals involving classist and racist sociotechnical systems deployed in Australia, the United Kingdom, and the Netherlands, to name a few. It was Bernard Harcourt who famously stated that these technologies can ‘create a vicious circle, a kind of self-fulfilling prophecy’Footnote 47 contributing to ‘reinforce[ing of] stigmatisation, significantly undermining living conditions of certain population groups and restricting the possibilities of insertion of the individuals belonging to them’.Footnote 48 Some have rightfully described the plans to prevent radicalisation in Spain as an example of neoliberal exceptionalism,Footnote 49 a system that ‘employs surveillance technologies and situational crime control measures and that minimises or curtails a variety of social welfare programs’ against vulnerable people, producing them as a dangerous population and criminalising them accordingly. Far from preventing any potential harm, the datafication processes triggered by tools like the ones analysed increase the production of racialised pre-criminality and reinforce socially harmful policies. Our aim in the following sections is to contextualise the ongoing actuarial and datafication processes within a longer history of Islamophobia that far predates contemporary forms of datafied governance.

8.3 Southern European Neoliberalism Fundamentals

Multiple local organisations and antiracist grassroots movements, such as the Asociación Musulmana por los Derechos Humanos [The Islamic Association for Human Rights]Footnote 50 and SOS Racisme Catalunya,Footnote 51 have denounced how institutional, political, and social Islamophobic narratives run rampant in Southern Europe. They are not alone in their criticism. Higher supranational instances have also pointed in the same direction. For instance, the UN Special Rapporteur on freedom of religion or belief released, in 2021, a report on anti-Muslim racism describing how government-driven securitisation processes severely affect Muslims’ rights to freely exercise their religion, with intelligence services surveilling mosques and governments such as the French restricting the ability of Muslim communities to establish charitable institutions.Footnote 52 However, these efforts can do little against the Islamophobic narrative deployed at every institutional and social level. In the media, a wide variety of actors, from so-called liberal philosophers to well-known white feminist writers, have contributed to the production of the Islamic otherFootnote 53 with labels such as ‘backwards’, ‘antimodern’, ‘violent patriarchal’, and ‘dangerous’.Footnote 54 As the report highlights, these stereotyped narratives are promoted by ‘prominent politicians, influencers, and academics’ who ‘advance discourses online on both social networks and blogs that Islam is innately antithetical to democracy and human rights, particularly gender equality, often propagating the trope that all Muslim women are oppressed’. Sociologist Sara Farris has coined this ideological, neoliberal political-economy convergence ‘femonationalism’.Footnote 55 Despite meaningful divergences in other political arenas, neoliberal politicians, right- and far-right nationalist parties, and feminist bureaucrats, or ‘femocrats’, seem to agree on the intrinsic dangers of Islam in general and male Muslims in particular.

Politically, far-right parties have cashed in on the endless succession of crises caused by financial capitalism, becoming key political actors in Spain (third-largest party), Portugal (third-largest party), Greece (formerly third-largest party), and Italy (largest party). The most impoverished and discriminated-against segments of the population were used by the far right as scapegoats for the 2008 and 2021 crises, accused of stealing jobs, of being responsible for a nonexistent wave of criminality, and of destroying moral values and social coexistence. Rising neofascist political parties such as Vox in Spain (a spin-off of the conservative Popular Party) have, for instance, proposed to revoke the already granted Spanish citizenship of ‘dubious migrants’, stating that ‘[c]itizenship is a privilege’.Footnote 56 These discriminatory discourses permeate the political landscape across the political spectrum due to the modern transhistorical persistence of what Edward Said described as Orientalism.Footnote 57 Islamophobia is indeed one of the defining features of the Southern European iteration of racial neoliberalism. Although it shares some common traits with its Global North counterparts, Southern European racial neoliberalism emerges from a different genealogy and is built upon different socioeconomic and ideological structures, thus presenting its own characteristics.Footnote 58 While the main objective of the chapter is to focus on the Islamophobic Consensus, it is worth highlighting some distinguishing elements of Southern European neoliberalism.

First, Southern European racial neoliberalism does not stick to a single ideology, policy, technology, or regulation, nor is it univocally attached to exclusive forms of domination. Instead, it is composed of a baroquianFootnote 59 multilayered structure combining traditional and the latest technological developments (including ADM and AI) with colonial and postcolonial practices of racialised governmentality developed through centuries of colonialism. These proto-racistFootnote 60 dynamics, defined by pre-capitalist and capitalist practices of cultural and religious discrimination, still inform the performativity of the Spanish racial formation. For instance, the colour-line created during the slave economy still works as a racialising technology in current welfare, migration, and criminal policies.Footnote 61 As Deepa Kumar stated: ‘While race is dynamic, contingent, and contextual, the ideology of Islamophobia attempts to fix what it means to be Muslim and to create a reified Muslim whose behaviour can be predicted, explained, and controlled.’Footnote 62 Because of the above, racial politics, deeply bound up with the legacies of coloniality, operate with significant differences from those in other countries of the Global North. For instance, the Romani people, an extremely diverse and historically oppressed minority,Footnote 63 are also celebrated as quintessential to Spanish and Catalan popular cultures. As the global success of the Catalan singer Rosalia shows (see, for instance, her video ‘Málamente’), folklorised values and aesthetics associated with Romani people are appropriated and commodified by individuals and institutions, while Romani people are discriminated against at every level.Footnote 64

Secondly, the public sector plays a key role in the societal, economic, and political dimensions. It controls significant aspects of key ideological apparatuses such as schools and the media. It holds vast influence over the workforce through the direct employment of relatively significant segments of the population.Footnote 65 Unlike other polities such as the United States, Southern European countries have not fully privatised their criminal justice systems, retaining much of the organisational, operational, and design sovereignty over these areas.

Thirdly, the privatisation of democracy described by Basque philosopher Jule GoikoetxeaFootnote 66 as the hijacking of public institutions and common assets by corporations and private interests, and the perceptible sacrifice of social rights for the sake of the capitalist class, has not fully impacted the entire population. As a plethora of feminist researchers demonstrate, women, especially those belonging to racialised communities, have disproportionately paid a heavy price to contain what would otherwise have been a societal tragedy.Footnote 67 They have disproportionately sustained the family structures that have safeguarded the well-being of entire families, especially in taking care of dependents. In the following section we focus on two dimensions of what we identify as the Southern European path to neoliberal racism, here called the Islamophobic Consensus: Islamophobia as a racialised epistemic formation, and Islamophobia as a form of governmentality.

8.3.1 Islamophobia as an Epistemic Formation

During his courses at the Collège de France (1977–1978), French philosopher Michel Foucault described how western European states slowly switched their object and subject of governance from the vagueness of kingdoms and nations to the scientific measurability of territories and populations. The rise of governmentality and the birth of biopolitics positioned life as something to govern, to manage, to commodify, and to reproduce.Footnote 68 In his landmark book The Taming of Chance, Ian Hacking explained how, during 1860–1882, the expansionist Prussian State developed one of the most powerful statistical apparatuses of the era.Footnote 69 One of its most unsettling results was the emergence of a distinguishable and previously nonexistent population within Prussia: the Jews. Under Enlightened Prussian direction, the racialisation of German Jews started through the act of being counted and measured as a category separate from the true Germans of the Empire and as a dangerous population to be controlled, to be governed.

A new interest in counting and measuring bodies, goods, and commodities grew as a consequence of the expansion of new governmental techniques.Footnote 70 This led to a transformation in the way decisions, policies, and laws were produced, and to their re-centring on producing and managing territories and populations under a securitarian regime. How many people, of what kind and creed, were born and deceased? How many apples were picked? How much gold, how much iron, how many roads? Numbers became the glorified signature and evidence of scientifically based knowledge. Nature was subjected to the apprehension of its intuited regularities, and so were societies. Natural and social phenomena were no longer discernible through the lens of mechanicist eternal laws in motion. Instead, they were the result of complex interactions between a nearly endless succession of events determined by chance and apprehensible through mathematical probabilistic models … if enough data was available.Footnote 71 That was the first step towards the dethroning of law as the inspiring principle of the state and its substitution by actuarial dispositifs or, as Alain SupiotFootnote 72 put it, ‘the beginning of the governance by numbers’.

However, as Aimé Césaire explains in his powerful work Discourse on Colonialism,Footnote 73 almost every major institutionalised crime against the ‘white man’ had already been practised in the colonial laboratory against non-Europeans. The very first to be counted, numbered, and managed, to be commodified, to be produced and reproduced, to be scientifically governed and datafied, were not the white subjects of the metropolises, but racialised dominated subjects. The first systematic censuses were undertaken not in the European metropolises, as Hacking claimed, but in Al-Andalus, Peru, and Mexico, where Whites, Catholics, Moriscos, Jews, and Conversos (to name some of the endless racial categories) were counted in order to inform political, economic, ecclesiastic, and social decisions.Footnote 74 The will to exploit and colonise lands and peoples fuelled much of the sociotechnical developments nowadays considered modern science. An army of colonial scientists swarmed the colonies measuring forests and lakes, mines and dunes; counting bodies, scrutinising eyes, arms, and craniums; evaluating the fertility of the land and of women’s wombs.Footnote 75

The politics and heated intellectual debates of the fifteenth-century Iberian Peninsula testify to the interconnected genealogy of the birth of the colonial enterprise, racial capitalism, and population control technologies.Footnote 76 The most renowned intellectuals of the time, gathered around the School of Salamanca, demanded a shift from medieval politics centred on aristocratic factions and vague notions of territory towards the government of the population. As has been noted, the School of Salamanca advanced much of early capitalist political economy and, as we are only now starting to unveil, also laid the grounds for the ideological justification of opprobrious forms of human exploitation.Footnote 77 For instance, the commonly cited theological debates of Valladolid, allegedly discussing whether Indigenous people had souls, were not the backward Byzantine dispute they have often been depicted as. They were instead highly sophisticated negotiations between colonial factions arguing whether ‘Indians’ and ‘Moros’ were to be massacred, enslaved, or included within the political body of the empire.Footnote 78

Accordingly, the state’s governmental strategies switched from regarding the population as a passive element to contemplating it as an active resource that needed to be governed and mobilised. The new morality demanded mechanisms for counting, controlling, multiplying, governing, and mobilising the population according to the State’s needs, but also for controlling, regulating, and punishing its ‘ill’ and ‘impure’ elements. To that end, the Spanish colonial State developed sophisticated technologies of power aimed at producing racialised subjects ready to be governed and exploited in the mines, plantations, and endless public and private operations.Footnote 79 For instance, the consideration of humans as a resource to be controlled appears as early as 1499 in a document signed by the Catholic Monarchs. There, the ‘gitanos’, traditionally nomadic and thus unfixed to a specific sovereign, were regarded as an unproductive and dangerous population. Those ‘gitanos’ with no profession were to be physically punished or banished from the territory, the norm claimed.Footnote 80

A thick network of legal measures plagued the Spanish Empire, underpinning a profoundly racialised epistemology of power: a system of knowledge designed to produce dominated political subjectivities bound to inherited tasks considered to be of inferior status.Footnote 81 The infamous statutes of ‘pureza de sangre’ [purity of blood] are a well-known example. Designed by one of the most advanced political bodies of European modernity, the Inquisition, they consisted of a decentralised and granular system of population classification articulated through parishes and churches, entrusted with certifying the alleged Christian blood purity of a family’s genealogy.Footnote 82 Those unable to prove their intergenerational purity (most likely conversos, Jews, and Muslims) were prohibited from accessing positions of social, political, military, religious, and economic relevance.Footnote 83 Along with the ‘estatutos de limpieza de sangre’, endless instruments were deployed to expel ‘indios’, ‘negros’, ‘mulatos’, ‘moros’, ‘mestizos’, ‘gitanos’, and anything in between from the most socially rewarded and profitable activities.Footnote 84 Unlike previous forms of domination, the new technologies of power configured an inferior subjectivity with a hereditary, collective, and functional character. They sought to target and mark entire populations for exploitation and control. The legalised fixation of social status, and consequently the impossibility of social progress for Blacks, Roma, Jews, Muslims, and converts, lies at the very foundations of the Spanish nation-state.

8.3.2 Islamophobia as a Governmentality Strategy

As we have briefly seen, Spanish history is replete with examples of racialised governmental technologies. However, for the purpose of this analysis it is worth highlighting two (relatively) recent developments. The first Spanish Immigration Law (1985) turned the Muslim Arab-Amazigh population living in the peninsula into ‘illegal immigrants’. The colonial dominion of the Spanish state over North African territories lasted until the late 1970s, when Spain withdrew from the Sahara, with several enclaves, such as Ceuta and Melilla, still controlled by Spain. Former (post)colonial subjects, who had lived in Spanish territories for years, were overnight denied any recognition of residency or citizenship. In other words, they became the new other and were expelled from the symbolic and material benefits of their political community.Footnote 85 This measure responded to Spain’s forthcoming integration into the European Union (and its consequent position as one of the southern borders of Europe) and to the new role of the Spanish state, switching from a migrant-sending to a migrant-receiving country.Footnote 86 The country was transitioning towards a neoliberal way of managing subaltern and racialised people, locating the dominated within a racially hierarchised labour system. The aim was to deny them equal access to the best remunerated jobs through a set of formal and informal mechanisms that began with the production of differentiated categories of citizenship and residency, with different access to rights and work permits, as well as through the non-recognition of foreign degree certificates and discriminatory practices in hiring.Footnote 87 The new racial division of labour was especially perceptible in global hubs such as Catalonia. On the one hand, migrants from European Union member countries were rebranded as expats and accepted as designers, executives, teachers, and scientists. On the other hand, the precarious African and Latin American subaltern were funnelled into the agricultural and construction sectors, both characterised by poor if not nonexistent labour, political, and social rights.

The second wave of Islamophobic legislation came in the wider context of the US-led war on terror. The 2004 Madrid terrorist attack accelerated the neoliberal punitive turn, with multiple counter-terrorist policies specifically designed to fight the ‘jihadist’ threat.Footnote 88 The new measures steadily increased policing and judicial powers and, more importantly, validated a securitarian narrative by which entire populations became suspect. The concept of terrorism itself also shifted to encompass a wide range of activities and behaviours, from the mundane to political and civil activism, the expression of solidarity with international causes, or the contentious notion of self-indoctrination. The framing served the purpose of institutionalising racially defined securitarian spaces, turning the rhetoric of prevention into political common sense. It thereby became normal to align hard and soft state power (police and welfare surveillance) to surveil neighbourhoods framed as dangerous environments immersed in ‘radicalising’ atmospheres. It was during this last period that welfare and police surveillance, everyday stop and frisk, arrests, and extrajudicial killings fuelled a climate of unrest and repression for many communities, while reinvigorating the transhistorical moral panic of the ‘Moros’.Footnote 89

To sum up, while it is true that Islamophobia as governmentality builds on fictional beliefs that have become western ‘common sense’, it would be a mistake to consider it just a set of discriminatory narratives. The Islamophobic Consensus operates under the code of a colour-blind racism and defends, reinforces, and produces an unequal distribution of goods and assets that disproportionately benefits the right kind of citizens while punishing the others. In other words, the Islamophobic governmental apparatus was designed to legitimise and justify very material relations of exploitation (Kumar, 2021).Footnote 90

8.4 Conclusion

In 2017 a series of attacks shocked Catalonia. A van was driven down Barcelona’s central Rambla, killing fourteen people and injuring hundreds. Hours later in Cambrils (Catalonia), a woman was killed and several other people injured. According to the PRODERAE and PRODERAV tools, one of the most relevant factors in any type of radicalisation relates to the perceived sense of belonging and connection with a territory. However, a social educator from Ripoll, the hometown of the young adults who committed the attacks, said: ‘[t]hese boys were integrated; they spoke perfect Catalan and they became terrorists’. The attacks demonstrated the uselessness of the protocols, indicators, and criteria for detecting radicalisation. As we have seen, PRODERAI/PRODERAEV are preventive actuarial methods aimed at measuring and preventing radicalisation. To that end, the instruments draw on classic social risk factors (personal development, school context, family context, social context) along with other ‘radicalisation indicators’ that are inaccessible to the public. The concealment of these indicators from public knowledge hinders social and political opposition, precisely because it hides the explicitly Islamophobic character of the automated tools used. However, the problem will not be solved just by making these sociotechnical systems more transparent and accountable.

As we have demonstrated, the digital and analogue technologies used to control, surveil, and punish young Muslims are not ends in themselves; rather, they are means of reinforcing a socially harmful system of oppression rooted in the darkest moments of global European domination. The Islamophobic Consensus, that is, the Southern European iteration of neoliberal racism, stands on centuries of eurocentrism and white supremacism articulated through intricate institutional, legal, political, and economic developments transcending regional and national boundaries. Similarly, today’s astonishing data gathering, data management, and data analysis capabilities, and the ‘magic’ behind predictive automated tools, are not spontaneous outputs but the result of centuries of training, experimentation, and scientific development. From the early colonial censuses and regulations designed to protect the healthy Christian population from depraved Muslims and Jews to more recent forms of predictive policing and digital surveillance, numbers, statistics, and dozens of other governmental tools have served the interests of the powerful.

There are no shortcuts, and no technical, legal, or magical solutions to a global problem rooted in centuries of oppression, domination, genocide, and deprivation. No solution will come from a political party, a corporation, or a new legal instrument (whether national or universal). The long history of struggles against colonialism, racism, and fascism demonstrates that the perversity of domination extends from its most obscene and crude forms to highly sophisticated and subtle alienation. To fight such massive structures we undoubtedly need powerful communities and meaningful relations, not to mention the energising voices of empowered singers such as Huda, who reminds us to Keep It Halal. But we also need adequate epistemic tools to be able to think politically and historically about the events surrounding us. Hopefully, this chapter can help radical researchers and other folks in that endeavour.

Footnotes

5 The Automated Welfare State Challenges for Socioeconomic Rights of the Marginalised

* The author is indebted to research assistance provided by Arundhati Ajith.

1 Jennifer Raso, ‘Unity in the Eye of the Beholder? Reasons for Decision in Theory and Practice in the Ontario Works Program’ (2020) 70 (Winter) University of Toronto Law Journal 1, 2.

2 Karen Yeung, ‘Algorithmic Regulation: A Critical Interrogation’ (2018) 12(4) Regulation & Governance 505; Lina Dencik and Anne Kaun, ‘Introduction: Datafication and the Welfare State’ (2020) 1(1) Global Perspectives 12912; Raso, ‘Unity in the Eye of the Beholder?’; Lena Ulbricht and Karen Yeung, ‘Algorithmic Regulation: A Maturing Concept for Investigating Regulation of and through Algorithms’ (2022) 16 Regulation & Governance 3.

3 Terry Carney, ‘Artificial Intelligence in Welfare: Striking the Vulnerability Balance?’ (2020) 46(2) Monash University Law Review 23.

4 Tapani Rinta-Kahila et al, ‘Algorithmic Decision-Making and System Destructiveness: A Case of Automatic Debt Recovery’ (2021) 31(3) European Journal of Information Systems 313; Peter Whiteford, ‘Debt by Design: The Anatomy of a Social Policy Fiasco – Or Was It Something Worse?’ (2021) 80(2) Australian Journal of Public Administration 340.

5 Prygodicz v Commonwealth of Australia (No 2) [2021] FCA 634, para [5]; Royal Commission into the Robodebt Scheme, Report (Canberra: July 2023).

6 Penny Croft and Honni van Rijswijk, Technology: New Trajectories in Law (Abingdon, Oxford: Routledge, 2021) 416.

7 Brian Jinks, ‘The “New Administrative Law”: Some Assumptions and Questions’ (1982) 41(3) Australian Journal of Public Administration 209.

8 Joel Townsend, ‘Better Decisions?: Robodebt and Failings of Merits Review’ in Janina Boughey and Katie Miller (eds), The Automated State (Sydney: Federation Press, 2021) 52–69.

9 Terry Carney, ‘Robo-debt Illegality: The Seven Veils of Failed Guarantees of the Rule of Law?’ (2019) 44(1) Alternative Law Journal 4.

10 Maria O’Sullivan, ‘Automated Decision-Making and Human Rights: The Right to an Effective Remedy’ in Janina Boughey and Katie Miller (eds), The Automated State (Sydney: Federation Press, 2021) 70–88.

11 Framework for the Classification of AI Systems – Public Consultation on Preliminary Findings (OECD AI Policy Observatory, 2021).

12 Alexandra James and Andrew Whelan, ‘“Ethical” Artificial Intelligence in the Welfare State: Discourse and Discrepancy in Australian Social Services’ (2022) 42(1) Critical Social Policy 22 at 29.

13 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St Martin’s Press, 2017).

14 Joe Tomlinson, Justice in the Digital State: Assessing the Next Revolution in Administrative Justice (Bristol: Policy Press, 2019).

15 Paul Henman, ‘Improving Public Services Using Artificial Intelligence: Possibilities, Pitfalls, Governance’ (2020) 42(4) Asia Pacific Journal of Public Administration 209, 210.

16 Daniel Turner, ‘Voices from the Field’ (Paper presented at the Automated Decision Making (ADM) in Social Security and Employment Services: Mapping What Is Happening and What We Know in Social Security and Employment Services, Brisbane, Centre of Excellence for Automated Decision Making and Society (ADM + S), 5 May 2021).

17 Terry Carney, ‘Automation in Social Security: Implications for Merits Review?’ (2020) 55(3) Australian Journal of Social Issues 260.

18 This is a machine learning optical character reading system developed by Capgemini: Aaron Tan, ‘Services Australia Taps AI in Document Processing’ (16 October 2020) ComputerWeekly.com <www.computerweekly.com/news/252490630/Services-Australia-taps-AI-in-document-processing>.

19 Sasha Karen, ‘Services Australia Seeks Customer Experience Solutions for myGov Platform Upgrade’ (9 February 2021) ARN <www.arnnet.com.au/article/686126/services-australia-seeks-customer-experience-solutions-mygov-platform-upgrade/>.

20 Asha Barbaschow, ‘All the Tech within the 2021 Australian Budget’ (11 May 2021) ZDNet <www.zdnet.com/article/all-the-tech-within-the-2021-australian-budget/>.

21 Monika Zalnieriute, Lyria Bennett Moses, and George Williams, ‘The Rule of Law and Automation of Government Decision-Making’ (2019) 82(3) Modern Law Review 425.

22 Carney, ‘Automation in Social Security’. Marginalised citizens may however benefit from a human-centred (‘legal design’) approach to AI technologies to broaden access to justice at a relatively low cost: Lisa Toohey et al, ‘Meeting the Access to Civil Justice Challenge: Digital Inclusion, Algorithmic Justice, and Human-Centred Design’ (2019) 19 Macquarie Law Journal 133.

23 Gerard Goggin et al, ‘Disability, Technology Innovation and Social Development in China and Australia’ (2019) 12(1) Journal of Asian Public Policy 34.

24 Sora Park and Justine Humphry, ‘Exclusion by Design: Intersections of Social, Digital and Data Exclusion’ (2019) 22(7) Information, Communication & Society 934, 944.

25 Ibid, 946.

27 See Chapter 9 in this book: Cary Coglianese, ‘Law and Empathy in the Automated State’.

28 Carney, ‘Automation in Social Security’; Simone Casey, ‘Towards Digital Dole Parole: A Review of Digital Self‐service Initiatives in Australian Employment Services’ (2022) 57(1) Australian Journal of Social Issues 111. A third of all participants in the program experienced loss or delay of income penalties, with Indigenous and other vulnerable groups overrepresented: Jacqueline Maley, ‘“Unable to Meet Basic Needs”: ParentsNext Program Suspended a Third of Parents’ Payments’ (11 August 2021) Sydney Morning Herald <www.smh.com.au/politics/federal/unable-to-meet-basic-needs-parentsnext-program-suspended-a-third-of-parents-payments-20210811-p58hvl.html>.

29 Jennifer Raso, ‘Displacement as Regulation: New Regulatory Technologies and Front-Line Decision-Making in Ontario Works’ (2017) 32(1) Canadian Journal of Law and Society 75, 83.

31 Virginia Eubanks and Alexandra Mateescu, ‘“We Do Not Deserve This”: New App Places US Caregivers under Digital Surveillance’ (28 July 2021) Guardian Australia <www.theguardian.com/us-news/2021/jul/28/digital-surveillance-caregivers-artificial-intelligence>.

32 The reforms were opposed by the NDIS Advisory Council and abandoned at a meeting of Federal and State Ministers: Luke Henriques-Gomes, ‘NDIS Independent Assessments Should Not Proceed in Current Form, Coalition’s Own Advisory Council Says’ (8 July 2021) Guardian Australia <www.theguardian.com/australia-news/2021/jul/08/ndis-independent-assessments-should-not-proceed-in-current-form-coalitions-own-advisory-council-says>; Muriel Cummins, ‘Fears Changes to NDIS Will Leave Disabled without Necessary Supports’ (7 July 2021) Sydney Morning Herald <www.smh.com.au/national/fears-changes-to-ndis-will-leave-disabled-without-necessary-supports-20210706-p58756.html> .

33 The NDIA outlined significant changes to the model immediately prior to it being halted: Joint Standing C’tte on NDIS, Independent Assessments (Joint Standing Committee on the National Disability Insurance Scheme, 2021) 24–27 <https://parlinfo.aph.gov.au/parlInfo/download/committees/reportjnt/024622/toc_pdf/IndependentAssessments.pdf;fileType=application%2Fpdf>.

34 Helen Dickinson et al, ‘Avoiding Simple Solutions to Complex Problems: Independent Assessments Are Not the Way to a Fairer NDIS’ (Melbourne: Children and Young People with Disability Australia, 2021) <https://apo.org.au/sites/default/files/resource-files/2021–05/apo-nid312281.pdf>.

35 Ibid; Marie Johnson, ‘“Citizen-Centric” Demolished by NDIS Algorithms’, InnovationAus (Blog Post, 24 May 2021) <‘Citizen-centric’ demolished by NDIS algorithms (innovationaus.com)>; Joint Standing C’tte on NDIS, Independent Assessments.

36 The original IP test was a subjective one of whether the real applicant with their actual abilities and background could obtain a real job in the locally accessible labour market (if their disability rendered them an ‘odd job lot’ they qualified).

37 Terry Carney, Social Security Law and Policy (Sydney: Federation Press, 2006) ch 8; Terry Carney, ‘Vulnerability: False Hope for Vulnerable Social Security Clients?’ (2018) 41(3) University of New South Wales Law Journal 783.

38 Joint Standing C’tte on NDIS, Independent Assessments, ch 5, 9–13.

39 Asha Barbaschow, ‘Human Rights Commission Asks NDIS to Remember Robo-debt in Automation Push’ ZDNet (Blog Post, 22 June 2021) <www.zdnet.com/article/human-rights-commission-asks-ndis-to-remember-robo-debt-in-automation-push/>.

40 Henman, ‘Improving Public Services Using Artificial Intelligence’, 210.

41 Mark Considine, Phuc Nguyen, and Siobhan O’Sullivan, ‘New Public Management and the Rule of Economic Incentives: Australian Welfare-to-Work from Job Market Signalling Perspective’ (2018) 20(8) Public Management Review 1186.

42 Simone Casey, ‘“Job Seeker” Experiences of Punitive Activation in Job Services Australia’ (2022) 57(4) Australian Journal of Social Issues 847–60 <https://doi.org/10.1002/ajs1004.1144>; Simone Casey and David O’Halloran, ‘It’s Time for a Cross-Disciplinary Conversation about the Effectiveness of Job Seeker Sanctions’ Austaxpolicy (Blog Post, 18 March 2021) <www.austaxpolicy.com/its-time-for-a-cross-disciplinary-conversation-about-the-effectiveness-of-job-seeker-sanctions/>.

43 Bert van Landeghem, Sam Desiere, and Ludo Struyven, ‘Statistical Profiling of Unemployed Jobseekers’ (2021) 483(February) IZA World of Labor 56 <https://doi.org/10.15185/izawol.483>.

44 Sam Desiere, Kristine Langenbucher, and Ludo Struyven, ‘Statistical Profiling in Public Employment Services: An International Comparison’ (OECD Social, Employment and Migration Working Papers, Paris, OECD Technical Workshop, 2019) 10, 14, 22–23.

45 van Landeghem et al, ‘Statistical Profiling of Unemployed Jobseekers’.

46 Sandra Wachter, Brent Mittelstadt, and Chris Russell, ‘Bias Preservation in Machine Learning: The Legality of Fairness Metrics under EU Non-Discrimination Law’ (2021) 123(3) West Virginia Law Review 735, 775.

47 Sam Desiere and Ludo Struyven, ‘Using Artificial Intelligence to Classify Jobseekers: The Accuracy-Equity Trade-Off’ (2020) 50(2) Journal of Social Policy 367.

48 Emre Bayamlıoğlu and Ronald Leenes, ‘The “Rule of Law” Implications of Data-Driven Decision-Making: A Techno-regulatory Perspective’ (2018) 10(2) Law, Innovation and Technology 295.

49 Jobactive Australia, ‘Assessments Guideline – Job Seeker Classification Instrument (JSCI) and Employment Services Assessment (ESAt)’ (Canberra: 3 June 2020) <www.dese.gov.au/download/6082/assessments-guideline-job-seeker-classification-instrument-jsci-and-employment-services-assessment/22465/document/pdf>.

50 Desiere et al, ‘Statistical Profiling in Public Employment Services’, 9–10.

51 Nigel Stobbs, Dan Hunter, and Mirko Bagaric, ‘Can Sentencing Be Enhanced by the Use of Artificial Intelligence?’ (2017) 41(5) Criminal Law Journal 261.

52 Justice Melissa Perry, ‘AI and Automated Decision-Making: Are You Just Another Number?’ (Paper presented at the Kerr’s Vision Splendid for Administrative Law: Still Fit for Purpose? – Online Symposium on the 50th Anniversary of the Kerr Report, UNSW, 21 October 2021) <www.fedcourt.gov.au/digital-law-library/judges-speeches/justice-perry/perry-j-20211021>.

53 Justin B Bullock, ‘Artificial Intelligence, Discretion, and Bureaucracy’ (2019) 49(7) The American Review of Public Administration 751.

54 DSS, Guide to Social Security Law (Version 1.291, 7 February 2022) para 1.1.E.104 <http://guides.dss.gov.au/guide-social-security-law> .

55 Considine et al, ‘New Public Management and the Rule of Economic Incentives’.

56 Employment Services Expert Advisory Panel, I Want to Work (Canberra: Department of Jobs and Small Business, 2018) <https://docs.jobs.gov.au/system/files/doc/other/final_-_i_want_to_work.pdf>.

58 Mark Considine, Enterprising States: The Public Management of Welfare-to-Work (Cambridge: Cambridge University Press, 2001); Terry Carney and Gaby Ramia, From Rights to Management: Contract, New Public Management and Employment Services (The Hague: Kluwer Law International, 2002).

59 Peter Davidson, ‘Is This the End of the Job Network Model? The Evolution and Future of Performance-Based Contracting of Employment Services in Australia’ (2022) 57(3) Australian Journal of Social Issues 476.

60 Flemming Larsen and Dorte Caswell, ‘Co-Creation in an Era of Welfare Conditionality – Lessons from Denmark’ (2022) 51(1) Journal of Social Policy 58.

61 Bill Ryan, ‘Co-production: Option or Obligation?’ (2012) 71(3) Australian Journal of Public Administration 314.

62 Joel Tito, BCG Foundation Centre for Public Impact, Destination Unknown: Exploring the Impact of Artificial Intelligence on Government (Report, 2017) <www.centreforpublicimpact.org/assets/documents/Destination-Unknown-AI-and-government.pdf>; Elisa Bertolini, ‘Is Technology Really Inclusive? Some Suggestions from States Run Algorithmic Programmes’ (2020) 20(2) Global Jurist 176 <https://doi.org/10.1515/gj-2019-0065>; Perry, ‘AI and Automated Decision-Making’.

63 Simone Casey, ‘Social Security Rights and the Targeted Compliance Framework’ (2019) February, Social Security Rights Review <www.nssrn.org.au/social-security-rights-review/social-security-rights-and-the-targeted-compliance-framework/>; Casey, ‘“Job Seeker” Experiences’.

64 Sofia Ranchordas and Louisa Scarcella, ‘Automated Government for Vulnerable Citizens: Intermediating Rights’ (2022) 30(2) William & Mary Bill of Rights Journal 373, 375.

65 Prygodicz (No 2), para [7].

66 As Murphy J wrote in Prygodicz at para [23] ‘One thing, however, that stands out … is the financial hardship, anxiety and distress, including suicidal ideation and in some cases suicide, that people or their loved ones say was suffered as a result of the Robodebt system, and that many say they felt shame and hurt at being wrongly branded “welfare cheats”’.

67 Whiteford, ‘Debt by Design’.

68 Townsend, ‘Better Decisions?’. As pointed out in Prygodicz: ‘The financial hardship and distress caused to so many people could have been avoided had the Commonwealth paid heed to the AAT decisions, or if it disagreed with them appealed them to a court so the question as to the legality of raising debts based on income averaging from ATO data could be finally decided’: Prygodicz (No 2) para [10].

69 Carney, ‘Robo-debt Illegality’.

70 Jack Maxwell, ‘Judicial Review and the Digital Welfare State in the UK and Australia’ (2021) 28(2) Journal of Social Security Law 94.

71 Amato v The Commonwealth of Australia Federal Court of Australia, General Division, Consent Orders of Justice Davies, 27 November 2019, File No VID611/2019 (Consent Orders).

72 Prygodicz (No 2).

73 Madeleine Masterton v Secretary, Department of Human Services of the Commonwealth VID73/2019.

74 Prygodicz (No 2), paras [172]–[183] Murphy J.

75 The emerging field of explainable AI (XAI) is a prime example which aims to address comprehension barriers and improve the overall transparency and trust of AI systems. These machine learning applications are designed to generate a qualitative understanding of AI decision-making to justify outputs, particularly in the case of outliers: Amina Adadi and Mohammed Berrada, ‘Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)’ (2018) 6 IEEE Access 52138.

76 Raso, ‘Unity in the Eye of the Beholder?’.

77 Anna Huggins, ‘Decision-Making, Administrative Law and Regulatory Reform’ (2021) 44(3) University of New South Wales Law Journal 1048.

78 But see: Makoto Cheng Hong and Choon Kuen Hui, ‘Towards a Digital Government: Reflections on Automated Decision-Making and the Principles of Administrative Justice’ (2019) 31 Singapore Academy of Law Journal 875; Arjan Widlak, Marlies van Eck, and Rik Peeters, ‘Towards Principles of Good Digital Administration’ in Marc Schuilenburg and Rik Peeters (eds), The Algorithmic Society (Abingdon: Routledge, 2020) 67–83.

79 O’Sullivan, ‘Automated Decision-Making and Human Rights’, 70–88.

80 Raso, ‘Unity in the Eye of the Beholder?’.

81 Yee-Fui Ng et al, ‘Revitalising Public Law in a Technological Era: Rights, Transparency and Administrative Justice’ (2020) 43(3) University of New South Wales Law Journal 1041.

82 Abe Chauhan, ‘Towards the Systemic Review of Automated Decision-Making Systems’ (2020) 25(4) Judicial Review 285.

83 Teresa Scassa, ‘Administrative Law and the Governance of Automated Decision-Making: A Critical Look at Canada’s Directive on Automated Decision-Making’ (2021) 54(1) University of British Columbia Law Review 251.

84 Andrew Selbst, ‘An Institutional View of Algorithmic Impact Assessments’ (2021) 35(1) Harvard Journal of Law & Technology 117.

85 David Freeman Engstrom and Daniel E Ho, ‘Algorithmic Accountability in the Administrative State’ (2020) 37(3) Yale Journal on Regulation 800.

86 Frederik J Zuiderveen Borgesius, ‘Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence’ (2020) 24(10) The International Journal of Human Rights 1572.

87 E.g. Jennifer Raso, ‘Implementing Digitalization in an Administrative Justice Context’ in Joe Tomlinson et al (eds), Oxford Handbook of Administrative Justice (Oxford: Oxford University Press, 2021).

88 Selbst, ‘An Institutional View of Algorithmic Impact Assessments’.

89 Colin van Noordt and Gianluca Misuraca, ‘Evaluating the Impact of Artificial Intelligence Technologies in Public Services: Towards an Assessment Framework’ (Conference Paper, Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance, Association for Computing Machinery) 12–15.

90 Selbst, ‘An Institutional View of Algorithmic Impact Assessments’, 166.

91 Ibid, 188.

92 Croft and van Rijswijk, Technology: New Trajectories in Law, ch 4.

93 James and Whelan, ‘“Ethical” Artificial Intelligence in the Welfare State’, 37.

94 Australian Human Rights Commission (AHRC), Human Rights and Technology: Final Report (Final Report, 2021) 88–91 <https://tech.humanrights.gov.au/downloads>.

95 Jakob Mökander et al, ‘Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations’ (2021) 27(4) Science and Engineering Ethics 44.

96 Fleur Johns, ‘Governance by Data’ (2021) 17 Annual Review of Law and Social Science 4.1.

97 Richard Re and Alicia Solow-Niederman, ‘Developing Artificially Intelligent Justice’ (2019) 22 (Spring) Stanford Technology Law Review 242; Carol Harlow and Richard Rawlings, ‘Proceduralism and Automation: Challenges to the Values of Administrative Law’ in Elizabeth Fisher, Jeff King, and Alison Young (eds), The Foundations and Future of Public Law (Oxford: Oxford University Press, 2020) 275–98 point out that ‘Computerisation is apt to change the nature of an administrative process, translating public administration from a person-based service to a dehumanised system where expert systems replace officials and routine cases are handled without human input’.

98 Australia, A New System for Better Employment and Social Outcomes (Final Report, Department of Social Services Reference Group on Welfare Reform to the Minister for Social Services, 2015) <www.dss.gov.au/sites/default/files/documents/02_2015/dss001_14_final_report_access_2.pdf>; Christopher Deeming and Paul Smyth, ‘Social Investment after Neoliberalism: Policy Paradigms and Political Platforms’ (2015) 44(2) Journal of Social Policy 297; Greg Marston, Sally Cowling, and Shelley Bielefeld, ‘Tensions and Contradictions in Australian Social Policy Reform: Compulsory Income Management and the National Disability Insurance Scheme’ (2016) 51(4) Australian Journal of Social Issues 399; Paul Smyth and Christopher Deeming, ‘The “Social Investment Perspective” in Social Policy: A Longue Durée Perspective’ (2016) 50(6) Social Policy & Administration 673.

99 Jutta Treviranus, The Three Dimensions of Inclusive Design: A Design Framework for a Digitally Transformed and Complexly Connected Society (PhD thesis, University College Dublin, 2018) <http://openresearch.ocadu.ca/id/eprint/2745/1/TreviranusThesisVolume1%262_v5_July%204_2018.pdf>; Zoe Staines et al, ‘Big Data and Poverty Governance under Australia and Aotearoa/New Zealand’s “Social Investment” Policies’ (2021) 56(2) Australian Journal of Social Issues 157.

100 Terry Carney, ‘Equity and Personalisation in the NDIS: ADM Compatible or Not?’ (Paper delivered at the Australian Social Policy Conference, Sydney, 25–29 October to 1–5 November 2021); Alyssa Venning et al, ‘Adjudicating Reasonable and Necessary Funded Supports in the National Disability Insurance Scheme: A Critical Review of the Values and Priorities Indicated in the Decisions of the Administrative Appeals Tribunal’ (2021) 80(1) Australian Journal of Public Administration 97, 98.

101 Casey, ‘Towards Digital Dole Parole’; Mark Considine et al, ‘Can Robots Understand Welfare? Exploring Machine Bureaucracies in Welfare-to-Work’ (2022) 51(3) Journal of Social Policy 519.

102 Alex Collie, Luke Sheehan, and Ashley McAllister, ‘Health Service Use of Australian Unemployment and Disability Benefit Recipients: A National, Cross-Sectional Study’ (2021) 21(1) BMC Health Services Research 1.

103 Lacey Schaefer and Mary Beriman, ‘Problem-Solving Courts in Australia: A Review of Problems and Solutions’ (2019) 14(3) Victims & Offenders 344.

104 David Brown et al, Justice Reinvestment: Winding Back Imprisonment (Basingstoke: Palgrave Macmillan, 2016).

105 Carney, Social Security Law and Policy.

106 Rob Watts, ‘“Running on Empty”: Australia’s Neoliberal Social Security System, 1988–2015’ in Jenni Mays, Greg Marston, and John Tomlinson (eds), Basic Income in Australia and New Zealand: Perspectives from the Neoliberal Frontier (Basingstoke: Palgrave Macmillan, 2016) 69–91.

107 Monique Mann, ‘Technological Politics of Automated Welfare Surveillance: Social (and Data) Justice through Critical Qualitative Inquiry’ (2020) 1(1) Global Perspectives 12991 <https://doi.org/10.1525/gp.2020.12991>.

108 Andrew Power, Janet Lord, and Allison deFranco, Active Citizenship and Disability: Implementing the Personalisation of Support, Cambridge Disability Law and Policy Series (Cambridge: Cambridge University Press, 2013); Gemma Carey et al, ‘The Personalisation Agenda: The Case of the Australian National Disability Insurance Scheme’ (2018) 28(1) International Review of Sociology 1.

109 Smyth and Deeming, ‘The “Social Investment Perspective” in Social Policy’; Staines et al, ‘Big Data and Poverty Governance’.

110 Madalina Busuioc, ‘Accountable Artificial Intelligence: Holding Algorithms to Account’ (2021) 81(5) Public Administration Review 825.

111 Bertolini, ‘Is Technology Really Inclusive?’.

112 Busuioc, ‘Accountable Artificial Intelligence’.

113 Shoshana Zuboff, ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’ (2015) 30(1) Journal of Information Technology 75.

114 Rikke Frank Jørgensen, ‘Data and Rights in the Digital Welfare State: The Case of Denmark’ (2021) 26(1) Information, Communication & Society 123–38 <https://doi.org/10.1080/1369118X.2021.1934069>.

115 Raso, ‘Unity in the Eye of the Beholder?’.

116 Carney, ‘Artificial Intelligence in Welfare’.

117 Valerie Braithwaite, ‘Beyond the Bubble that Is Robodebt: How Governments that Lose Integrity Threaten Democracy’ (2020) 55(3) Australian Journal of Social Issues 242.

118 AHRC, Human Rights and Technology, 24, 28 respectively.

119 Joint Standing C’tte on NDIS, Independent Assessments, ix, 22, 120, 152.

120 ‘Co-design should be a fundamental feature of any major changes to the NDIS’: ibid, 145, para 9.28 and recommendation 2.

121 Michael D’Rosario and Carlene D’Rosario, ‘Beyond RoboDebt: The Future of Robotic Process Automation’ (2020) 11(2) International Journal of Strategic Decision Sciences (IJSDS) 1; Jennifer Raso, ‘AI and Administrative Law’ in Florian Martin-Bariteau and Teresa Scassa (eds), Artificial Intelligence and the Law in Canada (Toronto: LexisNexis, 2021).

122 Ryan Calo and Danielle Citron, ‘The Automated Administrative State: A Crisis of Legitimacy’ (2021) 70(4) Emory Law Journal 797.

123 Treviranus, The Three Dimensions of Inclusive Design.

124 Shari Trewin et al, ‘Considerations for AI Fairness for People with Disabilities’ (2019) 5(3) AI Matters 40.

125 Jinks, ‘The “New Administrative Law”’.

126 One outstanding question for instance is whether the AHRC Report (AHRC, ‘Human Rights and Technology’) is correct in thinking that post-ADM merits and judicial review reforms should remain ‘technology neutral’ or whether more innovative measures are needed.

6 A New ‘Machinery of Government’? The Automation of Administrative Decision-Making

* NSW Ombudsman. This chapter and the presentation given to the ‘Money, Power and AI: From Automated Banks to Automated States’ conference are edited versions of a report the Ombudsman tabled in the NSW Parliament in 2021 titled ‘The New Machinery of Government: Using Machine Technology in Administrative Decision-Making’. With appreciation to all officers of the NSW Ombudsman who contributed to the preparation of that report, including in particular Christie Allan, principal project officer, and Megan Smith, legal counsel.

1 For this reason, machinery of government or ‘MoG’ has taken on the character of a verb for public servants – to be ‘mogged’ is to find oneself, through executive order, suddenly working in a different department, or unluckier still, perhaps out of a role altogether.

2 Machinery of government changes provide an opportunity for government to express its priorities and values, or at least how it wishes those to be perceived – abolishing a department or merging it as a mere ‘branch’ into another may signal that it is no longer seen as a priority; re-naming a department (like re-naming a ministerial portfolio) provides an opportunity to highlight an issue of importance or proposed focus (e.g., a Department of Customer Service).

3 NSW Ombudsman, The New Machinery of Government: Using Machine Technology in Administrative Decision-Making (Report, 29 November 2021) <The new machinery of government: using machine technology in administrative decision-making - NSW Ombudsman>.

4 See for example Australian Government, Digital Government Strategy 2018–2025 (Strategy, December 2021) <www.dta.gov.au/sites/default/files/2021–11/Digital%20Government%20Strategy_acc.pdf>.

5 See for example the first NSW Government Digital Strategy, NSW Digital Government Strategy (Strategy, May 2017) <www.digital.nsw.gov.au/sites/default/files/DigitalStrategy.pdf>; that Strategy has been revised and replaced by NSW Government, Beyond Digital (Strategy, November 2019) <www.digital.nsw.gov.au/sites/default/files/Beyond_Digital.pdf>.

6 See Andrew Le Sueur, ‘Robot Government: Automated Decision-Making and Its Implications for Parliament’ in Alexander Horne and Andrew Le Sueur (eds), Parliament: Legislation and Accountability (Oxford: Hart, 2016) 181.

7 See Commonwealth Ombudsman, Automated Decision-Making Better Practice Guide (Guide, 2019) 5 <www.ombudsman.gov.au/__data/assets/pdf_file/0030/109596/OMB1188-Automated-Decision-Making-Report_Final-A1898885.pdf>.

8 Including health, criminal justice and education settings. A 2019 survey of US federal agency use of AI found that many agencies have experimented with AI and machine learning: David Freeman Engstrom et al, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies (Report, February 2020) <www-cdn.law.stanford.edu/wp-content/uploads/2020/02/ACUS-AI-Report.pdf>.

9 See Jennifer Cobbe et al, ‘Centering the Rule of Law in the Digital State’ (2020) 53(10) IEEE Computer 4; Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018).

10 Daniel Montoya and Alice Rummery, ‘The Use of Artificial Intelligence by Government: Parliamentary and Legal Issues’ (e-brief, NSW Parliamentary Research Service, September 2020) 20.

11 For example, the NSW Ombudsman can generally investigate complaints if conduct falls within any of the following categories set out in section 26 of the Ombudsman Act 1974:

(a) contrary to law,

(b) unreasonable, unjust, oppressive, or improperly discriminatory,

(c) in accordance with any law or established practice but the law or practice is, or may be, unreasonable, unjust, oppressive, or improperly discriminatory,

(d) based wholly or partly on improper motives, irrelevant grounds, or irrelevant consideration,

(e) based wholly or partly on a mistake of law or fact,

(f) conduct for which reasons should be given but are not given,

(g) otherwise wrong.

Conduct of the kinds set out above may be said to constitute ‘maladministration’ (although the NSW Act does not actually use that term).

12 See example ‘Services Australia Centrelink’s automated income compliance program (Robodebt)’ in NSW Ombudsman, Machine Technology Report, 27.

13 See further chapters 5–10 of NSW Ombudsman, Machine Technology Report; Marion Oswald, ‘Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Powers’ (2018) 376(2128) Philosophical Transactions of the Royal Society A 1 for a discussion of how administrative law or ‘old law – interpreted in a new context – can help guide our algorithmic-assisted future’.

14 Many of these are discussed in Australian Human Rights Commission, Human Rights and Technology (Final Report, 1 March 2021).

15 New South Wales Law Reform Commission, Appeals in Administration (Report 16, December 1972) 6.

16 See Madeleine Waller and Paul Waller, ‘Why Predictive Algorithms Are So Risky for Public Sector Bodies’ (Article, October 2020) <https://ssrn.com/abstract=3716166> who argue that consideration of ethics may be ‘superfluous’:

The understanding of ‘ethical behaviour’ depends on social context: time, place and social norms. Hence we suggest that in the context of public administration, laws on human rights, statutory administrative functions, and data protection provide the basis for appraising the use of algorithms: maladministration is the primary concern rather than a breach of ‘ethics’: at 4–5, 11.

17 Of course, although not explicitly couched in ‘human rights’ terms, a core preoccupation of administrative law and good administrative practice is the protection of fundamental human rights: see Australian Human Rights Commission, Human Rights and Technology, 55.

18 Corporation of the City of Enfield v Development Assessment Commission [2000] HCA 5; (2000) 199 CLR 135, 157 at 56.

19 For example, requirements can be grouped according to whether a failure to comply with them gives rise to a right to challenge the decision in the courts by way of judicial review, and if they do the various individual ‘grounds’ of such review. They can also be grouped broadly by considering whether a failure to comply with them would mean: (a) the decision is invalid (jurisdictional error); (b) there has been some other breach of law (other legal error); or (c) the decision, or its processes, is otherwise wrong (for example, in a way that could result in an adverse finding under section 26 of the Ombudsman Act 1974 (NSW)).

20 There have separately been questions raised as to whether the constitutionally entrenched rights of judicial review (Commonwealth of Australia Constitution Act s 75(v)) may be affected by a move towards the automation of administrative decision-making, as those rights refer to relevant orders being ‘sought against an officer of the Commonwealth’: Yee-Fui Ng and Maria O’Sullivan, ‘Deliberation and Automation – When Is a Decision a “Decision”?’ (2019) 26 Australian Journal of Administrative Law 31–32. On the other hand, it might be that this constitutional provision could ultimately come to limit the ability of the government to adopt fully autonomous machines. In particular, might it be inconsistent with this provision – and therefore constitutionally impermissible – for an agency to put in place autonomous mechanisms in such a way that would result in there being no ‘officer of the Commonwealth’ against whom orders could be sought for legal (jurisdictional) errors? See Will Bateman and Julia Powles, Submission to the Australian Human Rights Commission, Response to the Commission’s Discussion Paper (2020) (‘Any liability rules which sought to circumvent that constitutional rule (section 75(v)) would be invalid …’).

21 Currently, the law recognises as ‘legal persons’ both individuals and certain artificial persons, such as companies and other legally incorporated bodies. Despite suggestions that AI may one day develop to such a degree that the law might recognise such a system as having legal personality, this is clearly not the case today. See Will Bateman, ‘Algorithmic Decision-Making and Legality: Public Law Dimensions’ (2020) 94 Australian Law Journal 529–30.

22 Of course, it is conceivable that legislation could be amended so that something that is now required or permitted to be done by a human administrator is instead to be done in practice by a machine. However, depending on how the legislation is drafted, the proper legal characterisation will not be that the statutory function has moved (from the human administrator to the machine) but rather that the statutory function itself has changed. For example, a legislative amendment may result in an administrator, whose original statutory function is to perform a certain decision-making task, instead being conferred a statutory function to design, install, maintain, etc. a machine that will perform that task.

23 However, an administrator cannot abdicate to others those elements of a function where the administrator must form their own opinion: see New South Wales Aboriginal Land Council v Minister Administering the Crown Lands Act (the Nelson Bay Claim) [2014] NSWCA 377.

24 Carltona Ltd v Commissioner of Works [1943] 2 All ER 560.

25 ‘Practical necessity’ in O’Reilly v Commissioners of State Bank of Victoria [1983] HCA 47; (1983) 153 CLR 1 at 12.

26 New South Wales Aboriginal Land Council v Minister Administering the Crown Lands Act [2014] NSWCA 377 at 38.

27 See Katie Miller, ‘The Application of Administrative Law Principles to Technology-Assisted Decision-Making’ (2016) 86 Australian Institute of Administrative Law Forum 20 at 22. Miller argues that ‘[t]he need to avoid administrative “black boxes” which are immune from review or accountability may provide a basis for extending the Carltona principle to public servants in the context of technology-assisted decision-making to ensure that actions of technology assistants are attributable to a human decision-maker who can be held accountable’.

28 Given uncertainty around the application of the Carltona principle (which is based on an inference as to Parliament’s intent), the Commonwealth Ombudsman has suggested that the authority to use machine technology ‘will only be beyond doubt if specifically enabled by legislation’: Commonwealth Ombudsman, ‘Automated Decision-Making Guide’, 9. That is, rather than inferring that Parliament must have intended that administrators be able to seek the assistance of machines, Parliament could expressly state that intention.

There are already some rudimentary examples of such legislative provisions, but they are not without their own problems. See further chapter 15 of NSW Ombudsman, Machine Technology Report.

29 See, for example, Commissioner of Victims Rights v Dobbie [2019] NSWCA 183, which involved legislation requiring a decision-maker to obtain and have regard to a report written by a relevantly qualified person but not being legally bound to accept and act on that assessment.

30 NEAT Domestic Trading Pty Limited v AWB Limited [2003] HCA 35; (2003) 216 CLR 277 at 138.

31 Ibid at 150 citing, among other authorities, R v Port of London Authority; Ex parte Kynoch Ltd [1919] 1 KB 176 at 184; Green v Daniels [1977] HCA 18; (1977) 51 ALJR 463 at 467 and Kioa v West [1985] HCA 81; (1985) 159 CLR 550 at 632–33.

32 Administrative Review Council, Automated Assistance in Administrative Decision Making (Report No 46, 1 January 2004) <www.ag.gov.au/sites/default/files/2020–03/report-46.pdf> 15–16.

33 James Emmett SC and Myles Pulsford, Legality of Automated Decision-Making Procedures for the Making of Garnishee Orders (Joint Opinion, 29 October 2020) 11 [35] from ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report: ‘Subject to consideration of issues like agency (see Carltona Ltd v Commissioner of Works [1943] 2 All ER 560) and delegation, to be validly exercised a discretionary power must be exercised by the repository of that power’.

34 Of course, machines themselves are inherently incapable of exercising discretion. Even if machines could exercise discretion, their doing so would not be consistent with the legislation, which has conferred the discretion on a particular (human) administrator.

35 See ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report for a detailed case study relating to a NSW Ombudsman investigation where proper authorisation, discretionary decision-making, and the need for a decision-maker to engage in an active intellectual process were key issues.

36 Algorithmic bias may arise without any intention to discriminate, without any awareness that it is occurring, and despite the best intentions of designers to exclude data fields that record any sensitive attributes or any obvious (to humans) proxies. See examples under ‘Algorithmic bias’ in NSW Ombudsman, Machine Technology Report, 35.

37 See example ‘Lost in translation – a simple error converting legislation into code’ in NSW Ombudsman, Machine Technology Report, 43.

38 See Miller, ‘Application of Administrative Law Principles’, 26.

39 See further chapters 11–15 of NSW Ombudsman, Machine Technology Report.

40 See Bernard McCabe, ‘Automated Decision-Making in (Good) Government’ (2020) 100 Australian Institute of Administrative Law Forum 118.

41 As far back as 2004, the Administrative Review Council emphasised the need for lawyers to be actively involved in the design of machine technology for government. Administrative Review Council, Automated Assistance in Administrative Decision Making.

42 See Miller, ‘Application of Administrative Law Principles’, 31.

43 Anna Huggins, ‘Executive Power in the Digital Age: Automation, Statutory Interpretation and Administrative Law’ in J Boughey and L Burton Crawford (eds), Interpreting Executive Power (Alexandria: The Federation Press, 2020) 117; McCabe, ‘Automated Decision-Making’, 118.

44 See the reversal of the onus of proof of the existence of a debt in the initial implementation of the Commonwealth ‘Robodebt’ system: Huggins, ‘Executive Power in the Digital Age’, 125.

45 Pintarich v Federal Commissioner of Taxation [2018] FCAFC 79; (2018) 262 FCR 41. The situation is complicated where legislation purports to deem the output of a machine to be a decision by a relevant human administrator (see chapter 15 in NSW Ombudsman, Machine Technology Report).

46 See for example ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report.

47 Navoto v Minister for Home Affairs [2019] FCAFC 135 at 89.

48 Carrascalao v Minister for Immigration and Border Protection [2017] FCAFC 107; (2017) 252 FCR 352 at 46; Chetcuti v Minister for Immigration and Border Protection [2019] FCAFC 112 at 65.

49 Minister for Immigration and Border Protection v Maioha [2018] FCAFC 216; (2018) 267 FCR 643 at 45. In Hands v Minister for Immigration and Border Protection [2018] FCAFC 225 at 3, Allsop CJ described this, in the context of decisions made under the Migration Act 1958 (Cth), as the need for an ‘honest confrontation’ with the human consequences of administrative decision-making.

50 Among other things, these cases looked at the amount of time an administrator had between when they received relevant material and the time when they made their decision. In some cases, this time period was shown to have been too short for the administrator to have even read the material before them. The court concluded that there could not have been any ‘active intellectual consideration’ undertaken in the exercise of the function, and therefore overturned the decisions on the basis that there had been no valid exercise of discretion. Carrascalao v Minister for Immigration and Border Protection [2017] FCAFC 107; (2017) 252 FCR 352; Chetcuti v Minister for Immigration and Border Protection [2019] FCAFC 112.

51 NSW Crown Solicitors Office, Administrative Law Alert: ‘Sign here’: A Word of Warning about Briefs to Ministers Exercising Statutory Power Personally to Make Administrative Decisions (Web Page, April 2022) <www.cso.nsw.gov.au/Pages/cso_resources/cso-alert-ministers-statutory-power-administrative-decisions.aspx> citing McQueen v Minister for Immigration, Citizenship, Migrant Services and Multicultural Affairs (No 3) [2022] FCA 258.

52 See further chapter 13 in NSW Ombudsman, Machine Technology Report for a more comprehensive list of considerations. Also see ‘What Does the GDPR Say about Automated Decision-Making and Profiling?’, Information Commissioner’s Office (UK) (Web Page) <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-does-the-gdpr-say-about-automated-decision-making-and-profiling/#id2>.

53 Hands v Minister for Immigration and Border Protection [2018] FCAFC 225; (2018) 267 FCR 628 at 3.

54 See further Counsel’s advice at ‘Annexure A – Revenue NSW case study’ in NSW Ombudsman, Machine Technology Report and refer to Michael Guihot and Lyria Bennett Moses, Artificial Intelligence, Robots and the Law (Toronto: LexisNexis, 2020), 160.

55 Guihot and Moses, ‘Artificial Intelligence’, 151–59.

56 See eg, O’Brien v Secretary, Department of Communities and Justice [2022] NSWCATAD 100. In that case a social housing tenant had applied for information about how government rental subsidies were calculated. The information sought included confidential developer algorithms and source code for an application created for the relevant government department by an external ADM tool provider. The Tribunal held that the information was not held by the department (and therefore not required to be made available to the applicant).

57 Smorgon v Australia and New Zealand Banking Group Limited [1976] HCA 53; (1976) 134 CLR 475 at 489.

58 There are various examples that demonstrate the need to verify and validate machine technology at the outset and periodically after implementation. See further chapter 14 in NSW Ombudsman, Machine Technology Report.

59 A number of commentators have proposed ‘algorithmic impact assessment’ processes be undertaken similar to environment or privacy impact assessments: see, for example Michele Loi, Algorithm Watch, Automated Decision Making in the Public Sector: An Impact Assessment Tool for Public Authorities (Report, 2021); Nicol Turner Lee, Paul Resnick, and Genie Barton, Brookings, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms (Report, 22 May 2019).

60 See Jennifer Raso, ‘AI and Administrative Law’ in Florian Martin-Bariteau and Teresa Scassa (eds), Artificial Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021); Joel Townsend, ‘Better Decisions? Robodebt and the Failings of Merits Review’ in Janina Boughey and Katie Miller (eds), The Automated State: Implications, Challenges and Opportunities (Alexandria: The Federation Press, 2021), 52, 56 (discussing the limits of existing merits review systems to address high volume, technology-assisted decision-making).

61 See for example Cobbe et al, ‘Centering the Rule of Law’, 15 (‘Given the limitations of existing laws and oversight mechanisms, … as well as the potential impact on vulnerable members of society, we argue for a comprehensive statutory framework to address public sector automation.’); Bateman, ‘Public Law Dimensions’, 530 (‘Attaining the efficiency gains promised by public sector automation in a way that minimizes legal risk is best achieved by developing a legislative framework that governs the exercise and review of automated statutory powers in a way which protects the substantive values of public law. Other jurisdictions have made steps in that direction, and there is no reason Australia could not follow suit.’); see also Terry Carney, ‘Robo-debt Illegality: The Seven Veils of Failed Guarantees of the Rule of Law?’ (2019) 44(1) Alternative Law Journal 4.

62 Robin Creyke, ‘Administrative Justice – Towards Integrity in Government’ (2007) 31(3) Melbourne University Law Review 705.

63 Cf Simon Chesterman, We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (Cambridge: Cambridge University Press, 2021), 220–22 (suggesting the establishment of ‘an AI Ombudsperson’).

64 Cf Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105 The Georgetown Law Journal 1190 (suggesting oversight approaches including ‘the establishment of a body of neutral and independent statistical experts to provide oversight and review, or more likely a prior rule making process informed by an expert advisory committee or subjected to a peer review process’).

7 A Tale of Two Automated States Why a One-Size-Fits-All Approach to Administrative Law Reform to Accommodate AI Will Fail

1 Robert McBride, The Automated State: Computer Systems as a New Force in Society (Chilton Book Company, 1967).

2 WG de Sousa et al, ‘How and Where Is Artificial Intelligence in the Public Sector Going? A Literature Review and Research Agenda’ (2019) 36 Government Information Quarterly 101392; BW Wirtz, JC Weyerer, and C Geyer, ‘Artificial Intelligence and the Public Sector – Applications and Challenges’ (2019) 42 International Journal of Public Administration 596–615.

3 K Gulson and J-M Bello y Villarino, ‘AI in Education’ in Regine Paul, Emma Carmel, and Jennifer Cobbe (eds), Handbook on Public Policy and Artificial Intelligence (Edward Elgar, forthcoming 2023).

4 S Scoles, ‘A Digital Twin of Your Body Could Become a Critical Part of Your Health Care’ (10 February 2016) Slate; J Corral-Acero et al, ‘The “Digital Twin” to Enable the Vision of Precision Cardiology’ (2020) 41 European Heart Journal 4556–64.

5 J Argota Sánchez-Vaquerizo, ‘Getting Real: The Challenge of Building and Validating a Large-Scale Digital Twin of Barcelona’s Traffic with Empirical Data’ (2022) 11 ISPRS International Journal of Geo-Information 24.

6 A Hernández Morales, ‘Barcelona Bets on “Digital Twin” as Future of City Planning’ (18 May 2022) Politico.

7 See, for example, the discussion about discretion in different levels of bureaucracy in JB Bullock, ‘Artificial Intelligence, Discretion, and Bureaucracy’ (2019) 49 The American Review of Public Administration 751–61.

8 See also the discussion in Chapter 10 in this book.

9 See A Cordella and N Tempini, ‘E-Government and Organizational Change: Reappraising the Role of ICT and Bureaucracy in Public Service Delivery’ (2015) 32 Government Information Quarterly 279–86 at 279, and the references therein.

10 Ibid, 281.

11 JB Bullock and K Kim, ‘Creation of Artificial Bureaucrats’ (Lisbon, Portugal (Online), 2020), 8.

12 Treasury Board of Canada, Directive on Automated Decision-Making (2019).

13 European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (Proposal, 21 April 2021), see also Chapter 1 in this book.

14 Digital.NSW, NSW Government, NSW AI Assurance Framework (Report, 2022).

15 S Verba, ‘Fairness, Equality, and Democracy: Three Big Words’ (2006) 73 Social Research: An International Quarterly 499–540.

16 TM Vogl et al, ‘Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities’ (2020) 80 Public Administration Review 946–61 at 946.

17 See also discussion in Chapter 12 in this book.

18 BG Peters, Politics of Bureaucracy, 5th ed (Routledge, 2002) 35.

19 CR Sunstein, The Cost–Benefit Revolution (MIT Press, 2019).

20 J-M Bello y Villarino and R Vijeyarasa, ‘International Human Rights, Artificial Intelligence, and the Challenge for the Pondering State: Time to Regulate?’ (2022) 40 Nordic Journal of Human Rights 194–215 at 208–9.

21 There is a societal expectation that AI-driven systems can materialise the productivity jump that computers did not bring, and respond to Nobel Prize laureate Robert Solow’s quip that ‘you can see the computer age everywhere but in the productivity statistics’. ‘Why a Dawn of Technological Optimism Is Breaking’ (16 January 2021) The Economist; ‘Paradox Lost’ (11 September 2003) The Economist.

22 Ombudsman New South Wales, The New Machinery of Government: Using Machine Technology in Administrative Decision-Making (Report, 2021).

23 JAS Pastor, ‘La teoría del órgano en el Derecho Administrativo’ (1984) Revista española de derecho administrativo 43–86.

24 Cordella and Tempini, ‘E-Government and Organizational Change’, 280.

25 Gulson and Bello y Villarino, ‘AI in Education’.

26 ‘ISO/IEC 22989:2022(en)’ (2022) sec. 3.1.4.

27 M Shapiro, ‘Administrative Law Unbounded: Reflections on Government and Governance Symposium: Globalization, Accountability, and the Future of Administrative Law’ (2000) 8 Indiana Journal of Global Legal Studies 369–78 at 371–72.

28 That is, not cheating the process by, for example, entering into the automated system a series of acceptable objectives until it produces the output desired for other reasons, namely their real, hidden objectives.

29 For a sample of countries having the right of education in their constitutions, see S Edwards and AG Marin, Constitutional Rights and Education: An International Comparative Study (2014).

30 Gradient Institute, Practical Challenges for Ethical AI (Report, 2019) 8.

31 Houston Federation of Teachers Local 2415 et al v Houston Independent School District, 251 F. Supp. 3d 1168 (2017).

32 Shapiro, ‘Administrative Law Unbounded’, 369.

33 P Gérard, ‘L’administré dans ses rapports avec l’État’ (2018) 168 Revue française d’administration publique 913–23.

35 J Alder, ‘Environmental Impact Assessment – The Inadequacies of English Law’ (1993) 5 Journal of Environmental Law 203–20 at 203.

8 The Islamophobic Consensus Datafying Racism in Catalonia

1 Institut d’Estadística de Catalunya (Idescat), Prison Population, by Nationality and Geographical Origin (Report, 2022) <www.idescat.cat/pub/?id=aec&n=881&lang=es>.

2 Observatorio Andalusí, Estudio Demográfico de la Población Musulmana (Report, 2021).

3 Instituto Nacional de Estadística (INE), Tasas de paro por nacionalidad, sexo y comunidad autónoma (Report, 2022) <www.ine.es/jaxiT3/Datos.htm?t=4249>.

4 A López-Gay, A Andújar-Llosa, and L Salvati, ‘Residential Mobility, Gentrification and Neighborhood Change in Spanish Cities: A Post-Crisis Perspective’ (2020) 8(3) Spatial Demography 351–78.

5 Plataforma Anti-desahucios, Emergencia habitacional, pobreza energética y salud (Report, 2020) <https://pahbarcelona.org/wp-content/uploads/2021/01/Informe-Emergencia-Habitacional-Pobreza-Energetica-Salud-Barcelona-2017-2020-CAST.pdf>.

6 Observatorio Andalusí, Estudio Demográfico de la Población Musulmana; United Nations Special Rapporteur on Freedom of Religion or Belief, Countering Islamophobia/Anti-Muslim Hatred to Eliminate Discrimination and Intolerance Based on Religion or Belief (Report A/HRC/46/30, 2021) <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G21/086/49/PDF/G2108649.pdf?OpenElement>.

7 Institut d’Estadística de Catalunya (Idescat), Prison Population, by Nationality and Geographical Origin.

8 SOS Racisme, (In)Visibles. L’estat del racisme a Catalunya (Report, 16 March 2022) <https://ec.europa.eu/migrant-integration/library-document/invisibles-state-racism-catalonia_en>; A Douhaibi and S Amazian, La radicalización del racismo. Islamofobia de Estado y prevención antiterrorista (Oviedo: Editorial Cambalache, 2019).

9 D Kumar, Islamophobia and the Politics of Empire: Twenty Years after 9/11 (London: Verso, 2021).

10 S Manzoor-Khan, Tangled in Terror: Uprooting Islamophobia (London: Pluto, 2022).

11 S Mezzadra and B Neilson, Border as Method, or, the Multiplication of Labor (Durham: Duke University Press, 2013).

12 JC Aguerri and D Jiménez-Franco, ‘On Neoliberal Exceptionalism in Spain: A State Plan to Prevent Radicalization’ (2021) 29(4) Critical Criminology 817–35.

13 CITCO, Ministerio del Interior – Secretaría de Estado de Seguridad, Plan Estratégico Nacional de Lucha Contra la Radicalización Violenta (PEN-LCRV) (Report, 2015) <www.interior.gob.es/documents/642012/5179146/PLAN+DEFINITIVO+APROBADO.pdf/f8226631-740a-489a-88c3-fb48146ae20d>.

14 E Bonilla-Silva, Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in the United States (Lanham: Rowman & Littlefield Publishers, 2006).

15 A Rachovitsa and N Johann, ‘The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case’ (2022) 22(2) Human Rights Law Review 1.

16 P Alston, Report of the Special Rapporteur on Extreme Poverty and Human Rights (Report, 2019).

17 I Cortés, Sueños y sombras sobre los gitanos. La actualidad de un racismo histórico (Barcelona: Bellaterra, 2021); S Castro-Gómez, La hybris del punto cero: ciencia, raza e ilustración en la Nueva Granada (1750–1816) (Bogotá: Editorial Pontificia Universidad Javeriana, 2010).

18 KA Beydoun, ‘Islamophobia, Internationalism, and the Expanse Between’ (2021) 28 Brown Journal of World Affairs 101; Kumar, Islamophobia and the Politics of Empire: Twenty Years after 9/11.

19 A Douhaibi and V Almela, ‘Vigilància de Frontera aplicada a les Escoles’ (29 November 2017) La Directa 443.

20 ‘Violent Extremism Risk Assessment Revised’, Dutch Ministry of Justice and Security (Web Page) <www.vera-2r.nl/>.

21 UK Ministry of Justice, The Structural Properties of the Extremism Risk Guidelines (ERG22+): A Structured Formulation Tool for Extremist Offenders (Report, 2019).

22 UK Government, Revised Prevent Duty Guidance: For England and Wales (Statutory Guidance, 2021) <www.gov.uk/government/publications/prevent-duty-guidance/revised-prevent-duty-guidance-for-england-and-wales>.

23 Generalitat de Catalunya, Departament d’Ensenyament, Protocol de Prevenció, detecció i intervenció de processos de radicalització als centres educatius (PRODERAI-CE) (Report, 2016) 7–13 <http://educacio.gencat.cat/documents/PC/ProjectesEducatius/PRODERAI-CE.pdf>.

24 Ibid, 13–20.

25 Ibid, 20–24.

26 Ibid, 24–28.

27 On opacity and lack of transparency see also Chapters 2, 4, 10, and 11 in this volume.

28 As has been criticised in LISA News, ‘¿Es posible predecir la reincidencia de los presos?’ (16 February 2022, Web Page) <www.lisanews.org/actualidad/es-posible-predecir-reincidencia-de-presos-espana/>.

29 LM Garay, ‘Errores conceptuales en la estimación de riesgo de reincidencia’ (2016) 14 Revista Española de Investigación Criminológica 1–31.

30 BE Harcourt, ‘Risk as a Proxy for Race: The Dangers of Risk Assessment’ (2015) 27(4) Federal Sentencing Reporter 237–43.

31 Douhaibi and Amazian, La radicalización del racismo. Islamofobia de Estado y prevención antiterrorista; A Jiménez and E Cancela, ‘Surveillance Punitivism: Colonialism, Racism, and State Terrorism in Spain’ (2021) 19(3) Surveillance & Society 374–78.

32 R Richardson, JM Schultz, and K Crawford, ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice’ (2019) 94 NYU Law Review Online 15.

33 A Kundnani, Institute of Race Relations, Spooked! How Not to Prevent Violent Extremism (Report, 2009).

34 UK Ministry of Justice, The Structural Properties of the Extremism Risk Guidelines (ERG22+): A Structured Formulation Tool for Extremist Offenders, 3.

35 Manzoor-Khan, Tangled in Terror: Uprooting Islamophobia; C Heath-Kelly, ‘Algorithmic Autoimmunity in the NHS: Radicalisation and the Clinic’ (2017) 48(1) Security Dialogue 29–45; T Younis and S Jadhav, ‘Islamophobia in the National Health Service: An Ethnography of Institutional Racism in PREVENT’s Counter‐Radicalisation Policy’ (2020) 42(3) Sociology of Health & Illness 610–26.

36 Heath-Kelly, ‘Algorithmic Autoimmunity in the NHS: Radicalisation and the Clinic’; Younis and Jadhav, ‘Islamophobia in the National Health Service: An Ethnography of Institutional Racism in PREVENT’s Counter‐Radicalisation Policy’.

37 Heath-Kelly, ‘Algorithmic Autoimmunity in the NHS: Radicalisation and the Clinic’.

40 Manzoor-Khan, Tangled in Terror: Uprooting Islamophobia; A Kundnani, The Muslims Are Coming!: Islamophobia, Extremism, and the Domestic War on Terror (London: Verso, 2014).

41 Amnesty International & Open Society Foundation, A Human Rights Guide for Researching Racial and Religious Discrimination in Counter-Terrorism in Europe (Report, 2021) <www.amnesty.org/en/wp-content/uploads/2021/05/EUR0136062021ENGLISH.pdf>.

42 J Holmwood and L Aitlhadj, The People’s Review of Prevent (Report, February 2022).

43 HM Government, Prevent Strategy (Presented to Parliament by the Secretary of State for the Home Department by Command of Her Majesty, June 2011) 28 <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/97976/prevent-strategy-review.pdf>.

44 A Kundnani, ‘Radicalization: The Journey of a Concept’ (2012) 54(2) Race & Class 3–25; Manzoor-Khan, Tangled in Terror: Uprooting Islamophobia.

45 Generalitat de Catalunya, PRODERAI-CE, 14.

46 See e.g. Chapter 5 in this book; Richardson et al, ‘Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice’; P Alston, Report of the Special Rapporteur on Extreme Poverty and Human Rights.

47 BE Harcourt, Against Prediction (Chicago: University of Chicago Press, 2007) 30.

48 JA Brandariz García, ‘La difusión de las lógicas actuariales y gerenciales en las políticas punitivas’ (2014) 2 InDret 4, 18.

49 JC Aguerri and D Jiménez-Franco, ‘On Neoliberal Exceptionalism in Spain: A State Plan to Prevent Radicalization’.

50 ‘It’s for your safety. Institutional machinery of Islamophobia’, Asociación Musulmana de Derechos Humanos (Video, 2021).

51 SOS Racisme, (In)Visibles. L’estat del racisme a Catalunya.

52 United Nations Special Rapporteur on Freedom of Religion or Belief, Countering Islamophobia/Anti-Muslim Hatred to Eliminate Discrimination and Intolerance Based on Religion or Belief, 9.

53 H Bouteldja, Whites, Jews and Us: Toward a Politics of Revolutionary Love (Cambridge: MIT Press, 2016).

54 S Ahmed and J Matthes, ‘Media Representation of Muslims and Islam from 2000 to 2015: A Meta-analysis’ (2017) 79(3) International Communication Gazette 219–44.

55 S Farris, In the Name of Women’s Rights: The Rise of Femonationalism (Durham: Duke University Press, 2017).

56 ‘Contreras explica por qué y cómo quiere reforzar VOX la concesión de la nacionalidad española’, Vox Parliamentary Group (Media Release, 15 February 2022) <www.voxespana.es/grupo_parlamentario/actividad-parlamentaria/proposiciones-de-ley/vox-ley-nacionalidad-espanola-20220215>.

57 EW Said, Orientalism (New York: Vintage, 1979).

58 E Tsatsanis, ‘The Social Determinants of Ideology: The Case of Neoliberalism in Southern Europe’ (2009) 35(2) Critical Sociology 199.

59 B Echeverría, La modernidad de lo barroco (México DF: Ediciones Era, 2000).

60 C Robinson, Black Marxism: The Making of the Black Radical Tradition, 1st ed (London: Zed Books, 1983).

61 Jiménez and Cancela, ‘Surveillance Punitivism: Colonialism, Racism, and State Terrorism in Spain’.

62 Kumar, Islamophobia and the Politics of Empire: Twenty Years after 9/11.

63 Cortés, Sueños y sombras sobre los gitanos. La actualidad de un racismo histórico.

65 J Goikoetxea, Privatizing Democracy (Oxford: Peter Lang, 2017).

67 P Moré, ‘Cuidados y crisis del coronavirus: el trabajo invisible que sostiene la vida’ (2020) 29(3) Revista Española de Sociología (RES) 737–45.

68 M Foucault, Security, Territory, Population: Lectures at the Collège de France, 1977–78 (Berlin: Springer, 2007).

69 I Hacking, The Taming of Chance (Cambridge: Cambridge University Press, 1990).

70 C Rosenthal, Accounting for Slavery (Cambridge: Harvard University Press, 2018).

71 WHK Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (Cambridge: MIT Press, 2021); I Hacking, ‘Biopower and the Avalanche of Printed Numbers’ in VW Cisney and N Morar (eds), Biopower: Foucault and Beyond (Chicago: The University of Chicago Press, 2015) 65–80.

72 A Supiot, Governance by Numbers: The Making of a Legal Model of Allegiance (London: Bloomsbury, 2017), vol. 20.

73 Originally published as Discours sur le colonialisme (Editions Présence Africaine, 1955).

74 E Martín-Corrales, Muslims in Spain, 1492–1814: Living and Negotiating in the Land of the Infidel (Leiden: Brill, 2020).

75 AH Reggiani, Historia mínima de la eugenesia en América Latina (México DF: El Colegio de México, 2019); Castro-Gómez, La hybris del punto cero: ciencia, raza e ilustración en la Nueva Granada (1750–1816).

76 R Grosfoguel, ‘Epistemic Islamophobia and Colonial Social Sciences’ (2010) 8(2) Human Architecture: Journal of the Sociology of Self-Knowledge 29–38.

77 AJ Bohrer, ‘Just Wars of Accumulation: The Salamanca School, Race and Colonial Capitalism’ (2018) 59(3) Race & Class 20–37.

78 Grosfoguel, ‘Epistemic Islamophobia and Colonial Social Sciences’; D Montañez Pico, ‘Pueblos sin religión: la falacia de la controversia de Valladolid’ (2016) 18(36) Araucaria 87–110.

79 DB Rood, The Reinvention of Atlantic Slavery: Technology, Labor, Race, and Capitalism in the Greater Caribbean (Oxford: Oxford University Press, 2017); IB Guerra, ‘Moriscos, esclavos y minas: comentario al memorial de Juan López de Ugarte o sobre cómo introducir a los moriscos en la labor de minas’ (2010) 23 Espacio Tiempo y Forma. Serie III, Historia Medieval.

80 J Ramirez, Libro de las Bulas y Pragmáticas de los Reyes Católicos (Madrid: Instituto de España, 1973), vol. 1.

81 Castro-Gómez, La hybris del punto cero: ciencia, raza e ilustración en la Nueva Granada (1750–1816).

82 J Irigoyen-García, The Spanish Arcadia: Sheep Herding, Pastoral Discourse, and Ethnicity in Early Modern Spain (Toronto: University of Toronto Press, 2013).

83 H Kamen, The Spanish Inquisition: A Historical Revision (New Haven: Yale University Press, 2014).

84 A Quijano, ‘Colonialidad del poder y clasificación social’ (2015) 2(5) Contextualizaciones Latinoamericanas; S Rivera Cusicanqui, Pueblos Originarios y Estado (Buenos Aires: Instituto Nacional de la Administración Pública de Argentina, 2008); JC Mariátegui, 7 ensayos de interpretación de la realidad peruana (Caracas: Ayacucho, 1978).

85 S Amazian, SOS Racisme, Islamofobia Institucional y Securitización (Report, 2021) <www.sosracisme.org/wp-content/uploads/2021/07/InformeIslamofobia_01072021_INTERACTIVO_CAST_.pdf>.

86 C Fernández Bessa, El dispositiu de deportació. Anàlisi criminològica de la detenció, internament i expulsió d’immigrants en el context espanyol (Doctoral Thesis, Universitat de Barcelona, 2016) 54.

87 Douhaibi and Amazian, La radicalización del racismo. Islamofobia de Estado y prevención antiterrorista.

89 S Amazian, SOS Racisme; S Cohen, Folk Devils and Moral Panics (London: Routledge, 2011).

90 Kumar, Islamophobia and the Politics of Empire: Twenty Years after 9/11.