
Can Robots Understand Welfare? Exploring Machine Bureaucracies in Welfare-to-Work

Published online by Cambridge University Press:  16 March 2022

MARK CONSIDINE
Affiliation:
School of Social and Political Sciences, University of Melbourne
MICHAEL MCGANN*
Affiliation:
Maynooth University Social Sciences Institute, National University of Ireland, Maynooth
SARAH BALL
Affiliation:
School of Social and Political Sciences, University of Melbourne
PHUC NGUYEN
Affiliation:
Lecturer in Management, La Trobe University
*
Corresponding author, email: michael.mcgann@mu.ie

Abstract

The exercise of administrative discretion by street-level workers plays a key role in shaping citizens’ access to welfare and employment services. Governance reforms of social services delivery, such as performance-based contracting, have often been driven by attempts to discipline this discretion. In several countries, these forms of market governance are now being eclipsed by new modes of digital governance that seek to reshape the delivery of services using algorithms and machine learning. Australia, a pioneer of marketisation, is one example, proposing to deploy digitalisation to fully automate most of its employment services rather than as a supplement to face-to-face case management. We examine the potential and limits of this project to replace human-to-human with ‘machine bureaucracies’. To what extent are welfare and employment services amenable to digitalisation? What trade-offs are involved? In addressing these questions, we consider the purported benefits of machine bureaucracies in achieving higher levels of efficiency, accountability, and consistency in policy delivery. While recognising the potential benefits of machine bureaucracies for both governments and jobseekers, we argue that trade-offs will be faced between enhancing the efficiency and consistency of services and ensuring that services remain accessible and responsive to highly personalised circumstances.

Type
Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Introduction

For 30 years, the delivery of welfare-to-work has been the subject of frequent governance reform and political contestation. Alongside formal social policy shifts towards ‘activating’ claimants through sanctions and benefits conditionality, the delivery of welfare-to-work has repeatedly been restructured by governance reforms such as quasi-marketisation, performance-based contracting and other New Public Management (NPM) instruments. Underlying this so-called project of ‘double activation’ (Considine et al., Reference Considine, Lewis, O’Sullivan and Sol2015) – where welfare reform involves not just efforts to discipline claimants but also greater attempts to direct the behaviours of frontline delivery agents (Soss et al., Reference Soss, Fording and Schram2011: 207-8) – is the implicit recognition that administrative discretion is ‘an inherent’ and ‘at times even necessary’ (Brodkin, Reference Brodkin2011: i247) feature of policy delivery. As Pors and Schou observe, frontline workers ‘do not simply carry out already completed policies, but serve as a mainspring’ in fitting the necessary imprecisions of policies to ‘complex situated cases’ (2020: 156). The delivery of welfare-to-work is therefore ‘suffused by moments of policymaking’ (Zacka, Reference Zacka2017: 247), bringing the possibility of ‘not one but many different welfare programmes’ (Fletcher, Reference Fletcher2011: 450) being implemented depending on the characteristics of the client and which advisors they see.

This discretionary aspect of service delivery is not without its problems, though. Biases are potentially introduced into the system by the worldviews, personal experiences, and assumptions that frontline workers embed in their practices (Hasenfeld, Reference Hasenfeld2010). Curtailing such problems has therefore been a central objective of the NPM governance reforms enacted among street-level delivery agents. Nowhere has this been more apparent than in Australia, which became the first country to put its entire public employment services (PES) out to contestable contract in 1998, before full privatisation in 2003. Throughout this period, the implementation of welfare-to-work policies in many OECD countries has been determined to varying extents by competition between providers for clients and contracts, and performance management regimes that rely on financial incentives and outcomes measurement to ‘raise the odds that preferred paths will be taken’ (Soss et al., Reference Soss, Fording and Schram2011: 207). However, this method of filtering discretion through market governance instruments is now being eclipsed by a new project of digitising discretion – with Australia, again, at the vanguard of reform. Since 2018, it has been piloting an online employment services model that is due to be rolled out from mid-2022 (Casey, Reference Casey2021). While ‘digitalised welfare’ (Coles-Kemp et al., Reference Coles-Kemp, Ashenden, Morris and Yuille2020: 177) in the form of delivering benefits via online applications is a feature of welfare administration in numerous countries, what sets the Australian experiment apart is the proposal to fully automate PES for most claimants, who will now self-manage their own forms of activation under the watchful eye of algorithms rather than the direct gaze of advisors.
Rather than ‘cyborg bureaucracy’ (Breit et al., Reference Breit, Egeland, Løberg, Pedersen and Wilkinson2019: 165) models where algorithmic profiling tools and digital communications are used to augment the jobseeker-advisor relationship, Australia’s online model is deploying algorithms to remove the human element of service provision for all claimants other than those identified as having the most complex employment barriers. This makes the stakes of digital welfare especially high.

In this paper we examine both the potential and limits of this project to replace human-to-human with what might be described as ‘machine’ bureaucracies. We ask to what extent PES activities are amenable to automation. In other words, where along the service delivery supply chain can automated decision-making (ADM) be productively employed to enhance efficiency and flexibility? On the other hand, which aspects of service delivery will require human judgement and agency (albeit with the potential aid of digital inputs)? In addressing these questions, and reviewing existing discourses on digitalised welfare, we consider the main drivers of digitalisation and the belief that automation can achieve higher levels of efficiency, accountability, and consistency in policy delivery. While recognising the potential benefits of machine bureaucracies for both governments and jobseekers, we argue that trade-offs will be faced between enhancing the efficiency and consistency of service delivery and ensuring that services remain accessible and responsive to highly personalised circumstances. Our emphasis on machine bureaucracies is deliberate and important, as the concept of ‘digitalised welfare’ is used in a multiplicity of indeterminate ways, with little agreement about its precise implications for the relationship between citizens, delivery agents, and the state. Accordingly, an ancillary aim is to unpack the concept of ‘digitalised welfare’ by proposing a clarifying typology for ‘varieties of digitalisation’ in welfare administration.

We proceed by introducing our conceptualisation of ‘machine’ bureaucracy, differentiating the Australian case of online employment services from previous modes of computerisation in street-level bureaucracy, and from ongoing digitalisation reforms elsewhere. This is followed by an evaluation of the key efficiency and accountability arguments in favour of moving from the street-level to the machine-level delivery of welfare-to-work, and the trade-offs involved.

Machine bureaucracies and varieties of digitalisation

There is a growing interest among scholars in the ‘expansion of digital welfare’ (Coles-Kemp et al., Reference Coles-Kemp, Ashenden, Morris and Yuille2020: 184) and the fact that the citizens’ encounter with the welfare state is becoming ‘increasingly digitalized’ (Pors and Schou, Reference Pors and Schou2020: 155). Nonetheless, elements of computerisation have long been a feature of citizens’ experiences of welfare-to-work. Since the early days of the ‘activation turn’, access terminals for job searching have been a constituent element of the local support given to jobseekers in many job centres, while computerized case management systems were already in ubiquitous use by the early 2000s (Caswell et al., Reference Caswell, Marston and Larsen2010). In Australia, frontline staff have been obligated since 2003 to use a prescribed IT system for recording all client interactions, including how jobseekers are meeting their mutual obligations. The level of employment support jobseekers are eligible for in Australia is also largely determined by a profiling instrument, the Job Seeker Classification Instrument (JSCI), which is administered by entering responses into a software package that generates a statistical estimate of jobseekers’ probability of long-term unemployment (Casey, Reference Casey2021). This ubiquitous use of computer systems is emblematic of what Bovens and Zouridis describe as ‘screen-level’ bureaucracy: a form of administration in which contacts with clients ‘always run through or in the presence of a computer screen’ (2002: 177).

Computerisation in screen-level bureaucracy has taken at least three forms. First, information management systems function as devices for opening-up the ‘black box’ of policy delivery to the scrutiny of local processes, thereby making the application of eligibility criteria and other programmatic decisions more accountable to ‘data-driven management’ (Pedersen and Wilkinson, Reference Pedersen and Wilkinson2018: 195). This, in turn, exerts its own pressure on street-level workers to conform their decisions to program rules. Client records can be viewed by administrators at any time, making discretion not only more visible but also changing its ‘operation and scope’ (Bullock, Reference Bullock2019: 751). Frontline workers now make decisions as actors who know they are ‘being closely monitored’ (Soss et al., Reference Soss, Fording and Schram2011: 208) thereby re-orientating decision-making towards a more rule-bound approach.

A second way that computerisation manifests in screen-level bureaucracies is in the form of applying assessment protocols and profiling tools to ‘target’ services. Such profiling tools are widespread across PES, and are becoming increasingly sophisticated through the capacity to train algorithms on vast administrative datasets and ‘real-time’ labour market information to generate ever-finer calculations of jobseekers’ probability of long-term unemployment (Desiere and Struyven, Reference Desiere and Struyven2021). Examples include the Portuguese and Flemish PES, where a combination of algorithmic profiling and data mining of labour market vacancy data is used to estimate jobseekers’ risk of long-term unemployment. The predictive modelling then recommends whether jobseekers should receive job-matching support or upskilling, saving advisors’ time ‘in getting to know jobseekers’ (Carney, Reference Carney2021: 7). To date, the ultimate decision whether to accept or override the algorithmic recommendations remains with the advisor.

A more recent functional use of ICT is as a medium for facilitating remote, often faceless, service encounters. This has accelerated since COVID-19 when PES in many countries moved out of necessity to online or phone appointments. However, even before the pandemic, several countries such as Norway provided the option for jobseekers to meet with advisors remotely via phone or Skype (Breit et al., Reference Breit, Egeland, Løberg, Pedersen and Wilkinson2019). While ‘digitally mediated’ (Breit et al., Reference Breit, Egeland, Løberg and Røhnebæk2020: 3), these virtual encounters remain human-to-human if not-face-to-face.

This article explores the transition from screen-level bureaucracy to what Bovens and Zouridis (Reference Bovens and Zouridis2002) predicted was the logical conclusion of computerisation in public administration: ‘system-level’ bureaucracies in which services are ‘built around the information system’ and the operational delivery previously performed by street-level bureaucrats ‘has been taken over by the technostructure’ (Zouridis et al., Reference Zouridis, Van Eck, Bovens, Evans and Hupe2020). These system-level bureaucracies involve encounters where the ‘advisors’ that citizens interact with are no longer human, but an assemblage of applications powered by algorithms and machine learning. While administrative staff may continue to work in these machine bureaucracies, their role is mainly confined to ‘help[ing] clients interface with the information system’ (Bullock, Reference Bullock2019: 755) rather than interacting directly with clients.

Figure 1 summarises how model forms of digitalisation shape the degree of interaction and balance of power between advisors, citizen-clients, and technological platforms in structuring access to services. These differences in how digitalisation mediates discretion are not simply technical matters. They capture important decisions about the role of face-to-face encounters, and what function these may play in a citizen’s life. At the top left is street-level bureaucracy, where citizens encounter the state ‘through’ advisors who ‘hold the keys to a dimension of citizenship’ (Lipsky, Reference Lipsky2010: 4). In the middle and to the right-hand side are variants of screen-level bureaucracy that are differentiated by the degree to which citizens can access services via technology. In (B), digitalisation is deployed primarily in the form of information management systems and computerised assessment protocols that shape frontline discretion by making it more visible. Nonetheless, frontline workers remain the gatekeepers to services, as distinct from (C), where citizens can avail themselves of technology to access some aspects of the service without going through their case manager. For example, they may use online platforms provided by their agency to job-search or complete training modules. This digital platform may even incorporate machine learning capabilities, mining labour market data to customise job-matches.

FIGURE 1. Models of (digital) service encounters.

Source: Adapted from (Bordoloi et al., Reference Bordoloi, Fitzsimmons and Fitzsimmons2019: 96)

The interaction models depicted in the bottom of Figure 1 take us towards the realm of ‘system-level’ bureaucracy. In (D), citizens access almost all aspects of the service (job-matching, training etc.) digitally, with service staff providing troubleshooting support when problems arise with the software. Service staff may even continue to provide some guidance, but this will be communicated digitally through web chats administered remotely. An example is the Norwegian PES’ Modia platform. Under this approach, jobseekers develop a digital activity plan and communicate with their advisor predominantly through the Modia app rather than in-person. In this example of a ‘cyborg bureaucracy’, the technology is not just an optional tool but ‘enables the very relationship’ (Breit et al., Reference Breit, Egeland, Løberg, Pedersen and Wilkinson2019: 166) between advisors and jobseekers.

In (E), the provision of services moves from being digitally mediated to being fully automated. The offers and demands that citizens receive are determined entirely by algorithms. If there is a risk of bias this stems from the inputs and outputs of algorithms rather than the ‘calculus of street-level choice’ (Brodkin, Reference Brodkin2011: i260). Zouridis et al. (Reference Zouridis, Van Eck, Bovens, Evans and Hupe2020) give the example of child benefits claims in the Netherlands. Children are assigned a unique identifying number upon birth registration, which is used to gather administrative data to prepare automated applications for child benefits. These machine-generated applications are sent to parents for verification, and decisions about payment eligibility are made algorithmically. The process is repeated at various ‘legal milestones’ affecting payment levels and eligibility, speeding up claims processing and reducing the administrative burden for citizens of claiming benefits. The flipside of automation in claims administration is deploying algorithms to detect possible fraud. One example is Australia’s Online Compliance Intervention. Colloquially known as Robodebt, the initiative involved data-matching records of benefits paid to claimants against their historical income tax returns to determine whether overpayments had been made (Whiteford, Reference Whiteford2021). Calculation of overpayments was automatic, based on a simple averaging of reported annual earnings; and debt notices were raised with no human oversight. Robodebt also shifted the burden of proof onto individuals, who were given few details of how their debt had been calculated. 
Those wishing to challenge a debt repayment claim needed to contact the Department directly – a time-consuming process given long wait times, and one that required claimants to produce documentary evidence that their original earnings declarations were correct (Whelan, Reference Whelan2020).

While ADM is used for claims processing in several countries, Australia’s proposed online employment service is an almost unique example of welfare-to-work programmes being delivered via a machine bureaucracy. Under the model, which Casey (Reference Casey2021) likens to ‘digital dole parole’, jobseekers assessed as ‘job-ready’ will have access to a ‘jobseeker dashboard’ via an app and government website. The entire suite of employment support services will be contained within this dashboard, which will also be used to verify claimants’ compliance with mutual obligations. These requirements will be detailed in digital job plans that will be generated based on pre-determined rules ‘hard-wired into the system’ (Casey, Reference Casey2021: 7). If jobseekers fail to verify how they are fulfilling their job plan, demerit points will be automatically applied, triggering sanctions at specific thresholds and for certain breaches (e.g., missing job interviews). To this extent, Australia’s online employment service resembles the ‘digital panopticon’ (Wright et al., Reference Wright, Fletcher and Stewart2020: 287) of Britain’s now defunct Universal Jobmatch portal, which primarily served as a surveillance tool for claimants to ‘demonstrate compliance’ (Morris et al., Reference Morris, Coles-Kemp and Jones2020: 27). However, a key difference is that Universal Jobmatch was supplemented by periodic in-person meetings with advisors. This is not the case in the Australian model, raising key questions about the extent to which judgement and discretion are valuable and perhaps essential components of service delivery, or capable of being replicated by machine learning.
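The compliance logic described above – demerit points accruing automatically and sanctions triggering at fixed thresholds, with no advisor in the loop – can be sketched as a simple rule engine. The breach categories, point values, and threshold below are invented for illustration; the actual system’s rules are hard-wired and not publicly specified.

```python
# Hypothetical sketch of automated mutual-obligation compliance.
# Breach types, point values, and the threshold are illustrative only.

# Demerit points per recorded breach (invented values)
DEMERITS = {
    "missed_job_search_target": 1,
    "missed_appointment": 1,
    "missed_job_interview": 4,  # certain breaches carry heavier weight
}
SANCTION_THRESHOLD = 5  # illustrative threshold for triggering a sanction


def assess_compliance(breaches):
    """Apply demerit points for each recorded breach and flag a sanction
    once the accumulated total reaches the threshold. The 'decision' is
    a mechanical lookup: no human reviews the circumstances."""
    points = sum(DEMERITS.get(b, 0) for b in breaches)
    return {"demerit_points": points, "sanctioned": points >= SANCTION_THRESHOLD}
```

The structural point is that any valid reason for non-compliance must be codified in advance: a circumstance absent from the rule table simply cannot be weighed, which is precisely the limitation taken up in the discussion of ethical decision-making below.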

Dilemmas of new machine bureaucracies

The move towards the machine delivery of welfare-to-work is driven by multiple factors, although two key purported benefits are that automation will enhance both the efficiency and consistency of services, saving public resources and ‘clos[ing] the gap between “policy as written” and “policy as performed”’ (Busch and Henriksen, Reference Busch and Henriksen2018: 4). In this section, we consider these justifications, arguing that the pursuit of efficiency and consistency must be balanced against other administrative values such as personalisation, flexibility, and inclusion (see Table 1). Indeed, the very project of automating discretion is contested by several street-level bureaucracy theorists, including Lipsky, who maintains that part of the ‘essence’ of street-level bureaucracies is ‘that they require people to make decisions about other people’ using ‘judgement that cannot be programmed’ (2010: 161). While Lipsky’s claims about the irreducibility of human judgement to automation may be debated, behind his objection is a deeper ethical concern about the moral loss that can occur in reducing casework ‘to an entirely objective and decontextualised operation’ (Petersen et al., Reference Petersen, Christensen and Hildebrandt2020: 304). At issue is how we understand the nature of citizens’ engagements with street-level bureaucrats: as transactional exchanges with yes/no assessments, or inter-personal encounters that manage complexity and profoundly influence ‘what citizens think of their state, and of their own standing in it’ (Zacka, Reference Zacka2017: 240). The efficiency and consistency of decisions may be most important from a transactional point of view.
However, treating welfare interactions as interpersonal encounters involving ‘explaining, empathising, reassuring and problem solving’ (O’Sullivan and Walker, Reference O’Sullivan and Walker2018: 499) may require decisions to be empathetically responsive to the personalised circumstances of each case, necessitating deviations from formalised rules and generic operating procedures that are difficult to encode.

TABLE 1. Trade-offs in Machine Bureaucracies

Efficiency vs. Inclusion

For governments, the new machine bureaucracies are linked to promised reductions in the costs of welfare services (Malpas, Reference Malpas2020; Pors and Schou, Reference Pors and Schou2020). It is envisaged that they will enable a greater volume of services to be delivered within fixed budgets, affording governments greater capacity to respond to unemployment crises such as the COVID-19 pandemic while bringing the potential to ‘activate’ even more claimants. Scarce case management resources can be rationed so that they can be more efficiently allocated towards those with the most complex needs (Commonwealth of Australia, 2018). One significant area of opportunity is the efficiency gains for jobseekers, such as the time saved from no longer having to periodically travel to, and wait in line for, appointments. Those living in regional areas with poor transport links may especially gain from online services, as may migrants who gain access to services in their own language. Delivering PES online also offers greater flexibility for jobseekers to avail themselves of services out-of-hours, and at times that can be fitted around their schedule. These are far from trivial benefits, particularly given the high numbers of claimants who are working part-time or have caring duties and who are still obliged to routinely attend appointments, opening them to the risk of being sanctioned for missing appointments. Claimants may also experience online services as less stigmatising, if it means that they are exempted from the ‘ritualised humiliation’ (Charlesworth 2000 cited in Wright et al., Reference Wright, Fletcher and Stewart2020: 284) of having to attend job centres and submit to invasive scrutiny of their continued eligibility for payments.
This is an important issue, given the wealth of studies suggesting that jobseekers frequently experience activation encounters as stigmatising and degrading (Peterie et al., Reference Peterie, Ramia, Marston and Patulny2019), and that ‘benefits stigma’ can deter people from accessing welfare services (Baumberg, Reference Baumberg2015).

It is important, however, to differentiate between efficiencies in payments administration and efficiencies in delivering employability services. A significant risk is that, instead of being used to efficiently allocate resources, these technologies are used primarily as a means of ‘recouping and reducing social welfare payments’ (Malpas, Reference Malpas2020: 1073). This is illustrated by Australia’s Robodebt programme. In this case, automation was deployed to restrict access to welfare rather than as a means of enabling services to be delivered more efficiently to more people. Robodebt is also a salutary reminder that promised efficiencies may seldom materialise, particularly when automation carries significant upfront investments in technology that may be liable to programming errors. In the case of Robodebt, the data-matching method for calculating overpayments relied on the false assumption that earnings declared in tax returns could be averaged over the year to determine eligibility for payments, despite eligibility for payments being legally determined on a fortnightly basis according to claimants’ declared financial circumstances during that period. Robodebt’s averaging approach failed to account for fluctuations in earnings due to casual and intermittent work, resulting in ‘vastly different’ (Carney, Reference Carney2021: 19) estimations of claimants’ eligibility for payments from actual social security determinations. Following a legal challenge, the Australian Government was ordered to repay $720m in collected debts and a further $112m in compensation, making Robodebt an extremely costly ‘social policy fiasco’ (Whiteford, Reference Whiteford2021).
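The structural flaw in Robodebt’s averaging method can be shown numerically. A claimant who earned all of their annual income in a few months of casual work, and legitimately received benefits in the remaining fortnights, appears ‘overpaid’ in every fortnight once annual earnings are smoothed across the year. The figures and the income-free threshold below are invented, and the real income test is far more complex, but the averaging error works the same way.

```python
# Simplified illustration of the Robodebt averaging error.
# All figures and the threshold are hypothetical.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_THRESHOLD = 300.0  # invented earnings limit per fortnight


def averaged_earnings(annual_income):
    """Robodebt's method: smear annual income evenly across all fortnights."""
    return [annual_income / FORTNIGHTS_PER_YEAR] * FORTNIGHTS_PER_YEAR


def fortnights_over_threshold(earnings):
    """Count fortnights in which earnings exceed the income-free threshold."""
    return sum(1 for e in earnings if e > INCOME_FREE_THRESHOLD)


# A casual worker: 13 fortnights earning $1,200, then 13 earning nothing.
actual = [1200.0] * 13 + [0.0] * 13
annual = sum(actual)  # $15,600 declared in the annual tax return

# Lawful fortnightly assessment: only the 13 working fortnights breach
# the threshold; benefits paid in the other 13 were correct.
actual_breaches = fortnights_over_threshold(actual)

# Averaged assessment: $600 in every fortnight, so all 26 fortnights
# appear to breach, and every payment looks like an overpayment.
averaged_breaches = fortnights_over_threshold(averaged_earnings(annual))
```

The same annual income thus yields two very different pictures of eligibility, which is why averaging produced the ‘vastly different’ estimations noted above for anyone with intermittent earnings.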

Another risk is the possibility of exclusion (Schou and Pors, Reference Schou and Pors2018). One concern is the impact of ‘IT poverty’ (Morris et al., Reference Morris, Coles-Kemp and Jones2020: 28). As services transition online, machine bureaucracies may become inaccessible to those with low digital literacy, people living in areas with poor ICT infrastructure, and those who simply cannot afford the devices and broadband capacity needed to self-service. This will force some people into relying on family members or third sector organisations ‘stepping in to fill the gap’ by supporting them with IT resources and personalised assistance to set-up and continuously manage their online activation (O’Sullivan and Walker, Reference O’Sullivan and Walker2018: 501).

Maximising efficiency may also come at the expense of reducing representation and voice, weakening citizens’ ability to influence administrative outcomes but also their access to mutual support networks and sources of solidarity in their lives. Using the example of digital-by-default benefits administration, Morris et al. argue that digitalisation occludes ‘the messiness of poverty’ by hiding the unusual circumstances of people’s lives and shifting the costs onto claimants when algorithms misstep (2020: 30). In the specific case of employment services, a key issue is the assessment of jobseekers which is undertaken in Australia via the administration of the JSCI. This process relies on self-disclosure of a range of sensitive personal issues (mental illness, criminal convictions, substance dependency, gendered violence) affecting the intensity of support they are eligible for. Such issues often go undisclosed during initial assessments, leading many jobseekers to be allocated a lower level of support than they are eligible for. These barriers subsequently become visible during face-to-face appointments, resulting in reclassification and referral into higher service streams (O’Sullivan et al., Reference O’Sullivan, McGann and Considine2019). With the move to an online model, the consequences of these mis-categorisations become more permanent since incorrectly streamed jobseekers receive no subsequent personalised assistance to readjust their status in the system.

Consistency vs. Personalisation

Besides efficiency dividends, machine bureaucracies in theory promise higher levels of consistency in policy implementation and ‘greater clarity and transparency about the ingredients and motivations of decisions’ (Kleinberg et al., Reference Kleinberg, Ludwig, Mullainathan and Sunstein2020: 30100). As previously noted, the exercise of discretion by street-level workers is criticised as leading to potentially inconsistent implementations of policy. This inconsistency can, in turn, lead to systematic patterns of exclusion in citizens’ access to benefits and services. A characteristic example afflicting PES quasi-markets is the issue of providers and advisors maximising success against performance metrics and outcome payments by ‘creaming’ and ‘parking’ their clients (Carter and Whitworth, Reference Carter and Whitworth2015). As a result, frontline resources risk being concentrated on those considered easiest to place into jobs while presumed ‘harder-to-help’ clients are denied any meaningful support. Moreover, research on representative bureaucracy shows that advisors’ decision-making can be influenced by unconscious biases and racial and class stereotypes (Harrits, Reference Harrits2019; Soss et al., Reference Soss, Fording and Schram2011). In comparison, ADM is positioned as ‘safeguard[ing] fair and uniform decision-making’ through stricter ‘adherence to rules and procedures’ (Ranerup and Henriksen, Reference Ranerup and Henriksen2020: 11). Casey observes how this is a key motivation behind Australia’s online employment services model, framing it as an extension of long-standing government efforts to achieve greater ‘adherence to policy intent’ (2021: 4). The ‘policy intent’ in this case is ‘a more strict delivery of the conditions under which welfare is accessed and provided’ (Morris et al., Reference Morris, Coles-Kemp and Jones2020: 27). 
Marston argues that this was a key motivation behind the initial computerisation of employment services in the early 2000s; that requirements for advisors to filter decisions via programs were intended to make ‘it harder for staff not to apply harsh new financial sanctions and penalties to the unemployed’ (Marston, Reference Marston2006: 91). Online employment services remove frontline discretion altogether by automating decisions about the fulfilment of mutual obligations. By auto-populating job plans based on pre-determined rules and automating the triggering of sanctions, conditionality can be more strictly enforced.

The issue of enforcing conditionality raises broader questions about whether administrative justice in welfare decision-making should be judged by how consistent and reliable decisions are, or by other criteria such as whether decisions are ethically sensitive and personalised to the unique features of each case. Sanctioning decisions bring this tension into acute focus because of the hardship and suffering that they can trigger, and the ostensibly moral judgements they often require about the validity of claimants’ reasons for non-compliance. Social security legislation may detail a range of valid reasons for non-compliance, such as illness or personal crisis, but they are rarely fully determinate in these respects and will not cover every conceivable circumstance that we would wish considered. Where valid reasons are codified (such as personal crisis), they are often nebulous categories that require interpretation within the context of each case. Is the death of a pet a personal crisis? Perhaps not if a recently acquired pet died prematurely. But what about a dog that provided companionship to a claimant living alone for ten years? Context is everything in such cases, and administrative justice would seem to require what Zacka (Reference Zacka2017: 242) terms ‘ethical decision-making’. Indeed, experimental research on perceptions of human versus algorithmic discretion suggests that in contexts requiring evaluation of people’s circumstances, decisions made by human intuition are perceived as fairer than decisions made by algorithms even when the outcomes are identical (Lee, Reference Lee2018). That is, the processing of decisions through human intuition contributes to their perceived legitimacy. 
Compassion rather than consistency may be the more important administrative value, requiring decision-makers to ‘operate as full-fledged moral agents’ (Zacka, 2017: 242) who are attuned to the circumstances of the case, the other people whose lives will be impacted, and the potential to aggravate hardship. Of course, advisors often do not act as ‘full-fledged moral agents’ in this way, resorting instead to pathological dispositions of indifference or enforcement. But decision-making could be enhanced if they did. There is a presumption underpinning new machine bureaucracies that flexibility is a prime source of bias and unjust treatment. Yet a lack of flexibility can be equally unjust if services are no longer able to recognise difference in the client experience.

In addition to a lack of flexibility, it is far from apparent that ADM can even eliminate bias and discrimination in the first instance. This presupposes that the assumptions built into algorithms and the datasets they are trained upon are themselves free from bias (Henman, 2019). If not, as Eubanks argues, then ADM merely ‘replaces the sometimes-biased decision-making of frontline social workers with the rational discrimination of high-tech tools’ (2018: 192). This is exemplified in Desiere and Struyven’s (2021) study of AI-enabled profiling by the Flemish PES, which used a proxy indicator of ethnicity as a factor for predicting jobseekers’ risk of long-term unemployment. As a result, jobseekers born outside Belgium who subsequently found employment were more than twice as likely to be misclassified as being at ‘high-risk’ of long-term unemployment as their Belgian-born counterparts.
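The mechanism behind such misclassification can be shown with a deliberately simplified sketch. The data, feature names, and weights below are invented for exposition (and exaggerated for clarity); they do not reproduce the Flemish model, only the pattern: when a proxy attribute feeds the risk score, the false-positive rate diverges across groups even among people who all found work.

```python
# Hypothetical illustration of how a proxy attribute produces unequal
# false-positive ('high-risk') rates across groups. All data invented.

def predict_high_risk(person: dict) -> bool:
    # Toy score: the proxy attribute alone adds risk, regardless of outcome.
    score = 0.4 if person["foreign_born"] else 0.0
    score += 0.3 if person["long_gap"] else 0.0
    return score >= 0.4

cohort = [
    {"foreign_born": True,  "long_gap": False, "found_job": True},
    {"foreign_born": True,  "long_gap": False, "found_job": True},
    {"foreign_born": False, "long_gap": False, "found_job": True},
    {"foreign_born": False, "long_gap": True,  "found_job": True},
]

def false_positive_rate(group: list) -> float:
    # Share of people who found work but were still flagged 'high-risk'.
    employed = [p for p in group if p["found_job"]]
    flagged = [p for p in employed if predict_high_risk(p)]
    return len(flagged) / len(employed)

fb = false_positive_rate([p for p in cohort if p["foreign_born"]])
nb = false_positive_rate([p for p in cohort if not p["foreign_born"]])
print(fb, nb)  # foreign-born rate vs native-born rate
```

Every foreign-born jobseeker in this toy cohort is wrongly flagged despite finding work, while no native-born jobseeker is: the disparity is baked into the scoring rule, not into any frontline worker's judgement.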

Machine bureaucracies rely on algorithmic profiling to ration services and target interventions. But this automated targeting can embed existing inequalities, as the potential for discretionary bias shifts from street-level workers to the system designers and data scientists responsible for programming and training the algorithms. However, in comparison to street-level workers, whose decisions have been under scrutiny for decades, the level of political control over IT programmers’ ‘hidden discretion’ (Jorna and Wagenaar, 2007) is considerably more limited (Zouridis et al., 2020). The precise design of an algorithm, the decision of which data to include, and the implementation of the algorithm’s outcomes may all contain errors and inconsistencies, just like work done by frontline staff. But these biases and inconsistencies may become hidden from view, as the basis for decisions becomes increasingly opaque. Decision rules wired into software are not always easily extracted or intelligible to non-experts. As the algorithms underpinning machine bureaucracies are in use for longer, the originally programmed decision rules may become forgotten or eclipsed if systems are programmed to learn from experience and adjust their own decision-making (Zouridis et al., 2020).

A critical consideration in weighing consistency against personalisation is the impact of the decisions being automated. In employment services, automated suggestions for opportunities, such as those made through job-matching tools, can be valuable and might even expose clients to new possible career pathways. We have all been exposed to such tools in our daily lives, through streaming services or advertising, and we have equally been exposed to how blunt an instrument they can be. These models predict on the basis of past behaviour and improve as we make more decisions: what we select and what we reject helps to refine the algorithm over time (O’Neill, 2016). They are low-cost, low-risk tools that highlight the benefits of automation as well as its limitations. When such tools are operationalised in benefits administration for the purposes of compliance, the impact is far more significant.
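A minimal sketch of this past-behaviour logic: rank options by how often the user previously chose them. This is not how any real job-matching tool is built; it is the simplest possible instance of the pattern, and it makes the bluntness visible.

```python
# Minimal sketch of prediction from past behaviour: rank job categories
# by how often the user previously selected them. Purely illustrative.
from collections import Counter

def recommend(history: list, catalogue: list, k: int = 2) -> list:
    counts = Counter(history)
    # Sort by past selections (most-chosen first); ties keep catalogue
    # order because Python's sort is stable. Categories the user has never
    # chosen sink to the bottom: the model cannot suggest what it has
    # never seen chosen, which is exactly the bluntness described above.
    return sorted(catalogue, key=lambda c: -counts[c])[:k]

print(recommend(["retail", "retail", "care"],
                ["care", "logistics", "retail"]))
# prints ['retail', 'care']
```

Used to surface optional suggestions, such a model is low-stakes; wired into compliance decisions, the same narrowness becomes consequential.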

Conclusion

For decades, the delivery of welfare-to-work has been the subject of ongoing administrative and governance reforms paralleling formal policy shifts towards a more conditional and demanding welfare model. While street-level workers have been central political actors in enacting welfare-to-work policies, this is now being challenged by the move towards the machine-level delivery of welfare-to-work. Australia’s proposal to shift the bulk of employment services from quasi-market to online delivery is emblematic of this reform project, marking a key phase in the evolution from street-level to screen-level to machine (or system)-level bureaucracies. It is a distinct ‘variety of digitalisation’ that comprises an attempt not just to re-orientate the discretion previously held by frontline workers but to automate it. This project of digitalising discretion is animated by the conviction that PES can be delivered more efficiently, and more reliably, by machine bureaucracies – at least as far as most jobseekers are concerned. Service delivery via digital systems promises financial savings and greater fidelity to ‘policy-as-written’, and there may be important benefits for citizens: from no longer wasting time travelling to periodic in-person appointments to no longer wasting time waiting in line once there. Digital servicing may also alleviate the stigma of being publicly called to account for one’s ‘welfare dependency’ and ongoing efforts to find work. Digitalisation may also reduce barriers to accessing support arising from language issues, or delivery agents’ risk selection practices (e.g., creaming and parking). Nonetheless, these efficiency and consistency gains come with significant trade-offs, as we have argued. Key amongst these is the potential for new patterns of exclusion to arise for jobseekers with poor digital literacy or without access to the financial resources and ICT infrastructure needed to self-service online.
Equally, it remains unclear whether the ‘consistency’ afforded by digitalisation will be mainly harnessed to widen access to enabling supports for previously excluded groups, or whether the priority will be on deploying digitalisation to more strictly enforce mutual obligations through automating conditionality. In certain instances, particularly where decisions carry the potential to cause hardship and suffering, fairness and administrative justice may be better served by decision-making that is guided by empathy and situated responsiveness rather than consistency and uniformity. Digitalising discretion risks hiding complex ethical decisions and human costs behind a veneer of technical and administrative system requirements. Key questions remain about how the tensions we have identified, between efficiency and inclusiveness and between consistency and personalisation, will be resolved in practice and within different national contexts.

Competing Interests

The authors declare none.

Acknowledgments

This work was funded by an Australian Research Council Linkage Grant (GA85559), supported by our industry partners the National Employment Services Association and the Westgate Community Initiatives Group. The authors thank the ARC and our industry partners for their support. Part of the research undertaken for this paper also received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 841477. The views expressed are those of the authors alone. Neither the University of Melbourne, La Trobe University, Maynooth University, the European Commission nor the Australian Research Council is responsible for any use that may be made of the information in this article.

References

Baumberg, B. (2015), The stigma of claiming benefits: a quantitative study. Journal of Social Policy 45(2): 181–199.
Bordoloi, S., Fitzsimmons, J. and Fitzsimmons, M. (2019), Service Management: Operations, Strategy, Information Technology, 9th edn., New York: McGraw-Hill.
Bovens, M. and Zouridis, S. (2002), From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion. Public Administration Review 62(2): 174–184.
Breit, E., Egeland, C. and Løberg, I. B. (2019), Cyborg bureaucracy: Frontline work in digitalized labor and welfare services. In Pedersen, J. S. and Wilkinson, A. (eds.), Big Data: Promise, Applications and Pitfalls. Cheltenham: Edward Elgar, pp. 149–169.
Breit, E., Egeland, C., Løberg, I. B. and Røhnebæk, M. T. (2020), Digital coping: How frontline workers cope with digital service encounters. Social Policy and Administration. DOI: 10.1111/spol.12664.
Brodkin, E. Z. (2011), Policy Work: Street-level Organisations Under New Managerialism. Journal of Public Administration Research and Theory 21: i253–i277.
Bullock, J. B. (2019), Artificial intelligence, discretion, and bureaucracy. American Review of Public Administration 49(7): 751–756.
Busch, P. A. and Henriksen, H. Z. (2018), Digital discretion: A systematic literature review of ICT and street-level discretion. Information Polity 23(1): 3–28.
Carney, T. (2021), Artificial Intelligence in Welfare: Striking the vulnerability balance? Monash University Law Review 46(2): 1–30.
Carter, E. and Whitworth, A. (2015), Creaming and parking in quasi-marketised welfare-to-work schemes: designed out of or designed into the UK Work Programme? Journal of Social Policy 44(2): 277–296.
Casey, S. (2021), Towards digital dole parole: A review of digital self-service initiatives in Australian employment services. Australian Journal of Social Issues. DOI: 10.1002/ajs4.156.
Caswell, D., Marston, G. and Larsen, J. E. (2010), Unemployed citizen or ‘at risk’ client? Classification systems and employment services in Denmark and Australia. Critical Social Policy 30(3): 384–404.
Coles-Kemp, L., Ashenden, D., Morris, A. and Yuille, J. (2020), Digital welfare: designing for more nuanced forms of access. Policy Design and Practice 3(2): 177–188.
Commonwealth of Australia (2018), I want to work: employment services 2020 report.
Considine, M., Lewis, J. M., O’Sullivan, S. and Sol, E. (2015), Getting Welfare to Work: Street-Level Governance in Australia, the UK, and the Netherlands. New York: Oxford University Press.
Desiere, S. and Struyven, L. (2021), Using artificial intelligence to classify jobseekers: the accuracy-equity trade-off. Journal of Social Policy 50(2): 367–385.
Eubanks, V. (2018), Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. New York: St Martin’s Press.
Fletcher, D. R. (2011), Welfare reform, Jobcentre Plus and the street-level bureaucracy: Towards inconsistent and discriminatory welfare for severely disadvantaged groups. Social Policy & Society 10(4): 445–458.
Harrits, G. S. (2019), Stereotypes in Context: How and When Do Street-Level Bureaucrats Use Class Stereotypes? Public Administration Review 79(1): 93–103.
Hasenfeld, Y. (2010), Organizational responses to social policy: the case of welfare reform. Administration in Social Work 34(1): 148–167.
Henman, P. (2019), Of algorithms, apps and advice: digital social policy and service delivery. Journal of Asian Public Policy 12(1): 71–89.
Jorna, F. and Wagenaar, P. (2007), The ‘iron cage’ strengthened? Discretion and digital discipline. Public Administration 85(1): 189–214.
Kleinberg, J., Ludwig, J., Mullainathan, S. and Sunstein, C. R. (2020), Algorithms as discrimination detectors. Proceedings of the National Academy of Sciences 117(48): 30096–30100.
Lee, M. K. (2018), Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society 5(1): 2053951718756684.
Lipsky, M. (2010), Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York: Russell Sage Foundation.
Malpas, J. (2020), The necessity of judgement. AI & Society 35: 1073–1074.
Marston, G. (2006), Employment services in an age of e-government. Information, Communication and Society 9(1): 83–101.
Morris, A., Coles-Kemp, L. and Jones, W. (2020), Digitalised welfare: Systems for both seeing and working with mess. In: WebSci 2020 - Companion of the 12th ACM Conference on Web Science, 6 July 2020, pp. 26–31. DOI: 10.1145/3394332.3402825.
O’Neill, C. (2016), Weapons of Math Destruction. London: Allen Lane.
O’Sullivan, S., McGann, M. and Considine, M. (2019), The category game and its impact on street-level bureaucrats and jobseekers: An Australian case study. Social Policy & Society 18(4): 631–645.
O’Sullivan, S. and Walker, C. (2018), From the interpersonal to the internet: social service digitalisation and the implications for vulnerable individuals and communities. Australian Journal of Political Science 53(4): 490–507.
Pedersen, J. S. and Wilkinson, A. (2018), The digital society and provision of welfare services. International Journal of Sociology and Social Policy 38: 194–209.
Peterie, M., Ramia, G., Marston, G. and Patulny, R. (2019), Emotional compliance and emotion as resistance: shame and anger among the long-term unemployed. Work, Employment and Society 33(5): 794–811.
Petersen, A. C. M., Christensen, L. R. and Hildebrandt, T. T. (2020), The Role of Discretion in the Age of Automation. Computer Supported Cooperative Work 29: 303–333.
Pors, A. and Schou, J. (2020), Street-level morality at the digital frontlines: An ethnographic study of moral mediation in welfare work. Administrative Theory and Praxis. DOI: 10.1080/10841806.2020.1782137.
Ranerup, A. and Henriksen, H. Z. (2020), Digital Discretion: Unpacking Human and Technological Agency in Automated Decision Making in Sweden’s Social Services. Social Science Computer Review. DOI: 10.1177/0894439320980434.
Schou, J. and Pors, A. S. (2018), Digital by default? A qualitative study of exclusion in digitalised welfare. Social Policy & Administration: 1–14. DOI: 10.1111/spol.12470.
Soss, J., Fording, R. and Schram, S. (2011), Disciplining the Poor: Neoliberal Paternalism and the Persistent Power of Race. Chicago: The University of Chicago Press.
Whelan, A. (2020), ‘Ask for More Time’: Big Data Chronopolitics in the Australian Welfare Bureaucracy. Critical Sociology 46(6): 867–880.
Whiteford, P. (2021), Debt by design: The anatomy of a social policy fiasco – Or was it something worse? Australian Journal of Public Administration 80(2): 340–360.
Wright, S., Fletcher, D. and Stewart, A. (2020), Punitive benefit sanctions, welfare conditionality, and the social abuse of unemployed people in Britain: Transforming claimants into offenders? Social Policy and Administration 54(2): 278–294.
Zacka, B. (2017), When the State Meets the Street: Public Service and Moral Agency. Cambridge, MA: The Belknap Press of Harvard University Press.
Zouridis, S., Van Eck, M. and Bovens, M. (2020), Automated Discretion. In Evans, T. and Hupe, P. (eds.), Discretion and the Quest for Controlled Freedom. Cham: Palgrave Macmillan, pp. 313–329.
FIGURE 1. Models of (digital) service encounters. Source: adapted from Bordoloi et al. (2019: 96).

TABLE 1. Trade-offs in Machine Bureaucracies
